The Trolley Dilemma and Autonomous Cars — Critical Decisions
Fleets of autonomous cars navigate U.S. highways and byways every day. Like so much technology throughout history, driverless vehicles were once an unfathomable dream; now, with programmed minds of their own, they have arrived.
The idea of learning, thinking machines is both exciting and concerning. Reaching such a milestone is a feat of engineering, but it also raises serious safety issues.
“We think that before automated vehicles are put on the roads, they should be required to go through a functional safety evaluation,” Cathy Chase, of Advocates for Highway and Auto Safety, told Automotive News in an article titled “Safety advocates want Congress to go slow on autonomous cars.” “We think that’s a very basic precursor.”
Congress is considering legislation that would essentially open the floodgates for autonomous cars. A proposed bill would allow manufacturers to release 100,000 autonomous vehicles onto public streets without meeting the standards required of traditional cars, trucks and SUVs, and it would block states from passing laws curbing autonomous cars. Proponents say the ability to roll out and test the technology in real time, on real roads, is key to perfecting its safety.
“If there is to be a future for driverless cars in the United States, Congress and the NHTSA cannot sit on the sidelines opposing all new federal rules, nor should NHTSA simply issue voluntary guidance,” Alan Morrison, a professor at George Washington University, was quoted as saying in the article.
The National Highway Traffic Safety Administration in 2016 released a set of federal guidelines for autonomous vehicles that include how they should respond in certain traffic conditions. That is where things get tricky. How should the software be programmed to make a split-second decision that could take a life to spare a life? Such an ethical decision is known in philosophical circles as The Trolley Dilemma. In the scenario, a trolley driver is barreling toward five people working on the track ahead, while one worker stands on an alternative track. If the driver diverts the trolley to save the five, the one will be killed.
Can a machine make those types of decisions and, even if it is able, can we live with the decision?
Strange as it is to think about, situations like these will in some way have to be programmed into a computer by human beings, telling the machine what to do. These are difficult ethical considerations that manufacturers will have to take into account as this technology develops.
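To make the programming question concrete, the Trolley Dilemma can be reduced to a naive utilitarian rule that simply picks the option with the fewest expected casualties. The sketch below is purely illustrative: the function name, options, and numbers are assumptions for this example, not any manufacturer's actual logic, and the article's point is precisely that such a rule leaves the hard ethical questions unanswered.

```python
# Hypothetical sketch of a utilitarian decision rule for the Trolley
# Dilemma. All names and numbers are illustrative assumptions.

def choose_track(casualties_by_option: dict) -> str:
    """Return the option with the lowest expected casualty count."""
    return min(casualties_by_option, key=casualties_by_option.get)

# The classic scenario as data: staying on course kills five workers,
# diverting to the alternative track kills one.
options = {"stay": 5, "divert": 1}
print(choose_track(options))  # prints "divert"
```

A counting rule like this is trivial to write; the difficulty the article describes is deciding whether minimizing casualties is even the right criterion, and who bears responsibility for encoding that choice.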