Can Self-Driving Cars Develop a Moral Code?

A new MIT computer program considers the ethics of autonomous vehicles, but critics doubt the tool's effectiveness

There is some doubt as to whether driverless cars can make the same snap judgments as human drivers.

Self-driving cars are the new frontier in automotive design. These autonomous vehicles detect their surroundings using a variety of technologies such as radar and GPS. Their control systems analyze sensory data to distinguish between different vehicles on the road, so the car can sense when it’s safe to change lanes or perform other maneuvers.
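
To make that concrete, here is a deliberately simplified sketch, in Python, of the kind of gap check a control system might run before changing lanes. The data structure, thresholds and function names are hypothetical illustrations, not the logic of any production vehicle.

```python
from dataclasses import dataclass

@dataclass
class TrackedVehicle:
    """A vehicle picked up by radar/lidar, described relative to our car."""
    gap_m: float              # longitudinal gap in meters
    closing_speed_mps: float  # how fast that gap is shrinking, in m/s

def safe_to_change_lanes(vehicles_in_target_lane, min_gap_m=15.0, min_time_gap_s=2.0):
    """Allow the lane change only if every tracked vehicle in the target lane
    leaves both enough physical room and enough time before the gap closes."""
    for v in vehicles_in_target_lane:
        if abs(v.gap_m) < min_gap_m:
            return False  # too close right now
        if v.closing_speed_mps > 0 and abs(v.gap_m) / v.closing_speed_mps < min_time_gap_s:
            return False  # the gap will close too quickly
    return True

# Example: one car 20 m behind, closing at 3 m/s -- roughly 6.7 s to close, so OK
print(safe_to_change_lanes([TrackedVehicle(gap_m=-20.0, closing_speed_mps=3.0)]))
```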


Many companies are trying to get in on the act. Elon Musk recently announced that all of Tesla’s cars now have self-driving capabilities, and Google (GOOGL) plans to make its self-driving cars commercially available by 2020. The Obama administration has also given its blessing to the technology, as long as it’s regulated.

But the increasing likelihood of driverless cars also raises ethical questions, such as whether a vehicle should prioritize the safety of the driver or the safety of pedestrians in the event of an unavoidable crash.

Two doctoral students at the Massachusetts Institute of Technology have attempted to address this problem by creating the Moral Machine, a computer program which allows users to decide how a driverless car should respond to an imminent disaster. However, several technologists question whether the tool (or an actual driverless car) can be programmed to fully replicate all the big and small decisions human drivers must make.

Sohan Dsouza and Edmond Awad, who developed the Moral Machine, are part of Scalable Cooperation, a research group at the MIT Media Lab which aims to understand how technology will reshape human collaboration. Their program presents moral dilemmas where a driverless car with failing brakes must choose the lesser of two evils, such as killing two passengers or five pedestrians. Users can judge which outcome they think is more acceptable and see how their responses compare with those of other people. They can also design their own scenarios to view, share and discuss. So far, the Moral Machine has logged 14 million decisions from two million people.
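
As a rough illustration of how such a dilemma might be represented and its judgments tallied, the sketch below uses hypothetical Python structures; the field names and scoring are assumptions for the sake of example, not the MIT team’s actual code.

```python
from dataclasses import dataclass, field
from collections import Counter

@dataclass
class Outcome:
    """One of the two choices in a brake-failure dilemma."""
    label: str               # e.g. "stay" or "swerve"
    passengers_killed: int
    pedestrians_killed: int

@dataclass
class Scenario:
    description: str
    option_a: Outcome
    option_b: Outcome
    votes: Counter = field(default_factory=Counter)  # logged user judgments

    def record_judgment(self, chosen_label: str):
        self.votes[chosen_label] += 1

# Example dilemma: kill two passengers or five pedestrians
dilemma = Scenario(
    description="Brakes fail approaching a crosswalk",
    option_a=Outcome("stay", passengers_killed=2, pedestrians_killed=0),
    option_b=Outcome("swerve", passengers_killed=0, pedestrians_killed=5),
)
dilemma.record_judgment("stay")
dilemma.record_judgment("swerve")
dilemma.record_judgment("stay")
print(dilemma.votes.most_common(1))  # [('stay', 2)]
```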

“We wanted to create discussion of a neglected topic,” Dsouza told the Observer. “We also cover all the complexities of traffic laws, where pedestrians are either jaywalking or following the walk signal. That affects whether you kill more or less people.”

“We collect data to understand the human perception of moral decisions made by machines,” Awad added.


This data includes information that self-driving cars wouldn’t have access to, such as the age, sex, occupation and weight of the virtual pedestrians and passengers involved.

“We don’t think the machine should differentiate this group over that group, but we want to understand what people think about these variations,” Awad said. “That gives us a chance to understand whether people want to save kids over elderly people.”

The tool’s emphasis on life-or-death situations is by design, according to the developer duo.

“We understand there may be infinite possibilities, but two options is easier for the user,” Awad said. “This is just the beginning; there’s a lot to process.”

“Our scenarios for now are death only, which is just to get the conversation going,” Dsouza said.

Autonomous car experts say this approach sounds good in theory: Michael Ramsey, research director for the information technology research firm Gartner, told the Observer that the Moral Machine was an admirable attempt to adapt the rules of the road to the self-driving car era.

In practice, however, the road ahead gets more rutted—Ramsey said that the scenario presented by the Moral Machine was “incredibly uncommon,” which made its value more sociological than practical.

Tesla’s self-driving car hits the road.

Ramsey suggested that it would be more beneficial for users to decide whether a driverless car should maintain the legal speed limit of 55 miles per hour on the highway, or whether it should increase its speed to 70 or 80 miles per hour to keep up with traffic.

“Which is more dangerous?” Ramsey asked. “Some human or a group of humans needs to decide what to do.”
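
Ramsey’s example boils down to a single policy parameter. The sketch below, with made-up numbers and names, shows how that choice could be expressed as a configurable speed policy; it is an illustration, not anyone’s actual control software.

```python
def target_speed_mph(speed_limit_mph, median_traffic_speed_mph, max_over_limit_mph=0.0):
    """Pick a cruising speed.

    max_over_limit_mph is the policy knob Ramsey is pointing at: 0 means
    strictly obey the posted limit, while a larger margin lets the car
    keep up with faster-moving traffic.
    """
    ceiling = speed_limit_mph + max_over_limit_mph
    # Follow the prevailing traffic speed, but never exceed the policy ceiling.
    return min(ceiling, median_traffic_speed_mph)

print(target_speed_mph(55, 72, max_over_limit_mph=0))   # 55: obey the limit
print(target_speed_mph(55, 72, max_over_limit_mph=15))  # 70: flow with traffic
```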

Dr. Jeffrey Miller, associate professor of engineering practice in the Department of Computer Science at the University of Southern California’s Viterbi School of Engineering, agreed that the Moral Machine’s black-and-white approach to safety was a roadblock to its success.

“You have to ask how much data we need in order to make a better decision,” Miller told the Observer. “Humans driving cars can analyze the overall impact and see how to protect the passengers. A swerve could be best for the driver, but other variables change the answer. What if there’s a minivan with a family of four in the next lane, or a school bus?”

In Miller’s view, driverless cars in their current state can’t compete with a human driver’s instincts.

“You can’t program for every eventuality,” he said. “If we’re making decisions based on a minimal amount of data, it could be fatal for someone next to a driverless vehicle.”

The developers of the Moral Machine acknowledge that self-driving cars are not equipped to make split-second moral decisions at this point, and would only be able to do so in the future if humans programmed these responses.

“The thinking would need to be incorporated ahead of time,” Awad said.

However, Awad added that it would be nearly impossible to prepare for every eventuality since every driver responds differently behind the wheel.

“Humans don’t agree on moral decisions, and so they don’t agree on what the machines should do in different scenarios,” he said.

To address this issue, Dsouza said the Moral Machine team plans to translate the website into different languages and promote it in foreign countries to get diverse perspectives.


Miller said this development could help the Moral Machine become a more practical tool.

“The more data we can get, the more intelligent our vehicle will be, because we’re gonna be able to make more decisions programmatically about what the vehicle should do,” he said. “The idea is great and aptly timed.”

Ramsey concluded that while it was beneficial to crowdsource a behavioral model for driverless cars, psychological tools like the Moral Machine need to be put into practice to do any good.

“You can’t implant the trolley problem into code,” he said, referring to the common ethical thought experiment. “The context of the situation is gonna dictate all different kinds of things. It needs to be refined before it’s used as the moral code for an autonomous vehicle.”
