KILLER ACCOUNTABILITY

Top AI researchers say they won’t make killer robots

Activists are looking to change international law to ban killer robots.
Image: AP Photo/Michel Spingler

More than 2,600 AI researchers and engineers have signed a pledge never to create autonomous killer robots, published today by the Future of Life Institute. Signatories include Elon Musk; Alphabet’s DeepMind co-founders Mustafa Suleyman, Demis Hassabis, and Shane Legg; Google’s Jeff Dean; and the University of Montreal’s Yoshua Bengio.

“There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI,” the researchers write. “In this light, we the undersigned agree that the decision to take a human life should never be delegated to a machine.”

This pledge is the latest in a long line of statements in which top AI researchers signal that they won’t build autonomous weapons. It’s easy to be skeptical, especially since it’s hard to imagine Skype, one of the companies that signed, ever being called upon by the Department of Defense to make a killer robot.

But as shown by Google’s Project Maven, which Google pitched as a “Google-Earth-like” surveillance system, even seemingly innocuous cloud services can be used to conduct real-time military surveillance or eventually become the targeting systems for autonomous weapons.

As for what the letter signals to lawmakers, it’s more of an olive branch than anything else.

“It’s certainly a strong commitment and signal of intent, and something we can hold these companies and individuals to in the future,” Mary Wareham, advocacy director for Human Rights Watch’s arms division, told Quartz. “People who have signed [this] letter have come to the United Nations over the last five years and said, ‘Don’t worry about regulating this field. We’re asking you to regulate this field. We’re saying we’ll work with you as you regulate this field.'”

Wareham plans to take this pledge, along with other signed letters collected in previous years, to Geneva this August, where she and other activists will make the case before the UN for international regulation of killer robots under existing laws governing weapons use.

While the letter signed today is vague on what would actually count as an autonomous weapon (for instance, whether a dataset used to teach robots to distinguish friend from foe would be considered part of a weapon), Wareham says the argument over definitions always comes after the decision to regulate.

“It’s a bit of a red herring, really,” she said. “This is why we need negotiations. Every effort to address certain unacceptable weapons systems always starts with a discussion of ‘what are we talking about here.’ And that’s definitely a part of the discussion by states at the moment.”