
An AI researcher who has been warning about the technology for over 20 years says we should "shut it all down" and issue an "indefinite and worldwide" ban

An AI researcher warned that "literally everyone on Earth will die" if AI development isn't shut down. iLexx/Getty Images

  • One AI researcher who has been warning about the tech for over 20 years said to "shut it all down." 
  • Eliezer Yudkowsky said the open letter calling for a pause on AI development doesn't go far enough. 
  • Yudkowsky, who has been described as an "AI doomer," suggested an "indefinite and worldwide" ban.

An AI researcher who has warned about the dangers of the technology since the early 2000s said we should "shut it all down" in an alarming op-ed published by Time on Wednesday.

Eliezer Yudkowsky, a researcher and author who has been working on artificial general intelligence since 2001, wrote the article in response to an open letter signed by many big names in the tech world, which called for a six-month moratorium on AI development.

The letter, signed by 1,125 people including Elon Musk and Apple co-founder Steve Wozniak, requested a pause on training AI tech more powerful than OpenAI's recently launched GPT-4.

Yudkowsky's article, titled "Pausing AI Developments Isn't Enough. We Need to Shut it All Down," said he refrained from signing the letter because it understated the "seriousness of the situation" and asked for "too little to solve it."


He wrote: "Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die."

He explained that AI "does not care for us nor for sentient life in general," and we're far from instilling those kinds of principles in the tech at present. 


Yudkowsky instead suggested a ban that is "indefinite and worldwide" with no exceptions for governments or militaries. 

"If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue data center by airstrike," Yudkowsky said. 


Yudkowsky has for many years been issuing bombastic warnings about the possibly catastrophic consequences of AI. Earlier in March he was described by Bloomberg as an "AI Doomer," with author Ellen Huet noting that he has been warning about the possibility of an "AI apocalypse" for a long time. 

OpenAI co-founder and CEO Sam Altman even tweeted that Yudkowsky has "done more to accelerate AGI than anyone else" and deserves "the Nobel peace prize" for his work, a remark Huet theorized was a jab suggesting that the researcher's warnings about the tech have only accelerated its development.

Since OpenAI launched its chatbot ChatGPT in November and it became the fastest-growing consumer app in internet history, Google, Microsoft, and other tech giants have been competing to launch their own artificial intelligence products. 

Henry Ajder, an AI expert and presenter who sits on the European Advisory Council for Meta's Reality Labs, previously told Insider that tech firms are locked in a "competitive arms race environment" in an effort to be seen as "first movers," which may result in concerns around ethics and safety in AI being overlooked.


Even Altman has acknowledged fears around AI, saying on a podcast last week that "it'd be crazy not to be a little bit afraid, and I empathize with people who are a lot afraid."

He added, however, that OpenAI is taking steps to address kinks and issues with its tech, saying: "We will minimize the bad and maximize the good."

Axel Springer, Business Insider's parent company, has a global deal to allow OpenAI to train its models on its media brands' reporting.
