Leading figures in the development of artificial intelligence systems, including OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis, have signed a statement warning that the technology they are building may someday pose an existential threat to humanity comparable to that of nuclear war and pandemics.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” reads a one-sentence statement, released today by the Center for AI Safety, a nonprofit.
The idea that AI might become difficult to control, and either accidentally or deliberately destroy humanity, has long been debated by philosophers. But in the past six months, following some surprising and unnerving leaps in the performance of AI algorithms, the issue has become a lot more widely and seriously discussed.
In addition to Altman and Hassabis, the statement was signed by Dario Amodei, CEO of Anthropic, a startup dedicated to developing AI with a focus on safety. Other signatories include Geoffrey Hinton and Yoshua Bengio—two of three academics given the Turing Award for their work on deep learning, the technology that underpins modern advances in machine learning and AI—as well as dozens of entrepreneurs and researchers working on cutting-edge AI problems.
“The statement is a great initiative,” says Max Tegmark, a physics professor at the Massachusetts Institute of Technology and the director of the Future of Life Institute, a nonprofit focused on the long-term risks posed by AI. In March, Tegmark’s institute published a letter calling for a six-month pause on the development of cutting-edge AI algorithms so that the risks could be assessed. The letter was signed by hundreds of AI researchers and executives, including Elon Musk.
Tegmark says he hopes the statement will encourage governments and the general public to take the existential risks of AI more seriously. “The ideal outcome is that the AI extinction threat gets mainstreamed, enabling everyone to discuss it without fear of mockery,” he adds.
Dan Hendrycks, director of the Center for AI Safety, compared the current moment of concern about AI to the debate among scientists sparked by the creation of nuclear weapons. “We need to be having the conversations that nuclear scientists were having before the creation of the atomic bomb,” Hendrycks said in a quote issued along with his organization’s statement.
The current tone of alarm is tied to several leaps in the performance of AI algorithms known as large language models. These models consist of a specific kind of artificial neural network that is trained on enormous quantities of human-written text to predict the words that should follow a given string. When fed enough data, and with additional training in the form of feedback from humans on good and bad answers, these language models are able to generate text and answer questions with remarkable eloquence and apparent knowledge—even if their answers are often riddled with mistakes.
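The next-word objective described above can be illustrated with a toy bigram model: count which word follows each word in a corpus, then predict the most frequent follower. This is a sketch for intuition only (the corpus and function names here are invented for illustration); real large language models use deep neural networks trained on vastly more text, but the core training objective is the same prediction task.

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus: str) -> dict:
    """Map each word to a Counter of the words that follow it."""
    words = corpus.lower().split()
    followers = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1
    return followers

def predict_next(model: dict, word: str):
    """Return the most frequent word seen after `word`, if any."""
    counts = model.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

# Tiny invented corpus: "cat" follows "the" twice, "mat" only once,
# so the model predicts "cat" as the likeliest word after "the".
model = train_bigram_model("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # prints "cat"
```

Scaling this idea up (neural networks instead of counts, trillions of words instead of ten) is what produces the fluent but error-prone text the article goes on to describe.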
These language models have proven increasingly coherent and capable as they have been fed more data and computer power. The most powerful model created so far, OpenAI’s GPT-4, is able to solve complex problems, including ones that appear to require some forms of abstraction and common sense reasoning.
Language models had been getting more capable in recent years, but the release of ChatGPT last November drew public attention to the power—and potential problems—of the latest AI programs. ChatGPT and other advanced chatbots can hold coherent conversations and answer all manner of questions with the appearance of real understanding. But these programs also exhibit biases, fabricate facts, and can be goaded into behaving in strange and unpleasant ways.
Hinton, who is widely considered one of the most important and influential figures in AI, left his job at Google in April so that he could speak freely about his newfound concern over the prospect of increasingly capable AI running amok.
National governments are becoming increasingly focused on the potential risks posed by AI and how the technology might be regulated. Although regulators are mostly worried about issues such as AI-generated disinformation and job displacement, there has been some discussion of existential concerns.
“We understand that people are anxious about how it can change the way we live. We are, too,” Sam Altman, OpenAI’s CEO, told the US Congress earlier this month. “If this technology goes wrong, it can go quite wrong.”
Not everyone is on board with the AI doomsday scenario, though. Yann LeCun, who won the Turing Award with Hinton and Bengio for the development of deep learning, has been critical of apocalyptic claims about advances in AI and has not signed the statement as of today.
And some AI researchers who have been studying more immediate issues, including bias and disinformation, believe that the sudden alarm over theoretical long-term risk distracts from the problems at hand.
Meredith Whittaker, president of the Signal Foundation and cofounder and chief advisor of the AI Now Institute, a nonprofit focused on AI and the concentration of power in the tech industry, says many of those who signed the statement probably believe that the risks are real, but that the alarm “doesn’t capture the real issues.”
She adds that discussion of existential risk presents new AI capabilities as if they were a product of natural scientific progress rather than a reflection of products shaped by corporate interests and control. “This discourse is kind of an attempt to erase the work that has already been done to identify concrete harms and very significant limitations on these systems.” Such issues range from AI bias to model interpretability and corporate power, Whittaker says.
Margaret Mitchell, a researcher at Hugging Face who left Google in 2021 amid fallout over a research paper that drew attention to the shortcomings and risks of large language models, says it is worth thinking about the long-term ramifications of AI. But she adds that those behind the statement seem to have done little to consider how they might prioritize more immediate harms, including how AI is being used for surveillance. “This statement as written, and where it’s coming from, suggests to me that it’ll be more harmful than helpful in figuring out what to prioritize,” Mitchell says.