Six Months Ago Elon Musk Called for a Pause on AI. Instead Development Sped Up


Six months ago this week, many prominent AI researchers, engineers, and entrepreneurs signed an open letter calling for a six-month pause on development of AI systems more capable than OpenAI’s latest GPT-4 language generator. It argued that AI is advancing so quickly and unpredictably that it could eliminate countless jobs, flood us with disinformation, and—as a wave of panicky headlines reported—destroy humanity. Whoops!

As you may have noticed, the letter did not result in a pause in AI development, or even a slowdown to a more measured pace. Companies have instead accelerated their efforts to build more advanced AI.

Elon Musk, one of the most prominent signatories, didn’t wait long to ignore his own call for a slowdown. In July he announced xAI, a new company he said would seek to go beyond existing AI and compete with OpenAI, Google, and Microsoft. And many Google employees who also signed the open letter have stuck with their company as it prepares to release an AI model called Gemini, which boasts broader capabilities than OpenAI’s GPT-4.

WIRED reached out to more than a dozen signatories of the letter to ask what effect they think it had and whether their alarm about AI has deepened or faded in the past six months. None who responded seemed to have expected AI research to really grind to a halt.

“I never thought that companies were voluntarily going to pause,” says Max Tegmark, an astrophysicist at MIT who leads the Future of Life Institute, the organization behind the letter—an admission that some might argue makes the whole project look cynical. Tegmark says his main goal was not to pause AI but to legitimize conversation about the dangers of the technology, up to and including the fact that it might turn on humanity. The result “exceeded my expectations,” he says.

The responses to my follow-up also show the huge diversity of concerns experts have about AI—and that many signers aren’t actually obsessed with existential risk.

Lars Kotthoff, an associate professor at the University of Wyoming, says he wouldn’t sign the same letter today because many who called for a pause are still working to advance AI. “I’m open to signing letters that go in a similar direction, but not exactly like this one,” Kotthoff says. He adds that what concerns him most today is the prospect of a “societal backlash against AI developments, which might precipitate another AI winter” by quashing research funding and making people spurn AI products and tools.

Other signers told me they would gladly sign again, but their big worries seem to involve near-term problems, such as disinformation and job losses, rather than Terminator scenarios.

“In the age of the internet and Trump, I can more easily see how AI can lead to destruction of human civilization by distorting information and corrupting knowledge,” says Richard Kiehl, a professor working on microelectronics at Arizona State University.

“Are we going to get Skynet that’s going to hack into all these military servers and launch nukes all over the planet? I really don’t think so,” says Stephen Mander, a PhD student working on AI at Lancaster University in the UK. He does see widespread job displacement looming, however, and calls it an “existential risk” to social stability. But he also worries that the letter may have spurred more people to experiment with AI and acknowledges that he didn’t act on the letter’s call to slow down. “Having signed the letter, what have I done for the last year or so? I’ve been doing AI research,” he says.

Despite the letter’s failure to trigger a widespread pause, it did help propel the idea that AI could snuff out humanity into a mainstream topic of discussion. It was followed by a public statement signed by the leaders of OpenAI and Google’s DeepMind AI division that compared the existential risk posed by AI to that of nuclear weapons and pandemics. Next month, the British government will host an international “AI safety” conference, where leaders from numerous countries will discuss possible harms AI could cause, including existential threats.
