
Meet Pause AI, the Protest Group Campaigning Against Human Extinction


The first time we speak, Joep Meindertsma is not in a good place. He tears up as he describes a conversation in which he warned his niece about the risk of artificial intelligence causing societal collapse. Afterward, she had a panic attack. “I cry every other day,” he says, speaking over Zoom from his home in the Dutch city of Utrecht. “Every time I say goodbye to my parents or friends, it feels like it could be the last time.”

Meindertsma, who is 31 and co-owns a database company, has been interested in AI for a couple of years. But he really started worrying about the threat the technology could pose to humanity when OpenAI released its latest language model, GPT-4, in March. Since then, he has watched the runaway success of the ChatGPT chatbot—based first on GPT-3.5, then GPT-4—demonstrate to the world how far AI has progressed, and watched Big Tech companies race to catch up. And he has seen pioneers like Geoffrey Hinton, the so-called godfather of AI, warn of the dangers associated with the systems they helped create. “AI capabilities are advancing far more rapidly than virtually anyone has predicted,” says Meindertsma. “We are risking social collapse. We’re risking human extinction.”

One month before our talk, Meindertsma stopped going to work. He had become so consumed by the idea that AI is going to destroy human civilization that he was struggling to think of anything else. He had to do something, he felt, to avert disaster. Soon after, he launched Pause AI, a grassroots protest group that campaigns for, as its name suggests, a halt to the development of AI. And since then, he has amassed a small band of followers who have held protests in Brussels, London, San Francisco, and Melbourne. These demonstrations have been small—fewer than 10 people each time—but Meindertsma has been making friends in high places. Already, he says, he has been invited to speak with officials at both the Dutch Parliament and the European Commission.

The idea that AI could wipe out humanity sounds extreme. But it’s an idea that’s gaining traction in both the tech sector and mainstream politics. Hinton quit his role at Google in May and embarked on a global round of interviews in which he raised the specter of humans no longer being able to control AI as the technology advances. That same month, industry leaders—including the CEOs of AI labs Google DeepMind, OpenAI, and Anthropic—signed a letter acknowledging the “risk of extinction,” and UK prime minister Rishi Sunak became the first head of government to publicly admit that he, too, believes AI poses an existential risk to humanity.

Meindertsma and his followers offer a glimpse of how these warnings are trickling through society, creating a new phenomenon of AI anxiety and giving a younger generation—many of whom are already deeply worried about climate change—a new reason to feel panic about the future. A survey by the pollster YouGov found that the proportion of people worried that artificial intelligence would lead to an apocalypse rose sharply in the last year. Hinton denies he wants AI development to be stopped, temporarily or indefinitely. But his public statements about the risk AI poses to humanity have left a group of young people feeling there is no other choice.

To different people, “existential risk” means different things. “The main scenario I’m personally worried about is social collapse due to large-scale hacking,” says Meindertsma, explaining he’s concerned about AI being used to create cheap and accessible cyber weapons that could be used by criminals to “effectively take out the entire internet.” This is a scenario experts say is extremely unlikely. But Meindertsma still worries about the resilience of banking and food distribution services. “People will not be able to find food in a city. People will fight,” he says. “Many billions I think will die.”

But the Pause AI founder also worries about a future where AI advances enough to be classified as “super-intelligent” and decides to wipe out civilization once it understands that humans limit its power. He echoes an argument, also used by Hinton, that if humans ask a future super-intelligent AI system to fulfill any goal, the AI might create its own dangerous sub-goals in the process.

This concern dates back years and is generally credited to the Swedish philosopher and Oxford University professor Nick Bostrom, who first described in the early 2000s what hypothetically could happen if a super-intelligent AI was asked to create as many paperclips as possible. “The AI will realize quickly that it would be much better if there were no humans, because humans might decide to switch it off,” Bostrom said in a 2014 interview. “Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.”

AI research is a divided field, and some experts who might be expected to rip Meindertsma’s ideas apart instead seem reluctant to discredit them. “Because of the rapid progress, we just don’t know how much of science fiction could become reality,” says Clark Barrett, co-director of Stanford University’s Center for AI Safety in California. Barrett does not believe a future where AI helps develop cyber weapons is plausible; this is not a field where AI has so far excelled, he claims. But he is less willing to dismiss the idea that an AI system that evolves to be smarter than humans could work maliciously against us. People worry that an AI system “could try to steal all of our energy or steal all of our compute power or try to manipulate people into doing what it wants us to do.” This is not realistic right now, he says. “But we don’t know what the future can bring. So I can’t say it’s impossible.”

Yet other AI researchers have less patience with the hypothetical debate. “For me, it is a problematic narrative that people claim any kind of proof or likelihood that AI is going to be self-conscious and turn against humanity,” says Theresa Züger, head of Humboldt University’s AI and Society Lab, based in Germany. “There is no evidence that this is going to appear, and in other scientific fields, we wouldn’t discuss this if there is no evidence.”

This lack of consensus among experts is enough for Meindertsma to justify his group’s demand for a global halt to AI development. “The most sensible thing to do right now is to pause AI developments until we know how to build AI safely,” he says, claiming that leaps forward in AI capabilities have become divorced from research on safety. The debate about how the relationship between these two halves of the AI industry has evolved is also taking place in mainstream academia. “This is something that I’ve seen getting worse over the years,” says Ann Nowé, head of the Artificial Intelligence Lab at the Free University of Brussels. “When you were trained in the ’80s to do AI, you had to understand the application field,” she adds, explaining it was normal for AI researchers to spend time speaking to people working in the schools or hospitals where their system would be used. “[Now] a lot of AI people are not trained in having this conversation with stakeholders about whether this is ethical or legally compliant.”

The government-mandated pause Meindertsma envisions would have to be organized by the governments of different countries at an international summit, he says. When British prime minister Rishi Sunak announced the UK would host a global summit on AI safety in the autumn, Meindertsma interpreted this as a flash of hope. He believes the UK is well suited to make sure we’re not rushing towards a doomsday scenario. “It’s the home for many AI safety scientists. It’s where DeepMind is currently located. You have members of parliament already calling for an AI safety summit to prevent extinction.” Yet Sunak’s announcement was also tinged with ambitions to make the UK a hub of the AI industry—he simultaneously revealed that the company Palantir would base its new European headquarters in London—suggesting the likelihood of the UK advocating for an industrywide pause is remote.

Sunak’s willingness to engage with AI’s existential risk means the UK is a focus for Meindertsma. One of his newest recruits, Gideon Futerman, half runs, half walks past the British Houses of Parliament, banners wrapped in plastic under his arm. His train was delayed, he says, explaining why he’s late to his own protest. Futerman wears small, round glasses and odd socks. Pause AI’s British branch is not a slick operation. And this protest is technically not a protest. It’s meant to signal support for Sunak’s summit and to pressure the prime minister to use the meeting to introduce a pause. But the group of people here today also shows how anxiety is building among some young people. Referring to artificial general intelligence, one of the group’s banners reads: “Don’t build AGI,” the letters dripping in red ink, designed to look like blood.

The group is small. There are seven protesters in total, all of them young men in their teens or early twenties. Their experience of AI varies. One is a politics student, another works at a nonprofit dedicated to AI safety. Several have a background in protesting against climate change. “One of the main similarities between climate change and AI is the fact that you have a few companies risking people’s lives today and even more lives in the future, for the sake essentially of making profits,” says Futerman. He shares Meindertsma’s concerns, or versions of them. One future scenario he’s worried about is that leading AI companies could develop artificial “super” intelligence. If that happened, he believes these models would gain the power to significantly reduce human agency over our future. “They’re aiming to build a doomsday device,” he says. “In the worst case scenario, this could wipe us out.”

Among the cluster of protesters is Ben, an animal rights activist with a mane of red hair, who declines to share his surname because he doesn’t want his AI activism to affect his career. Before the protest, we go for coffee to talk about why he joined Pause AI. “There was definitely a time when I felt that the extinction risk argument was science fiction or overblown,” he explains, the earring in his left ear gently shaking as he becomes more animated. “Then as ChatGPT came out, and GPT-4, it became apparent to me how powerful these AI models were getting, and also how fast they were increasing in their power.”

Ben has never communicated directly with Meindertsma; he met fellow Pause AI members through a London coworking space. He believes his animal rights activism gives him a template to understand the dynamic between different species, with varying degrees of intelligence. “It’s difficult to predict what a world would look like with a different, more intelligent species than the human existing,” he says. “But we know that our relationship with species that are less intelligent than us hasn’t been great for those other species. If you look at humanity’s relationship with other animals, some of them we farm and slaughter for our own purposes. And many of them were driven to extinction.”

He acknowledges some of the scenarios Pause AI is warning about might never happen. But even if they don’t, powerful AI systems will likely turbocharge problems that technology has already accelerated in our societies, he says, such as labor issues and racial and gender bias. “People who are concerned about AI extinction also take these problems really seriously.”

The second time I speak to Meindertsma, he’s in a better mood. He has new recruits, and he feels the world is listening. He has just returned from Brussels, where he was invited to a meeting at the European Commission; he declines to publicly name the official he met in case that sours the relationship. And now the UK is holding the global summit he has spent weeks campaigning for. “So I feel like we’re making a lot of progress in a short time,” he says.

As Pause AI’s ideas gain traction, politicians and AI companies are still figuring out how to respond—with researchers divided about whether their concerns help garner support for AI safety research or simply spread panic about future scenarios that might never happen. Meindertsma argues that intelligence is power, and that’s what makes it dangerous. But every day, supposedly intelligent humans try to take more power for themselves and find their efforts blocked by institutions and systems specifically designed to contain it, according to Stanford’s Clark Barrett. He may not be willing to predict how AI will evolve, but he does believe society is more prepared than Pause AI gives it credit for. “There are certain barriers in place that I think shouldn’t be underestimated in terms of preventing this kind of runaway effect that people are worried about.”
