Hi! I’m Vittoria Elliott. I’m a reporter on the WIRED Politics desk, and I’m taking over for Makena this week to talk about politicians rising from the dead in India and the rapper Eminem endorsing opposition parties in South Africa.
These things haven’t really happened, obviously, but deepfakes created by generative AI have made it seem like they have. Already, we’re seeing how politicians, campaigns, and regular people are using generative AI in elections. And this is only the beginning. So today, WIRED is launching a project to track it, all over the world.
Let’s talk about it.
This is an edition of the WIRED Politics Lab newsletter. Sign up now to get it in your inbox every week.
Politics has never been stranger—or more online. WIRED Politics Lab is your guide through the vortex of extremism, conspiracies, and disinformation.
- 🗞️ Read previous newsletters here.
- 🎧 Listen to the WIRED Politics Lab podcast.
- 💬 Join the conversation below this article.
AI Is Changing Politics All Over the World
Many Americans have their eyes set on November, but 2024 has already been a big election year for the rest of the world. India, the world’s largest democracy, is wrapping up its vote; South Africa and Mexico are both heading to the polls this week; and the EU is ramping up for its parliamentary elections in June. It’s the largest election year in history, and there are more people online than ever before.
If you’re a voter in Indonesia, you may have recently seen a video of a deceased dictator stumping for his political successor. If you’re a Democrat in American Samoa, perhaps you received a personalized message from a little-known presidential candidate, Jason Palmer, who went on to beat President Joe Biden in the island territory’s primaries. Or perhaps you’re one of the voters who received personalized AI-generated phone calls from local candidates in India’s elections and spoke with WIRED contributors Nilesh Christopher and Varsha Bansal for this great WIRED feature.
These are just some of the ways that generative AI is reshaping the world of politics, elections, and democracy, and these tools are more widely available than ever.
But here’s the thing: A lot of reporting on this issue is one-off, examining or fact-checking individual instances. That work is important, but it doesn’t get at the scope and breadth of where and how these tools are being used. So, for the rest of the year, we’re going to be tracking the use of generative AI all over the world, across more than 60 elections.
In our project, we’re tracking uses of AI that, in some cases, have reached millions of people: For example, TikToks featuring an AI-generated image that made Prabowo Subianto, Indonesia’s defense minister and president-elect, seem cute and cuddly were viewed more than 19 billion times. (Subianto was at one point banned from the US for alleged human rights abuses.)
Experts know that generative AI is poised to drastically change the information landscape, and problems that have long plagued tech platforms—like mis- and disinformation, scams, and hateful content—are likely to be amplified, despite the guardrails that companies say they’ve put in place.
There are a few ways to know whether something was made or manipulated using AI: A person or campaign may have confirmed using it; fact-checkers may have analyzed and debunked it as it circulated; or the content may be clearly deployed for something like satire. Sometimes, if we’re lucky, it’s watermarked, meaning it carries an indicator that it was generated or altered by AI. But in reality, these cases likely account for only some of what’s already out there. Even our own dataset is almost certainly an undercount.
And that leads us to another issue: As British journalist Peter Pomerantsev has said, “When nothing is true, everything is possible.” In an information ecosystem where anything could be AI-generated, it’s easy for politicians or public figures to claim that something real is fake, a phenomenon known as the “liar’s dividend.” That means people may be less likely to believe information even when it’s true. As for fact-checkers and journalists, many don’t have the tools readily available to assess whether something has been made or manipulated by AI. Whatever this year brings, it’s likely going to be only the tip of the iceberg.
But just because something is fake doesn’t make it bad. Deepfakes have found a home in satire, chatbots can (sometimes) provide good information, and personalized campaign outreach can make people feel seen by their political representatives.
It’s a brave new world, but that’s why we’re tracking it.
The Chatroom
As part of our AI project, we’re asking readers to submit any instances of generative AI you’re encountering out in the wild this election year.
To get a better sense of how we’ll be evaluating submissions (or even the things we find), and to send one our way, check out this link. If you’re not sure whether something was made with generative AI or is just a run-of-the-mill cheapfake, send it anyway and we’ll look into it.
💬 Leave a comment below this article.
WIRED Reads
- The Low-Paid Humans Behind AI’s Smarts Ask Biden to Free Them From ‘Modern Day Slavery’: African workers who label AI data and screen social posts for US tech giants are calling on President Biden to raise their plight with Kenya’s president, William Ruto, who visited the US last week.
- Germany’s Far-Right Party Is Running Hateful Ads on Facebook and Instagram: Published ahead of the EU elections, the ads blame immigrants for crime and sexual violence.
- You Can Get Paid to Talk to Friends About Voting: Campaigns and political groups think they can beat disinformation by paying average people to talk to their friends about voting.
Want more? Subscribe now for unlimited access to WIRED.
What Else We’re Reading
🔗 TikTok says it removed an influence campaign originating in China: TikTok said last week that it had taken down thousands of accounts linked to 15 Chinese influence campaigns on its platform. (The Washington Post)
🔗 Ramaswamy Urges BuzzFeed to Cut Jobs, Air More Conservative Voices: Vivek Ramaswamy, the former Republican presidential candidate, is now an activist investor in BuzzFeed. He wants the publication to court conservative readers and to say it “lied” in its reporting about Donald Trump and Covid, among other topics. (Bloomberg)
🔗 OpenAI Creates Oversight Board Featuring Sam Altman After Dissolving Safety Team: The new board will make recommendations about safety and security, and will have 90 days to “further develop OpenAI’s processes and safeguards,” according to the company’s blog. (Bloomberg)
The Download
One last thing! This week on the podcast, I spoke with our editor and host Leah Feiger about the AI elections project. Give it a listen!
In addition to talking about the new project (can you tell I’m excited?), Leah and I were joined by Nilesh Christopher, who has reported on the role of deepfakes in India’s elections for WIRED. The biggest takeaway: The Indian elections are wrapping up soon, and many of the country’s burgeoning generative AI companies are looking for new markets that might be interested in their tools—possibly even coming to an election near you.
That’s it for today. Thanks again for subscribing. You can get in touch with me via email and X.