As members of the UK’s largest opposition party gathered in Liverpool for their party conference—probably their last before the UK holds a general election—a potentially explosive audio file started circulating on X, formerly known as Twitter.
The 25-second recording was posted by an X account with the handle “@Leo_Hutz” that was set up in January 2023. In the clip, Sir Keir Starmer, the Labour Party leader, is apparently heard swearing repeatedly at a staffer. “I have obtained audio of Keir Starmer verbally abusing his staffers at [the Labour Party] conference,” the X account posted. “This disgusting bully is about to become our next PM.”
It’s unclear whether the recording is real, AI-generated, or made using an impersonator. The British fact-checking organization Full Fact said it is still investigating. “As we’re talking now, it can’t be validated one way or the other. But there are characteristics of it that point to it being a fake,” says Glen Tarman, Full Fact’s head of advocacy and policy. “There’s a phrase which appears to be repeated, rather than [using] a different intonation the second time it’s used, and there’s a few glitches in the background noise.”
Audio deepfakes are emerging as a major risk to the democratic process as the UK and more than 50 other countries move toward elections in 2024. Manipulating audio content is becoming cheaper and easier, while fact-checkers say it’s difficult to quickly and definitively identify a recording as fake. These recordings could circulate on social media for hours or days before they’re debunked, and researchers worry that this kind of deepfake content could create a political atmosphere in which voters don’t know what information they can trust.
“If you are listening to a sound bite or a video online with this seed of doubt about whether this is genuinely real, it risks undermining the foundation of how debate happens and people’s capacity to feel informed,” says Kate Dommett, professor of digital politics at the University of Sheffield.
X’s manipulated media policy states that video or audio that has been deceptively altered or manipulated should be labeled or removed. Neither has happened to this post, and X did not reply to WIRED’s request for comment on whether it has investigated the recording’s authenticity.
Starmer’s team has yet to comment. But several MPs from the ruling Conservative Party called the recording a deepfake. “There’s a fake audio recording of Keir Starmer going around,” MP Tom Tugendhat said on X. “The last 30 years of public life has seen a catastrophic undermining of faith in institutions, for good and bad reasons,” Matt Warman, another Conservative MP, posted. “But today’s Sir Keir Starmer deepfake is a new low, supercharged by AI and social media. Democracy is under real threat—technology to verify content is essential.”
The incident comes a week after a scandal in the final hours of Slovakia’s election campaign, when an audio recording released on Facebook appeared to capture Michal Šimečka, leader of the opposition Progressive Slovakia party, discussing plans to rig the election. Šimečka denounced the audio as fake, and AFP’s fact-checking department said it showed signs of manipulation. At the time, fact-checkers said they felt ill-equipped to definitively debunk AI-generated audio recordings.
Countries around the world are struggling to respond to audio recordings suspected of being fake. Alleged deepfake recordings have caused confusion in both Sudan and India. In Sudan, “leaked recordings” of former leader Omar al-Bashir, who hasn’t been seen in public for a year, were suspected of being manipulated. In India, an audio recording surfaced in which Palanivel Thiagarajan, an opposition politician, allegedly accused fellow party members of corruption. Thiagarajan said the recording was machine-generated.
The problem of easy-to-create deepfake media is compounded by the fact that detection tools are not widely available, says Sam Gregory, executive director at Witness, a human rights group that focuses on technology. “There is no shared standard for adding watermarks or provenance signals to AI-generated deepfake audio, just efforts by single companies. It’s no use having a tool to spot whether content is generated by one company when the same tool would give a false negative on fake audio created by one of the many other tools on the market.”
The inability to definitively prove the authenticity of an audio recording adds a murkiness that politicians featured in real recordings will also exploit, Gregory adds. “Politicians will claim that real audio is faked and put the pressure on fact-checkers to debunk this claim, when they lack the tools or the speedy capacity to do this.”