A recent study published by the Center for Countering Digital Hate (CCDH) found that rage is a lucrative business. Since the start of the Israel-Hamas conflict last October, “accounts posting anti-Jewish and anti-Muslim content have seen a sharp rise in followers on X,” the study concluded, and the social media company appeared to be making money from ads that showed up alongside those posts. Imran Ahmed knows this problem all too well, and he’s made it his life’s work to expose what happens when hate can be profitable. It’s been this way since September 11, 2001, the day before his 23rd birthday.
“I felt connected to it,” Ahmed tells me over Zoom. His family is Pashtun, one of the largest ethnic groups in Afghanistan. The Taliban were also Pashtun. “I thought, I’ve got to do something to fix the world,” he says. “Fix this deep evil, this deep wrong.” So Ahmed went back to college, studying politics at the University of Cambridge, which later led to a role as a political adviser to Hilary Benn, then the shadow foreign secretary in Parliament.
But Parliament came with its own set of challenges. The second turning point in Ahmed’s life came in 2016, as the Labour Party campaigned to keep the UK in the European Union. As debate over Brexit intensified, Ahmed says, the party was experiencing “a rapid infiltration of antisemitism.” It was a season of uncertainty. During this period, Ahmed says, the far-right party Britain First launched a dangerous conspiracy theory that the EU was attempting to import Muslims and Black people to “destroy” white citizens through a campaign of rape. “Those lies were integral in shifting that 4 percent of the vote that ended up being the reason why the United Kingdom left the European Union. But those lies also directly led to the murder of my colleague, [Labour MP] Jo Cox,” who was shot and stabbed by a white supremacist. Recounting the story, Ahmed pauses for a beat. Cox was a dear friend, and I can tell he hasn’t quite shaken the nightmare of that day.
Now 45, Ahmed formed the CCDH in 2019 to call attention to the enormous amount of digitally driven disinformation and hate online. He says it is time to finally hold tech companies accountable for the harm social media causes—because if no one does, the consequences of inaction will be catastrophic.
JASON PARHAM: Elon Musk sued the Center for Countering Digital Hate and lost. What was that experience like?
IMRAN AHMED: [Long pause followed by laughter] So he sued us because he said that we affected his advertising business by doing research that was on the front page of The New York Times about the increase in hate on [Twitter] when he took over. Daily use of the N-word on his platform tripled in the week after he took over, compared to the daily average of the year before. And that’s because he put up the Bat-Signal, didn’t he, when he took over, to racists and homophobes and bigots of all kinds saying, my platform is now open for business for you guys.
Twitter was already chaotic, but it felt like uncharted territory, especially after he fired most of the trust and safety staff.
He let tens of thousands of [bad actors] back onto the platform who’d been banned by the previous regime at Twitter. He sort of opened the doors of Arkham Asylum and said this is now a safe space for racists and bigots. He sued us for that research.
But to answer your question, I feel good. If you hold up a mirror to a platform and the reflection is particularly ugly, his reaction was to respond by suing the mirror. Whereas, what he should have been doing is trying to fix his platform. But he didn’t. He gave us a sign as well that this is really about economics. You know what pissed him off? It wasn’t people calling him an asshole.
It was the money. He also told advertisers “go fuck yourself.”
Musk believes that by creating a safe space for racists, almost like an ever-rolling car crash of hate and anger, that it will create a spectacle that no one can turn their eyes away from. There’s almost an Imperial Roman theory behind it: Let the racists and the anti-racists joust for the entertainment of the public. The public will flood to it and we will make tons of money through advertising. Turns out his theory didn’t work because the advertisers were like, “we don’t really want to place our adverts on this shit.” I’m very proud that we proved that that model doesn’t work.
When the judge dismissed the suit, he said that it had been an attempt by X to “punish” people critical of the platform. Was the victory a validation of the work the CCDH does?
It’s a validation of our theory, of our understanding of what matters. From the beginning, we’ve said that our job is to create costs for the production and distribution of hate, as well as the lies that underpin hate and other forms of disinformation. It was a vindication of our theory of change, of what really matters.
We are in a make-or-break election year, and election disinformation is on the rise. What’s at stake if we don’t stop it—or worse, people keep buying into these lies?
It’s not just a crucial election in the US. We are in a year of elections. Over 2 billion people will vote in democratic elections around the world this year. And we’ve seen social media being weaponized by bad actors—whether it is nonstate actors, hate actors who are trying to affect the election process, or political parties that use the dynamics on social media platforms that give an algorithmic advantage to hate. It’s part of the reason why we’ve seen an increase in fascists in the European Parliament. There’s a message that’s been sent by the rest of the world that they are unable to stem the tide of hate that is spreading on social media and re-socializing our politics, our democracies, and our offline world at pace.
At times, the effects of it feel uncontainable.
This is the third election cycle in the US—2016, 2020, 2024—where social media will have played a really significant role in the election. The US still hasn’t gotten to grips with the fact that our democracy is becoming more and more precarious. It’s becoming more polarized, it’s becoming more hateful, it’s becoming less capable of consensus. With the 2020 election we saw that people no longer even accept that elections are real. It’s important that we start to put into place the transparency and the accountability that’s required for these platforms that control the information ecosystem that has such an enormous impact on our electoral cycles.
Why do you think it’s been so difficult to regulate social media and the harm it can cause?
Countries around the world are doing it. The UK legislated the Online Safety Act. The EU legislated the Digital Services Act. Canada is legislating through C-63, and I’m going to give evidence in Ottawa at some point on that. In the US, we have seen social media companies mount their most aggressive defenses anywhere in the world. They’re spending tens of millions of dollars on lobbying on the Hill, in supporting candidates, trying to stop the inevitable from happening.
Something’s gotta work, no?
Ironically, I think the thing that is most likely to eventually move lawmakers is parents, and parents in particular worrying about the impact of social media platforms on their kids’ mental health. And that’s the thing with social media, it affects everything. CCDH looks at the effects of social media disinformation on our ability to deal with the climate crisis, on sexual and reproductive rights, on public health and vaccines during the pandemic, on identity-based hate, and on kids. It’s the kids issue, really, that makes such an unimpeachable case for change.
My wife and I are having our first child soon, so I understand what you would do to defend your kids from harm. I think that when you’ve got platforms that are hurting our kids at such a scale, it is inevitable that change will come.
The optimist in me hopes you are right. The next generation should inherit a better world, but so much is working against that.
You know, one of the things that really scares me: we did some polling last year that showed, for the first time ever, that 14- to 17-year-olds—the first generation raised on algorithmically ordered short-form video platforms—are the most conspiracist age cohort in America.
Oh wow.
Older people are slightly more likely to believe conspiracy theories, and belief declines as you get younger. Then, with 14- to 17-year-olds, bam: the highest of all of them. We did that by testing across nine conspiracy theories: transphobic conspiracy theories, climate-denying conspiracy theories, racist conspiracy theories, antisemitic conspiracy theories, conspiracy theories about the deep state. And on every single one, young people were more likely to believe it. And it’s because we’ve created for them an information ecosystem that’s fundamentally chaotic.
And is only getting more chaotic.
Look, the way that tyrants retain power is not just by lying to people, it’s by making them unable to tell what truth is. And it creates apathy. Apathy is the tool of the tyrant. It was true with the Soviet Union. It was true with Afghanistan. It’s no secret that CCDH’s senior leadership is made up of people who come from places where we’ve seen this kind of destruction of the information ecosystem lead to tyrannical government. So, yeah, there is this awareness that things could get real bad real fast. And you’re right in saying that we worry about our kids, and we want to make our world better for them.
OK.
But here’s the thing. I think the kids themselves might be the generation who are being resocialized at pace by social media algorithms in a way that undermines the very democracy we need to fix the problem. We are consuming our kids for ad dollars. We are eating our own children for ad dollars that go to a small coterie of social media executives in San Francisco.
And as AI becomes more mainstream, it is very likely that these problems will only intensify.
Social media was the nuclear age of disinformation. In the old days, you had to put in some effort to spread disinformation. You know, print a pamphlet and send it to people by mail. Social media changed that. Mark Zuckerberg once did this chart showing that the more violative content is, the more engagement it gets. And on platforms which reward engagement with amplification, that means that disinformation gets more amplification than good information, right? So you’ve got a fundamentally asymmetric playing field. On those platforms that spread disinformation at scale, look at the economics. It’s zero cost to the sender for each additional person, for each additional message—zero cost. That is like the nuclear bomb. It is like the creation of almost unlimited energy from a few atoms of uranium-235. And then you have AI.
Shit.
So in the past, the only cost of disseminating these messages to a billion people was producing them. AI changes the economics of the production of disinformation and turns that cost to zero as well. So we’ve very, very quickly gone from the conventional age of disinformation to the nuclear age of disinformation with social media, and now to the thermonuclear age of disinformation, where AI and social media combine. You could flood the information ecosystem with AI-generated disinformation, which is given an algorithmic advantage on social media platforms. You create absolute chaos in the information system.
It is happening now.
And here’s the problem: that chaos doesn’t necessarily just lead to January 6. It also leads to apathy. It means that we go into a state of epistemic anxiety, where we not only don’t know what’s true, we don’t know how to work out what’s true.
And we don’t have the resources for that, either, because our governments, in part, have failed us.
The situation now where you can’t trust what you see or hear—that’s unprecedented.