Extremist groups have begun to experiment with artificial intelligence, particularly generative AI, to create a flood of new propaganda. Experts now fear that these groups' growing use of generative AI tools will undo the work Big Tech has done in recent years to keep their content off the internet.
“Our biggest concern is that if terrorists start using gen AI to manipulate imagery at scale, this could well destroy hash-sharing as a solution,” Adam Hadley, the executive director of Tech Against Terrorism, tells WIRED. “This is a massive risk.”
For years, Big Tech platforms have worked hard to build shared databases of digital fingerprints, or hashes, of known violent extremist content. These hashing databases allow platforms to quickly and automatically remove such content from the internet. But according to Hadley, his colleagues are now picking up around 5,000 examples of AI-generated content each week. This includes images shared in recent weeks by groups linked to Hezbollah and Hamas that appear designed to influence the narrative around the Israel-Hamas war.
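To make the mechanism concrete, here is a minimal sketch of hash-database matching in Python. It uses a simple "average hash" as a stand-in for the far more robust perceptual hashes that production systems rely on; the file name, threshold, and single-entry database are hypothetical.

```python
# Minimal sketch of hash-sharing, assuming a simple "average hash"
# perceptual fingerprint. Production systems use more robust hashes;
# this only illustrates the general matching mechanism.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Fingerprint an image: shrink, grayscale, threshold on the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits  # a 64-bit fingerprint when size=8

def hamming(a: int, b: int) -> int:
    """Count the bits on which two fingerprints differ."""
    return bin(a ^ b).count("1")

# Hypothetical shared database of fingerprints of known extremist images.
known_hashes = {average_hash("known_propaganda.jpg")}

def matches_database(path: str, threshold: int = 5) -> bool:
    """Flag an upload that lands within a few bits of any known hash."""
    h = average_hash(path)
    return any(hamming(h, k) <= threshold for k in known_hashes)
```

Because the fingerprint tolerates small differences, a re-encoded or lightly compressed copy still matches; that tolerance is what the hash-sharing approach depends on.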
“Give it six months or so, the possibility that [they] are manipulating imagery to break hashing is really concerning,” Hadley says. “The tech sector has done so well to build automated technology, terrorists could well start using gen AI to evade what’s already been done.”
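A toy continuation of the sketch above shows the failure mode Hadley describes: a heavier transformation, here approximated with a small rotation and crop standing in for generative re-rendering, can push an image's fingerprint past the matching threshold, so the shared database no longer recognizes it. The file name is again hypothetical.

```python
# Continuing the sketch: a heavier transformation (a stand-in for
# generative manipulation) typically moves the fingerprint well past
# a small Hamming-distance threshold, breaking the database match.
from PIL import Image

def average_hash_img(img: Image.Image, size: int = 8) -> int:
    """Same average-hash fingerprint as above, over an in-memory image."""
    img = img.convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

original = Image.open("known_propaganda.jpg")  # hypothetical file
w, h = original.size
variant = original.rotate(5).crop((w // 10, h // 10, w, h))

distance = bin(average_hash_img(original) ^ average_hash_img(variant)).count("1")
print(distance)  # often far above a 5-bit threshold: the match breaks
```

Done once, a moderator can catch the variant by eye; done at generative scale, every variant arrives as a new, unhashed image.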
Other examples that researchers at Tech Against Terrorism have uncovered in recent months have included a neo-Nazi messaging channel sharing AI-generated imagery created using racist and antisemitic prompts pasted into an app available on the Google Play store; far-right figures producing a “guide to memetic warfare” advising others on how to use AI-generated image tools to create extremist memes; the Islamic State publishing a tech support guide on how to securely use generative AI tools; a pro-IS user of an archiving service claiming to have used an AI-based automatic speech recognition (ASR) system to transcribe Arabic language IS propaganda; and a pro-al-Qaeda outlet publishing several posters with images highly likely to have been created using a generative AI platform.
Beyond detailing the threat posed by generative AI tools that can tweak images, Tech Against Terrorism has published a new report citing other ways in which gen AI tools can be used to help extremist groups. These include the use of autotranslation tools that can quickly and easily convert propaganda into multiple languages, as well as the ability to create personalized messages at scale to facilitate recruitment efforts online. But Hadley believes that AI also provides an opportunity to get ahead of extremist groups and preempt their use of the technology.
“We’re going to partner with Microsoft to figure out if there are ways using our archive of material to create a sort of gen AI detection system in order to counter the emerging threat that gen AI will be used for terrorist content at scale,” Hadley says. “We’re confident that gen AI can be used to defend against hostile uses of gen AI.”
The partnership was announced today, on the eve of the Christchurch Call Leaders’ Summit in Paris. The Christchurch Call is a movement designed to eradicate terrorist and extremist content from the internet.
“The use of digital platforms to spread violent extremist content is an urgent issue with real-world consequences,” Brad Smith, vice chair and president at Microsoft, said in a statement. “By combining Tech Against Terrorism’s capabilities with AI, we hope to help create a safer world both online and off.”
While companies like Microsoft, Google, and Facebook all have their own AI research divisions and are likely already deploying their own resources to combat this issue, the new initiative will ultimately aid those companies that can’t counter these efforts on their own.
“This will be particularly important for smaller platforms that don’t have their own AI research centers,” Hadley says. “Even now, with the hashing databases, smaller platforms can just become overwhelmed by this content.”
The threat of AI-generated content is not limited to extremist groups. Last month, the Internet Watch Foundation, a UK-based nonprofit that works to eradicate child exploitation content from the internet, published a report detailing the growing presence of child sexual abuse material (CSAM) created by AI tools on the dark web.
The researchers found more than 20,000 AI-generated images posted to one dark web CSAM forum over the course of a single month, 11,108 of which IWF researchers judged most likely to be criminal. As the researchers wrote in their report, “These AI images can be so convincing that they are indistinguishable from real images.”