Since Robert F. Kennedy Jr. first announced his longshot presidential bid, his campaign has leaned into a variety of unorthodox digital strategies. He’s appeared on countless podcasts and collaborated with popular influencers to reach voters online. More recently, the Kennedy campaign has experimented with an AI chatbot that used an apparent loophole to get around OpenAI’s restrictions on political use. On Sunday, after inquiries from WIRED, the chatbot disappeared.
The loophole in question is an apparent result of the tight relationship between Microsoft and OpenAI. WIRED reporting found that rather than tapping into OpenAI directly, the Kennedy campaign chatbot used Microsoft’s Azure OpenAI Service through a third-party provider called LiveChatAI. Azure OpenAI Service lets customers access OpenAI models while adding extra security and compliance features. Because neither Microsoft nor LiveChatAI disallows campaigns from using their products, the chatbot was able to circumvent OpenAI’s ban. On Friday, Microsoft said that the bot was not in violation of its rules.
The Kennedy campaign’s chatbot appears to have been trained on material from the campaign’s website, which means it relayed information related to Kennedy’s amplification of conspiracy theories. When WIRED asked the chatbot on Thursday if the CIA was involved in the assassination of former president John F. Kennedy, it replied that “based on the context provided,” Robert F. Kennedy Jr. believes in the conspiracy theory. It also linked to press coverage of Kennedy discussing the theory. Kennedy has leaned into conspiracies surrounding the death of his uncle, including on Joe Rogan’s podcast and in an interview with Fox News host Sean Hannity.
When asked several times whether vaccines cause autism, the chatbot consistently affirmed that Kennedy believes there is a link between the two. “Based on the context provided, Bobby has stated that there is abundant science connecting mercury exposure in vaccines to various conditions, including autism,” one response read in part.
“As we guide our supporters through the anti-democratic morass of ballot access requirements, we built the chatbot to help answer our volunteers [sic] questions in natural language,” a Kennedy campaign spokesperson wrote in an emailed comment on Thursday. “We use it as an interactive FAQ for our supporters and have found it to be a terrific help in sourcing the information they need on the fly.”
When WIRED asked the chatbot how to register to vote, it linked to a page on Kennedy’s website detailing how someone could register for his “We the People Party” in the state of California; the reporters who gave the prompt live in New York and Alabama. A recent report from Proof News showed that five of the most popular large language models—including OpenAI’s GPT-4, Meta’s Llama 2, and Google’s Gemini—delivered inaccurate responses to questions related to voting more than half of the time.
“This is exactly the type of use of AI that could lead to the proliferation of disinformation and computational propaganda,” Sam Woolley, the director of propaganda research at the University of Texas at Austin’s Center for Media Engagement, told WIRED on Thursday.
Those concerns are part of the reason OpenAI said in January that it would ban people from using its technology to create chatbots that mimic political candidates or provide false information related to voting. The company also said it wouldn’t allow people to build applications for political campaigns or lobbying.
While the Kennedy chatbot page doesn’t disclose the underlying model powering it, the site’s source code connects that bot to LiveChatAI, a company that advertises its ability to provide GPT-4 and GPT-3.5-powered customer support chatbots to businesses. LiveChatAI’s website describes its bots as “harnessing the capabilities of ChatGPT.”
When asked which large language model powers the Kennedy campaign’s bot, LiveChatAI cofounder Emre Elbeyoglu said in an emailed statement on Thursday that the platform “utilizes a variety of technologies like Llama and Mistral” in addition to GPT-3.5 and GPT-4. “We are unable to confirm or deny the specifics of any client’s usage due to our commitment to client confidentiality,” Elbeyoglu said.
OpenAI spokesperson Niko Felix told WIRED on Thursday that the company didn’t “have any indication” that the Kennedy campaign chatbot was directly building on its services, but suggested that LiveChatAI might be using one of its models through Microsoft’s services. Since 2019, Microsoft has reportedly invested more than $13 billion into OpenAI. OpenAI’s GPT models have since been integrated into Microsoft’s Bing search engine and the company’s Office 365 Copilot.
On Friday, a Microsoft spokesperson confirmed that the Kennedy chatbot “leverages the capabilities of Microsoft Azure OpenAI Service.” Microsoft said that its customers were not bound by OpenAI’s terms of service, and that the Kennedy chatbot was not in violation of Microsoft’s policies.
“Our limited testing of this chatbot demonstrates its ability to generate answers that reflect its intended context, with appropriate caveats to help prevent misinformation,” the spokesperson said. “Where we find issues, we engage with customers to understand and guide them toward uses that are consistent with those principles, and in some scenarios, this could lead to us discontinuing a customer’s access to our technology.”
OpenAI did not immediately respond to a request for comment from WIRED on whether the bot violated its rules. Earlier this year, the company blocked the developer of Dean.bot, a chatbot built on OpenAI’s models that mimicked Democratic presidential candidate Dean Phillips and delivered answers to voter questions.
By late Sunday afternoon, the chatbot was no longer available. While the page remains accessible on the Kennedy campaign site, the embedded chatbot window now shows a red exclamation point icon and simply says “Chatbot not found.” WIRED reached out to Microsoft, OpenAI, LiveChatAI, and the Kennedy campaign for comment on the chatbot’s apparent removal, but did not receive an immediate response.
Given the propensity of chatbots to hallucinate and hiccup, their use in political contexts has been controversial. OpenAI is currently the only major large language model developer to explicitly prohibit the use of its technology in campaigning; Meta, Microsoft, Google, and Mistral all have terms of service, but none address politics directly. And given that a campaign can apparently access GPT-3.5 and GPT-4 through a third party without consequence, OpenAI’s prohibition carries hardly any force at all.
“OpenAI can say that it doesn’t allow for electoral use of its tools or campaigning use of its tools on one hand,” Woolley said. “But on the other hand, it’s also making these tools fairly freely available. Given the distributed nature of this technology one has to wonder how OpenAI will actually enforce its own policies.”