India is less than a month away from a national election in the world’s largest democracy. But a new report from the nonprofit Global Witness and the Internet Freedom Foundation (IFF) finds that YouTube and Koo, a homegrown Twitter-style alternative that specializes in Indian languages, continue to allow hateful content that violates their policies in both Hindi and English, leaving them up even after they’re reported. This, experts say, could be a harbinger for how they may respond to a deluge of divisive election-related content.
“I think it shows they’re really ill equipped for the elections. They’re not able to deal with content that’s reported to them in a sort of transparent and responsible way,” Henry Peck, digital threats to democracy campaigner at Global Witness, told WIRED.
The report focused on misogynistic hate speech. YouTube’s policy prohibits content that “promotes violence or hatred against individuals or groups based on any of the following attributes, which indicate a protected group status under YouTube’s policy,” and includes gender. Prateek Waghre, executive director at the IFF, says that the goal of the research was to test how responsive the platform would be to user reports, one way that platforms identify violating content.
“This investigation was to understand how their own reporting mechanisms would respond if you were able to pick out instances that, based on some analysis, were in violation of their policies,” he says.
India is YouTube’s biggest market, with more than 460 million users. But despite its massive user base, the platform has historically struggled to address issues of hate speech in the country, where the Hindu-nationalist Bharatiya Janata Party, which currently holds power, has fomented fear against the country’s Muslim minority to drive its popularity. YouTube has platformed influencers who made their names spreading divisive anti-Muslim hate, and has autogenerated hateful music videos. A 2022 report from New York University’s Center for Business and Human Rights found a “spate of misogynistic rants” within right-wing Hindu nationalist content on YouTube. Some of the videos identified by Global Witness and the IFF on the platform were from smaller creators with only a few thousand views; others racked up millions.
One video with more than 760,000 views encourages violence against women, with the voiceover saying that “if a man hurts a woman in bed, and the woman stays quiet, it is an indication of her extreme love for the man.” Others identify the skin color or body types that indicate that a woman will be unfaithful.
“[The BJP] has demonstrated time and again a strategy of inflaming the majority Hindu population by creating fear and loathing of Muslims,” says Paul Barrett, deputy director of the Center for Business and Human Rights at New York University and a coauthor of the report. “This is primarily along religious lines, but also tends to scoop up and include very misogynistic rhetoric as well. That just seems to come along for the ride.”
Though YouTube says it reviews reported content on a consistent basis, it did not answer questions from WIRED about how long it takes to review videos that are reported, or whether the videos flagged by Global Witness had been reviewed by the time of publication.
“Our hate speech policies make clear that we prohibit content promoting violence or hatred against individuals or groups based on attributes like gender identity and expression,” Javier Hernandez, YouTube spokesperson, told WIRED. “We’re currently reviewing content provided by WIRED, and remain committed to removing any material that violates our Community Guidelines.”
Barrett also says that some of YouTube’s issues likely reflect a lack of company investment in non-Western countries and non-English languages.
“One can understand their desire to keep costs to a minimum,” he says. But choosing to cut costs via an outsourcer means that “moderation, in all likelihood, is going to be inadequate because the hiring, training, and supervision of the people doing the job is being pushed onto the shoulders of a vendor whose sole purpose is to keep costs low.”
Koo co-founder Mayank Bidawatka told WIRED that Koo uses a combination of freelance and staff moderators to police the platform in English, Hindi, and Portuguese.
And while Waghre says the platforms function within a very complicated information environment in India, “the responsibility is still on them to take action, especially if it’s defined by their own policies,” he says. “Especially when it comes to things around hateful conduct and hate speech, especially in a gender context.”
Koo has a much smaller reach—only about 3.2 million users—but has been a favorite of the BJP and its supporters. Posts flagged by Global Witness and the IFF on Koo promoted the Islamophobic “love jihad” conspiracy theory, which holds that Muslim men are trying to marry, seduce, or kidnap Hindu women in order to force a demographic change in Hindu-majority India—similar to the Great Replacement conspiracy theory in the US. Koo’s terms of service prohibit “hateful or discriminatory speech,” including “comments which encourage violence; are racially or ethnically objectionable; attempts to disparage anyone based on their nationality; sex/gender; sexual orientation; religious affiliation; political affiliation; any disability; or any disease they might be suffering from.”
In response to the report, Koo told Global Witness in an email that it screens content algorithmically first, then manually for sensitive topics. “Koos that do not target individuals and do not contain explicit profanity are typically not deleted,” but may be deprioritized, the email says.
“In line with our guidelines, action was taken against most of the Koo’s flagged by Global Witness,” Bidawatka told WIRED in response to a request for comment. “Out of the 23 Koos, we have deleted 10 which violated our guidelines and taken action against the remaining. Actions taken ranged from reduced visibility of the post, deletion to account level actions such as blacklisting for an account exhibiting repeated problematic behavior.”
With major elections around the corner, it’s likely that platforms’ systems will be placed under even more strain.
“If you’re going to run a global platform in a place like India with millions of people participating, you need to put up the guardrails to make that safe,” says Barrett. “And to make sure that your business is not doing things like disrupting the election process in the country where you’re operating.”
This article has been updated to reflect comments from Koo.