The documents have been sent to the US Congress by a whistleblower.
Content promoting hate surged in India on topics ranging from elections to Covid to the 2019 protests against a new citizenship law, according to Facebook's internal documents reviewed by NDTV. The documents reveal that senior Facebook officials downplayed the threat from such inflammatory content, raising questions about the social media giant's internal gatekeeping.
A spokesperson for Meta, Facebook's newly rebranded parent company, denied the charges, calling them "an acute misrepresentation of our efforts to curb hate speech on our platform."
A document from July 2020, titled "Communal Conflict in India", for instance, showed how offline harm was often accompanied by online hate during key moments of crisis.
The report said one of the reasons was the "recent spike in inflammatory content and hate speech in India" and a "marked rise in violence against the Muslim Minority in India over the last 18 months".
It found that in December 2019 -- which saw an over 80 per cent increase in inflammatory content over the baseline around the time of the anti-CAA protests -- online content on Facebook and WhatsApp included "misinformation on protests, demonizing content (against Muslims) hate speech and inflammatory spikes".
The same document also showed how in March 2020, at the start of the first COVID lockdown, there was a 300-plus per cent spike in inflammatory content. Online content blamed Muslims for the spread of Covid.
These documents are part of disclosures made to the American Securities and Exchange Commission and provided to the American Congress in a redacted form by whistleblower Frances Haugen's legal counsel. The redacted versions received by the US Congress were reviewed by a consortium of news organizations, including NDTV.
Internal documents also reflect the human impact of this spike in hate, captured in interviews of Muslims and Hindus in India conducted by Facebook. The team found that Muslim users felt particularly threatened or upset, while Hindus did not express similar fears. A Muslim man from Mumbai said that he was scared for his life and that he was worried that all "Muslims are going to be attacked" due to the wide circulation of Islamophobic hashtags like #CoronaJihad.
When asked about hate on social media, a Meta spokesperson said the company has invested in technology to detect hate speech in languages such as Hindi and Bengali. "As a result, we've reduced the amount of hate speech that people see by half this year. Today, it's down to 0.03 percent. Hate speech against marginalized groups, including Muslims, is on the rise globally. So we are improving enforcement and are committed to updating our policies as hate speech evolves online," they said.
However, a document from January 2019, a year and a half before the July 2020 report, titled "Critical Countries: Review With Chris Cox", said that "there is comparatively low prevalence of problem content (hate speech etc) on Facebook" in India. Mr Cox is a senior Facebook executive who, at the time, was in charge of the company's applications, including Facebook, WhatsApp and Instagram. The document goes on to say that "surveys tell us that people generally feel safe" in the country and "experts tell us that the country is stable".
This supposed "clean chit" came just months before the high-pitched Lok Sabha polls and the deeply divisive Delhi election campaign.
However, by 2021, the tune seems to have changed.
Documents from the time show that Facebook's assessment of India had changed, in an apparently belated acknowledgement of spiralling online and offline hate. One document, related to the then-upcoming 2021 state elections in the country, said that "India has been assessed as severe for Societal violence... with recurring mobilization along identity fault lines, tied by both the press and civil society groups to social media discourse".
On this apparent change in stance, a spokesperson for Meta said their teams have developed "an industry-leading process of reviewing and prioritizing which countries have the highest risk of offline harm and violence every six months. We make these determinations in line with the UN Guiding Principles on Business and Human Rights and following a review of societal harms, how much Facebook's products impact these harms and critical events on the ground."