Internal Facebook document says platform should have informed users when politicians shared false information

An internal document highlighted that "high-risk misinformation" shared by Indian politicians was often "out-of-context video stirring up anti-Pakistan and anti-Muslim sentiment".


NH Web Desk

Several internal reports since 2019 have noted an increase in "anti-minority" and "anti-Muslim" narratives on Facebook since the last Lok Sabha election campaign, The Indian Express reported. Such hate speech often includes threats of violence, fake news about "Muslims engaging in communal violence" and "Covid-related misinformation involving minority groups", according to the English daily.

According to a July 2020 report, this online hate speech could translate into consequences on the ground during Assembly elections. The report predicted that its effects could be seen in the West Bengal elections, which took place earlier this year.

All of these reports were disclosed by whistleblower Frances Haugen, a former Facebook employee, to the United States Securities and Exchange Commission, and were also provided to the US Congress.

In another internal report prepared before the 2021 Assam Assembly elections, Assam CM Himanta Biswa Sarma was flagged for spreading inflammatory rumours about "Muslims pursuing biological attacks against Assamese people by using chemical fertilizers to produce liver, kidney and heart disease in Assamese." However, Sarma denied knowledge of any such thing and told The Indian Express that he had not received any communication that Facebook had flagged his posts.

According to "Communal Conflict in India", another internal report, inflammatory content on Facebook in regional languages, especially Bengali and Hindi, as well as English, increased between December 2019 and March 2020, when the anti-CAA protests and the Covid lockdown peaked.

The documents disclosed by Haugen show that while some employees at Facebook were flagging these concerns, others were designing algorithms that amplified the very content that had been flagged. In July 2020, an internal staff group developed "inflammatory classifiers" to target such content more effectively, and built "country specific banks for inflammatory content and harmful misinformation relevant to At Risk Countries (ARCs)", a category in which India was often placed for being at risk of societal violence. Even so, staffers recommended only a "stronger time-bound demotion" of this hate content.

"India Harmful Networks", another report, said that groups allegedly affiliated with the TMC were posting "often inflammatory" but "usually non-violating" content, while groups affiliated with the RSS and BJP carried inflammatory content about "love jihad" and Islamophobia.

A spokesperson for Meta told The Indian Express that its teams closely tracked possible risks around the Assam elections and put in place emergency measures to keep inflammatory content from reaching people's newsfeeds. He added that the company not only removed accounts that violated Facebook's policies, but also reduced their engagement. He also said that hate speech had been reduced to just 0.03% through investment in technology that identifies hate speech in Hindi and Bengali.

"Effects of Politician Shared Misinformation", an internal document, highlighted that "high-risk misinformation" shared by Indian politicians was often "out-of-context video stirring up anti-Pakistan and anti-Muslim sentiment". The document said Facebook should have taken responsibility for informing users that their leaders were sharing false information.
