Staffers alerted Facebook to misinformation, polarising content in India, but company ignored warnings

Several Facebook staffers raised red flags in internal meetings between 2018 and 2020, pointing to a constant barrage of polarising nationalistic content on FB in India

NH Web Desk

Several Facebook staffers had explicitly raised red flags in internal meetings between 2018 and 2020, pointing to the denigration of minority communities, misinformation, and a constant barrage of polarising nationalistic content on the platform in India.

Despite these explicit alerts by the staff mandated to undertake oversight functions, an internal review meeting in 2019 with Chris Cox, then Vice President, Facebook, found “comparatively low prevalence of problem content (hate speech, etc)” on the platform. Two reports flagging hate speech and “problem content” were presented in January-February 2019, months before the Lok Sabha elections.

A third report, as late as August 2020, revealed that the platform’s AI (artificial intelligence) tools were unable to “identify vernacular languages” and had, therefore, failed to identify hate speech or problematic content.

The first report, “Adversarial Harmful Networks: India Case Study”, noted that as much as 40 per cent of sampled top VPV (viewport view) postings in West Bengal were either fake or inauthentic.

VPV, or viewport views, is a Facebook metric that measures how often content is actually viewed by users.

The researchers wrote in their report that private Facebook groups made up of like-minded users generated more divisive content, and that inflammatory content primarily targeted the already vulnerable Muslim community.


These details were revealed in the documents which are part of the disclosures made to the United States Securities and Exchange Commission (SEC) and provided to the US Congress in redacted form by the legal counsel of former Facebook employee and whistleblower Frances Haugen.

The internal Facebook review meetings with Cox on hate being spread on the platform in India took place a month before the Election Commission of India announced the seven-phase schedule for the Lok Sabha elections on April 11, 2019.

Cox, who had quit the company in March that year and returned in June 2020 as the Chief Product Officer, pointed out that the “big problems in sub-regions may be lost at the country level”.

The second, an internal report authored by an employee in February 2019, is based on the findings of a test account. A test account is a dummy user with no friends, created by a Facebook employee to better understand the impact of various features of the platform.

This report notes that in just three weeks, the test user’s news feed had “become a near constant barrage of polarizing nationalistic content, misinformation, and violence and gore”.

The test user followed only the content recommended by the platform’s algorithm. The account was created on February 4; it did not ‘add’ any friends, and its news feed was initially “pretty empty”.

“The quality of this content is… not ideal,” the report by the employee said, adding that the algorithm often suggested “a bunch of softcore porn” to the user.

Over the next two weeks, and especially following the February 14 Pulwama terror attack, the algorithm started suggesting groups and pages centred mostly on politics and military content. The employee running the test account wrote that they had “seen more images of dead people in the past 3 weeks than I have seen in my entire life total”.

To tackle inflammatory content, the researchers recommended that Facebook invest more resources in building out the underlying technical systems that detect and enforce on such content in India, the way human reviewers might.
