Following damaging whistleblower allegations, Facebook under the scanner again in the US
US Federal Trade Commission is looking into whether Facebook might have violated a 2019 settlement with the agency over privacy concerns, for which the company paid a record penalty of $5 billion

Facebook has been under severe scrutiny after a trove of internal company papers was included in disclosures made to the Securities and Exchange Commission and provided to Congress in redacted form by the legal counsel of Facebook whistleblower Frances Haugen. Following these revelations, the US Federal Trade Commission has begun looking into the disclosures.
Federal Trade Commission staffers are examining whether the Facebook research documents indicate that the company might have violated a 2019 settlement with the agency over privacy concerns, for which it paid a record penalty of $5 billion, the Wall Street Journal reported. The release of the papers has triggered calls to the FTC from lawmakers and children’s advocates to investigate whether Facebook engaged in deceptive or misleading conduct.
Recent reports by The Washington Post have made clear that the social media giant tracked real-world harms aggravated by its platforms, ignored warnings from its employees about the risks of its design decisions, and exposed vulnerable communities around the world to largely hate-driven content.
Facebook’s internal research found evidence that the company’s algorithms promote discord and that its Instagram app is harmful to teenage girls, who make up a sizable percentage of its users.
Documents show that Facebook’s problems with hate speech and misinformation are much worse in developing countries. Research papers highlighted that users in India experience Facebook without the critical guardrails common in English-speaking countries.
Haugen referred to Facebook founder Mark Zuckerberg’s public statements at least 20 times in her SEC complaints, asserting that the CEO’s ‘unique degree of control’ over Facebook forces him to bear ultimate responsibility for a litany of societal harms caused by the company’s ‘relentless pursuit of growth’.
These documents confirm previous reporting that employees had been sounding alarm bells for years over the social media company’s practice of favoring right-wing publishers, and that they were dismayed the company did not do more to control misinformation and divisiveness on its platform.
In an attempt to wash his hands of the hatred on his platform, Zuckerberg underscored that the company could not be held solely responsible for political divisions in the country, nor for the state of the media business. “Polarisation started rising in the US before I was born and Facebook can’t change the underlying media dynamics,” he said.
Zuckerberg testified last year before Congress that the company removes 94% of the hate speech it finds before a person reports it. But in internal documents, researchers estimated that the company was removing less than 5% of all hate speech on Facebook.
Zuckerberg has said the company does not design its products to persuade people to spend more time on them. But dozens of documents suggest otherwise.
From 2017 onwards, Facebook’s algorithm gave emoji reactions such as ‘angry’ five times the weightage of ‘likes’ and boosted the posts that drew them in users’ feeds. Facebook’s reasoning was that posts which prompted a lot of reactions tended to keep users engaged on the platform, and keeping people engaged was key to Facebook’s business.
The social media company doesn’t publish the values its algorithm puts on different kinds of engagement.
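To make the arithmetic concrete, here is a minimal, hypothetical sketch of a reaction-weighted engagement score. Only the reported five-to-one ratio between an ‘angry’ reaction and a plain ‘like’ comes from the documents; the function name, the weights for other reactions and the example counts are illustrative assumptions, not Facebook’s actual formula.

# Hypothetical illustration of reaction-weighted ranking as described in the
# reporting; the real weights and formula are not public.
REACTION_WEIGHTS = {
    "like": 1.0,
    "love": 5.0,   # assumption: other emoji reactions treated like 'angry'
    "haha": 5.0,
    "wow": 5.0,
    "sad": 5.0,
    "angry": 5.0,  # the documents describe a 5x weight relative to 'like'
}

def engagement_score(reaction_counts: dict) -> float:
    """Sum each reaction count multiplied by its weight."""
    return sum(REACTION_WEIGHTS.get(name, 0.0) * count
               for name, count in reaction_counts.items())

# Under this scheme, a post with 10 'angry' reactions outranks one with 40 likes:
print(engagement_score({"angry": 10}))  # 50.0
print(engagement_score({"like": 40}))   # 40.0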
Facebook’s own researchers were quick to suspect a critical flaw. Favoring “controversial” posts — including those that make users angry — could open “the door to more spam/abuse/clickbait inadvertently,” stated a staffer in the documents.
The company’s data scientists confirmed that the “angry” reaction, along with “wow” and “haha,” occurred more frequently on “toxic” content and misinformation.
In several cases, the documents show Facebook employees on its “integrity” teams raising flags about the human costs of specific elements of the ranking system — warnings that executives sometimes heeded and other times seemingly brushed aside.
According to the documents, Facebook’s levers rely on signals most users wouldn’t notice, such as how many long comments a post generates, or whether a video is live or recorded, or whether comments were made in plain text or with cartoon avatars. This shapes what shows up on each user’s newsfeed.
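Purely as an illustration of what such “levers” could look like, the sketch below combines hidden signals of the kind described above into a single ranking score; the signal names, weights and structure are assumptions for the sake of the example, not Facebook’s actual system.

# Hypothetical sketch: hidden post signals feeding a weighted feed-ranking score.
from dataclasses import dataclass

@dataclass
class PostSignals:
    long_comment_count: int    # comments over some length threshold
    is_live_video: bool        # live streams vs. recorded clips
    plain_text_comments: int
    sticker_comments: int      # e.g. cartoon-avatar replies

def rank_score(s: PostSignals) -> float:
    score = 0.0
    score += 2.0 * s.long_comment_count        # long comments weighted higher
    score += 10.0 if s.is_live_video else 0.0  # live video gets a boost
    score += 0.5 * s.plain_text_comments
    score += 1.5 * s.sticker_comments
    return score

# Example: 3 long comments, live video, 5 text comments, 2 sticker comments
print(rank_score(PostSignals(3, True, 5, 2)))  # 21.5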
In 2018, the company made changes that would further boost some of the most extreme ideological websites over moderate and neutral news sources. That means that for three years Facebook systematically amplified some of the worst content on its platform, making it more prominent in users’ feeds and spreading it to a much wider audience.
In the run-up to the 2020 US presidential election, the social media giant stepped up efforts to police content that promoted violence, misinformation and hate speech. But after November 6, Facebook rolled back many of the measures aimed at safeguarding US users.
It was recently revealed that inflammatory content on Facebook’s products in India spiked 300% above previous levels in the months following December 2019, when protests against the Citizenship Amendment Act swept through the country, and that such content was linked to the deadly carnage that rocked Delhi in February 2020, leaving 53 dead.
According to The Washington Post report, rumours and calls to violence spread particularly on Facebook’s WhatsApp messaging service in late February 2020. Both Hindu and Muslim communities said they saw a large amount of content encouraging conflict, hatred and violence on Facebook and WhatsApp. Facebook has refused to comment on the report.