Facebook has rejected claims that its algorithms eliminate only a minuscule fraction of hateful content. To detect and remove such material, the firm uses automated systems alongside more traditional techniques.
According to the Wall Street Journal (WSJ), leaked documents indicate that the technology removes only a small fraction of the offending content. Facebook, for its part, maintains that it has recently made progress in reducing hate speech on its platform.
The WSJ obtained confidential internal Facebook documents describing a team of employees who reportedly found that the technology succeeded in removing only 1% of posts that violated the social media company’s own guidelines.
An internal review in March 2021 purportedly revealed that Facebook’s automatic takedown operations were removing posts that generated just 3 to 5 percent of overall views of hate speech.
Facebook is also accused of cutting the amount of time that human reviewers spend examining user reports of hate speech. According to the WSJ, this change, made two years ago, “made the firm more reliant on AI enforcement of its regulations and boosted the apparent performance of the technology in its public numbers.”
Facebook has strongly denied that it is failing to combat hate speech. Guy Rosen, Facebook’s vice-president of integrity, argued in a blog post that a different measure should be used to assess the company’s performance in this area.
Mr Rosen noted that the prevalence of hate speech on Facebook – the quantity of such content viewed on the site – has decreased as a percentage of all content viewed by users.
Hate speech now accounts for 0.05 percent of content viewed, or five views per 10,000, and has fallen by half in the previous nine months, he says. “Prevalence is how we measure our work internally, which is why we use the same statistic externally,” he explained. He added that Facebook’s algorithms detect more than 97 percent of removed content before it is reported by users who have seen it.
The stories are based largely on internal documents supplied to the newspaper by former Facebook employee Frances Haugen. They cover a range of content-moderation issues, from anti-vaccine misinformation to violent videos, as well as the experiences of younger users of Instagram, which is owned by Facebook. On Monday, Facebook’s vice-president of global affairs, Nick Clegg, the former UK deputy prime minister, added his voice to the company’s pushback against the reports.