
This article was published on October 18, 2021

Facebook wants you to believe its AI is working against hate speech

Everything is fine



In the middle of the last decade, Facebook decided it needed to build AI to fight hate speech.

While the technology did work in some cases, we’ve also seen glaring failures. After the Christchurch shooting, for example, Facebook wasn’t able to quickly remove the video.

Over the weekend, the Wall Street Journal published a new report indicating Facebook’s AI can’t consistently identify first-person shooting videos and racist rants. In one bizarre incident, the algorithm couldn’t even tell cockfighting videos apart from car crashes.

The report noted that the firm’s AI detects only a small fraction of hate speech posts on the platform, and removes even fewer. According to documents recently leaked by former Facebook employee Frances Haugen, the company takes action on just 3-5% of hate speech and 0.6% of violence and incitement content.


According to a senior engineer who spoke to the WSJ, the company doesn’t have, and “possibly never will have, a model that captures even a majority of integrity harms, particularly in sensitive areas.”

Facebook has hit back at these claims in a blog post by its VP of integrity, Guy Rosen. The post claims that the company’s AI has helped reduce the prevalence of hate speech on the platform by 50%.

Prevalence is the metric the firm uses to track the spread of hate speech on the platform. The current rate is 0.05%, meaning that out of every 10,000 content views, about five land on a hate speech post. Given Facebook’s massive scale, however, that still adds up to a lot of people seeing these posts.
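To make the arithmetic concrete, here’s a minimal sketch of how such a prevalence figure works (the function name and view counts are illustrative assumptions, not Facebook’s actual methodology or data):

    # Minimal sketch of the prevalence arithmetic described above.
    # The function name and view counts are illustrative assumptions,
    # not Facebook's actual methodology or data.
    def prevalence(hate_views: int, total_views: int) -> float:
        """Share of content views that land on hate speech, as a percentage."""
        return hate_views / total_views * 100

    print(prevalence(5, 10_000))  # 0.05 (%), i.e. 5 views per 10,000

    # At a hypothetical 1 billion daily views, 0.05% would still mean
    # roughly 500,000 views of hate speech posts per day.
    print(int(1_000_000_000 * 0.05 / 100))  # 500000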

Rosen said that data from the leaked documents is being used to wrongly paint Facebook’s AI as inadequate at removing hate speech:

Data pulled from leaked documents is being used to create a narrative that the technology we use to fight hate speech is inadequate and that we deliberately misrepresent our progress. This is not true. We don’t want to see hate on our platform, nor do our users or advertisers, and we are transparent about our work to remove it.

Facebook spokesperson Andy Stone told the WSJ that AI is just one of the ways the company tackles hate speech. It also lowers the visibility of problematic posts so that fewer people see them.

While the company claims its AI has improved by leaps and bounds, examples like its algorithms labeling Black men as “primates” keep coming up.

Despite these blemishes, Facebook remains bullish on using AI to fight hate speech. That means the company needs to buckle down and make its algorithms more inclusive and effective.
