
Facebook Admits Its Content Reviewers Are Inconsistent


Facebook has admitted that the hate speech enforcement systems it implemented over the course of this year remain inconsistent, ProPublica reported Thursday, citing correspondence with the company. Following a crowd-sourced investigation into how the world’s largest social media platform handles hate speech, the publication provided the Internet giant with 49 posts that its content reviewers had declined to classify as hateful or sexist speech after they were reported, even though the people who flagged them were convinced the posts violated Facebook’s community guidelines. After a second review of the highlighted items, the firm said its reviewers had been wrong not to take down 22 of them. In six more cases, Facebook said the calls were incorrect only because users hadn’t flagged the offending content properly, leading reviewers to misjudge it, or because the reported post had been removed or edited by its author before it reached the review team.

Facebook defended 19 of the 49 decisions not to remove the flagged content and declined to comment on two other cases, citing a lack of information. The company’s Vice President Justin Osofsky apologized for the mistakes and reiterated the social media giant’s previously announced plans to double the size of its safety and security team to 20,000 employees by the end of 2018. According to Mr. Osofsky, Facebook removes some 66,000 posts reported as hate speech every week, but the company is trying to tread a line between taking down clearly hateful content and engaging in outright censorship. Because it’s striving to avoid the latter, its hate speech enforcement mechanisms still aren’t maximally effective, the executive suggested.

Hate speech isn’t the only negative phenomenon facilitated by social networks that Facebook is having trouble containing; the company recently admitted that its efforts to combat the dissemination of factually inaccurate and misleading news stories also aren’t as effective as they should be. Regardless, the firm remains adamant that it’s making progress on both fronts and will continue improving its mechanisms for combating illegal content and so-called “fake news” going forward.