
Facebook Explains How Technology Is Used To Catch Bad Content

As part of the company’s F8 developer conference, Facebook has released a new blog post explaining how it uses technology to find and fight the spread of “bad stuff.” As is to be expected, the backbone of its approach is artificial intelligence (AI) and machine learning, with the company stating that AI is helping fight bad content on multiple fronts. Almost as importantly, AI is also growing in its ability to distinguish between different forms of bad content.

Whether content is nudity, graphic violence, or hate-related, Facebook uses AI to first identify what it considers to be undesirable. From there, the approach to dealing with the issue can vary depending on the type of content. For example, Facebook highlights that while aspects like nudity and graphic violence are usually fairly clear-cut, there are additional difficulties associated with the likes of hate speech. One such difficulty is the language in use. This has proven to be an issue because AI has more resources to draw from, and therefore learn from, for some languages than for others, with Facebook highlighting English as a prime example of where AI is far better at identifying content and responding accordingly. Facebook expects this issue to somewhat work itself out over time as more investment and resources become available for a wider range of languages. Another issue is AI’s ability to determine whether content is actually promoting hate or condemning it. This has proved problematic for a number of other sites that use AI as a means of policing content, due to its fundamental dependency on context. Here, Facebook notes, is where the other, more rudimentary element of its fight against bad content comes in: people. Once content has been flagged, if it is context-dependent, dedicated reviewers take a closer look to verify whether it is indeed bad content.
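To make that flow concrete, here is a minimal sketch of the kind of two-stage pipeline the post describes: an automated classifier actions clear-cut categories on its own, while context-dependent hate speech is routed to a human review queue. The category names, the Post structure, and the stub classifier are illustrative assumptions, not Facebook's actual system.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import List

class Category(Enum):
    NUDITY = auto()
    GRAPHIC_VIOLENCE = auto()
    HATE_SPEECH = auto()
    CLEAN = auto()

# Categories the post describes as relatively clear-cut, where an
# automated decision is plausible without human judgement.
CLEAR_CUT = {Category.NUDITY, Category.GRAPHIC_VIOLENCE}

@dataclass
class Post:
    post_id: str
    text: str
    language: str

def classify(post: Post) -> Category:
    # Stand-in for a learned classifier; a real system would run
    # one or more ML models here. This stub simply returns CLEAN.
    return Category.CLEAN

def moderate(post: Post, review_queue: List[Post]) -> str:
    category = classify(post)
    if category is Category.CLEAN:
        return "allow"
    if category in CLEAR_CUT:
        # Nudity and graphic violence: clear-cut enough to action
        # automatically, per the blog post's description.
        return "remove"
    # Hate speech hinges on context (promoting vs. condemning),
    # so it is escalated to dedicated human reviewers instead.
    review_queue.append(post)
    return "escalate"

queue: List[Post] = []
print(moderate(Post("p1", "example text", "en"), queue))  # -> "allow"
```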

In other words, Facebook is treating bad content as both a quantity and a quality problem, one that neither AI nor human reviewers can easily handle on their own. Instead, this two-pronged approach looks to overcome the sheer volume of content by ruling out anything that is clearly bad from the start. From there, content that is more debatable and requires a qualitative assessment is passed on to those who can make a meaningful and relevant decision. The announcement also pointed out that one of the prevailing, and still most useful, ways of finding and fighting bad content is the Facebook community itself. When members draw the company’s attention to specific content, they are not only finding the content but also providing an immediate qualitative judgement on it. That is something the company hopes will continue, even as its own in-house solutions, human or otherwise, keep improving.
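That division of labor can be expressed as a simple confidence-based triage rule: high-confidence detections are actioned automatically, ambiguous ones go to reviewers, and a community report acts as an immediate signal for human review. The function and thresholds below are hypothetical, offered purely to illustrate the idea rather than to describe Facebook's real values.

```python
def triage(model_score: float, user_reports: int,
           remove_threshold: float = 0.95,
           review_threshold: float = 0.60) -> str:
    # Thresholds are invented for illustration only.
    if model_score >= remove_threshold:
        # Clearly-defined bad content is ruled out automatically,
        # keeping the sheer volume away from human reviewers.
        return "auto-remove"
    if model_score >= review_threshold or user_reports > 0:
        # A community report is an immediate qualitative judgement,
        # so it forces human review even when the model is unsure.
        return "human-review"
    return "allow"

print(triage(0.98, 0))  # -> "auto-remove"
print(triage(0.30, 2))  # -> "human-review"
print(triage(0.30, 0))  # -> "allow"
```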