British newspaper The Guardian has revealed the types of content allowed on Facebook after obtaining internal training manuals and spreadsheets from the social media giant. Moderators who screen content uploaded to the social network are given these manuals and related documents to aid their work. The materials give a clear picture of how Facebook moderates posts dealing with racism, violence, hate speech, terrorism, pornography, and self-harm, providing specific examples and scenarios of what content should be deleted from the platform. Beyond the topics already covered in the manuals, the company is also developing additional rules to combat certain types of questionable content that are becoming increasingly prevalent on its platform. One example is revenge porn, which an internal document already categorizes as a high-level concern.
The policies on content moderation center on the credibility, authenticity, and context of a post. According to one of the leaked documents, Facebook treats most violent language simply as a means of expressing frustration, on the assumption that people who utter such threats online would not act on them in real life. However, the company draws the line at threats that convey a certain degree of specificity or are targeted at particular individuals, such as heads of state. Videos of violent deaths are also allowed on the premise that they can help inform viewers or raise awareness about an issue. Likewise, videos of animal abuse and of non-sexual violence against children are permitted as long as they are not sadistic in nature.
Despite these strict and detailed rules, Facebook still struggles to police posts on its platform. For example, its moderators often have only ten seconds to decide whether a post, image, or video is acceptable, merely disturbing, or should be removed outright. Another concern is the inconsistency of the rules despite their detail, especially in cases involving nudity and sexual content. To further improve its content moderation, Facebook is developing software to quickly screen suspected questionable content, possibly similar to the tools it built to combat fake news. However, the company admits that the software is still in its early stages and that it will take time to perfect automated content moderation.