Facebook has issued a response to the shocking violence in Washington overnight with a set of new moderation policies. The company has announced that any content praising the incident or inciting a repeat will be removed in the coming days.
Facebook has long faced criticism for its moderation efforts. Seemingly, the platform prioritized free speech over moderating misinformation and hate speech, but recently the company has taken more steps to tackle the issue.
Despite this, staff involved in moderating content for the company say the conditions they are forced to work in are unsustainable and dangerous.
The company has published details of its moderation efforts during the 2020 Presidential election. These show a significant improvement over the results the company has previously produced. However, critics will continue to claim this action is but a drop in the ocean compared to what is required.
Facebook announces moderation policy changes
The attack on the Capitol seems to have sparked Facebook into action, however. The shocking images prompted a series of changes to Facebook's moderation policies.
As mentioned, these include removing any content that praises the incident or incites a repeat. Facebook has also removed the video of Donald Trump posted following the event. The company said it “contributed to, rather than diminished, the risk of ongoing violence”.
Facebook has also updated its election misinformation labels with a new message, as reported by Android Central. It will read: “Joe Biden has been elected President with results that were certified by all 50 states. The US has laws, procedures, and established institutions to ensure the peaceful transfer of power after an election”.
Facebook has also decided to keep active a range of policies it introduced in the run-up to the election. These include requiring group admins to review posts before they are published.
Facebook will also automatically disable comments on posts in Groups that start to show a high rate of hate speech. Finally, the company will continue to use AI to demote content that likely violates its policies.
Ultimately, critics will level two main questions at Facebook regarding this action. The first will focus on whether the company acted quickly enough to remove damaging content, while the second will question whether the actions taken have had a positive effect.
Many argue that these sorts of reactive measures fail to have the desired impact because misinformation and hate speech have already spread. They argue that companies need to take more proactive measures, which are not always simple to implement. Overall, it is great to see Facebook taking its responsibilities more seriously, but clearly more needs to be done.