Facebook is taking down misinformation that could lead to violence

Despite Facebook’s often-stated intention of being a neutral and open platform, the social media giant has announced that it will begin removing posts containing misinformation intended to incite violence.

In a statement to CNBC, a Facebook spokesperson said: “There are certain forms of misinformation that have contributed to physical harm, and we are making a policy change which will enable us to take that type of content down.”

Technically, the policy will be rolled out “during the coming months”, although it has already been applied in Sri Lanka, where the recent spread of false information targeting the country’s Muslim minority has led to multiple instances of mob violence.

Delicate situation

The new policy will target Facebook posts that contain text and/or images that are deemed to be deliberately inflammatory – specifically, posts that have the intent of “contributing to or exacerbating violence or physical harm”.

So who makes the call on what’s acceptable speech and what’s not? Along with its existing image-recognition technologies, Facebook says it will work with local and international organizations to identify the posts in question and verify both their intent and veracity.

While these external parties are yet to be named, their impartiality will clearly play a significant part in the success of the policy overall, just as Facebook itself will need to hold to the same standard of neutrality it promises.

Where to draw the line

Employing such policies treads a difficult ethical line: between the absolute freedom of expression that a neutral platform should (in theory) allow, and the equal opportunity for expression that comes from keeping the community as a whole safe.

In a recent discussion with Recode, Facebook CEO Mark Zuckerberg said the company would stop short of removing false information that didn’t deliberately incite violence or physical harm, instead simply making such posts less prominent.

As an example, Zuckerberg raised the issue of Holocaust denial – a stance which he has said he personally finds repugnant, yet one that would be allowed on the site if it didn’t explicitly incite violence (albeit in a significantly de-prioritized state).

In a clarifying statement provided to Recode, Zuckerberg reiterated that he found the topic “deeply offensive” and that he “absolutely didn’t intend to defend the intent of people who deny [the Holocaust]”.

“Our goal with fake news is not to prevent anyone from saying something untrue — but to stop fake news and misinformation spreading across our services.”
