We have to force Facebook to be responsible for illegal content


It’s 15 years since Facebook was born and – like all adolescents – it’s getting into trouble. However, Facebook’s kind of trouble is a little more serious than most teenagers’.

For a start, lackadaisical content moderation on the platform was partly implicated in the ethnic cleansing of Rohingya Muslims in Myanmar. Most famously, the company has been used as a disinformation platform by a Russian government seeking to undermine democracy in the West.

Perhaps most irksome to European regulators, though, has been the discovery of just how fast and loose Facebook has been playing with who has access to user data. That particular failing triggered the Cambridge Analytica scandal, which kicked off the tsunami of controversy that has been washing over the company ever since.

Given that the company’s annus horribilis didn’t really begin until March 2018, it is little surprise that it has continued into February this year. Just last week, in the latest scandal, it emerged that Facebook had been paying teenagers $20 a month for deep access to their data through an app it cloaked as a non-Facebook product. In an era when children are always online but not always well protected or aware of the dangers, what the company did was profoundly irresponsible.

Facebook should be the adult in that relationship, but the Facebook Research app scraped information gathered by other apps from deep within the user’s phone, in ways whose full consequences no teenager could reasonably understand. Almost immediately after the story broke, the company was forced to pull the app from Apple’s iOS store for violating its terms of service.

Of course, acting after the fact is typical of Facebook’s approach. Once the harm has been done and the company is in the spotlight, it scrambles to tidy its room. (Having said that, the Facebook Research app is still – at the time of writing – available on Android.)

We at the Counter Extremism Project are particularly concerned about the spread of illegal content on the platform. The use of Facebook by online extremists is dangerous and ever-present. The platform, with all of its resources and influence, is in a position of real power but has not put the requisite thought into its responsibility. Instead, the company chooses to do the bare minimum in order to negate public criticism.

Facebook’s favourite self-promotional statistic in this regard is that its artificial intelligence detects 99% of ISIS- and al-Qaeda-related content before users flag it. That figure has been paraded before European and U.S. lawmakers by CEO Mark Zuckerberg, and most recently in Brussels by new global policy chief Nick Clegg.

But that figure is deceptive. It doesn’t tell us how much extremist content is on the platform; Facebook, a company that collects and analyses an abundance of its users’ information, says that cannot be estimated. Nor does it tell us how long such content stays up before it is removed. This matters because most views, shares and downloads occur in the first hour a piece of content is online.

Artificial intelligence should be deployed in tandem with human intervention, and the technology needs to be constantly reassessed and refined as terrorists find ways around it. Facebook also needs to work with outside actors and regulators so that the real scale of the threat can be assessed and corrective measures put in place.

Brussels has dedicated time and resources to addressing some of these issues, and there have been recent breakthroughs: the disinformation strategy signed by major tech firms last week, and Nick Clegg’s public appearance at which he pledged to do more to tackle images of self-harm and even offered to fly over a team of engineers to talk about what happened in Myanmar.

Today, the Commission announced some progress on online hate speech, measured against the less-than-stringent metrics laid down for platforms in a mutually agreed Code of Conduct. But even while presenting that as a success, justice commissioner Věra Jourová pointed to the legislative measures against terrorist content currently going through Parliament. Those measures remain an outstanding issue, stuck between three Parliamentary committees, and time is running out to pass the legislation before the end of this mandate.

It is crucial that Facebook and other platforms be required to do more to address extremist content, and a one-hour limit between upload and removal is particularly important.

The issue is reactive rather than proactive responsibility. What must be avoided is a situation in which Facebook finally takes the necessary action on terrorist content only after a major attack that costs multiple lives.

With more than 2 billion users and more than half a trillion dollars in market capitalisation, it is safe to say that Facebook has reached adulthood. And with maturity comes responsibility. Fifteen years into the world, it is time for Facebook to finally grow up.
