Zuckerberg’s Dodge Reveals Facebook’s Inadequacies on Extremist Content

Mark Zuckerberg, Chief Executive Officer of Facebook, meets President Tajani and the political group chairpersons in the European Parliament in Brussels. © European Union, 2018

Mark Zuckerberg’s appearance before the European Parliament last week was, by all accounts, a disaster. The agreed-upon format allowed the CEO of one of the world’s richest and largest tech companies to group the MEPs’ questions and answer only those he wanted, ignoring others or promising to “get back to you on that.” The EU’s leaders left frustrated and unsatisfied with Mr. Zuckerberg’s latest apologies, his recitation of talking points, and his non-answers to their queries about data privacy, terror content, Facebook’s monopolistic practices, and other pressing issues.

Even so, among Mr. Zuckerberg’s evasive answers, one sentence leaves the unmistakable impression of a company refusing to take responsibility for the darker aspects of the online behaviour it has fostered, and it came early in his presentation.

Never missing a chance to repeat the folksy anecdote that Facebook started in his college dorm room, Mr. Zuckerberg described the evolution of his company’s content monitoring from a user-led flagging system to its present use of sophisticated artificial intelligence software that, he proclaimed, “can flag 99 percent of the ISIS and al-Qaeda related content that we end up taking down before any person in our community flags that for us.”

That sounds impressive, but the key words in Mr. Zuckerberg’s carefully curated sentence were “that we end up taking down.” With that phrase, the metric was subtly shifted from what actually matters to what sounds like it matters.

Ninety-nine percent of everything eventually removed says precisely nothing about what is not removed, or about the total amount of terrorist content on Facebook. By analogy: if the International Olympic Committee claimed to ban 99 percent of the athletes it caught using performance-enhancing drugs, would the public accept that? Of course not. We would want to know how many cheats were escaping detection in the first place.

For instance, while the claim of taking down 99 percent of terrorist content sounds impressive, in reality Facebook is grading itself on a significant curve by referencing content from only two groups: ISIS and al-Qaeda. A Bloomberg report highlights that at least a dozen U.S.-designated terror groups, from Hamas to Boko Haram and FARC, are active on Facebook and using the platform to recruit, with tactics like posting images of grisly killings.

Facebook’s recent Transparency Report, released with one eye on its CEO’s upcoming parliamentary appearance, does give something away. It repeats the same, almost meaningless 99 percent figure, but nestled under the sub-heading ‘Data not available’ in the section on terrorist content is the admission that there is “relatively little [terrorist propaganda content] because we remove the majority before people see it.”

Research by the Counter Extremism Project (CEP) tells a different and more sobering story. CEP has turned up dozens of posts and grisly propaganda videos that have been online for extended periods. This month, CEP located a violent ISIS propaganda video that had been online for two years, along with other videos that had been online for days but had managed to rack up thousands of views and dozens of shares. Two of these videos featured mass executions.

CEP’s newest report, Spiders of the Caliphate: Mapping the Islamic State’s Global Support Network on Facebook, found active Facebook accounts for 1,000 ISIS supporters; six months after their discovery, Facebook had suspended fewer than half of them. The report also details how ISIS followers avoid detection by using Facebook Live to host meetings and by linking to banned material in comments, tricks that evade Facebook’s automated flagging tools. Even worse, the report shows how Facebook’s algorithmically powered “recommended friends” feature is helping connect disparate groups of ISIS supporters across the globe.

Dr. Hany Farid, CEP Senior Advisor and one of the world’s leading experts in digital forensics, has noted that Mr. Zuckerberg’s predictions for AI are overly optimistic and assume that advances will continue at their recent pace. In a recent Bloomberg interview, Facebook’s own Chief AI Scientist, Yann LeCun, admitted that AI was “not sophisticated enough” to handle such problems and is “only part [of the answer].”

It is not as if Facebook lacks the resources to do better. Its market capitalization is greater than the GDP of Belgium, the country where the European Parliament welcomed Mr. Zuckerberg to speak. As Mr. Zuckerberg himself said during the intimate gathering: “Now, as a big company, we also have the ability to employ tens of thousands of people to go review more of this content.”

Until Facebook finally tells the truth, it will be difficult for lawmakers and the public to hold it, and other tech companies, accountable for the disturbing and harmful content that proliferates online today. Policymakers in the EU should be applauded for attempting to wrest answers from Mr. Zuckerberg, who was reluctant even to appear in public. But it is well past time for them to insist on truth and transparency on behalf of the people they represent, who are rightfully concerned about how Facebook and other tech companies are negatively affecting their safety, privacy, and cherished institutions.
