Are Internet companies complicit in promoting hateful and harmful content?

File photo: Screen grab from an ISIS propaganda video.


Child pornographers, human traffickers, cyber-criminals, terrorists and extremists have weaponized the Internet. Technology companies have been slow to recognize, admit to, and respond to the illegal activities on their platforms that have resulted in devastating consequences.

Over the past few weeks, European institutions and national governments have been calling on these companies to do more to rein in these abuses. Most recently, UK Prime Minister Theresa May told the United Nations General Assembly that technology companies need to go “further and faster” in developing technological solutions that automatically reduce the length of time terror-related material remains online and, eventually, prevent it from appearing at all.

In 2016, Facebook, Google, Microsoft, and Twitter announced that they would work together to develop new technology to quickly identify and remove extremism-related content from their platforms. Despite some progress, serious problems remain. For example, the ISIS video “The Religion of Kufr is One,” which shows multiple executions by firearm and a hanging – clear violations of YouTube’s terms of service – has been uploaded to and removed from YouTube at least six times since May 31, 2016. Analysts at the Counter Extremism Project (CEP) most recently found the video on September 11, 2017, by which point it had already accumulated 42 views. Technology companies must do better. I advocate a multi-pronged approach to reining in online abuses.

First, we need a fast and effective method to remove content. Once content has been identified, reported, and determined to be illegal or in violation of terms of service, it should be removed immediately (Prime Minister Theresa May is calling for a maximum of two hours from notification to take-down). Past and future uploads of the same content should also be eliminated. While the initial takedown may require human reporting, expunging the content from past and future uploads can be fully automated. Robust hashing technology extracts a distinct digital signature from digital content, which can then be used to automatically, efficiently, and accurately identify that same content wherever it reappears. This technology is well understood, and a version of it – PhotoDNA – has been deployed for nearly a decade in the fight against the global distribution of child pornography. I worked with CEP to develop the next generation of robust hashing technology – eGlyph – which extends the reach of PhotoDNA from images to video and audio recordings. There is no technological, legal, or policy hurdle to the broad deployment of this type of robust hashing technology.
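To make the idea of a robust hash concrete, here is a minimal sketch of one simple perceptual-hashing scheme (a "difference hash") in Python. This is an illustration only, using a toy grayscale image represented as a list of rows; it is not the PhotoDNA or eGlyph algorithm, whose details are not public. Each bit of the signature records whether a pixel is brighter than its right-hand neighbour, so re-encoding or mild noise leaves most bits unchanged, and matching reduces to counting differing bits.

```python
# Illustrative sketch of robust (perceptual) hashing -- NOT PhotoDNA or eGlyph.
# Each bit records whether a pixel is brighter than its right-hand neighbour,
# so small changes (re-encoding, slight brightness shifts) leave most bits intact.

def dhash(pixels):
    """pixels: 2-D list of grayscale values. Returns an integer whose bits
    encode left-vs-right brightness comparisons within each row."""
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits -- a small distance means 'same' content."""
    return bin(a ^ b).count("1")

original = [[10, 20, 30], [30, 20, 10]]
reencoded = [[11, 21, 29], [31, 19, 11]]   # tiny per-pixel changes
unrelated = [[90, 10, 80], [5, 95, 5]]

print(hamming(dhash(original), dhash(reencoded)))  # 0: matches despite the noise
print(hamming(dhash(original), dhash(unrelated)))  # larger distance: different content
```

A cryptographic hash (e.g. SHA-256) would fail here: changing a single pixel changes the entire digest, which is exactly why content matching uses robust hashes instead.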

Second, we need to cooperate. The deployment of robust hashing technology requires collaboration between companies and non-governmental agencies to share the signatures of extremist content. This type of shared database will cast a wide net and prevent extremist content from simply migrating from one platform to another.
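As a sketch of how such a shared database might operate (the signatures and the matching threshold below are invented for illustration), each participating platform would compare the signature of every upload against the pooled set of known hashes:

```python
# Hypothetical shared-signature lookup: platforms pool hashes of known
# extremist content; an upload is flagged if its signature is within a few
# bits of any entry. Values and threshold are illustrative assumptions.

KNOWN_HASHES = {0xA3F1, 0x5B22}   # illustrative 16-bit signatures

def hamming(a, b):
    """Number of differing bits between two signatures."""
    return bin(a ^ b).count("1")

def is_known(upload_hash, threshold=2):
    """Flag an upload whose hash is within `threshold` bits of a shared entry."""
    return any(hamming(upload_hash, h) <= threshold for h in KNOWN_HASHES)

print(is_known(0xA3F1))   # True: exact match
print(is_known(0xA3F3))   # True: near-duplicate (1 bit changed)
print(is_known(0x0000))   # False: unrelated content
```

Because the threshold tolerates small differences, content that has been slightly re-encoded on one platform still matches the signature contributed by another, which is what prevents the migration the paragraph describes.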

Third, we must innovate. It is essential to continue developing new technologies to find the most heinous and harmful content and prevent it from being uploaded and shared online. This should include the development of machine-learning-based algorithms that can accurately, automatically, and efficiently flag and remove such content.

Fourth, we need to invest in human resources. While advances in machine learning hold promise, these technologies – as technology companies will admit – are not yet nearly accurate enough to operate across the breadth and depth of the Internet. There are more than a billion uploads to Facebook each day, and 300 hours of video are uploaded to YouTube each minute. This means that any machine-learning-based solution will have to be paired with a significant team of human analysts who can resolve the complex and often subtle issues of intent and meaning that remain out of reach of even the most sophisticated machine-learning systems.
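One way to picture the pairing of automated flagging with human analysts – purely my assumption about how such a pipeline could be arranged, not a description of any company's system – is a confidence-based triage, where only the ambiguous middle band consumes analyst time:

```python
# Hypothetical triage sketch: a classifier's confidence score routes each
# upload. High-confidence matches are removed automatically, ambiguous cases
# go to human review, and the rest pass through. Thresholds are invented.

def triage(score, remove_at=0.95, review_at=0.60):
    """Route an upload given a model's confidence that it violates policy."""
    if score >= remove_at:
        return "remove"
    if score >= review_at:
        return "human review"
    return "allow"

print(triage(0.99))   # remove
print(triage(0.75))   # human review
print(triage(0.10))   # allow
```

The thresholds encode the trade-off the paragraph describes: lowering `remove_at` removes more content automatically but risks false takedowns, while raising it pushes more volume onto human analysts.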

On October 17-18 in Brussels, CEP is hosting a two-day conference, Building Alliances—Preventing Terror, that will bring together representatives from a range of European institutions as well as esteemed academics, government officials, and technology companies. As a senior advisor to the CEP, I am looking forward to presenting the eGlyph technology and new technologies that we are developing, and taking part in important discussions on the role of the EU in combatting and preventing extremism, disrupting the financing of terrorist activities, and grassroots efforts at preventing radicalization.

It is clear there is a political drive for Europe to take action and steer the fight against illegal and harmful content. Technology companies, academics, non-governmental and government agencies must work together to rein in online abuses and return the Internet to its original promise – a place where great ideas can be disseminated, discussed, and debated.
