What Facebook, Twitter and YouTube can do now to stop terrorism and hate online

The New Zealand assailant’s 87-page manifesto, replete with white supremacist propaganda, was shared across the internet. Social media sites like Facebook, Twitter and YouTube are now left scrambling to stop this hateful rhetoric as it continues to spread across their platforms.

Washington Post reporter Drew Harwell’s recent tweet captures the significant role that the internet played in this attack: “The New Zealand massacre was livestreamed on Facebook, announced on 8chan, reposted on YouTube, commentated about on Reddit, and mirrored around the world before the tech companies could even react.”

Facebook announced it had removed 1.5 million videos of the attack. Twitter said on Friday it was working to remove the video from its platform. YouTube said it removes violent and graphic content as soon as it’s made aware of it. 8chan said it is responding to law enforcement.

But there are steps these companies can take now to help mitigate the damage and prevent terrorist attacks like this from happening in the future.

It’s clear that the internet — and social media in particular — plays a key role in amplifying conspiracy theories, misinformation and myths about Muslims. But the internet didn’t create these things — anti-Muslim sentiment long predates the internet.

In a project at New America, we documented 763 separate anti-Muslim incidents in the United States between 2012 and today. Of those 763 incidents, 178 were hate incidents targeting mosques and Islamic centers, and another 197 were anti-Muslim violent crimes.

It’s difficult to know the assailant’s precise path to radicalization. How much violent extremist material was he consuming online? Was he part of a community of extreme hate on 8chan, or some other platform? Did YouTube’s recommendation algorithms pull him into an echo chamber of hate and vitriol against Muslims?

The truth is that we may never know the answers to these questions. What’s more, there are real challenges to curtailing online hate: namely, there are no laws currently on the books that adequately address domestic terrorism. Labeling those who use social media and other technologies to incite violence as domestic terrorists would give tech companies the legal leverage they need to take down their content or block them altogether. Without such legislation, we are left with competing, if not fuzzy, ideas about which forms of hateful speech are acceptable and which are not.

These challenges are compounded by the sheer amount of content that goes online every day. Every single minute, on average, 400 hours of video are uploaded to YouTube; 510,000 comments, 293,000 status updates and 136,000 photos are posted on Facebook; and 350,000 tweets are posted on Twitter.
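
To make that scale concrete, a rough back-of-envelope calculation, sketched in Python below, shows why human review alone cannot keep up. The upload figure comes from the article; the eight-hour review shift is our illustrative assumption, not a reported number.

```python
# Figure from the article; the review-shift assumption is illustrative only.
VIDEO_HOURS_PER_MINUTE = 400                 # YouTube uploads per minute

video_hours_per_day = VIDEO_HOURS_PER_MINUTE * 60 * 24  # = 576,000 hours

# Hypothetical: one moderator watching video in real time, 8 hours a day.
REVIEW_HOURS_PER_DAY = 8
moderators_needed = video_hours_per_day / REVIEW_HOURS_PER_DAY

print(f"{video_hours_per_day:,} hours of new video per day")
print(f"~{moderators_needed:,.0f} moderators just to watch it all once")
# Output: 576,000 hours of new video per day
#         ~72,000 moderators just to watch it all once
```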

Notwithstanding these challenges, there are concrete and immediate actions tech companies should take to address anti-Muslim hate online.

YouTube should turn off recommendations on anti-Muslim content that pushes misinformation and disinformation. Not all of this content breaches its terms of service, but it often pushes right up to the line, making some of it hard to police. But there’s no reason the algorithm should be recommending anti-Muslim and toxic content. This won’t be an easy fix for YouTube, but this latest terrorist attack should be a clarion call for the company to dedicate real resources to address this specific problem.
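
As a rough illustration of what that fix might look like, the sketch below excludes videos flagged by a hypothetical hate-speech classifier from the recommendation pool while leaving the content itself online. The classifier, the scores and the threshold are assumptions for illustration, not YouTube’s actual system.

```python
from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    relevance: float   # engagement-based ranking score
    hate_score: float  # hypothetical hate/misinfo classifier output, 0 to 1

# Hypothetical threshold: videos scoring at or above it may stay on the site
# (they may not breach the terms of service), but are never recommended.
NO_RECOMMEND_THRESHOLD = 0.5

def recommend(candidates: list[Video], k: int = 10) -> list[Video]:
    """Rank by relevance, but drop borderline hateful content from the
    recommendation pool instead of taking it down entirely."""
    eligible = [v for v in candidates if v.hate_score < NO_RECOMMEND_THRESHOLD]
    return sorted(eligible, key=lambda v: v.relevance, reverse=True)[:k]
```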

Google, Facebook, Twitter and Microsoft should also establish a forum on hate targeting minority communities. This would be an industry-led initiative to identify, better understand and disrupt hateful content and dangerous speech on their services. The forum would be modeled after, but separate from, the industry-led Global Internet Forum to Counter Terrorism, which seeks industry-wide ways of stopping terrorists from promoting and disseminating propaganda through online platforms. The new forum would draw expertise and input from a wide range of individuals and organizations, helping tech companies better understand and address hateful content and activities, in all their forms and manifestations, against a range of minority communities.
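
The existing counterterrorism forum’s best-known tool is a shared database of content fingerprints, and a hate-focused forum could borrow the same mechanism. The sketch below illustrates the idea with a plain SHA-256 digest; production systems rely on perceptual hashes (such as PhotoDNA) that survive re-encoding, and the database and function names here are hypothetical.

```python
import hashlib

# Hypothetical store of hashes contributed by participating companies.
shared_hash_db: set[str] = set()

def fingerprint(content: bytes) -> str:
    """Digest of the raw bytes. Real systems use perceptual hashes
    (e.g. PhotoDNA) so that re-encoded copies still match."""
    return hashlib.sha256(content).hexdigest()

def is_known_flagged(upload: bytes) -> bool:
    """True if another platform already flagged this exact content."""
    return fingerprint(upload) in shared_hash_db

# One company flags a video; all participants can then block re-uploads.
shared_hash_db.add(fingerprint(b"<bytes of a flagged video>"))
assert is_known_flagged(b"<bytes of a flagged video>")
```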

These immediate and needed steps won’t rid the world of anti-Muslim hate, but they will be important first steps by the tech companies to help curtail hate and dangerous speech on their services. And this, in turn, could help mitigate future terrorist attacks.