Social Media Has A Hate-Speech Problem — Why Won't Companies Do Something About It?

While better monitoring and action against hate speech on social media won't solve white supremacy, it's a start toward delegitimizing one of its current forms

By Lincoln Anthony Blades

The March 15 terrorist attack on both the Al Noor mosque and the Linwood Islamic Centre in Christchurch, New Zealand, was the deadliest shooting in the nation's history, an act of white supremacist violence engineered to inflict maximum pain on the Muslim community and to go viral in the process.

We know this in part because it was later revealed that, prior to the attack, the shooter allegedly uploaded a 73-page manifesto, linked to on Twitter, that was heavily coded with references to obscure memes, low-quality trolling, and the odd racist irony known as "shitposting." The shooter, a white man wearing a helmet-mounted GoPro, also streamed the murder of 50 worshippers in a video that was posted to Facebook, Twitter, and YouTube. The recording was later taken down on all platforms, and soon after, both YouTube and Facebook promised to block and remove uploads of the video; the latter specifically condemned the livestream of the terrorist attack.

Let's be clear that white supremacy is violence. While various groups claim different titles, there is no version of white supremacy that comes without the violent erasure of minorities, and the legacy of white supremacy is the proof. But the intention to destroy those considered non-white is not just a matter of historical reflection: Today, there is a steady rise in white supremacist violence, a rise in hate crimes against marginalized groups, and the reality that 100 percent of extremist mass murders committed last year came at the hands of far-right zealots and white supremacists.

For those reasons, it’s particularly unsettling that on Facebook, at least 1.5 million copies of the video had to be found and removed in the immediate aftermath of the attack. But it’s not altogether surprising; multiple reports have detailed the ways in which white supremacist ideologies have permeated the most public-facing corners of the internet for years, with little interference from the platforms themselves.

On March 29, mere weeks after the mosque attacks, YouTube chief product officer Neal Mohan told the New York Times the site “was started as, and remains, an open platform for content and voices and opinions and thoughts… across the entire spectrum.” In July 2018, Facebook founder Mark Zuckerberg defended the company’s decision not to curb Holocaust deniers and the like under the claim that the company shouldn’t silence people simply because “they got a few things wrong.” For years, people have begged Twitter to do something about the abuse and harassment they face on the service, to little avail. Instagram is becoming a breeding ground for far-right conspiracy theories. Each time, the platforms allege that, because they did not create the content themselves, they are in the clear. Each time, they point to the idea of “free speech” to absolve themselves of culpability.

The idea of “free speech” is born of the First Amendment to the United States Constitution, which, with certain caveats, prevents the government from placing restrictions on what people say, how they worship, and how they choose to express themselves. It also protects how people choose to protest the government, and protects the press. Yet over the years, its 45-word promise has become twisted to also encompass hate speech; as Jeffrey Rosen, the President of the National Constitution Center, explained to USA Today, “The American free speech tradition holds unequivocally that hate speech is protected, unless it is intended to and likely to incite imminent violence.”

Crucially, however, the First Amendment does not directly apply to privately owned social-media sites. As the Columbia Journalism Review points out, “As nongovernmental entities, the platforms are generally unconstrained by constitutional limits, including those imposed by the First Amendment.” But invoking “free speech” as a shield is itself a slippery slope toward legitimizing hateful, bigoted belief systems, with increasingly dire real-world consequences. We must stop framing free speech as a battleground for tolerance and political correctness, when that “civility” quite often fosters the existence and expansion of white supremacy-inspired terrorism.

In a 2018 report titled "Alternative Influence: Broadcasting the Reactionary Right on YouTube", Becca Lewis, a PhD student who researches online political subcultures, identified how some white nationalists are leveraging YouTube to promote their xenophobic views for monetization and engagement, all while radicalizing viewers.

“YouTube monetizes influence for everyone, regardless of how harmful their belief systems are,” she explained. “The platform, and its parent company, have allowed racist, misogynist, and harassing content to remain online – and in many cases, to generate advertising revenue – as long as it does not explicitly include slurs.” Not only does white supremacist content openly exist on social networking platforms; their algorithms have also been gamed to push radicalizing content on viewers who never expressed any interest in consuming it. And because the people who make hateful content are often canny about avoiding explicit slurs, that content is free to proliferate.

Digital giants like YouTube and Facebook clearly have the tools to do demonstrable good — specifically, to block hateful content, conspiracies, and propaganda — but, in the interest of preserving their appearance of corporate nonpartisanship, they only ever seem willing to take a selective stance against hate. While it's commendable that Facebook and YouTube would battle the uploading and sharing of a violent white supremacist terror attack, many experts responded by wondering why these massive and prominent social networks aren’t similarly motivated and diligent about removing white supremacist content more broadly. The fact that Facebook is now banning explicit white nationalism, itself a rebranding of white supremacy, is a start, but as Motherboard reports, “Implicit and coded white nationalism and white separatism will not be banned immediately, in part because the company said it’s harder to detect and remove.”

In 2017, Germany passed the Network Enforcement Act to ensure that social media sites with at least two million German users uphold the country’s national laws against hate speech, which were implemented as a post-Holocaust measure. Because Germany does not want abusive content sitting on social media sites to be consumed by its citizens, the law gives platforms 24 hours to remove abusive hate speech before they are fined; Facebook has since come under fire in a German court for not doing enough to prevent hate speech on its platform, and has deleted hundreds of posts in response in order to remain in accordance with the law. The law and its implementation aren’t without flaws, however, which kicked off a heated debate about what, exactly, constitutes free speech — and laid bare the reality that these companies should care more, and do more, regardless of legal precedent.

Instead, a noticeable disparity has emerged between the content these platforms do remove for being harmful and the content they allow. According to a report from the Program on Extremism at George Washington University, white nationalist movements have seen a 600 percent growth in their Twitter followers since 2012, as well as a large increase in how many tweets they publish every day, with seemingly little effort from the platform to curtail that growth. To be clear, there have been a few moments of what felt like progress: in 2016, Twitter permanently banned Milo Yiannopoulos for targeting black actress Leslie Jones with racist and sexist abuse, and in 2018 it followed suit with Alex Jones, who violated its abusive behavior policy in numerous ways, including live-streaming a verbal attack on a reporter outside of a congressional hearing. But bans like these often occur years after pundits solidify their once-fringe fan bases into something more mainstream. This has allowed their inflammatory and virulent discrimination to expand at previously unfathomable speeds.

Frequently, even the most egregious acts of direct hate speech are allowed to remain on these platforms for extended periods of time, or at least until mounting public pressure results in change. These actions against hate speech, or the lack thereof, correlate to particular targets, too: Per a 2016 study by Amnesty International, black women were the most likely to face abuse on Twitter, yet many black women point out that it takes far too long for Twitter to both recognize their reports as actual hate speech and then take the next step of removing the hateful content and accounts. Political commentator Rochelle Ritchie made multiple reports against the Twitter account of would-be pipe-bomber Cesar Sayoc Jr. for threatening her; at the time, she received only a message from Twitter saying his actions did not violate its terms of service. Compare that to the swiftness with which the platform deactivated the account of the Australian teen responsible for egging a politician who blamed the mosque attacks on immigrants.

While better monitoring and action against hate speech on social media won't solve white supremacy, it's a start toward delegitimizing it in its current, digital form. As Clark Hogan-Taylor, manager at Moonshot CVE, an organization specializing in countering online extremism, told MTV News, “There is no question that social media companies need to continue to improve in their efforts to remove extremist content and accounts. But it's important to point out that someone looking for a piece of content does not necessarily have their interest in that content reduced by its removal.”

Bigotry has become an accepted reality of our mainstream discourse. Pundits come on-air to facetiously debate racism like they're arguing whether LeBron is better than Jordan, and hosts are allowed to revel in their xenophobia. Racism has not just infected every branch of government, from Capitol Hill to the Oval Office; it serves as a feature. College campuses have been increasingly flooded with white supremacist propaganda. But just as frustrating as the mainstreaming of white supremacy is how those who spew white supremacist beliefs, many of whom either launched or expanded their racist platforms on social media, have been propped up as victims of intolerance and political correctness, rather than held responsible for stoking, not quelling, the rise of violence against certain groups of people.

The elimination of hatred from the mainstream is not intolerance, nor a threat to free speech, nor oppressive political correctness — rather, it is a necessary rebuke of our society’s violent white supremacy. Until we start treating white supremacy as what it is, and what it has always been — the advocacy of a racial caste system, cultural oblivion, and genocide — we will continue to see this uptick in white supremacist terrorism. But if we’re truly interested in stemming this discriminatory brutality, here's the first step social media behemoths can take: treating white supremacy, online and off, like the hate speech it is.