Social Media Is A Toxic Mess. It Should Be On The Companies To Fix It.

The Alex Jones case has only demonstrated the gravity of the problem.
Alex Jones of Infowars at a pro-Trump rally on July 18, 2016. (Lucas Jackson / Reuters)

With Facebook, YouTube, Spotify, Google and Apple all taking concrete steps in the past week to remove conspiracy theorist Alex Jones from their platforms, a renewed sense of urgency has inflamed the debate over free speech and online accountability. Are technology platforms capable of balancing these often competing values? And how do we ideally regulate the dark side of the web and rein in tech monopolies without curbing our ability to speak freely?

In a lengthy blog post, Facebook explained that it removed Jones’ material from its site “for glorifying violence, which violates our graphic violence policy, and using dehumanizing language to describe people who are transgender, Muslims and immigrants, which violates our hate speech policies.”

When YouTube revoked his channel, it explained that his account was terminated for “violating YouTube’s Community Guidelines.” The site’s terms of service prohibit hate speech and harassment. Spotify also cited “hate content” as a reason for removing Jones but didn’t specify which episodes had violated its policy.

Twitter CEO Jack Dorsey, however, refused to follow suit, claiming that Jones had not “violated their rules” and that he preferred to be guided by principle rather than public pressure.

He stated that his company wants to promote a “healthy conversational environment.”

Critics of Jones (and of Dorsey’s unwillingness to ban him) wonder what exactly is “healthy” about a man who has spent years spreading vicious falsehoods to the gullible.

Jones may be on Twitter hyperbolically proclaiming that “we’re all Alex Jones now,” but the truth is we are not. For many, the real question isn’t whether he should have been removed from these platforms but why it took so long. And how can we ensure similar actions are taken in a responsible and transparent manner, free of ambivalence and subjectivity? In other words, when it comes to banning Jones and his kind, it’s time to focus less on the why and more on the how. Considering our dependence on the internet and its unlimited capacity for harm, the time has come to establish ethical and legal frameworks that acknowledge and address today’s online reality.

Let’s make one thing clear here: I am a huge proponent of free speech and believe that the only cure for bad speech is better speech. Bad ideas need to be debated, denounced and eventually dismissed out in the open, without needless censorship making free-speech martyrs out of those spewing them. But does that also include the freedom to incite violence? Where did we get the idea that freedom of speech was absolute?

The claim that tech companies are already censoring and removing offensive material at the request of easily offended “snowflakes” and SJWs is false. Quite the contrary, in fact: there is no shortage of offensive and potentially damaging material to be found online. Many platforms have long been allowed to become cesspools of every imaginable vileness. Meanwhile, many users with more constructive things to offer have abandoned Twitter because the site has become too toxic to handle, and Facebook and Instagram continue to mysteriously allow unfettered misogyny, online harassment and vocal hate groups while obsessively censoring every female nipple in sight.

It takes massive abuse ― as in the case of Alex Jones, with major lawsuits and public opinion overwhelmingly turning against him ― for the ax to fall. That’s not a slippery slope; that’s a sticky flat surface of continuously erring on the side of free speech to the detriment of human lives.

When targeted harassment campaigns and defamatory falsehoods disseminated online can incite unstable people to terrorize and harm others, companies need to develop better and more precise tools to rein in the offenders and their toxic sludge. They must learn to deal with offenders in a way that doesn’t let their supporters cry free-speech abuse, but instead clearly defines which specific violations got a voice removed. We can no longer rely on vague, arbitrary and malleable definitions of “hate speech” to shut conspiracy theorists down; that only plays into their hands by encouraging cries of censorship. These tech companies defined the rules of conduct; they now need to redefine those rules to better protect their users.

If longevity is their goal, tech companies should not and cannot prioritize short-term profits over building the trust of their users. Respecting and defending users’ rights (and online safety) is foundational to building that trust ― and that includes greater transparency about what won’t be tolerated online and why people’s content is removed.

When Twitter’s CEO says that Jones has not “violated their rules” ― true or not ― isn’t now the ideal time to reevaluate and redefine those rules if they continue to permit the targeted harassment and persistent abuse of innocent people? Isn’t the fact that Twitter has practically become a breeding ground for harassment and extremism an inherent design flaw that needs to be resolved?

For the government’s part, if free speech laws have begun to feel obsolete precisely because they were created before the modern internet (and its immense power) emerged, it’s time to update them. That’s not capitulation to political correctness; that’s evolution. We should seriously question the status quo that lets tech companies stand as unaccountable platforms for content.

Twitter CEO Jack Dorsey has balked at banning Alex Jones from the site. (Bloomberg via Getty Images)

David French wrote in The New York Times that it’s important we find a better way to ban Alex Jones: forgo vague and subjective rationales, which are open to considerable abuse, and instead prohibit slander and libel on these platforms.

“It’s a high bar,” French says. “But it’s a bar that respects the marketplace of ideas, avoids the politically charged battle over ever-shifting norms in language and culture and provides protection for aggrieved parties .... Those investigations would rightly be based on concrete legal standards, not wholly subjective measures of offensiveness.”

In Jones’ case, there is a lawsuit underway. Real, quantifiable, libelous damage has been inflicted, and there is a good chance he will face legal repercussions. But what about people who don’t have the financial means to pursue legal battles? And what about the increasingly complicated logistics of using artificial intelligence and human content moderators to expunge the most egregious content online? Even if you redefine the standard, you still need fallible humans to enforce it.

That’s why not only does the line need to be better defined, but tech companies also need to have some skin in the game. They need to be forced to take responsibility for the toxic mess their sites have become.

Jones is a prime example of what an unregulated web can enable and provide a platform for. He isn’t some harmless charlatan whom we can let yell into a microphone until he turns blue in the face while we hope that interest in his tantrums dies down. This is a man whose conspiracy theories have had dire consequences for innocent people. His claim that the 2012 massacre at Sandy Hook Elementary School in Newtown, Connecticut, was an elaborate hoax, complete with “crisis actors” and government-supported gun-control activists, has altered the lives of the people who least deserved it: the parents of the murdered schoolchildren.

What kind of free speech principle enables the defamatory victimization of people who have already suffered such unimaginable loss? Who is comfortable defending the unrestricted ability to engage in such behavior? Is Jones really the free-speech hill we all want to die on?

We should long ago have moved beyond debating whether chronic and targeted abuse should be tolerated. Yes, we should be thoroughly skeptical of, and vigilant about, any rules that would curb unpopular or minority speech. But legislators and tech companies should be working in tandem to figure out how to more ethically and transparently regulate, and ultimately remove, targeted malice and slander from these platforms. That’s not limiting our freedom; that’s enforcing accountability.

Toula Drimonis is a Montreal-based freelance writer, editor and columnist.
