Why Trolls Won in 2016

Illustration: Jim Cooke/Gizmodo

A decade is all it took, more or less, for the internet to become unmanageable.

In that time, a handful of online communities went through unprecedented expansion, scaling the experimental, self-organizing colonies of the Web 1.0 era into the sprawling, corporatized digital city-states of Web 2.0, like Facebook and Twitter. The warning signs have been easy for regular users to spot, but 2016 will be remembered as the year the infrastructure underpinning such massive groups utterly buckled.

The tech industry’s inability to admit its failures, govern with transparency, and adequately protect its own users is reflected by the rampant abuse, organized harassment, and disinformation proliferating within these corporatized communities. This was the worst year yet.

How Did It Get So Bad?

The starting point can be hard to pinpoint, but between 2004 and 2006 online communities began to coalesce in a meaningful way. 2004 gave us Facebook; Reddit came in 2005; Twitter hobbled into the spotlight in late 2006, the same year a two-year-old YouTube was scooped up by recently IPO’d search giant Google (a platform with its own unique troubles best saved for another piece). Facebook, Reddit, and Twitter are now, respectively, the 2nd, 3rd, and 7th most popular websites in the United States. Google is number one.

All three have treated their users as numbers, quantities to be attained rather than individuals experiencing much of their day-to-day lives through the products they’ve built. This year, that insincerity and backwards thinking caused the shit to hit the fan.

Gizmodo followed Facebook’s foibles throughout 2016, from learning that the supposedly algorithmic Trending News module was the work of underpaid human editors to the site’s role as a megaphone for demonstrably false articles masquerading as news. That first discovery led to the Trending News team’s firing en masse, a Congressional inquiry, and the module being retooled to surface popular stories via an algorithm, as advertised. The algorithmic replacement proved just as prone to bugs as the human-run version had been to editorial bias. And the resulting product, unable to accurately discern legitimate websites from utter nonsense, exacerbated the prevalence, reach, and potential profit of bullshit passed off as news. The cure, as the saying goes, was worse than the disease.

Twitter, described this year by a former employee as “a honeypot for assholes,” has routinely overlooked moderation, which allowed a mob of followers doing the bidding of professional troll Milo Yiannopoulos to sling abuse at SNL star Leslie Jones. Though Yiannopoulos and several other far-right mouthpieces have since been banned from the platform, Richard Spencer, a hero to internet fascists, has been allowed to rejoin with Twitter’s blessing. The platform makes fertile soil for hate to grow: many personalities associated with the so-called “alt-right” (what we’re calling internet fascists these days) have built cults of personality and massive, loyal armies there, leveraging unquestioning loyalty and a relative lack of user repercussions to target people they disagree with. Coordinated online harassment first entered the mainstream consciousness through the brutal salvos launched by GamerGate in 2014. Though the movement itself has all but fizzled out, its tactics have unfortunately crystallized into the norm.

Gizmodo reported this year on the systematic gaming of Reddit’s voting system by the site’s pro-Trump community r/the_donald, which employed the strategy despite direct oversight from company leadership. Reddit made attempts at filtering, but ultimately faltered in taking any real stand, an outcome that should surprise no one familiar with the company’s involvement in the mass dissemination of hacked nude celebrity photos (better known as The Fappening) and its users’ vile campaign of threats and harassment directed at former interim CEO Ellen Pao. In the meantime, the company’s current CEO Steve Huffman was caught editing user posts without their permission. His puerile revenge, he explained, was a direct result of coordinated harassment he received from the very same Trump subreddit he would later fail to ban. Could it be more deafeningly obvious that something is fundamentally broken? A gunshot might complete the metaphor.

None of these communities exist in a vacuum, and what spreads on one is bound to end up on the others. Earlier this year, Edgar Welch, armed with multiple weapons, entered a DC pizzeria and opened fire, seeking to “investigate” the Pizzagate conspiracy: the debunked theory that John Podesta and Hillary Clinton are the architects of a child sex-trafficking ring covertly headquartered in the nonexistent basement of the restaurant Comet Ping Pong. Egged on by conspiracy videos hosted on YouTube and disinformation posted broadly across internet communities and social networks, Welch made the 350-mile drive filled with righteous purpose. A brief interview with the New York Times revealed that the shooter had only recently had internet installed in his home.

There were far more incidents this year than any sane person would have the time or emotional fortitude to catalog exhaustively. But the through-line of these fuck-ups, which have been present for years but became more prevalent, more severe, and more closely tied to real-world consequences in 2016, can all be attributed to these sites’ failure to set and enforce clear rules, and to their lack of the humility to admit when they simply don’t have it all figured out.

The Sorry State of Moderation

These days a site lives and dies not only by how quickly, accurately, and easily it can push relevant content to its users but also by the protections it offers them from hate, harassment, and propaganda, a problem no company has yet found a perfect solution to. Human intervention is still necessary, perhaps more than ever, and in digital city-states like these, moderators are the backbone of online law enforcement. They’re the caste of employees tasked with determining what belongs on a site and what’s irrelevant, abusive, or outright illegal. Moderation is also a natural chokepoint: as more users flock to these major online hubs, the level of mayhem has gotten worse than ever.

Facebook, Reddit, and Twitter have all refused to tell Gizmodo how many people they employ in a moderation capacity. Given tech’s propensity to grandstand about its own accomplishments, one could assume that the number of moderators tasked with rooting out bad actors on these massive sites is unimpressive, and judging by the increasing frequency of abuse, these teams haven’t been able to keep up. A Pew Research study from 2014 found that 40 percent of adults have experienced some form of harassment online. That figure reaches a staggering 70 percent among 18-to-24-year-olds.

Reddit’s survival as a usable community rests almost entirely in the hands of unpaid volunteer moderators. An interview with Reddit CEO Steve Huffman revealed that three teams handle moderation to varying degrees on his site: the community team, the trust and safety team, and the “anti-evil” engineering division which builds tools for the former two to use. One former employee told Gizmodo that, as recently as late 2014, the community team consisted of only seven people to wrangle a website now nearing 1 million interconnected subsites. In that same interview Huffman stated with regard to harassment on the 11-year-old site he co-founded, “there’s not a crystal clear line of what behavior [is] acceptable or not.” Employing an inadequate number of people whose power is derived from hazy edicts is a recipe for failure—and when users don’t like or understand why action has been taken against them, these moderators become targets.

Twitter’s former CEO Dick Costolo has stated publicly that he and his company “suck at dealing with abuse,” and a 2015 study found that 88 percent of online abuse occurs on the platform. Not much has changed since Jack Dorsey took over Costolo’s role at the microblogging service in July of last year. A lengthy BuzzFeed exposé revealed Twitter has done little to protect its users from the deluge of cruelty the platform’s interactions so easily facilitate, hiding behind a familiar refrain: we’re “a communication utility, not a mediator of content.” That sentiment is echoed in Facebook’s insistence that it’s “not a media company” and Reddit’s self-assigned position as “the internet’s home for conversation.”

Facebook is arguably the most opaque of these giant and secretive companies. While far less susceptible to targeted harassment than platforms that thrive on anonymity, Facebook has borne the lion’s share of blame for the rampant publication of fake news, and rightly so. Last month, it was revealed that a sizable amount of that disinformation was the byproduct of teens in the Balkans writing whatever they felt was most likely to resonate with readers’ preexisting biases (particularly right-wing biases) about US politics and passing it off as fact in order to make a quick buck. These sites’ entire business model rested on their shareability on Facebook.

Facebook apparently let this fake news phenomenon happen. Not only was the removal of such sites eminently possible, Gizmodo learned earlier this year that the tools to do so were built but never deployed, for fear of conservative backlash. While the company has publicly committed to curbing the spread of fake news, a cursory glance at any of the dozens of right-wing Facebook groups like Trump Friends or The Trump Deplorables shows no obvious change in the frequency with which stories like “HUGE: This is How Obama Plans to Rule America After the White House” are shared.

Part of the effort to shed Facebook’s reputation as a swap meet for falsehoods involves enlisting ABC News, PolitiFact, FactCheck.org, and Snopes as fact-checkers, a novel but likely just as fallible form of moderation that opens the platform to the same biases as the human-edited Trending News section, only on a wider scale. A Reuters investigation into the company’s controversial removal of an iconic war photo showed that what counts as acceptable content on the site can be decided by Facebook’s executives rather than by any clearly outlined policy. This inner circle of top-ranking officers has the power to make or overturn moderation decisions seemingly at its discretion.

That framing of these platforms as neutral conduits is a lie, and not a very good one. Though Reddit and Twitter are far from profitable, all three are thoroughly in the business of content. You, your identity and whatever intellectual possessions you choose to leave in the custody of these businesses, are the commodity. Your willingness to be mined for content that makes the site more attractive, and for data to better serve you with advertisements, is the price of admission. The meager protections they offer no longer outweigh that cost.

Acquiring more users has always been a higher priority than investing in the infrastructure to keep existing users safe. The tensions between being a “communication utility” and a community business, between moderation and censorship, and between the fallibility of algorithms and the fallibility of humans have underscored the maturation of Facebook, Twitter, and Reddit, and all three dodged these questions until they became tremendous problems.

It’s not just that these communities have grown too large to police; they’ve also grown meaner. And that’s a reflection of who has joined the internet in the last ten years, and who was already there to greet them.

A Digital Class War

For years, inaction by Facebook, Twitter, and Reddit has been a tacit endorsement of trolls and bigots. Meanwhile, internet influence and political power have begun to merge. Over the past year, newcomers to the internet were immediately exploited, culminating in the election of a troll as President of the United States. What started as an experiment has slid into an abusive dystopia.

Aside from those in the burgeoning tech industry, the earliest public incarnation of the internet, USENET, was populated mostly by academics. It also had little to no moderation. Each September, new college students would get easy access to the network, leading to an uptick in low-value posts that would taper off as the newbies got a sense for the culture of USENET’s various newsgroups. 1993 is immortalized as the Eternal September: AOL began offering USENET to a flood of brand-new internet users, and, overwhelmed by those who could finally afford access, that original USENET culture never bounced back.

Similarly, when Facebook was founded in 2004, it was only available to Harvard students. Twitter’s early adopters were tech insiders and journalists who found the microblogging service useful for sharing news. Reddit is unique in that its first “users” were fake accounts generated by the site’s founders, both University of Virginia grads who grew up in cozy mid-Atlantic suburbs. Early human users included employees Chris Slowe (Harvard) and Aaron Swartz (Stanford), and venture capitalist Sam Altman (Stanford). The trend has remained fairly consistent: the wealthy, urban, and highly educated are the first to benefit from and use new technologies, while the poor, rural, and less educated lag behind. That margin has shrunk drastically since 2004, as cheaper computers and broadband access became attainable for most Americans.

Reddit now reigns as the “frontpage of the internet.” Facebook boasts 1.18 billion daily users. Twitter is the preferred communication vehicle for the rich and the famous, including President-elect Donald Trump, whose own actions on the platform, it should be noted, violate site policies against inciting harassment, as in the cases of Lauren Batchelder and Chuck Jones.

However, the vast majority of internet users today do not come from the elite set. According to Pew Research, 63 percent of adults in the US used the internet in 2004. By 2015 that number had skyrocketed to 84 percent. Among the study’s conclusions were that “the most pronounced growth has come among those in lower-income households and those with lower levels of educational attainment” and that “African-Americans and Hispanics have been somewhat less likely than whites or English-speaking Asian-Americans to be internet users.”

White, rural, poor, and less-educated is a demographic set familiar to most Americans as the core electorate behind Donald Trump’s presidential victory—a win which statistical models and overconfident prognosticators described as a pipe dream. (The New York Times gave Trump a 15 percent chance until a few hours into election night.) This group includes not only relatively new arrivals to the internet but also those least likely to buy into or benefit from the techno-utopian fantasy that Silicon Valley elites peddle. What we’re experiencing now is a huge influx of relatively new internet users—USENET’s Eternal September on an enormous scale—wrapped in political unrest.

Now, the idealistic Web 2.0 movement that emerged in the mid-Aughts is being upended. The shift isn’t purely a matter of mass migration, as USENET’s collapse was. Bad actors are intentionally exploiting the vulnerabilities that these massive companies have laid bare. The actions of The_Donald on Reddit, for instance, revealed how easy the site’s aging voting algorithm is to game. Twitter has been fully weaponized by hate groups who descend on users with impunity over ideological differences. Facebook made fake news profitable.

Relative newcomers to these online ecosystems were beset by exploitation from above and from within. Profiting off these corporate cities were magnates with neither the resources nor the inclination to protect users, clinging to an inaccurate and outdated image of their companies as neutral conduits for communication. And steadily gaining traction on them were outlets like Breitbart and Infowars and loathsome pundits, training the new arrivals into troll armies of their own, whipping them into a frenzy with whatever flavor of hatemongering helped earn a buck along the way. 2016 provided these dispersed patches of rage with a focal point: Donald Trump, an outsized internet bully himself, whose political clout and wealth further insulated him as he leveraged these platforms to spread misinformation and push his agenda.

Both sides of this new digital class war maintain uneasy relationships with the exceptionally wealthy: Donald Trump, Peter Thiel, Rex Tillerson, Andrew Puzder, and Gary Cohn on one side; Mark Zuckerberg, Jeff Bezos, Larry Page, Sergey Brin, and Tim Cook seemingly on the other. Less than a month from Trump’s inauguration, the distinction is increasingly hard to discern.

The tech industry’s 10-year plan to sweep the problems of harassment, abuse, and misinformation under the rug was only a prelude to its soon-to-be cozy relationship with the incoming administration, with PayPal co-founder and early Facebook investor Peter Thiel the obvious bridge between the two circles. Elon Musk and Uber CEO Travis Kalanick recently accepted roles on Trump’s Strategic and Policy Forum. Facebook’s Vice President of Public Policy was seen returning to Trump Tower days after the President-elect summoned the industry’s oligarchs to a secretive tech summit there. Meanwhile, Breitbart executive chairman Steve Bannon left his post to join Trump’s incoming administration. In media, Milo Yiannopoulos, whom we mentioned earlier, was Bannon’s star blogger. Alex Jones, creator of Infowars, is counted among the few people Trump has said something positive about, a bizarre endorsement delivered while Trump was a guest on Jones’ radio show.

The tech industry’s efficacy at moderating what it’s built is on par with Trump’s own strides at “draining the swamp.” Whatever differences of class divide the majority of Americans pale in comparison to the chasm between them and these industry titans; users and voters on both sides have been played, and we’re busy fighting over scraps.

With any luck, 2017 will be the year Twitter, Reddit, and Facebook are forced to reckon with what thoughtless acquisition and gutless non-intervention have fostered within their products and, more importantly, what that has meant for the real, terrified, and often endangered humans using them. The present excuses no longer hold water, and users have drawn the battle lines. Anyone with a shred of empathy can see which side is the overwhelming source of these new and dangerous problems. It’s time these companies decide where they stand.
