Trump’s Been Unplugged. Now What?

The platforms have acted, raising hard questions about technology and democracy.
The removal of an American President from social-media sites marks a turn in the relationship between the tech industry and the public. Photograph by Dina Litovsky / Redux

For around a decade, a meme has circulated on social media depicting a youngish white man in a shirt and tie, frantically gesturing toward a wall covered in paper ephemera—envelopes, handwritten notes—connected by red string. The image, a still from a 2008 episode of “It’s Always Sunny in Philadelphia,” is often used as a joke to imply the presence of conspiracy thinking; it’s popular on Twitter, where the paranoid style thrives. In a Twitter timeline, information is abundant but almost always incomplete, conflict is incentivized, context is flattened, and built-in speed offers a sense of momentum. It seems fitting that a common storytelling form is a sequence of linked tweets known as a thread: the service is electric with the sensation, if not always the reality, of connecting the dots.

Last week, on Wednesday, January 6th, a mob of Trump supporters descended on the Capitol. Some carried assault weapons and zip ties; all claimed that the 2020 Presidential election had been stolen—a conspiracy theory of the highest order. The President had stoked and amplified this delusion via Twitter, and, even after the mob had smashed its way into the Capitol, he tweeted encouragement, calling the rioters “great patriots” and telling them, in a video, “We love you. You’re very special.” Twitter blocked a few of these tweets and, by Friday, had permanently suspended his personal Twitter account, @realDonaldTrump. The President’s tweeting was “highly likely to encourage and inspire people to replicate the criminal acts at the U.S. Capitol,” the company stated, in a blog post. It noted that plans for additional violence—including a “proposed secondary attack” on the Capitol and various state capitols—were already in circulation on the platform.

Following the suspension, Twitter was flooded with outrage, joking, and speculation. On my own feed, people mourned the loss of the demented and absurd posts by Trump that predated his time in office. (“Sorry folks, I’m just not a fan of sharks—and don’t worry, they will be around long after we are gone,” he had tweeted, in 2013.) A few suggested that Trump start a Substack. Some wondered whether the move might set a precedent for the deplatforming of marginalized groups. Others pointed out that some sex workers, pro-Palestinian activists and journalists, and Black Lives Matter supporters had already been booted from the service. “Wish I could see his tweets about getting kicked off twitter . . . like when someone dies and you have the urge to call them to tell them the news,” one friend tweeted, with weird poignancy. For a brief period, Trump attempted to assume the controls of various accounts manned by associates; Twitter swiftly removed those tweets. All the while, people kept asking questions. Was this the free market in action or was it corporate tyranny? Was it a good idea? What took Twitter so long? Hillary Clinton retweeted a tweet of her own, from 2016, in which she had called on Trump to delete his account; in her retweet, she added a check-mark emoji.

Although Twitter has been an undeniable force throughout the Trump Presidency—a vehicle for policy announcements, personal fury, targeted harassment, and clumsy winks to an eager base—most Americans don’t use it. According to the Pew Research Center, only around twenty per cent of American adults have accounts, and just ten per cent of Twitter users are responsible for eighty per cent of its content. In many ways, it’s a niche platform: two days before the Capitol riots, a trending topic on the site concerned the ethically correct way to teach a child to open a can of beans. Still, Trump’s tweets, reproduced on television and reprinted in newspapers, are inextricable from his identity as a politician. His suspension from Twitter, moreover, has turned out to be just one in a series of blunt actions taken against him by tech companies. Following a commitment to crack down on claims of voter fraud, YouTube removed a video of Trump addressing the supporters who had gathered last Wednesday at the Capitol; it has since suspended Trump’s channel, for at least a week. Through an update on his personal Facebook page—an odd stream of corporate announcements, family photographs, and coolly impersonal personal musings—Mark Zuckerberg informed the public that Trump’s accounts would be suspended until at least after the Inauguration. Facebook has also committed to removing from its service all instances of the phrase “stop the steal,” which has been taken up by conspiracists challenging the results of the Presidential election. Both YouTube and Facebook, where extremist content flourishes, have more than three times Twitter’s audience among American adults.

By Saturday, most major tech companies had announced some form of action in regard to Trump. The President’s accounts were suspended on the streaming platform Twitch, and on Snapchat, a photo-sharing app. Shopify, an e-commerce platform, terminated two online stores selling Trump merchandise, citing the President’s endorsement of last Wednesday’s violence as a violation of its terms of service. PayPal shut down an account that was fund-raising for participants of the Capitol riot. Google and Apple removed Parler, a Twitter alternative used by many right-wing extremists, from their respective app stores, making new sign-ups nearly impossible. Then Amazon Web Services—a cloud-infrastructure provider that supplies essential scaffolding to companies and organizations such as Netflix, Slack, NASA, and the C.I.A.—suspended Parler’s account, rendering the service inoperable.

These actions immediately activated conspiratorial interpretations. Was this a coördinated hit from Big Tech? How long had it been in the works? Did tech companies, known for their surveillance capacities, have intelligence about the future that the public did not? In all likelihood, the real story doesn’t involve a wall of crisscrossing red strings—just a red line, freshly drawn. It seemed that tech corporations were motivated by the violence, proximity, and unequivocal symbolism of the attack—and that the response, prompt and decisive, was a spontaneous, context-based reaction to threats that had been simmering on their platforms for years. The action was compensatory rather than cumulative—a way of curtailing, if not preventing, further harm. It was compounded by the cascade effect: each suspension or ban contributed to the image of Trump as a pariah, and put pressure on other companies to follow suit, which in turn diminished the repercussions those companies would likely face for their decisions. Last week may simply have been a breaking point, a moment at which the potential damage to American democracy, security, and business had become impossible to ignore.

The vacuum created by Trump’s absence on social media is now filled with questions and counterfactuals. The conversation is consistent only in its uncertainty. Why did things have to reach a point of extremity before the tech companies took action? What would’ve happened if they hadn’t acted? Are these decisions durable, and will they be repeated? Was this a turning point? Will it change the Internet, and if so, how?

Generally speaking, deplatforming works: it diminishes a voice, a movement, or a message, and arrests its reach. But Trump’s ejection from corporate tech platforms—a public event enacted by private companies—is an unusual form of the practice. A robust and powerful communications apparatus remains at the President’s disposal. The incitements embedded in his tweets materialized in last week’s Capitol invasion. They have been echoed by the hundred and forty-seven Republican lawmakers who voted to overturn the election results, and are ingrained in coverage on Fox News, on talk radio, and in right-wing publications. (Although this, too, may be changing: Fox News declared Biden the winner on November 7th, and, on Friday, Inside Music Media reported that Cumulus Media, an Atlanta-based company that owns four hundred and sixteen radio stations and employs a number of popular conservative and right-wing talk-radio hosts, had instructed its on-air talent to stop promoting the stolen-election narrative.) Trump’s followers and supporters retain their ideological and political beliefs, and are likely to organize and act accordingly; in many cases, moves to deplatform the President will only strengthen these commitments.

Still, the deplatforming of an American President marks a turn in the relationship between the tech industry and the public. It adds a new layer to the ongoing discourse about content moderation on social networks—a conversation which, especially in recent years, has been dominated by fruitless, misdirected, and disingenuous debates over free speech and censorship. In the United States, online speech is governed by Section 230 of the Communications Decency Act, a piece of legislation passed in 1996 that grants Internet companies immunity from liability for user-generated content. Most public argument about moderation elides the fact that Section 230 was intended to encourage tech companies to cull and restrict content. But moderation is complex and costly, and it is inherently political. Most companies have developed policies that are reactive rather than proactive. Many of the largest digital platforms have terms-of-service agreements that are constantly evolving and content policies that are enforced unevenly and in self-contradictory ways. Twitter and Facebook are especially infamous for their inconsistency. Even as Trump’s rhetoric has intensified—and even as his followers have engaged in increasingly alarming and violent behavior—the largest social networks have braided together explanations for keeping his accounts active.

There are no easy answers to questions of platform governance, and the political environment has generated conversations that are tangled, trapped, and circuitous—a ball of knots. Despite a bounty of rich and nuanced scholarship on the topic, recent discourse around Section 230—including at the governmental level—has focussed on culture-war priorities. In fact, the law interacts with many other issues, including the social costs of engagement-driven (and ad-supported) business models and the design and intentions of algorithmic recommendation systems. And there’s the matter of monopoly power: perhaps Trump’s social-media exile would be less important if the digital landscape weren’t dominated by a handful of corporations. (One might argue that Facebook’s huge scale is a reason why conspiratorial and extremist content has been able to spread so efficiently.) Finally, there’s the unique leverage that tech workers have when they choose to engage in collective action. Twitter’s decision to ban Trump came after hundreds of employees signed a letter calling on executives to act. Employees at Google, where some full-time and contract workers recently formed a so-called minority union, have been putting pressure on YouTube. The combination of monopoly power and worker power can have striking effects.

The movement to deplatform Trump highlights central, often-overlooked issues within the Section 230 debate, and offers a novel case study. It also raises more questions: What if the platforms had taken content moderation more seriously from their inception? What if they had operated under different business models, with different incentives? What if they had priorities other than scaling rapidly and monetizing engagement? What if the social-media and ad-tech industries had been regulated all this time, or Section 230 had been thoughtfully and meaningfully amended?

For years, social networks have justified Trump’s continued presence by citing context and newsworthiness; none have acknowledged outright that staying on an Administration’s good side, and so remaining largely unregulated, is politically and financially advantageous. In advance of a Biden Administration and a Democratic-controlled Congress, companies such as Facebook, Google, and Amazon may be recalibrating and moving with the political current. In a certain light, the revoking of Trump’s access might be seen as an indirect form of lobbying—a way to curry favor with the incoming Administration. (On Monday, Google, Facebook, and Microsoft announced freezes on political spending, temporarily suspending donations to all political-action committees—a move that some have interpreted as an attempt at bipartisan appeasement.) And there is, of course, a business case for deplatforming Trump. Discursive political division has proven lucrative for social networks, but active political instability usually makes for a hostile business environment. Other corporations, such as Deutsche Bank and the P.G.A. of America, have also cut ties with the President.

In the past, Facebook has shut down accounts belonging to military leaders in Myanmar, as well as prominent American extremists. Twitter has purged hundreds of thousands of accounts believed to be tied to ISIS and tens of thousands of accounts linked to Chinese and Saudi Arabian state-backed disinformation campaigns. Amazon Web Services has unplugged WikiLeaks and a sub-domain of the unmoderated social network Gab. (A.W.S. does not host the main Gab Web site.) What is surprising about last week’s bans and revocations is not that such a small number of companies has power that is vast, concentrated, and swiftly deployed. Instead, what’s notable is that, for the first time, and in concert, tech companies chose to use this power against one of the most important people in the world.

The Web’s system of endless, decontextualized links and hypertext has often been seen as inherently conspiratorial. “The internet was made for conspiracy theory,” the anthropologist Kathleen Stewart wrote, in 1999. “One thing leads to another, always another link leading you deeper into no thing and no place.” Last week’s actions did not constitute a conspiracy, but they did illuminate a network: the one operating levels below hypertext and content—not the papers, pins, or red string, but the walls of the basement, the bulletin board. To cut a politician and affiliated organizations off from payment processors, Web-hosting services, and e-mail providers is to halt them at the level of infrastructure. It is a different game than the moderation of user-generated content, and raises different questions. Advocates for net neutrality have long argued that Internet service providers should not determine who their users are, within the bounds of legality; in this view, Internet service providers are akin to public utilities. For Trump, arguably, it’s as though the electric company has permanently turned off the lights.

Ejected from today’s major platforms, Trump’s followers may create their own social networks and build their own infrastructure services; downloads have surged for apps popular among conservatives, such as the social networks MeWe and CloutHub. They may move to existing messaging apps that are harder to monitor: in the past week, Signal, the end-to-end encrypted chat service, was the most-downloaded program in Apple’s app store in multiple countries, in large part owing to changes in WhatsApp’s privacy policy and an endorsement from Elon Musk. A large-scale movement toward encrypted-communications channels would further complicate the conversation about content moderation; it would almost certainly be used to justify a crackdown on technologies with strong encryption, either by the government or by private companies. Signal, too, is distributed through app stores run by Apple and Google; its app runs on Amazon Web Services. We may see the emergence of new business structures for communications and media platforms, or an increase in collective actions taken by tech workers. In all likelihood, this past weekend was a turning point not only for Trump and Trumpism, but for the American technology industry.

At the crux of any conspiracy theory is the desire for order, meaning, and control. It’s nice to think that someone has a plan. In reality, there are only ever new contingencies and uncertainties. Trump’s tweets, stored in online databases such as the Trump Twitter Archive—and, soon, the United States National Archives—are now crystallized in the realm of historical artifact; they will be cited and analyzed for decades to come. Yet the material and long-term damage caused by his entanglements with digital networks and subcultures will continue to mount. A significant portion of the electorate still believes that November’s election results were illegitimate. American extremism is on the rise. Six people present at last week’s mob action are now dead. We are closing in on a year of needless, mass death due to the coronavirus pandemic, a threat that hundreds of thousands of Americans believe is overblown or a hoax. The stakes have always been high; finally, the tech industry’s most powerful stakeholders seem willing to act. They also seem happy to hedge: Trump’s suspensions from Facebook and YouTube are still only temporary.

What does this add up to, and where is it going? What will change, and for whom? Platforms will likely continue to engage in a clumsy balancing act, hoping to reconcile the consequences of toxic, dangerous content with its profitability; the events of this month could also lead to an expansion of the domestic surveillance state, in which tech companies have long played a role. On the other hand, crises are often seen as opportunities in Silicon Valley, and the events of the past week could offer a chance to reimagine platform governance, net neutrality, monopoly power, privatization, and corporate ownership. The knots may be unravelling. January 6th was a narrative pivot, an unanticipated turn. Across apps and platforms, more turns are in the works: for Inauguration Day, at state houses, in the streets. What comes next will depend on who picks up the thread.

