How to Stop Misinformation Before It Gets Shared

It's never been easier for mistruths to go viral, and content moderation is inadequate. What social media needs is some old-fashioned friction. 
Photo-Illustration: Sam Whitney; Getty Images

In July of 1588, the Spanish armada’s hundred-plus ships and 26,000 men set sail for England, to overthrow the Protestant Queen Elizabeth I and restore Catholic rule. After two months at sea, the fleet fought English forces off the coast of France in a series of fierce battles. News of the outcome spread across Europe, and many learned that the armada had, as expected, won the day and crushed the English fleet. Catholics celebrated in the streets, and Protestants feared sanction as geopolitics whirred to life.

Many days later, the opposite news arrived: The English fleet had won a decisive victory and crippled the Spanish. The tattered remains of the great armada were long in retreat by the time millions of Europeans learned that they had been fooled by a viral rumor.

It’s tempting to think that viral misinformation is a modern invention of social media and malicious actors. In fact, “fake news” is as old as news itself. For centuries, falsehoods have been shared widely as facts and stood uncorrected for months or years, even becoming accepted truth. Many of these stories were consequence-free, such as the widely believed report in 1569 of a Leicestershire woman who was “confirmed” to have given birth to a cat. Others led to tragedy and horror, such as viral rumors that the Black Plague was caused by Jews poisoning wells, which led to executions and violent pogroms throughout Europe.

Regardless of the era, rumors and falsehoods spread via two basic steps: discovery, then amplification of unverified knowledge. What’s different now is that today’s communication platforms have fundamentally transformed the way information flows, propelling viral rumors exponentially faster and farther than ever. Widespread belief in certain types of viral rumors poses a threat to institutions that we rely on, including democracy itself. An urgent question has emerged: How can we mitigate the kind of high-consequence misinformation that’s increasingly plaguing our communication ecosystem? Friction, we believe, is the answer.


A Modern History of Virality

Before the printing press, viral rumors spread through word-of-mouth chatter in the market square or pub. Still, businesspeople, rulers, and religious authorities required trustworthy knowledge, and they would spend enormous sums on timely, accurate news.

For those in their employ, the earliest proto-journalists, sourcing truth was a constant struggle. Newsmen added “friction” to the process of sharing knowledge, painstakingly validating stories through second- and third-hand sources before they published, lest they lose their reputations and sponsors.

This tension between speed and accuracy came to define early news reporting. News that was both timely and accurate was incredibly expensive, requiring networks of trusted couriers and messengers known as posts. We can still see this holdover in the word “Post” in many newspaper names today.

Early journalists were far from perfect, and many of the first newspapers competed for attention by aggressively peddling false, outrageous, or nakedly partisan stories, gruesome crime coverage in particular. But during the 19th century, some papers slowly matured and professionalized, building reputations for publishing factual narratives, and engendering trust as “objective” news sources.

Through fits and starts, this patchwork system of news-gathering and distribution became the dominant way we empirically verify information before amplifying it. We learned to trust journalists, largely because they fact-check rumors.

The information environment transformed yet again with the emergence of radio, and then television. Although these technologies allowed for unprecedented reach, they still relied on human gatekeepers. Each of these inventions created a new means of determining consensus that centered narrow sources of mostly verified yet selective knowledge. The public, a captive audience, was largely exposed to the same “objective” information.

There were, however, significant downsides: Reporting on powerful authorities, companies, and institutions was often uncritical, particularly if it might cause a conflict with the financial interests of the channel or newspaper. Yet most professional reporters generally adhered to journalistic standards, and the proliferation of blatantly false viral rumors was largely kept to a minimum.

Frictionless Free-for-All

In 10 short years, the internet—and social media in particular—blew the system of journalistic friction to pieces.

First, the internet transformed publishing. In the mid-'90s, blogging platforms enabled anyone to publish whatever they wanted, whenever they wanted, without the critical eye of a journalistic colleague. Publishing was now a democratized, zero-cost endeavor.

When the social networks emerged, distribution and reach were also transformed. Within a decade, hundreds of millions of people found themselves perpetually online in new, targetable, frictionless communities. Groups became digital gathering places for ordinary people, and not gatekeepers, to share information. The single-click Share button turned people into active participants in the distribution and amplification of information. Newsfeeds pushed out bite-size posts to friends, and friends of friends. Curation algorithms used likes and favorites to decide what to showcase, and recommendation engines boosted engaging content even further.

Some viral rumors today obtain greater reach than traditional media broadcasts.

Reduced friction has enabled important new voices to be heard, but it has also let high-impact viral misinformation spread rapidly. The 2020 election, for example, saw far-fetched false narratives about stolen elections and CIA supercomputers go viral within hyperpartisan echo chambers. QAnon grew from a small online conspiracy into a decentralized online cult boasting millions of members, who energetically spread baseless theories about corporations the community alleged were involved in child trafficking. During the Covid pandemic, demonstrably false videos like "Plandemic," which espoused numerous lies and conspiracy theories, reached audiences of millions before platforms decided to take them down.

As the US and other countries struggle with crises of democracy, public health, and other outgrowths of the information environment, it’s clear that current answers aren’t working. Attempts to stifle viral rumors retroactively through content moderation and takedowns are inadequate. And common scapegoats, like bots and algorithms, commandeer much of the attention in debates over solutions. But the reality is more nuanced: Bots do spread misinformation, but most platforms have since reined in the impact of automation. Recommendation algorithms do influence consumption, but they are not the only dynamic in play.

It’s time for proactive solutions; it’s time to reintroduce the sort of friction that can assist with collective sense-making.

Lies Are Fast. Truth Is Slow

Seneca the Younger apocryphally wrote: “Time discovers truth,” an idiom we still hear today as “time will tell.” Time is a critical component in determining accuracy, allowing more opportunities to filter, assess, and confirm.

Because information is now able to leap between human minds, friction-free, we may need to rethink some of the core “truths” of the modern social web. Chief among these is the paradigm that breaking information must be posted and spread instantaneously. We are operating in an environment in which high-velocity information is a significant driver in the spread of misinformation, falsehoods, and propaganda, particularly because of how it intersects with virality. Researchers from MIT have found that false news spreads further, and faster, than real news.

As we reimagine a more trustworthy social web, we can rethink the relationship between velocity and virality. Low-velocity content can still go viral: a good book we share with our friends, say, or a word-of-mouth recommendation for a film. One way to do this is to have platforms temporarily throttle content that is spreading unusually fast or unusually far, giving fact-checkers time to assess it. This need not apply to all viral content; it could be tailored to the topics most likely to cause harm: politics, health, or breaking news. It’s a model other industries already use. Wall Street exchanges, for example, employ circuit breakers that pause trading during extreme swings, so the public has time to digest emerging information before stocks go haywire.
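
To make the idea concrete, here is a minimal sketch, in Python, of how such a "circuit breaker" for viral content might work. The thresholds, topic labels, and function names are hypothetical, not drawn from any platform's actual systems; a real implementation would tune them empirically.

```python
import time
from collections import defaultdict, deque

# Hypothetical parameters; a real platform would tune these empirically.
SENSITIVE_TOPICS = {"politics", "health", "breaking_news"}
SHARES_PER_HOUR_LIMIT = 5_000
WINDOW_SECONDS = 3_600

share_log = defaultdict(deque)   # post_id -> timestamps of recent shares
held_for_review = set()          # posts paused pending fact-check

def record_share(post_id: str, topic: str, now: float | None = None) -> bool:
    """Log a share; return False if further amplification should pause."""
    now = time.time() if now is None else now
    window = share_log[post_id]
    window.append(now)
    # Keep only shares inside the sliding one-hour window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    # Trip the breaker only for harm-prone topics spreading unusually fast.
    if topic in SENSITIVE_TOPICS and len(window) > SHARES_PER_HOUR_LIMIT:
        held_for_review.add(post_id)
    return post_id not in held_for_review

def release_after_review(post_id: str) -> None:
    """Once fact-checkers clear the post, normal distribution resumes."""
    held_for_review.discard(post_id)
```

The point of the design is not to block anything outright: shares are still recorded, but algorithmic boosting pauses for a fast-moving post on a sensitive topic until a human review clears it.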

Give Users a Nudge

Stopping conflagrations of high-impact misinformation before they happen shrinks the supply of poor information, and avoids the difficult blowback that comes from heavy-handed content moderation.

A helpful and practical metaphor can be taken from the Nobel Prize-winning work of Daniel Kahneman, whose research described two key “systems” in our mental operations: System 1, the fast, instinctive, and emotional mode of thought, and System 2, the slower, more deliberative, and more logical way of thinking and consuming information. System 1 is prone to the biases and mental shortcuts that allow us to make snap decisions, while System 2 helps us with complex and nuanced problems.

Both systems are helpful in our daily lives, but System 1 thrives within digital architecture that prioritizes speed and impulsivity. From clickbait to emotionally arresting, outrage-inducing news, the social web is now built to capitalize on System 1, tilting us all towards the reactive, automatic, and unconscious.

We can use this as a frame for thinking through design changes and frictions that might push people towards System 2, away from emotional shares and towards pro-social and reflective ones. Some of this work has been supported by the research of Nicholas Christakis at Yale, as well as by research showing that other design frictions improve cognitive decision-making. Indeed, many of these nudges are beginning to be used by tech companies, from interstitial warnings on misleading or false content (famously placed over Trump’s tweets) to prompts alerting people that certain information has been flagged in the past, or that a comment is likely to be read as toxic.

Various interventions at Instagram, Twitter, TikTok, and elsewhere have shown that such nudges might fundamentally improve the type of content we see and respond to on the internet. These include prompts asking people if they’d like to read an article before retweeting it, suggestions that a domain is low-quality, or notes that a word in a comment is generally unproductive for discourse, along with an offer to revise it. Open design libraries of testable interventions would go far in encouraging adoption across platforms.
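
As a rough illustration, here is a short Python sketch of what the logic behind such a pre-share nudge might look like. The signals, domain list, and wording are invented for the example and do not describe any platform's real implementation.

```python
from dataclasses import dataclass

# Hypothetical signals a platform might already have about a share attempt.
@dataclass
class ShareAttempt:
    url_domain: str
    user_opened_link: bool
    seconds_on_page: int

LOW_QUALITY_DOMAINS = {"example-clickbait.net"}  # illustrative only

def nudge_message(attempt: ShareAttempt) -> str | None:
    """Return a gentle prompt if the share looks reflexive, else None."""
    if not attempt.user_opened_link:
        return "You haven't opened this article yet. Read it before sharing?"
    if attempt.seconds_on_page < 10:
        return "That was quick. Want another look before you share?"
    if attempt.url_domain in LOW_QUALITY_DOMAINS:
        return "Reviewers have rated this site as low quality. Share anyway?"
    return None  # No friction needed; the share goes through immediately.

# Example: a user shares a link they never opened.
print(nudge_message(ShareAttempt("example-news.org", False, 0)))
```

The key design choice is that the nudge only adds a moment of reflection; the user can still share whatever they like once they have seen the prompt.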

Interstitials and warnings can be helpful for reducing the spread of disinformation.

Speeding Up Verification

New tools also show promise in speeding up the rate of verification itself—meeting high-speed mis- and disinformation as it spreads. Several recent studies have yielded encouraging new fact-checking methods, for instance, using the crowd to verify or debunk claims far faster than professional fact-checkers, with similar levels of accuracy.

Crowdsourcing ratings from a pool of 1,128 users, researchers found that groups as small as 10 individuals could accurately determine whether or not an article was false, performing about as well as professional fact-checkers. Supplemented by algorithms, a system like this could be trained to identify fake news at the speed and scale at which it spreads.
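
To show the shape of such a system, here is a minimal Python sketch of aggregating a small crowd's ratings into a provisional verdict. The ratings, threshold, and labels are illustrative placeholders, not figures from the study.

```python
from statistics import mean

# Hypothetical data: each article gets accuracy ratings (1 = false, 7 = true)
# from a small, politically balanced crowd of roughly 10 raters.
crowd_ratings = {
    "article_a": [2, 1, 3, 2, 2, 1, 2, 3, 1, 2],
    "article_b": [6, 7, 5, 6, 6, 7, 6, 5, 7, 6],
}

FLAG_THRESHOLD = 3.5  # Below this mean rating, route the item to review.

def crowd_verdict(ratings: list[int]) -> str:
    """Aggregate a small crowd's ratings into a provisional label."""
    return "likely false" if mean(ratings) < FLAG_THRESHOLD else "likely accurate"

for article, ratings in crowd_ratings.items():
    print(article, crowd_verdict(ratings))

# A production system would balance raters across partisan lines, weight them
# by past reliability, and compare its labels against professional
# fact-checker verdicts before acting on them.
```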

Furthermore, open-sourcing these methods of verification so they are auditable and transparent enough to be easily understood might help ease claims of bias and censorship. An early attempt at this can be seen in Twitter’s Birdwatch, which leverages the community to flag misleading tweets; the system is new and imperfect, and there are clearly ways it can be gamed (a problem for any verification system), but it’s an important first attempt.

But Who Determines Truth?

Each of these three interventions requires someone, somewhere to make a determination as to what is true or what is of high quality. This “baseline” truth is a critical piece of the puzzle, but it's an increasingly fraught idea to address.

Controlling the narrative will always be contentious, and any system that attempts to fix disinformation will be attacked for partisan bias. Indeed, extreme partisanship is directly associated with sharing fake news. Social media seems to be especially effective at drawing partisan battle lines around more and more issues, even if the issues are not inherently partisan.

But this is a new manifestation of an age-old problem: How do we verify knowledge? And how might we do it quickly enough to be reliable? Who do we trust in society to establish truth? Here we are wading into tricky epistemological territory, but one with precedent.

Let’s look at other services we regularly use to verify facts—imperfect but powerful systems we have come to rely upon. Google and Wikipedia have, writ large, built reputations on effectively helping people find accurate information. We generally trust them, because they have systems of verification and sourcing embedded in their design.

The frictionless design of the current social web has undermined the necessary precondition to democratic functioning: shared truths.

Implicit in our three recommendations is a trust and faith in the basic journalistic process of verification. Journalism is far from perfect. The New York Times does get it wrong sometimes, just as all media entities struggle with the selective interpretation of events and with editorial influence over the tone and tenor of stories. But validated information is critical infrastructure, and it has been undermined by social media. Social posts are not news articles, even if they’ve come to resemble them in our news feeds. Verifying new information is a core part of any functioning democracy, and we need to recreate the friction that was previously provided by the journalistic process.

On the horizon are new technologies that will enable both decentralization and end-to-end encryption of social media—immune to any moderation. As these new tools reach scale, viral rumors will become even harder to debunk, and the supply problem of mis- and disinformation will only worsen. We should address how these tools might be designed to rebalance the flow of accurate information now, before we lose our capacity to do so.

This responsibility lands at least partially on our shoulders as individuals. We must be vigilant about identifying inaccuracies, and about finding established, reputable sources of knowledge—both academic and journalistic. Too much institutional skepticism is toxic for our shared reality. We can redouble our efforts to find ways of carefully, and compassionately, sourcing truth together. But platforms can help, and must help, tilt the design of our shared spaces towards verifiable facts.


