On April 27, 2021, Tristan Harris, a tech critic and former Google employee, warned at a U.S. Senate hearing that we face a choice between installing a “Chinese ‘Orwellian’ brain implant…with authoritarian controls, censorship and mass behavior modification” and adopting “clean” Western technology. Harris’s testimony highlights a growing trend of analysts decrying the decline of the West because of social media and foreign influence.

Certainly, there is mounting evidence pointing to the Chinese Communist Party’s (CCP) attempts at narrative manipulation through censorship and propaganda. China has doubled down on foreign-targeted propaganda in recent years as part of its campaign to “tell China’s story well” and to expand China’s “international discourse power” against Western narratives. According to the Center for Responsive Politics, Chinese entities’ total spending in the United States increased from over $10 million in 2016 to nearly $64 million in 2020, with the biggest spender being the American division of the state-owned media outlet China Global Television Network. So-called wolf-warrior diplomats have adopted aggressive tactics against Western democracies, including amplifying disinformation and hurling verbal abuse on Twitter. Ultranationalists and enthusiastic followers of the CCP have also engaged — sometimes voluntarily — in information operations, ranging from spreading pro-China propaganda about COVID-19 or the Hong Kong protests on Twitter to buying media outlets in Taiwan to influence elections. Domestically, companies are under immense government pressure to impose censorship and surveillance on citizens, especially in times of crisis.

Nevertheless, fears of Chinese disinformation are often exaggerated by overblown assessments of the effects of China’s propaganda campaigns and by casually drawn attributions.

First, there is little evidence that China’s digitally mediated efforts at influencing public opinion overseas are actually working. Research warning about the scale of China’s external propaganda often acknowledges that its efficacy is mixed, if not counterproductive. An extensive overview of Chinese and Russian influence operations by Rand Corp. likewise found no conclusive evidence about the impact of foreign disinformation campaigns. The report further warns that inflated claims of Russian and Chinese activities may “have provided more strategic value to Moscow and Beijing than the direct effects of the manipulation,” owing to the controversy and fear that surrounded the influence campaigns themselves, regardless of whether they had any effect on changing minds or behavior.

Relatedly, influence operations are often difficult to attribute to state actors, as many have pointed out, and portraying the Chinese state and the CCP as an all-powerful monolithic entity simplifies the nuanced relationships between state and private actors and runs the risk of misjudging the motives, strategies, and outcomes of particular disinformation campaigns. For instance, a report from Taiwan exposed the heavy involvement of “Mission,” a notorious profit-driven content farm in Taiwan, in spreading pro-China clickbait articles and false information primarily for monetary gain. Similarly, many YouTubers, religious groups, and Facebook page administrators were found to share pro-state narratives voluntarily, for financial rather than political reasons. Even in operations where the backing of state actors is suspected, research often shows that implementation tends to be fragmented and decentralized, with much of the work outsourced to various private actors.

Second, while concerns over China’s growing influence are valid and worthy of interrogation, inflating such threats can lead to disproportionate measures that unduly target content, individuals, and entities of Chinese origin. Such measures might produce policies that are not only ineffective at improving informational security or strengthening the democratic process but that also infringe on the right to information and freedom of expression. Despite persistent doubts about the efficacy and attribution of the CCP’s digitally mediated influence operations, politicians, researchers, and pundits from across the political spectrum have advocated extreme responses, calling on social media platforms to ban the CCP once and for all and resorting to borderline racist and colonialist rhetoric against anything Chinese. Moreover, public policies and repeated narratives that single out individuals and entities based on nationality or country of origin will almost certainly fuel negative sentiment and harassment against communities of those origins, as shown by the recent surge of anti-Asian racism and hate crimes during the COVID-19 pandemic.

Third, policies that resort to blanket bans based on political stance or country of origin only strengthen the cyber sovereignty agenda of China and its allies, which has been used to solidify digital borders and justify surveillance and suppression. As Peter Pomerantsev argues regarding Russia, pushing an “information war” worldview in which any content can be deemed foreign interference risks reaffirming the Kremlin’s global policy goal of a “sovereign” internet where censorship is normalized. Moreover, it is imperative to point out that not all Chinese state-affiliated entities are uncritical servants of the CCP. It is true that all news media in China are de jure subject to CCP leadership. Rather than succumbing to total government control, however, many outspoken news organizations play a significant role in humanizing marginalized groups and muckraking government wrongdoing. During the early outbreak of the pandemic in China, for instance, the business magazines Caixin and Caijing, the local newspaper Beijing News, and Beijing Youth Daily all provided in-depth reporting on the crisis. Treating all Chinese entities indiscriminately as proxies of the Chinese state, or singling out individuals simply because they advocate more collaboration with China, risks further fearmongering, self-censorship, racial strife, and a misinformed public.

Fourth, democracies should focus on putting their own houses in order, paying as much attention to homegrown disinformation campaigns and domestic information environments as to foreign influence operations. Compared with the doubtful effects of Chinese attempts at influencing overseas public opinion or “undermining democracy,” domestic actors, who often have a wider range of resources, connections, and media coverage with which to exploit the loopholes of the democratic process, have thus far proved much more successful at executing effective influence operations, propagandizing the public, and fomenting hateful speech. The U.S. Department of Homeland Security named “domestic violent extremism” one of its top threats in its October 2020 report, citing extremist media and social media as potential factors leading to real-world violence.

Take the example of former President Trump, whose claims of electoral fraud and incitement on Twitter likely contributed to the Capitol Hill riots of January 6, 2021, leading computer scientist and disinformation scholar Kate Starbird to describe the day as “hashtags come to life.” Or the influence operation funded by America’s largest broadband companies, which faked the majority of comments submitted to the Federal Communications Commission in 2017 to create the illusion of popular support for repealing net neutrality protections. And let’s not forget QAnon, the online-born “crowdsourced conspiracy” whose followers participated in the January 6 siege and whose adherents have found their way into Congress, or Plandemic, a coordinated campaign to spread a “documentary” containing dangerous medical misinformation and other anti-vax content in the midst of the COVID-19 pandemic. An information and research environment that assigns blame and utmost attention to foreign actors, without reflecting on the trends and tactics of domestic agents, risks overlooking the potentially far more harmful and immediate consequences of disinformation.

Last but not least, discourses or agendas that seek to securitize disinformation (i.e., frame it as a national security threat) have led to censorship-enabling policies and legislation across the globe. Ruling parties in authoritarian and democratizing countries have capitalized on Western media’s justified concern over “fake news,” framing it as a threat to national security that in turn justifies extreme measures, including laws and regulations that criminalize the creation and dissemination of “fake news” or “rumors.” The securitization of misinformation and disinformation has led to increasingly illiberal policies, such as urging the private sector to police speech or even adopting China’s model of “significant monitoring and speech control.” If we have learned anything from studying authoritarian actors in depth, it is that their model of regulating information is the antithesis of good governance.

Also benefiting from the securitization framing are private companies that profit greatly from the militarization of domestic policing. Since the 9/11 attacks, the retail market for surveillance tools has grown from “nearly zero” in 2001 to over $90 billion in 2021. Reports by Citizen Lab consistently demonstrate how private intelligence contractors capitalize on legitimate safety and security concerns while neglecting (or choosing to remain willfully ignorant of) the human rights implications of such collusion between the state and the private sector. Securitizing disinformation and inflating the scale and efficacy of “foreign influence operations” may therefore further empower states and companies to legitimize mass surveillance.

So what exactly should researchers and policymakers do when it comes to the thorny issues of disinformation and foreign interference? First, it bears repeating that influence is not limited to the internet, and any assessment of a country’s soft power should include all other avenues of persuasion, be it through foreign direct investment, military collaboration, the sale of surveillance technology, or conventional diplomacy.

Second, researchers of disinformation should be as focused on discerning the actual effects of propaganda across the entire media ecosystem as they are on the number of clicks, tweets, and likes a campaign receives. Evidence of activity is not the same as evidence of impact. Focusing solely on content and online metrics, without defining what constitutes harm or contemplating how the digital ecosystem leads to real-world consequences, risks squandering both resources and viable long-term solutions. Certainly, platforms should remove deceptive activities that amplify misleading or dangerous statements, as they should any other content that violates a platform’s policies or leads to real-world harms, regardless of its origins or political stance. But any attempt to mitigate the harms of misinformation or deceptive influence operations — both foreign and domestic — must be proportionate to the threat. Hastily passed legislation based on moral panic and inconclusive evidence risks infringing on freedom of expression, access to information, and other rights the West professes to cherish and protect.

IMAGE: Photo by Kevin Frayer/Getty Images