

Should Social Media Be Allowed To Profit From Terrorism And Hate Speech?

This article is more than 5 years old.


Social media companies portray themselves as benevolent public services bringing the world together. Yet under that veneer of public good lie Orwellian surveillance machines that silently watch our every waking moment, harvesting and mining our every action, down to our most intimate moments, and relentlessly monetizing it all. Most importantly, their ad-supported ecosystems do not distinguish between profiting from legitimate, legal activity and profiting from horrific, illegal content. Social media platforms earn money from terrorist propaganda and recruiting, human trafficking, genocide, hate speech, sexism, racism, suicide, bullying and every other form of unimaginably horrific activity. Should they be forced to hand back that money rather than continue to profit from the worst of human society?

In the print and broadcast era, advertisers paid to have their messages shown alongside professionally vetted content. Newspapers, magazines, radio and television programs and most other outlets had professional editorial staffs that carefully reviewed every piece of content they published, ensuring that advertisements appeared alongside material consistent with national laws and the sensibilities of those advertisers. Moreover, advertisers could choose to run their ads only in mainstream publications or decide whether they wanted to appear in more controversial outlets.

In stark contrast, the social media era sees advertisers placing their messages alongside content created by ordinary people from across the world without any human review or direct editorial controls of any kind. An advertiser could see their ad appear alongside a post featuring cute kittens just as easily as alongside an ISIS recruitment poster.

The problem is that the social platform running those ads makes just as much money from the ad beside the kittens as from the ad beside the terrorism imagery. As long as advertisers don't threaten to reduce their spending and government regulators don't threaten to step in with new laws, the social media company has no real incentive to better control which content its ads appear alongside.

In cases where advertisers have stepped in with boycotts and spending reductions, platforms have moved swiftly to take at least basic steps, but these have often been more cosmetic than structural.

When a child bride is auctioned off on Facebook, the company profits from human trafficking. When endangered animals or their parts are sold through a Facebook page, the company earns money from pushing the natural world toward extinction. When terrorists promote themselves and recruit on Facebook, the company earns money from helping terrorist organizations launch attacks, expand their ranks and kill innocent people. When modern genocide is committed, Facebook, much like an arms dealer, profits from it. When a young child is bullied or commits suicide, the company profits from all of the posts that bullied the child and all of the posts about her death. When racist, sexist, anti-Semitic and other hate speech is posted, the company once again earns a profit from the ads shown alongside it.

Every piece of the most horrific and harmful content posted to social media directly benefits the platforms monetarily by earning them advertising revenue.

In short, social media companies profit from terrorism, hate speech and all other unimaginably vile and harmful speech. Every hateful or violent post means additional profit for the companies.

This creates a very real conflict of interest: even as the companies argue that they are working hard to remove toxic speech from their platforms, they directly benefit monetarily from that speech.

The companies have never provided public breakdowns of how much of their annual revenue comes from advertisements shown alongside questionable or toxic content, or alongside material that was later removed. Yet even if the percentage is low, it remains a moral quandary that American companies directly profit from terrorism and hate speech.

Might the social media companies consider automatically refunding advertisers for all ads shown alongside content that is later removed for violating the site's terms of service?

I asked Facebook this question last month with respect to the child bride auction conducted on its platform. When asked whether it had ever considered refunding the advertising revenue it earned from content like this, the company declined to comment. When asked whether it would at the very least delete the advertising selectors and profile data it gained from users interacting with that content, data that helps it sell ads more effectively in the future and thus benefits it indirectly, the company again declined to comment.

Putting this all together, should social media companies be permitted to continue profiting directly from everything from terrorism and human trafficking to hate speech and violence? Or should they be forced to take a moral stand by refunding all revenue earned from content they remove? In the end, if the companies didn't make money from toxic content, or if they faced greater advertiser backlash or a stronger threat of government regulation, perhaps they would finally take the issue seriously, invest in protecting their platforms from misuse and stop profiting from society's hate.