Content moderators work at a Facebook office in Austin, Texas. Photograph: The Washington Post/Getty Images

Sorry, but I’ve lost my faith in tech evangelism

John Naughton
There are too many worrying developments in tech – traumatised moderators, AI bias, facial recognition – to be anything but pessimistic about the future

For my sins, I get invited to give a few public lectures every year. Mostly, the topic on which I’m asked to speak is the implications for democracy of digital technology as it has been exploited by a number of giant US corporations. My general argument is that those implications are not good, and I try to explain why I think this is the case. When I’ve finished, there is usually some polite applause before the Q&A begins. And always one particular question comes up. “Why are you so pessimistic?”

The interesting thing about that is the way it reveals as much about the questioner as it does about the lecturer. All I have done in my talk, after all, is to lay out the grounds for concern about what networked technology is doing to our democracies. Mostly, my audiences recognise those grounds as genuine – indeed as things about which they themselves have been fretting. So if someone regards a critical examination of these issues as “pessimistic” then it suggests that they have subconsciously imbibed the positive narrative of tech evangelism.

An ideology is what determines how you think even when you don’t know you’re thinking. Tech evangelism is an example. And one of the functions of an ideology is to stop us asking awkward questions. Last week Vice News carried another horrifying story about the dark underbelly of social media. A number of Facebook moderators – those who spot and delete unspeakable content uploaded to the platform – are suing the company and one of its subcontractors in an Irish court, saying they suffered “psychological trauma” as a result of poor working conditions and a lack of proper training to prepare them for viewing some of the most horrific content seen anywhere online. “My first day on the job,” one of them, Sean Burke, reported, “I witnessed someone being beaten to death with a plank of wood with nails in it.” A few days later he “started seeing actual child porn”.

Facebook employs thousands of people like Burke worldwide, generally using subcontractors. All the evidence we have suggests that the work is psychologically damaging and often traumatic. The soothing tech narrative is that Facebook is spending all this money to ensure that our social-media feeds are clean and unperturbing. So it’s an example of corporate social responsibility. The question that is never asked is: why does Facebook allow anybody to post anything they choose – no matter how grotesque – on its platforms, when it has total control of those platforms? You know the answer: it involves growth and revenues, and the traumatisation of employees is just an unfortunate byproduct of its core business. They’re collateral damage.

Or take machine learning, the tech obsession du jour. Of late, engineers have discovered that “bias” is a big problem with that technology. Actually, it’s just the latest manifestation of GIGO – garbage in, garbage out – except now it’s BIBO: bias in, bias out. And there’s a great deal of sanctimonious huffing and puffing in the industry about it, accompanied by trumpeted determination to “fix” it. The trouble is that, as Julia Powles and Helen Nissenbaum pointed out in a recent scorching paper, “addressing bias as a computational problem obscures its root causes. Bias is a social problem, and seeking to solve it within the logic of automation is always going to be inadequate.”
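To see why “bias in, bias out” is more than a slogan, here is a minimal, purely illustrative sketch (the groups, numbers and toy “hiring model” are invented for the example, not drawn from any real system): a model trained on historically skewed decisions simply learns the skew and hands it back.

```python
# Illustrative only: a toy "bias in, bias out" example with made-up data.

# Historical "training data": past hiring decisions that favoured group A.
# All candidates here are equally qualified, but group B was hired far less often.
history = (
    [{"group": "A", "hired": True} for _ in range(90)]
    + [{"group": "A", "hired": False} for _ in range(10)]
    + [{"group": "B", "hired": True} for _ in range(40)]
    + [{"group": "B", "hired": False} for _ in range(60)]
)

# A naive "model": learn the historical hire rate for each group and reuse it.
def train(rows):
    rates = {}
    for group in {r["group"] for r in rows}:
        group_rows = [r for r in rows if r["group"] == group]
        rates[group] = sum(r["hired"] for r in group_rows) / len(group_rows)
    return rates

model = train(history)

# The model faithfully reproduces the skew it was trained on:
# group A is "predicted" hireable about 90% of the time, group B about 40%.
for group, rate in sorted(model.items()):
    print(f"group {group}: predicted hire rate {rate:.0%}")
```

The code can be made as clean and as sophisticated as you like; the skew lives in the data it is given, which is exactly Powles and Nissenbaum’s point that bias is a social problem rather than a computational one.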

But this idea – that bias is a problem for which there is no technological fix – is anathema to the tech industry, because it threatens to undermine the deterministic narrative that AI will be everywhere Real Soon Now and the rest of us will just have to get used to it.

Worse still (for the tech companies), it might give someone the idea that maybe some kinds of tech should actually be banned because they are societally harmful. Take facial recognition technology, for example. We already know that it is poor at recognising members of some ethnic groups, and researchers are trying to make it more inclusive. But in doing so they still implicitly accept that the technology itself is acceptable.

That tacit acceptance is itself a way of buying into the tech-deterministic narrative. The question we should be asking – as the legal scholar Frank Pasquale says – is whether some of these technologies should be outlawed, or at least licensed only for socially productive uses, much as radioactive isotopes are licensed for medical purposes. And as regards the really dangerous applications of this stuff – for example face-classifying AI, which is already being explored (and, it seems, deployed in China) as a way of inferring sexual orientation, criminal tendencies and so on from images of faces alone – shouldn’t we be asking whether this kind of research should be allowed at all? And if anyone regards that as a pessimistic thought, then can I respectfully suggest that maybe they haven’t been paying attention?

What I’ve been reading

Listen up, libertarians
Capitalism needs the state more than the state needs it – a terrific essay on Aeon by a great economist, Dani Rodrik. It should be required reading in Silicon Valley.

His defects are manifest
Big tech’s big defector: the title of an interesting New Yorker profile of Roger McNamee, an early investor in Facebook who eventually saw the light, and is now repenting.

More haste, less speed
Speed reading is for skimmers, slow reading is for scholars, according to David Handel on Medium.
