
Britain is sleepwalking into censorship and we’re running out of time to stop it

The revised Online Safety Bill still incentivises Big Tech to turn its algorithms against legal speech

Culture Secretary Michelle Donelan has not gone far enough in amending the Online Safety Bill. If it is not changed further, the British press will face ever more extensive censorship via bots Credit: Kin Cheung/AP

When Rishi Sunak stood for leader of the Conservative Party, one of his pledges was to defend free speech. It certainly was in peril: Nadine Dorries was planning a Bill that would give her, as culture secretary, powers to censor anything deemed “legal, but harmful”. This dangerously nebulous phrase could mean anything: she offered, as an example, a Jimmy Carr joke. Social media firms would be instructed to let their censorship algorithms rip, and pay huge fines if anything slipped through. Britain would end up with one of the most draconian regimes in the free world.

Earlier this week, we heard that the Online Safety Bill was to be amended and the threat lifted. But this turned out to be a false alarm. The legal-but-harmful rule remains, albeit intended for the under-18s. The problem, of course, is that cyberspace does not distinguish between children and adults. Nor can it, if anonymity and privacy are to be preserved.

Michelle Donelan, the 10th culture secretary in 10 years, has told Silicon Valley that she’ll come after them for “billions of pounds” if children find the wrong content. So the obvious thing for them to do is censor for everyone.

This takes us back to where we started: with Britain sleepwalking into a system far harsher than anything that the EU or the US is proposing. Worse, this isn’t even intentional. It’s happening because ministers have not really thought through the implications, and are rushing, in a panic, to clean up the internet for the young. Big Tech has no friends and a great many enemies, but a censorship law that seeks to bring them to heel will end up imposing profound consequences that will reshape our public debate.

A robot, for example, will already have read this column and sought to ascertain if my argument justifies the headline. If not, the article will be punished, pushed far down the search rankings. This is a standard Google procedure, intended to improve search results.

But how, I asked a tech chief recently, does an algorithm judge the quality of an argument? As an editor, I’ve learned that first-class sub-editors are among the rarest and most valuable people in the industry. Can their craft really be judged by a bot? I have my doubts. But we’ll never know, as the process is invisible.

And this is the problem. Bots make mistakes all the time – but no one knows because their decisions are never made public. The Online Safety Bill could end up with all kinds of articles targeted, but we’d never know or be told. YouTube will say that such secrecy is vital. It has been taking down thousands of Russian propaganda videos recently, for example. Must it really inform the Kremlin every time it does?

But then come the other casualties. The Spectator recently broadcast an interview with a Harvard fellow about the Ukraine war. It was removed (by a TikTok bot) and categorised as lacking in “integrity and authenticity”. We appealed. We lost. No explanation.

Part of an editor’s job, now, is to battle with these bots. I wish I could say that my magazine is unsullied by the digital world, but we rely almost entirely on digital channels to find new readers for the print edition.

A third of The Spectator’s traffic comes from search engines, a quarter from social media. They are the new newsstands where people pick us up, browse, see if they’d like to buy. Every publication, even the world’s oldest weekly, now sails in these waters. And we do so against a swarm of bots, which the UK Government is about to make more powerful still.

The idea that all this is resolved with age restrictions is naive. Surely, ministers say, we age-restrict films, magazines and video games? Yes, but for the past few centuries we have not censored the written word (with a handful of quite famous exceptions). Why start now? Where might it lead?

The anonymity of internet use is an important principle of privacy. Keystrokes and browsing history can help companies guess a user’s age, but it’s only a guess. If they’re fined billions for getting it wrong, why take the risk? Why not just treat everyone as a child, without telling them?

Ms Donelan argues that Big Tech would not dare because they make such good money from news. If only. Facebook, Google et al have more power than all the press barons put together: their bots curate the news feeds of billions. But they never asked for such power – and now see it as a liability.

It brings lots of headaches, regulatory risk – and hardly any cash. YouTube makes big money selling ads against things like Baby Shark videos (its all-time number one). Fewer than 2 per cent of Google searches are for news stories, and adverts are seldom sold against them.

If news were hugely profitable, its independence might be more vigorously defended. But in my own conversations with Big Tech chiefs, they’re usually quite candid. News, for them, is a marginal slice of their business. And if the UK is about to be the most dangerous place in the free world to publish against-the-grain opinions, that gives them a huge incentive to tell those bots to take a more risk-averse approach.

The role of algorithms in our everyday lives is far greater than is appreciated in Westminster. The rise of machine learning means that even Google doesn’t quite know why its bots make the decisions they do. But such decisions are curating the digital world, and the news as it is seen by billions. Elon Musk, Twitter’s new owner, is quite explicit about the politicisation of the system he inherited. “The obvious reality,” he said recently, is that Twitter “has interfered in elections”. The question, he says, is what to do about it.

Another obvious reality now is that more children get news from TikTok than from the BBC, more adults get their news from Facebook than from any newspaper – and all of it is arranged by algorithms. The Government has huge power to distort these algorithms, by threatening such insanely large fines – but is it sure what the effects will be? How will they be monitored?

The best option would be to outlaw the genuine nasties, and refrain from censoring the written word – as successive governments have done for centuries. This is a hugely complicated problem. But there is still time for Mr Sunak to do the right thing.
