Cory Doctorow Wants You to Know What Computers Can and Can’t Do

A conversation about the “mediocre monopolists” of Big Tech, the weirdness of crypto, and the real lessons of science fiction.
Illustration by Yoshi Sodeoka

I first spoke with Cory Doctorow two years ago. I was trying to get a handle on the sci-fi genre known as cyberpunk, most famously associated with the work of William Gibson. (It also served as the inspiration for a recent video game, Cyberpunk 2077, which had a famously tumultuous rollout.) Doctorow, who is often described as a post-cyberpunk writer, is both a theorist-practitioner of science fiction and a vigorous commentator on technology and policymaking; his answers to my questions were long, thoughtful, and full of examples. And so, after that first talk, I made plans to speak with him again, not for research purposes but as the basis for the interview below.

Doctorow, who is fifty-one, grew up in Toronto, the descendant of Jewish immigrants from what are now Poland, Russia, and Ukraine. Before becoming a novelist, he co-founded a free-software company, served as a co-editor of the blog Boing Boing, and spent several years working for the nonprofit Electronic Frontier Foundation. Our first conversation, in late 2020, took place just after he had published the novel “Attack Surface,” part of his Little Brother series; it dramatizes the moral conflict of cybersecurity insiders who try to strike a balance between keeping their jobs and following their consciences.

The second time we spoke, Doctorow told me that he had eight books in production. “I’m the kind of person who deals with anxiety by working instead of by being unable to work,” he explained, when I asked how he was handling the ongoing pandemic. Among those eight books were “Chokepoint Capitalism,” co-written with the law professor Rebecca Giblin and published this past September, and “Red Team Blues,” a novel set in the world of cryptocurrency, which will come out in April. In the course of two interviews, Doctorow discussed the right and wrong lessons that one can learn from science fiction, the real dangers of artificial intelligence, and the comeuppance of Big Tech, among other topics. Those conversations have been edited for length and clarity.

I wanted to talk to you about cyberpunk because you’ve written eloquently about its historical and cultural underpinnings. Has your conception of what the genre is and what it can be shifted over the years?

Certainly. I mean, my first encounters with it were short stories in Asimov’s and OMNI. I was born in 1971, so I was thirteen when “Neuromancer” came out, and it was just dazzling, right? I quite side with Gibson on this: he says that, although people called it dystopian, it was actually optimistic—because he was writing, in the mid-eighties, about worlds where there had been only limited nuclear exchanges and the human race still continued! I was involved with the anti-nuclear-proliferation movement from my earliest years—my parents were political organizers—and I was moderately convinced that there was a good chance we would all be radioactive ash by the time my eighteenth birthday rolled around.

I am identified with a group of writers who are loosely called post-cyberpunk. And I think one of our defining features is the idea that computers are dealt with as things in the world and not as metaphors. The writer who probably best epitomizes that shift is Neal Stephenson, who starts off very much as a techno-metaphorist—even though he has a background in the computer industry—and then becomes increasingly techno-realist in his approach, sometimes even excruciatingly so.

Do you think the genre has a new salience now that Big Tech companies are no longer commonly treated as innocuous engines of innovation?

The comeuppance of Big Tech has two major sides. There’s the side that says Facebook invented a mind-control ray to sell you fidget spinners, and then Robert Mercer stole it and made your uncle racist with it, and now we don’t have free will anymore because of Big Data. And those people, I think, are giving cyberpunk real salience, because that is a cyberpunk science-fiction plot, not a thing that happens in the world. Everyone who’s ever claimed to have a mind-control ray turned out to be a liar or deluded.

The other side is, Look at these completely ordinary mediocre monopolists, doing what monopolists have done since the days of the Dutch East India Company, with the same sociopathy, the same cheating, the same ruthlessness—we should do unto them as we did unto the Rockefellers and the Carnegies and so on. And that strain of techlash, I think, rightly views the cyberpunk motifs as fiction that has been mistaken for reality, the same way Elon Musk mistakes the fairy tales about unitary inventors—who, in their lab, create a faster-than-light machine or whatever—for a thing that actually happens in the world, as opposed to a kind of juvenile fantasy, and then declares himself to be Iron Man.

Cyberpunk was a radical literature. And, if you’re going to radicalize people, you have to engage with computers as they are, so that people understand that you’re not making up a fairy tale but reflecting their actual lived experience—things that can happen, do happen, and could be better.

In the eighties, in its metaphor stage, cyberpunk got people to realize how intimate technology had become in their lives. But you don’t think we need metaphors so much anymore?

I’ve been at this for long enough that I had to explain to people that I wasn’t speaking metaphorically when I said that they were headed for a moment in which there would be a computer in their body, and their body would be in a computer—by which I meant their car. And, if you remove the computer, the car ceases to be a car. And that they would have things like pacemakers and artificial pancreases, and just all manner of implants. I have a friend with Parkinson’s who now has a wire in his brain that’s controlled by a computer.

We think of the computer as a thing that sits on your desk and that you use to do your taxes. And then we think of it as a rectangle in your pocket that you use to distract yourself. Eventually, we’re just going to think of a computer as being, like, a physics, right? The rules by which we make infrastructure will be our computer capabilities and policies.

Bill Gibson was going to arcades in Vancouver and seeing kids thrust their chests at the video games while they pumped quarters into them, and he thought, What world are they trying to enter when they play these games? And he coined the term cyberspace. The thing that cyberspace gets us, as a metaphor, is the sense that our technology policy is going to be the framework in which our infrastructure, and thus our lives, emerge. And the scale of that is difficult for people to grasp.

Then there is the danger that people get lost in escapist thinking. “Oh, imagine what it would be like if people could upload their consciousness to computers,” et cetera.

I wouldn’t call that a danger of cyberpunk. I would call that a danger of the moment that we’re living in, which happens to include cyberpunk, which can be turned to reactionary or revolutionary ends. You see explicit, self-conscious cyberpunk motifs being worked into things like the June 12th uprising in Hong Kong—a radical, revolutionary, pro-democracy program. But you also see cyberpunk in the motifs of white nationalists.

Tell me more about how you see cyberpunk being used in a reactionary form.

Like, the cryptocurrency world, right? The whole “Let’s found an island nation backed with aircraft carriers that uses cryptocurrency to enact a program of radical individualism.” I mean, Jesus Christ, the Oculus founder is building a gulag on the border. You don’t get more reactionary than that.

If there were one thing that you wish more people would think about when it comes to where tech is going, what would that be?

When we design a computer that treats its user or owner as its adversary, we lay the groundwork for unimaginable acts of oppression and terror. Here’s an example: in 2005, it was revealed that Sony BMG had shipped millions of audio CDs carrying a rootkit that, when you put one in your computer’s CD drive, silently patched your computer’s kernel so that it could no longer see programs whose names began with “$sys$”—that little string of characters. And then they installed a program whose name started with that string and which broke CD-ripping, so you could never rip a CD again. They didn’t want you to uninstall that program, which is why they modified your kernel in the first place. This was radioactively illegal. They infected between two and three hundred thousand computers. They settled with the F.T.C. for a giant amount of money. And every virus writer in the world immediately prepended “$sys$” to their viruses’ filenames, making them invisible to your computer and its antivirus software.
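
To make the cloaking trick concrete, here is a toy sketch in Python, purely illustrative and nothing like the kernel patch Sony actually shipped, of how a filename filter hides anything bearing the magic prefix, and why any malware that adopted the prefix got cloaked for free:

```python
# Toy illustration (not Sony's code): a "hooked" directory listing that
# silently drops any entry whose name starts with the magic prefix.
import os

MAGIC_PREFIX = "$sys$"  # the string the Sony BMG rootkit keyed on

def hooked_listdir(path="."):
    """Stand-in for a patched kernel call: cloaked names never appear."""
    return [name for name in os.listdir(path)
            if not name.startswith(MAGIC_PREFIX)]

if __name__ == "__main__":
    open("$sys$cloaked.txt", "w").close()  # a file the rootkit would hide
    open("visible.txt", "w").close()
    print(sorted(os.listdir(".")))      # the honest listing shows both files
    print(sorted(hooked_listdir(".")))  # the hooked listing hides one
```

Once every program on the machine sees directory listings through a filter like this, anything named with the prefix, including a stranger’s virus, becomes invisible to users and antivirus software alike.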

Wow.

This was 2005. So we are now fifteen years into this, and we still have car companies, phone companies, med-tech companies all building devices that are designed so that the owner cannot override the manufacturer’s choices. You have HP shipping firmware updates to printers so that they can detect and reject the latest third-party ink cartridges. And everyone has followed HP because, of course, we have market concentration, so there are only four printer companies. They all do this now. They all have zero-touch, no-user-intervention firmware updates that could be used by malicious parties to do incredibly terrible things to your network, to you, to your data.

There’s a guy named Ang Cui. He runs a thing called Red Balloon Security. But, in 2011, he was a grad student at Columbia, and he gave a security presentation at the Chaos Communication Congress called “Print Me if You Dare,” where he showed that he could update the firmware of an HP printer by sending it a poisoned document. You just give, like, the H.R. department a document called resume.doc, and, when they print it, the printer’s firmware is updated silently and undetectably: it scans all future documents for Social Security numbers and credit-card numbers and sends them to him. It opens a reverse shell to his computer, through the corporate firewall, and then it scans all the computers on your LAN for known vulnerabilities and takes them over. It was just a little proof of concept; he never released it.

You don’t have to be a science-fiction writer to see this coming because it’s been happening in the real world for fifteen years.

In “Attack Surface,” you write, “Indifference is a lot harder to correct than simple ignorance.” I wonder if cyberpunk can do anything to correct that indifference.

Think of it as being like “Silent Spring,” right? Before DDT made a bunch of animals extinct, “Silent Spring” convinced people to take action. There’s a problem when you have threats on your horizon where the cause and effect are separated by a lot of time and space: the natural point at which denial gives way to concern is past the point of no return. So what you want to do is shift the moment of peak denial further back, so that you’ve got more runway to do something about it. You see it very explicitly now with climate fiction.

What that narrative can do is shift the point of peak indifference. But, just as importantly, it can keep denialism from sliding into nihilism. What you have to show people is not just how bad it will be if they don’t take action but how much room there is to take action to make things better. And it’s a very hard balance because the better job you do of demonstrating the vast, frightening challenge ahead of us and the consequences for inaction, the harder it is to convince people that some action could make a difference.

I think the best fiction does strike that balance. I mean, not to toot my own horn, but I think that’s the thing that people like about “Little Brother.” It is a story that, for a certain kind of reader, both scares the shit out of them about how bad things can be and inspires them about how much we can do to make things better. This will all be so great if we don’t screw it up.

In “How to Destroy Surveillance Capitalism,” you argue, contra Shoshana Zuboff, that Big Tech companies, like Google and Facebook, aren’t dangerous because of their ability to influence behavior but because of their monopolistic impulses. Do you think trust-busting might swing back into fashion?

Oh, yeah. There’s a pretty good column about this by Matt Stoller, in which he says that, if you watch old war movies, there’s often this moment when the torpedoes are in the water but they haven’t hit yet, and things are very tense. That’s kind of where we’re at on antitrust. There have been these high-profile investigations and regulatory proceedings to alter the foundational way that antitrust is enforced—most notably, the sunsetting of the consumer-welfare standard, which I think is the great villain of the antitrust wars, and the restoration of a much more muscular standard for intervention, which would effectively end the kinds of anti-competitive mergers that we’ve seen to date. It’s the end of the forbearance that has dominated for forty years and produced, for example, three or four firms that make most of the baby food and baby formula, and three or four firms that control most of the shipping.

Why now?

I think the public is becoming more aware of the issues related to antitrust. When you look at the polls on inflation, there’s a pretty bipartisan majority that says that, at least in part, inflation is the result of price gouging by monopolies that don’t fear being undercut because they operate as a cartel. When people start to say, “Oh, the reason why broadband sucks is not because physics says that we can’t get fast broadband here but because my cable operator decided not to invest and, rather, to do a giant stock buyback, because it doesn’t have to compete with anyone, because it has an exclusive franchise to serve my region,” that’s the moment at which things start to change, and the political will starts to gather.

You’re a proponent of interoperability—the idea that devices and apps are improved with third-party development—so that Apple or Facebook doesn’t completely control how you use its products. Why is this something more people should think about?

When economists gather to talk about the problems of winner-take-all in tech markets, they really lean into the problems of network effects—you got on Facebook because the people who were already there made it valuable to you, and then, once you were there, you made it more valuable to other people. It is how Big Tech gets big. But Big Tech stays big by making switching costs really high. So, you get on Facebook because your friends are there, but you don’t leave because you can’t take your friends with you.

If there were interoperability, those switching costs would come down. If you could leave Facebook but continue to stay in touch with the communities, the customers, the family members, and the friends that you value there, then Facebook would, first of all, have to work to keep your business. And one of the things that we saw in the F.T.C.’s unsealed antitrust complaint is that Facebook’s senior managers quite openly discussed with one another how they could make the switching costs higher—and they used those terms. They said, We will make this acquisition because it’s going to add photos to Facebook, and that will make the switching costs high, because people don’t want to leave their family photos behind, even if they don’t like Facebook. And so, if you want Facebook to have on it only people who like Facebook better than the alternatives, then you should want interoperability.

If you were to project even just a couple of years into the future, what would be your most idealistic scenario for where you hope these antitrust trends will lead? And what is your most realistic scenario?

Well, Big Tech is not the only concentrated industry. A bunch of other concentrated industries use Big Tech antitrust as a pretext for going after Big Tech—not to end monopoly but to redistribute Big Tech’s share of the monopoly to themselves. So, cable operators, phone companies, entertainment companies. I think it’s fair to say that the big entertainment companies don’t want to kill Google; they just want to take it over. Some of the energy for breaking up or taming Big Tech is coming from other sectors that are every bit as much in need of taming and breakup as Big Tech is.

Google is a company that’s only made one-and-a-half successful products in its entire history. It made a search engine and a Hotmail clone, and everything else that it’s done that’s successful it bought from someone else. The only way it was able to build a good video service was by buying YouTube. This is why merger scrutiny is such a big deal, because these companies are not built by super geniuses who use their access to the capital markets to build these impregnable businesses which no one else can assail. They are regular, venal mediocrities who use their access to the capital markets to buy everyone who might threaten them. If there’s merger scrutiny, that just stops happening.

My best hope for the next three years is that we win against Big Tech, then we take on Big Everything Else. My more realistic one is, over three years, it’s probably not going to get to the point where we break up Big Tech. But I do think that we will have at least one major interoperability mandate in one major market—that would be the U.S., the European Union, or India. I think it’s quite possible that there will be interoperability mandates in China.

Do you think that the concern over A.I.’s expanding capabilities is misplaced?

I do. I think that the problems of A.I. are not its ability to do things well but its ability to do things badly, and our reliance on it nevertheless. So the problem isn’t that A.I. is going to displace all of our truck drivers. We’re using A.I. decision-making at scale to do things like lending, deciding who is flagged for child-protective services, deciding where police patrols go, and deciding whether or not to kill someone with a drone strike because a machine-learning algorithm says they’re a probable terrorist. The fact that those algorithms don’t work doesn’t make that not dangerous. In fact, it arguably makes it more dangerous. The reason we stick A.I. in there is not just to lower our wage bill so that, rather than having child-protective-services workers go out and check on all the children who are thought to be in danger, you lay them all off and replace them with an algorithm. That’s part of the impetus. The other impetus is to do it faster—to do it so fast that there isn’t time to have a human in the loop. And, with no humans in the loop, you have these systems that are often perceived to be neutral and empirical.

Patrick Ball is a statistician who does rigorous statistical work on human-rights abuses. He’s got a nonprofit called the Human Rights Data Analysis Group. And he calls this “empiricism-washing”—where you take something that is a purely subjective, deeply troubling process and just encode it in math and declare it to be empirical. If you are someone who wants to discriminate against dark-complexioned people, you can write an algorithm that looks for dark skin. It is math, but it’s practicing racial discrimination.
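
As a deliberately crude, hypothetical sketch of that point (invented code, made-up Zip Codes, no real lender), here is how a subjective, discriminatory judgment can be dressed up as arithmetic:

```python
# "Empiricism-washing" in miniature: the output is math, but the weight on
# the proxy variable is a human prejudice smuggled in as a coefficient.

PENALIZED_ZIP_CODES = {"00001", "00002"}  # made-up codes; someone chose them

def loan_risk_score(income: float, zip_code: str) -> float:
    base = max(0.0, 100.0 - income / 1_000)  # looks like neutral math
    penalty = 50.0 if zip_code in PENALIZED_ZIP_CODES else 0.0
    return base + penalty                    # ...but encodes a policy choice

print(loan_risk_score(45_000, "00001"))  # 105.0: same income, worse "score"
print(loan_risk_score(45_000, "90210"))  # 55.0
```

The formula is perfectly “empirical”; the discrimination lives in the choice of inputs and weights, which no amount of math launders away.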

I think the risk is that we are accelerating the rate at which decision-support systems and automated decision systems operate. We are doing it in a way that obviates any possibility of having humans in the loop. And we are doing it while promulgating a narrative that these judgments are more trustworthy than human judgments.

You’ve said before that governments want strong encryption for themselves and weak encryption standards for their citizens. Is there an ongoing struggle over encryption standards that you think is worth following closely right now?

I think the battle rages on. I mean, you have contractors—like Cellebrite—who are tolerated by so-called democratic governments, even though they primarily serve the world’s worst dictatorships. And the reason they’re tolerated is because these so-called democratic governments also rely on those tools, because they don’t want to do shoe-leather detective work—they want to take a shortcut. They want to be able to just crack messages. I think the struggle is very much alive, not least because, even if you trust your government’s rule of law, we all use the same tools, we all use the same standards, and weakening those tools and standards exposes everyone in the world to risk, including people in regimes where the human-rights record is unquestionably appalling, where there’s really no argument about it.

In your next novel, “Red Team Blues,” you focus on Martin Hench, a sixty-seven-year-old forensic accountant, who is tasked with recovering a set of stolen signing keys that, with some technical finagling, can permit one to rewrite a blockchain’s distributed ledger, swiping assets from one side to the other, as it were. Do you think blockchain tech is less secure than enthusiasts portray it to be?

I think so. One of the things about pseudonymity is that it has a cumulative information-leakage problem. So, if you’re pseudonymous, and you make one transaction and then never do anything else, that transaction will likely be very hard to trace back to you, right? If you make two, suddenly there’s a lot more possible re-identification material. And then, if you have a long life on the blockchain, where you do lots of things over years and years, and something happens that unmasks something from long ago—say, someone is arrested and discloses records of which people were associated with which wallets—now a bunch of your transactions are in the public domain, even if you weren’t doing anything illegal.
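
Here is a minimal sketch of that cumulative-leakage problem, with invented wallets and amounts: because the ledger is public and permanent, one unmasking event retroactively attributes every transaction the pseudonym ever made.

```python
# Hypothetical public ledger: years of activity under one pseudonym.
from dataclasses import dataclass

@dataclass
class Tx:
    sender: str     # pseudonymous wallet address
    recipient: str
    amount: float

ledger = [
    Tx("wallet_9f3a", "wallet_11c2", 1.5),
    Tx("wallet_9f3a", "wallet_77d0", 0.2),
    Tx("wallet_ab42", "wallet_9f3a", 3.0),
]

# A single arrest, subpoena, or leak links one wallet to one person...
unmasked = {"wallet_9f3a": "Alice"}

# ...and every transaction that ever touched it is attributed at once.
for tx in ledger:
    print(f"{unmasked.get(tx.sender, tx.sender)} -> "
          f"{unmasked.get(tx.recipient, tx.recipient)}: {tx.amount}")
```

Nothing here requires breaking any cryptography; the linkage is a property of an append-only public record plus time.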

Did you have fun writing about crypto?

A little bit. Crypto is weird, because, much more so than with other technologies, if you don’t like crypto, crypto people really want to convince you that you’re wrong. There are other technological fights that I’ve been involved in. For example, I think that the iOS model of curated computing, where a company not only has its own app store but stops you from choosing a rival app store—I think that’s bullshit. And there are a lot of people who really like Apple, and yet very few of them insist that I come on their podcast to explain why I think they’re wrong. But I had to declare a moratorium on going on blockchain podcasts to explain why I thought people were wrong. There is, among blockchain enthusiasts, a kind of unwillingness to believe that, if someone disagrees with you, it’s because they understand you and, despite understanding you, still disagree.

Ethereum is a project based around decentralized applications, which run on a scattered network of computers and don’t have a single owner who controls them. That would seem to be in line with what you want for the Internet, in the sense of more interoperability and more security. Or am I wrong?

I think distributed apps are a great idea. I am skeptical of smart contracts, which are the building blocks of distributed apps. Smart contracts are hard to get right, and this is not a thing that you can fix. There’s a foundational idea in computer science called the halting problem, which implies that, above a pretty minimal threshold of complexity, it’s impossible to know in advance all the different ways a program can behave. One of the ways that computer scientists address this risk is by keeping Undo buttons around in our code. We try not to make irreversible operations. We try to write a backup of the data before we save over it, so that, if the program crashes mid-save, you still have the last good state. We try to maintain an audit log and to unwind processes that go off the rails.
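
As a sketch of that discipline (a generic, assumed example rather than any particular system’s code), here is the classic write-then-swap pattern that keeps the last good save around as an undo:

```python
# Never overwrite the only copy: write the new state elsewhere, demote the
# old state to a backup, and commit with an atomic rename, so a crash at
# any step leaves a recoverable file on disk.
import os

def safe_save(path: str, data: str) -> None:
    tmp, bak = path + ".tmp", path + ".bak"
    with open(tmp, "w") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())      # force the new state to disk first
    if os.path.exists(path):
        os.replace(path, bak)     # the previous version becomes the undo
    os.replace(tmp, path)         # atomic commit of the new state

safe_save("notes.txt", "draft 1")
safe_save("notes.txt", "draft 2")  # notes.txt.bak still holds "draft 1"
```

A smart contract that moves assets has no equivalent of that .bak file: once the operation commits, there is nothing to unwind.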

The code in your anti-lock braking system, though—once it fails, your brakes don’t work, and that’s it. You can’t unring that bell. Or the code that controls whether the coolant will be released into the nuclear reactor—if it fails to trigger, and the reactor melts down, you can’t fix it. Those are still instances where we want automation, but we try to minimize how automated they are, and we try to surround them with other systems—to build, like, soft walls around them—because we understand that this should be the exception and should be treated as very dangerous, because computer programs are very unpredictable.

In blockchain land, including in smart-contract land, we throw all of that away. We take applications that in no way benefit from being irreversible and we make them irreversible. So, rather than having a bank that decides whether or not a transaction goes forward, you have this automated Proof of Work or Proof of Stake process, all these different computers, running in tandem, all checking each other’s work, and there’s no way to unwind it.

The title “Red Team Blues” plays on an idea from cybersecurity and war gaming, that red attacks and blue defends. The red-versus-blue concept comes up at different points in the narrative. Was it a structural cornerstone for the book from the beginning?

In some ways, it’s just me working out my own anxieties. I am firmly convinced of the attacker’s advantage—the attacker needs to find only one exploitable defect, whereas the defender needs to make no mistakes. And this means that, over the long term, attackers tend to have the advantage, and defenders need to become attackers in order to win. But, at the same time, it makes me despair for some of the things that I treasure—content moderation, for example.

I worry that, because of the attacker’s advantage, the people who want to break the rules are always going to be able to find ways around them, and that we’re never going to be able to make a set of rules that is comprehensive enough to forestall bad conduct. We see this all the time, right? Facebook comes up with a rule that says you can’t use racial slurs, and then racists figure out euphemisms for racial slurs. They figure out how to walk right up to the line of what’s a racial slur without being a racial slur, according to the rule book. And they can probe the defenses. They can try a bunch of different euphemisms in their alt accounts; they can see which ones get banned or blocked, and then they can pick one that they think is moderator-proof.

Meanwhile, if you’re just some normie who’s having racist invective thrown at you, you’re not doing these systematic probes—you’re just trying to live your life. And they’re sitting there trying to goad you into going over the line. And, as soon as you go over the line, they know chapter and verse. They know exactly what rule you’ve broken, and they complain to the mods and get you kicked off. So you end up with committed professional trolls having the run of social media while their targets bear the brunt of bad moderation calls. Because dealing with moderation, like dealing with any system of civil justice, is a skilled, context-heavy profession. Basically, you have to be a lawyer. And, if you’re just a dude who’s trying to talk to your friends on social media, you always lose. So this book is me trying to work out what it means to be on the red team—or, rather, to be forced onto the blue team when you want to be on the red team, and how you can turn the tables.

There’s a memorable bit in your novel “Little Brother” where the teen-age narrator, Marcus, says, “If you’ve never programmed a computer, there’s nothing like it in the whole world. It’s awesome in the truest sense: it can fill you with awe.” What was your gateway computer experience?

My dad was a computer programmer. The first computer I used was in the mid-seventies, when he brought home a teletype terminal. We connected it to an acoustic coupler—that’s, like, two suction cups. You would dial the computer at the university on your regular Bell phone, and, when you heard it going “whirr,” you would put the handset in the two suction cups, with the speaker on one and the mouthpiece on the other. And there was this teletype terminal, which is just a printer and a keyboard, no screen.

My mom was a kindergarten teacher at the time, and she would bring home rolls of brown bathroom hand towels from the kids’ bathroom at school, and we would feed a thousand feet of paper towel into the teletype, and I would get a thousand feet of computing after school. It was a very weird, very primitive experience, but I think that I was lucky, in that I entered the computing world right at the very end of the legibility of computers.

Then, in the BBS world that followed, the BBS software was all written in BASIC, and you could get a copy of it. You could read the code, and it was in a humanlike language that was legible to a moderately skilled practitioner—an eleven-year-old who had learned to write a little BASIC by buying computer magazines and retyping the programs they used to print, in order to get a new video game. And then you had “view source” in early Web pages—you had these layers and layers of legibility that made it possible to develop this very intuitive sense of what’s happening in the computer, from the bare metal all the way up to the sprites on the screen.

There’s a German word for this: Fingerspitzengefühl, which means the “fingertip feeling.” If you’ve ever held a basketball on your fingertips and turned your hand this way and that, and the basketball doesn’t fall off because you know exactly where its center of gravity is—that’s the fingertip feeling. And I feel like being around through that early computing age gave me a Fingerspitzengefühl for computers. I feel like I’ve got a little Spidey sense—sometimes I’ll encounter a delay in a Web page loading and, based on which elements have loaded and what kind of delay it feels like, I’m, like, Oh, this is a slowdown in the real-time-bidding marketplace that puts ads on this Web page. This is surveillance lag. They’re spying on me, and that’s why I can’t load this Web page. And that’s a thing you get if you’ve got this kind of cross-disciplinary familiarity with the system, from a very low-level foundation on up.

I have one last question. You’re an incredibly prolific writer, public speaker, and podcaster, not to mention a father. How do you find the energy for it all?

Well, I should send you a column I wrote about this, called “How to Do Everything (Lifehacking Considered Harmful).” Back in the early two-thousands, I was on the committee of a conference called the Emerging Technology Conference, and my friend Danny O’Brien gave a talk in which he invented the term “life-hack.” Danny got really involved in this kind of productivity porn—and I don’t say that in a critical way. He discovered a book called “Getting Things Done,” by David Allen, which I went and read, and which was life-changing for me. It is a book that counsels you to understand that you can’t do all the things you want to do, and that, if you don’t have a plan for what you’re going to do, you’ll end up doing the things that are easiest to check off your to-do list. You will come to the end of your day, or your week, or your month, or even your life, and realize you never did the things you wanted to do. And so Allen proposes a method, which I won’t go into, but it’s a pretty straightforward one for making sure that the things you do by the end of the day are the things you most wanted to do that day, and not the things that were easiest to tick off your list.

And I have now done that for twenty years. And the conundrum is that, if your method for getting stuff done is to whittle away the things that are less important to you, then eventually there comes a time when everything on the list is important to you, and you start to whittle away stuff not on the basis that it’s unimportant but on the basis that it serves only one of the many things you’re interested in. Anything that I do now requires that I sideline something I’ve considered non-negotiable. I am now at the point where I am trading non-negotiable equities, because, if I take something out, there’s just going to be something that doesn’t get done—something that isn’t a subtask toward a more important goal but is the goal itself. It compromises my ability to reach that goal. And, to the extent that I’m very prolific and do a lot of stuff, it’s because of that method. And, to the extent that there are other things I would really like to do, and haven’t done, it’s also because of that method.

I’m going to get that book.

Good. ♦