Living in Alan Turing’s Future

The British mathematician Alan Turing was one of the more unquantifiably original minds of the twentieth century. Photograph from Alamy

More than a decade has passed since the British government issued an apology to the mathematician Alan Turing. “On behalf of . . . all those who live freely thanks to Alan’s work,” then Prime Minister Gordon Brown said, in an official statement, “we’re sorry, you deserved so much better.” The tone of pained contrition was appropriate, given Britain’s grotesquely ungracious treatment of Turing, who played a decisive role in cracking the German Enigma cipher, allowing Allied intelligence to predict where U-boats would strike and thus saving tens of thousands of lives. Unapologetic about his homosexuality, Turing had made a careless admission of an affair with a man, in the course of reporting a robbery at his home in 1952, and was arrested for an “act of gross indecency” (the same charge that had led to a jail sentence for Oscar Wilde in 1895). Turing was subsequently given a choice: serve prison time or undergo a hormone treatment meant to suppress the desire for men that his testosterone was thought, at the time, to produce. Turing opted for the latter and, two years later, ended his life by taking a bite from an apple laced with cyanide.

His wartime code-breaking work was just one example of what made Turing one of the most influential minds of the twentieth century. In 1936, when he was twenty-three years old, he published a paper called “On Computable Numbers,” in which he attempted to tackle the problem of “decidability” in formal systems like mathematics. In it, he sketched a design for a peculiar machine, somewhere between a gramophone stylus and a typewriter carriage, that moved along a tape divided into squares. At any given time, the machine might be in one of a finite set of states that would tell it to move either right or left or to print, erase, or stop. The machine was not a piece of functioning hardware but a thought experiment meant to reveal something about the essence of computation. The really novel idea behind Turing’s imaginary machine was that it was not designed for a specific purpose but could be given instructions (“programmed”) that allowed it to simulate any other machine. Such all-purpose computers are now called universal Turing machines, and they are the conceptual basis for every smartphone, laptop, and server on the Internet.
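
A minimal sketch, in Python, of the kind of device Turing described: a head that reads and writes symbols on a tape of squares, driven by a finite table of states. The table of rules below, which flips 0s and 1s, is an invented illustration, not one of Turing’s own; the point is that changing the table, rather than the machinery, changes what the machine does.

```python
def run_turing_machine(rules, tape, state="start", head=0, max_steps=1000):
    """rules maps (state, symbol) -> (symbol_to_write, move, next_state);
    move is 'L', 'R', or 'HALT'. The tape holds one symbol per square."""
    tape = dict(enumerate(tape))              # squares indexed by position
    for _ in range(max_steps):
        symbol = tape.get(head, " ")          # unwritten squares read as blanks
        if (state, symbol) not in rules:      # no applicable rule: the machine stops
            break
        write, move, state = rules[(state, symbol)]
        tape[head] = write                    # print (or erase) on the current square
        if move == "HALT":
            break
        head += 1 if move == "R" else -1      # step the head one square right or left
    return "".join(tape[i] for i in sorted(tape))

# One possible "program": scan rightward, flipping 0s to 1s and 1s to 0s,
# and halt at the first blank square.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", " "): (" ", "HALT", "start"),
}

print(run_turing_machine(flip, "0110 "))  # prints "1001 "
```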

Yet Turing’s temperament was the antithesis of the stepwise, uniform procedure captured in his thought experiment. A dreamy nonconformist in the style of hyperrational eccentrics such as Lewis Carroll and Bertrand Russell, Turing operated best on the ludic frequency of games, puzzles, secret codes, and abstract formal systems like mathematics. Wholly a man of science, with nothing but scorn for any whiff of the theological, Turing nevertheless had a speculative streak, which could lead him into realms bordering on science fiction. Since boyhood, he had been keenly interested in mechanism (at eleven, he drew up the plans for a typewriter of his own design) and invented words (“quockling” is the sound seagulls make), and he developed a fondness for Edwin Tenney Brewster’s “Natural Wonders Every Child Should Know,” which suggested that human beings were just very sophisticated machines. Later, he began pursuing the idea that thinking itself could be mechanized, and, in collaboration with his Cambridge friend David Champernowne, he developed Turochamp, one of the very first computer chess programs.

A mind like Turing’s ended up being immensely valuable to Allied counterintelligence during the Second World War. The German Enigma machine had become the most powerful ciphering instrument in the world—it was believed to be impregnable—and code-breaking accordingly had to become more mathematically sophisticated. With the help of notes provided by Polish cryptanalysts and some recovered codebooks from sunken U-boats, Turing oversaw the construction of a machine that could find loopholes in the Enigma’s polyalphabetic rotary design, and soon the code-breaking team began cracking Nazi radio messages without the Germans’ knowing it. Though Turing had been against the war as a student at Cambridge, he seems to have undertaken this work as much for the challenge of tackling a fiendishly complex puzzle as for any sense of patriotic duty. He was also willing to go straight to the top when working conditions got in the way: he wrote Winston Churchill a letter complaining that the code-breaking effort was being starved of the staff it needed at Bletchley Park, the Tudor mansion northwest of London where the code-breakers had set up shop.

After the war, Turing began writing more speculatively about minds and machines. Anyone who had been reading American science fiction would have been familiar with the questions raised in his paper “Computing Machinery and Intelligence,” from 1950, and one of the more delightful intersections in the history of ideas is the way both Turing, in the august philosophy journal Mind, and the young Isaac Asimov, in the pulp magazine Astounding Science Fiction, started talking about the same thing at about the same time. Turing, in his typically chatty, unadorned way, wondered what could serve as a criterion for treating a machine as “intelligent.” To answer that question, he came up with the second of his famous thought experiments, the imitation game (now known as the Turing test), in which a person poses questions via teletype to two unseen interlocutors, one a human being, the other a machine. If the questioner cannot reliably tell which is which, then, Turing argued, we must grant that the machine thinks.
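
To make the shape of the test concrete, here is a bare-bones sketch in Python. It captures only the structure of the game, with canned stand-ins of my own invention for both respondents and a toy judge; in Turing’s setup, of course, the human answers would come from a real person at a teletype, and the judge would be a person, too.

```python
import random

def machine(question):
    # A canned impostor, standing in for whatever program is being tested.
    stock = {"Do you write poetry?": "Count me out on this one; I never could."}
    return stock.get(question, "I would rather not say.")

def person(question):
    # A stand-in for the human respondent, who would really be typing live.
    return "Yes, badly." if "poetry" in question else "Ask me something easier."

def imitation_game(questions, judge):
    # Hide the two respondents behind the labels A and B, in a random order.
    respondents = {"A": person, "B": machine}
    if random.random() < 0.5:
        respondents = {"A": machine, "B": person}
    transcripts = {label: [(q, answer(q)) for q in questions]
                   for label, answer in respondents.items()}
    guess = judge(transcripts)                # the judge names the suspected machine
    return guess, respondents[guess] is machine

def naive_judge(transcripts):
    # A toy heuristic: accuse whichever respondent dodges the most questions.
    dodges = {label: sum("rather not" in answer for _, answer in transcript)
              for label, transcript in transcripts.items()}
    return max(dodges, key=dodges.get)

guess, unmasked = imitation_game(["Do you write poetry?", "What is 7 times 13?"], naive_judge)
print(f"The judge accuses {guess}; the machine was {'caught' if unmasked else 'mistaken for the human'}.")
```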

One reason that Turing settled on a talking test for artificial intelligence was that he did not want machines to be judged according to irrelevant criteria. “We do not wish to penalise a machine for its inability to shine in beauty competitions,” he wrote, just as we would “not penalise a man for losing a race against an aeroplane.” While Asimov was writing stories about government-issue robots with rules burned into their positronic brains to prevent them from rebelling against their masters, Turing’s essay directly inspired a new wave of trippier science fiction. Philip K. Dick happened upon a reprint of “Computing Machinery and Intelligence” and, soon afterward, went to work on “Do Androids Dream of Electric Sheep?,” a novel that posited a so-called Voight-Kampff empathy test for determining whether someone is a human being or a replicant. (The story was later the seed for the film “Blade Runner.”)

One argument against machine intelligence was what Turing called Lady Lovelace’s Objection, referring to Ada Byron, the Countess of Lovelace and the daughter of the poet Lord Byron. Lovelace kept up a correspondence with the English inventor Charles Babbage, who worked for much of his life on huge brass cogwheel engines: first one that could compute logarithmic tables, to be used by astronomers and navigators, and later the more ambitious Analytical Engine, a general-purpose calculating machine. In her notes on the latter, Lovelace wrote that it “has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform.” It was a coder’s insight (Lovelace is considered one of the first computer programmers), but it was also the instinct of a poet’s daughter, attuned to more mysterious routes of thought than logarithmic tables. The two attitudes beautifully commingle elsewhere in the notes, where Lovelace, discussing the punch cards that Babbage intended as proto-software for the Analytical Engine, observes that the machine “weaves algebraical patterns just as [a] loom weaves flowers and leaves.”

After the declassification of wartime documents, in the mid-seventies, an industry of Turing hagiography began, reaching its glamorous apex when Benedict Cumberbatch played Turing in the film “The Imitation Game,” from 2014. Based (extremely loosely) on Andrew Hodges’s excellent biography “Alan Turing: The Enigma,” from 1983, the film imagines Turing as an obtuse savant who can’t make out English idioms or catch social cues. In a preliminary interview for the code-breaking work at Bletchley, Turing unnerves a government official with his literalism. “Ah, Turing! A mathematician. How ever could I have guessed?” the official says, looking into a manila folder. “You didn’t. You just read it on that piece of paper,” Turing replies. A little later, he has trouble understanding what the word “lunch” means.

By most accounts, Turing was gregarious and socially at ease, not the near-android that he is made out to be in the film, and, if he was also noted to be a stickler for precision in thought and speech, fastidiousness isn’t the same thing as humorlessness. Cumberbatch has in any case become the central-casting option for on-the-spectrum-ish über-nerds, a style that he also brings to the BBC series “Sherlock,” in which Holmes regularly flies over the heads of the non-geniuses around him. (Sir Arthur Conan Doyle’s amateur detective, by contrast, is keenly aware of others’ feelings and often empathic to a fault.) Whether this depiction is the result of poetic license, Cumberbatch being his (often great) actorly self, or rote adoption of Hollywood tropes about how geniuses are supposed to behave (think Russell Crowe’s anguished John Nash in “A Beautiful Mind”), the effect is to bring Mr. Spock clichés to a person who was anything but a human robot.

A more recent fictional Turing shows up in Ian McEwan’s “Machines Like Me,” from 2019, a novel set in a counterfactual 1982, in which Sir Alan has lived to be seventy and oversees the first commercial manufacture of A.I., in partnership with the DeepMind co-founder Demis Hassabis. (My guess for the choice of 1982 is that it is the release year of “Blade Runner,” which, in a further through-the-looking-glass twist, is itself set in 2019.) Their collaboration begins when they “devise software to beat one of the world’s great masters of the ancient game of go,” a sly bit of alternate history, as DeepMind’s AlphaGo program in fact won four out of five games against the South Korean Go master Lee Sedol, in 2016. There was some buzz when I.B.M.’s Deep Blue machine beat the world chess champion Garry Kasparov, in 1997, but the victory over Sedol was thought to mark the encroachment of A.I. onto the sacred ground of human creativity and intuition, since Go is exponentially more complex than chess and playing well often involves an ability to grasp which move is the most beautiful.

McEwan is good at imagining the messy situations that might attend the arrival of full-blown artificial people, no matter how good they are at Go. He imagines androids who always tell the unvarnished truth, under any and all circumstances, because it is the right thing to do, for example, or who become menacingly guileless sexual rivals. (Julian Lucas points out in this magazine that McEwan’s “Adam” comes equipped with “Kantian morals and fully functioning phallus.”) McEwan has his own way of embellishing his counterfactual Turing with traits that he does not seem to have possessed in reality. For instance, his Turing becomes irritated with an American cable-TV host while trying to explain the P versus NP problem, a question with roots in Turing’s 1936 paper on decidability: it asks whether every problem whose proposed solution can be checked quickly (in “polynomial time”) can also be solved quickly. In the novel, Turing has cracked it, but, outside the pages of the book, P versus NP remains one of the seven Millennium Prize Problems posed by the Clay Mathematics Institute, and its solution carries a prize of a million dollars.
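
The asymmetry at the heart of the question can be seen in a small worked example. The sketch below, in Python, uses the subset-sum puzzle as a stand-in (the example is mine, not McEwan’s or Turing’s): checking a proposed answer is quick, while the obvious way of finding one tries subset after subset, and the number of subsets doubles with every additional number on the list.

```python
from itertools import combinations

def verify(numbers, target, candidate):
    # Polynomial-time check: is the candidate drawn from the list, and does it hit the target?
    remaining = list(numbers)
    for x in candidate:
        if x not in remaining:
            return False
        remaining.remove(x)
    return sum(candidate) == target

def brute_force_solve(numbers, target):
    # Exhaustive search: try all 2^n subsets until one sums to the target.
    for size in range(len(numbers) + 1):
        for subset in combinations(numbers, size):
            if sum(subset) == target:
                return subset
    return None

nums = [3, 34, 4, 12, 5, 2]
print(brute_force_solve(nums, 9))   # (4, 5), found only by searching
print(verify(nums, 9, (4, 5)))      # True, confirmed almost instantly
```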

In the years leading up to Turing’s death, his thoughts ran in increasingly imaginative, unpredictable directions. He used the Fibonacci series to understand patterns like those in sunflower petals and hydra tentacles, tinkered with a theory of cellular automata, and pursued the design of machines that would not only pass the Turing test but also learn from experience (the ultimate rebuttal to Lady Lovelace’s Objection). Given that the twenty-first century has become one giant Turing machine, it is not surprising that the culture remains obsessed with him. Had Turing lived longer, perhaps the state of artificial intelligence would encompass more than drearily corporate banalities such as the Amazon checkout window making suggestions about what you might like for your next purchase, Google offering up a few words for how to complete a sentence in progress, or a South Korean genius having his soul crushed by a roomful of statistics wonks—not to mention more chillingly Orwellian developments, such as facial-recognition software. It is fortifying to remember that the very idea of artificial intelligence was conceived by one of the more unquantifiably original minds of the twentieth century. It is hard to imagine a computer being able to do what Alan Turing did.