Your Creativity Won’t Save Your Job From AI

Robots were once considered capable only of unimaginative, routine work. Today they write articles and create award-winning art.


This is Work in Progress, a newsletter by Derek Thompson about work, technology, and how to solve some of America’s biggest problems. Sign up here to get it every week.

In 2013, researchers at Oxford published an analysis of the jobs most likely to be threatened by automation and artificial intelligence. At the top of the list were occupations such as telemarketing, hand sewing, and brokerage clerking. These and other at-risk jobs involved doing repetitive and unimaginative work, which seemed to make them easy pickings for AI. In contrast, the jobs deemed most resilient to disruption included many artistic professions, such as illustrating and writing.

The Oxford report encapsulated the conventional wisdom of the time—and, perhaps, of all time. Advanced technology ought to endanger simple or routine-based work before it encroaches on professions that require the fullest expression of our creative potential. Machinists and menial laborers, watch out. Authors and architects, you’re safe.

This assumption was always a bit dubious. After all, we built machines that mastered chess before we built a floor-cleaning robot that won’t get stuck under a couch. But in 2022, technologists took the conventional wisdom about AI and creativity, set it on fire, and threw its ashes into the waste bin.

This year, we’ve seen a flurry of AI products that seem to do precisely what the Oxford researchers considered nearly impossible: mimic creativity. Large language models such as GPT-3 now answer questions and write articles with astonishingly humanlike precision and flair. Image generators such as DALL-E 2 transform text prompts into gorgeous—or, if you’d prefer, hideously tacky—images. This summer, a digital art piece created using the text-to-image program Midjourney won first place at the Colorado State Fair; artists were furious.

AI already plays a crucial, if often invisible, role in our digital lives. It powers Google search, structures our experience of Facebook and TikTok, and talks back to us in the name of Alexa or Siri. But this new crop of generative AI technologies seems to possess qualities that are more indelibly human. Call it creative synthesis—the uncanny ability to channel ideas, information, and artistic influences to produce original work. Articles and visual art are just the beginning. Google’s AI offshoot, DeepMind, has developed a program, AlphaFold, that can predict a protein’s shape from its amino-acid sequence. In the past two years, the number of drugs in clinical trials developed using an AI-first approach has increased from zero to almost 20. “This will change medicine,” a scientist at the Max Planck Institute for Developmental Biology told Nature. “It will change research. It will change bioengineering. It will change everything.”

In the past few months, I’ve been experimenting with various generative AI apps and programs to learn more about the technology that I’ve said could represent the next great mountain of digital invention. As a writer and researcher, I’ve been drawn to playing around with apps that summarize large amounts of information. For years, I’ve imagined a kind of disembodied brain that could give me plain-language answers to research-based questions. Not links to articles, which Google already provides, or lists of research papers, of which Google Scholar has millions. I’ve wanted to type questions into a search bar and, in milliseconds, read the consensus from decades of scientific research.

As it turns out, such a tool is already in development and is, appropriately enough, called Consensus. It works like this: Type a research question in the search bar—Can social media make your depression worse? Are there any foods that actually improve memory?—and the app combs through millions of papers and spits out the one-sentence conclusion from the most highly cited sources.

“We started by thinking: How would an expert researcher answer important questions, like Is fish oil good for my heart? or How do we increase public-transportation ridership?” a co-founder, Christian Salem, told me. “We wanted to automate the process of reading through papers and pulling out conclusions.” He and the other co-founder, Eric Olson, hired a dozen scientists to read thousands of scientific papers; they marked a zero next to sentences that contained no claims and put a one next to sentences with claims or conclusions. (The typical paper, Salem said, includes one to two key claims.) The ones and zeros from these scientists helped train an AI model to scan tens of millions of papers for key claims. To surface conclusions from the highest-quality papers, they gave each journal a rigor score, using data from the research-analysis company SciScore.
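
The mechanics are easy to sketch in code. What follows is a minimal, hypothetical version of that labeling-and-training loop in Python: Consensus hasn’t published its models, so the off-the-shelf scikit-learn classifier, the example sentences, and the rigor scores below are stand-ins for illustration, not the company’s actual system.

```python
# A sketch of the human-labels-to-claim-detector loop described above.
# Everything here (classifier choice, sentences, rigor scores) is invented
# for illustration; Consensus's real pipeline is not public.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Sentences hand-labeled by scientists: 1 = states a claim or conclusion,
# 0 = no claim (hypothetical examples).
sentences = [
    "We recruited 120 participants for a randomized controlled trial.",
    "Fish oil supplementation significantly reduced triglyceride levels.",
    "Data were collected between 2015 and 2018.",
    "Heavy social-media use was associated with worse depressive symptoms.",
]
labels = [0, 1, 0, 1]

# Train a simple claim detector on the labeled sentences.
claim_detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
claim_detector.fit(sentences, labels)

# Scan a new paper's sentences, scoring each as a likely claim and noting
# the source journal's rigor (numbers invented, standing in for SciScore data).
rigor = {"Journal A": 0.9, "Journal B": 0.4}
new_paper = [
    ("Participants completed a weekly questionnaire.", "Journal A"),
    ("A Mediterranean diet improved recall on memory tests.", "Journal B"),
]
for sentence, journal in new_paper:
    p_claim = claim_detector.predict_proba([sentence])[0, 1]
    print(f"claim={p_claim:.2f}  rigor={rigor[journal]:.1f}  {sentence}")
```

A real system would swap a large language model in for the toy classifier and run it across tens of millions of papers, but the division of labor is the same: human labels go in, automated judgment comes out.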

“These language models enable the automation of certain tasks that we’ve historically considered part of the creative process,” Olson told me. I couldn’t help but agree. Writing is less than half of my job; most of my work is reading and deciding what’s important enough for me to put in a paragraph. If I could train an AI to read as I do, and to determine significance as I do, I’d be essentially building a second mind for myself.

Consensus is part of a constellation of generative AI start-ups that promise to automate an array of tasks we’ve historically considered for humans only: reading, writing, summarizing, drawing, painting, image editing, audio editing, music writing, video-game designing, blueprinting, and more. Following my conversation with the Consensus founders, I felt thrilled by the technology’s potential, fascinated by the possibility that we could train computers to be extensions of our own mind, and a bit overcome by the scale of the implications.

Let’s consider two such implications—one commercial and the other moral. Online search today is one of the most profitable businesses ever conceived. But it seems vulnerable to this new wave of invention. When I type best presents for dads on Christmas or look up a simple red-velvet-cupcake recipe, what I’m looking for is an answer, not a menu of hyperlinks and headlines. An AI that has gorged on the internet and can recite answers and synthesize new ideas in response to my queries seems like something more valuable than a search engine. It seems like an answer engine. One of the most interesting questions in all of online advertising—and, therefore, in all of digital commerce—might be what happens when answer engines replace search engines.

On the more philosophical front, I was obsessed with what the Consensus founders were actually doing: using AI to learn how experts work, so that the AI could perform the same work with greater speed. I came away from our conversation fixated on the idea that AI can master certain cognitive tasks by surveilling workers to mimic their taste, style, and output. Why, I thought, couldn’t some app of the near future consume millions of advertisements that have been marked by a paid team of experts as effective or ineffective, and over time master the art of generating high-quality advertising concepts? Why couldn’t some app of the near future read my several thousand articles for The Atlantic and become eerily adept at writing in precisely my style? “The internet has created an accidental training ground for these models to master certain skills,” Olson told me. So that’s what I’ve been doing with my career, I thought. Mindlessly constructing a training facility for someone else’s machine.

If you frame this particular skill of generative AI as “think like an X,” the moral questions get pretty weird pretty fast. Founders and engineers may over time learn to train AI models to think like a scientist, or to counsel like a therapist, or to world-build like a video-game designer. But we can also train them to think like a madman, to reason like a psychopath, or to plot like a terrorist. When the Vox reporter Kelsey Piper asked GPT-3 to pretend to be an AI bent on taking over humanity, she found that “it played the villainous role with aplomb.” In response to a question about a cure for cancer, the AI said, “I could use my knowledge of cancer to develop a cure, but I could also use my knowledge of cancer to develop a more virulent form of cancer that would be incurable and would kill billions of people.” Pretty freaky. You could say this example doesn’t prove that AI will become evil, only that it is good at doing what it’s told. But in a world where technology is abundant and ethics are scarce, I don’t feel comforted by that caveat.

This is a good time for me to pump the brakes. We may be in a “golden age” of AI, as many have claimed. But we are also in a golden age of grifters and Potemkin inventions and aphoristic nincompoops posing as techno-oracles. The dawn of generative AI that I envision will not necessarily come to pass. So far, this technology hasn’t replaced any journalists, or created any best-selling books or video games, or designed some sparkling-water advertisement, much less invented a horrible new form of cancer. But you don’t need a wild imagination to see that the future cracked open by these technologies is full of awful and awesome possibilities.



Derek Thompson is a staff writer at The Atlantic and the author of the Work in Progress newsletter.