Why Hollywood Really Fears Generative AI

Unions representing actors and writers are negotiating with major studios to stop AI from running riot in the industry. Their concerns are very real.

The future of Hollywood looks a lot like Deepfake Ryan Reynolds selling you a Tesla. In a video, since removed but widely shared on Twitter, the actor is bespectacled in thick black frames, his mouth moving independently from his face, hawking electric vehicles: “How much do you think it would cost to own a car that’s this fucking awesome?”

On the verisimilitude scale, the video, which originally circulated last month, registered as blatantly unreal. Then its creator, financial advice YouTuber Kevin Paffrath, revealed he had made it as a ploy to attract the gaze of Elon Musk. (Which it did: the Tesla CEO replied to Paffrath’s tweet with a “nice.”) Elsewhere on Twitter, people beseeched Reynolds to sue. Instead, his production company responded with a similarly janky video in which a gray-looking Musk endorsed gin made by Aviation, a company Reynolds co-owns. That video has also since been deleted.

“Finance guy sucks up to Musk on Twitter” is far from earth-shattering news, but the exchange is indicative of a much bigger problem: AI is making it possible for anyone to deepfake famous faces into whatever video they like. And actors, in turn, are becoming increasingly aware of the potential of AI to encroach on their work. With the Writers Guild of America already on strike, in part because of a similar threat, upcoming negotiations between the actors’ union and studios will likely reference images like Fake Bruce Willis and Fake Ryan Reynolds as the latest steps toward a future dominated by AI.

The hype around the technology means it will be a focus of the talks, especially given that contracts are negotiated just once every three years, explains Duncan Crabtree-Ireland, executive director and chief negotiator for the Screen Actors Guild—American Federation of Television and Radio Artists (SAG-AFTRA). “Considering how far [AI has] advanced in the last 18 months, it’s hard even to imagine where it’ll be in three years,” he says.

In a message asking its members to authorize a strike, the guild noted that it was seeking a contract that would protect members from losing income due to “unregulated use of generative AI.” The deadline is Monday, June 5; on June 7, SAG-AFTRA begins negotiations with the Alliance of Motion Picture and Television Producers (AMPTP), which represents the studios. If actors go on strike, it would be the first time since 2000.

SAG has been concerned about machine learning tools since the days of pixelated sports video games. Back then, the guild worried about how easy it was for game studios to insert pro athletes into Madden games. Now, Hollywood studios are de-aging Harrison Ford and recreating the voices of the dead.

Given this, it’s not hard to imagine a future in which a wide-eyed actor signs up for one season of a vampire TV show, and then two seasons later their AI replacement busts out of a coffin. Meanwhile, they receive no additional compensation, even if the AI-generated character was based on their likeness and performance.

“The nature of the impact on performers is unique, especially with generative AI tools that can be used to recreate a performer’s image, likeness, or voice persona, or to do things that they didn’t originally contemplate ever doing,” says Crabtree-Ireland. “That’s a concern.”

Actors, like all Americans, are protected against commercial appropriation of their identity by the right of publicity—also known as name, image, and likeness rights. SAG wants to buttress these protections and stomp out exploitative terms like the vampire example by adding “informed consent” into future contracts: Certain kinds of AI use must be disclosed and compensated, the union argues.

But writers cannot lean on publicity rights in the same way. If they own the rights, they can seek recourse or compensation if their work is scraped by large language models, or LLMs, but only if the resulting work is deemed a reproduction or derivative of their script. “If the AI has learned from hundreds of scripts or more, this is not very likely,” says Daniel Gervais, a professor of intellectual property and AI law at Vanderbilt University.

And it’s this scraping, applied to performers, that concerns talent reps. Entertainment lawyer Leigh Brecheen says she’s most worried about her clients’ valuable characteristics being extracted in a way that isn’t easily identifiable. Imagine a producer conjuring a digital performance with the piercing intensity of Denzel Washington while entirely skirting his wages. “Most negotiated on-camera performer deals will contain restrictions against the use of name, likeness, performance in any work other than the one for which they are being hired,” Brecheen says. “I don’t want the studio to be able to use the performance to train AI either.” This is why, as Crabtree-Ireland explains, it is crucial to reframe AI works as an amalgam of countless humans.

But will people care if what they’re watching was made by an AI trained on human scripts and performances? When the day comes that ChatGPT and other LLMs can produce filmable scenes from simple prompts, unprotected writers’ rooms for police procedurals or sitcoms will likely shrink. Voice actors, particularly those not already famous for on-camera performances, are also in real danger. “Voice cloning is essentially now a solved problem,” says Hany Farid, a professor at the University of California, Berkeley, who specializes in analyzing deepfakes.

Short term, most AI-generated actors may come off like Fake Ryan Reynolds: ghoulishly unlikeable. It seems more likely that people will accept audiobooks made by AI or a digitally rendered Darth Vader voice than a movie resting on the ripped shoulders of an AI-sculpted GigaChad-esque action hero.

Long term, though, if AI replicants escape the uncanny valley, audiences of the future may not care whether the actor in front of them is human. “It’s complicated,” says Matthew Sag, a professor of law and artificial intelligence at Emory University. “The job of writing can be encroached on in a marginal or progressive way. Performers are likely to be replaced in an all-or-nothing way.”

As the actors’ union and Hollywood studios head into talks next week, the key concern will be economic fairness: The union states that it has become increasingly difficult for guild members to “maintain a middle-class lifestyle.” There is a modern disconnect between a film or TV show’s success and residual compensation, unions argue, as well as longer gaps between increasingly shorter seasons, which means less time spent working.

In this context, AI could be Hollywood’s next gambit to produce more content with fewer humans. Like the AI-generated Reynolds, the whole thing would be banal if it weren’t so critical. As such, union strikes remain a possibility. “They’ve got a 2023 business model for streaming with a 1970 business model for paying performers and writers and other creatives in the industry,” says Crabtree-Ireland. “That is not OK.”