• Show Notes
  • Transcript

How might AI impact our elections? 

This is the second episode of a Stay Tuned miniseries, “AI on Trial,” featuring Preet Bharara in conversation with Nita Farahany, professor of law and philosophy at Duke University.

Preet and Nita discuss the hypothetical case of a hotly-contested Senate race. The campaigns are derailed when the leading candidate is accused of using AI to create fake videos that burnish his performance and hurt his opponent. Do existing laws, policies, and government agencies sufficiently safeguard our political process, and if not, what needs to happen to protect democracy in time for the real presidential election in November?

REFERENCES AND SUPPLEMENTAL MATERIALS: 

  • First Amendment to the U.S. Constitution
  • 52 U.S.C. §30124 – Fraudulent misrepresentation of campaign authority
  • California Code, Elections Code – ELEC §20010
  • Texas Election Code §255.004 – True source of communication
  • United States v. Alvarez, U.S. Supreme Court, opinion, 2012
  • NetChoice v. Bonta, U.S. District Court Northern District of California, order, 2023
  • “Protect Elections from Deceptive AI Act,” Senate bill, 2023
  • “Klobuchar, Murkowski Introduce Bipartisan Legislation to Require Transparency in Political Ads with AI-Generated Content,” Klobuchar press release, 3/6/24
  • “Petition for Rulemaking to Clarify that the Law Against ‘Fraudulent Misrepresentation’ Applies to Deceptive AI Campaign Ads,” Public Citizen, 5/16/23
  • “Political consultant behind fake Biden robocalls says he was trying to highlight a need for AI rules,” AP, 2/26/24
  • “Kerry ‘dirty tricks’ claim over doctored photo,” The Guardian, 2004

Stay Tuned in Brief is presented by CAFE and the Vox Media Podcast Network. 

Tamara Sepper – Executive Producer; Lissa Soep – Senior Editor; Jake Kaplan – Editorial Producer; Matthew Billy – Audio Producer. Original score was composed by Nat Weiner. Senate candidate Benjamin Barrister was voiced by Marty McCarthy. Eleven Labs was used to create Kylie Kapitalista’s voice and the voices of side characters.

Please write to us with your thoughts and questions at letters@cafe.com, or leave a voicemail at 669-247-7338. For analysis of recent legal news, join the CAFE Insider community. Head to cafe.com/insider to join for just $1 for the first month.

Preet Bharara:

Hey folks, Preet here. Welcome back to our Stay Tuned miniseries, AI on Trial, where we use hypothetical cases set in the near future to explore the most pressing questions posed by AI today. Last week, you’ll recall, we met the tragic bot-crossed lovers, Lucy and Ryan, and we got into some fascinating questions about criminal justice in the age of AI. This week, episode two: “Deepfakes v. Democracy.” Thank goodness, Duke law and philosophy professor Nita Farahany is with us again. Nita, we could not do this without you, so thanks for being here.

Nita Farahany:

I’m glad to be back with you.

Preet Bharara:

So this time, our hypo is set in the fall of this year, 2024, and it raises a huge question that is anything but hypothetical right now. And the question is this, can AI be used to influence an election and what, if anything, can be done about it?

We’re fast-forwarding to the final months of a very tight U.S. Senate race in a southwestern state that could go either way. The first candidate, Kylie Kapitalista, is a relative newcomer on the political scene. She’s only 37 and made a bunch of money as a tech entrepreneur. Her opponent is Benjamin Barrister, who is a politics veteran. He’s 60 and was an elementary school teacher before joining the school board decades ago, and he’s held various appointed and elected positions ever since. So you’ve got Kylie Kapitalista and Benjamin Barrister barreling toward a close finish. One of the issues in the campaign is who is going to appeal more to young folks.

Kylie Kapitalista:

At a time like this, America doesn’t need leaders who are stuck in old ways. The old ways aren’t working.

Preet Bharara:

So I’d say Kapitalista is pretty convincing, but I would also say Barrister is doing a pretty impressive job countering allegations that his advanced age works against him. He keeps up with the latest cultural and policy debates and he’s also remarkably good at TikTok. Also, Barrister at age 60 looks great. The guy’s regularly spotted shirtless rock climbing with his grandkids, and you better believe his campaign makes it a point to post plenty of pictures and videos until some scandalous accusations break.

NPR Newscaster:

Social media is filled with images of Benjamin Barrister scaling rock walls and shooting three pointers, but today some concerned citizens are claiming that footage isn’t real. They allege these videos were significantly enhanced by AI.

Preet Bharara:

So we have our first sort of wrinkle in this campaign. Before we get to what the implications are and what the consequences might be, Nita, let’s start with the technology. So how is AI used to create a fake video? How could that be done?

Nita Farahany:

So a deepfake is made using AI technologies. If I had a video and wanted to make it appear that you were the person in it rather than the person who is actually in it, then by feeding the system a lot of other videos of you, the deepfake technology could replace the face and make it seem as if you were the person in the video, even if it was somebody else entirely.

Preet Bharara:

As we go through these hypotheticals and the variations, what people should be thinking about is what makes something cross the line from ordinary sprucing up that not just politicians do, but we do all the time. So in this podcast I will have misspoken a number of times in reality, but the editing team will take that out and nobody thinks that’s inappropriate or improper, right?

Nita Farahany:

There were limits to what we could do in the past, though. The range of changes a person could make was limited by how good the lighting or the makeup was, or by cutting out the ums and ahs in a conversation, or a mistake, and starting the sentence again. What we’re talking about capability-wise here, I don’t think is just a difference in degree. I actually think it’s a difference in kind.

Preet Bharara:

So let’s go back to the hypothetical and explore this a little bit more. At first, these allegations of fakes against Barrister don’t get a ton of traction and he’s killing it on the campaign trail.

Benjamin Barrister:

Together we can build a country of shared values, not special interests.

Preet Bharara:

So Barrister’s voice sounds gorgeous and the crowds look huge, but then more allegations drop that these videos are fake too. His speeches, it turns out, were not quite so impeccable.

Benjamin Barrister:

If we work to feather together, we could build a country of shared values.

Preet Bharara:

So now that credible suspicions have been raised, his opponent Kapitalista and her people, they’re pissed, and they’re convinced that Barrister’s team is behind the fakes. Kapitalista wants an investigation and accountability. I guess the first thing to understand generally, so we have a foundation, is that political speech is protected under the First Amendment and is given special deference. Fair?

Nita Farahany:

Yeah, fair. And probably, you could say, on steroids, right? Political speech is so tightly protected under the First Amendment that it makes regulating in this space a real challenge. There is broad latitude in the ability to communicate political speech, with the idea being that the electorate, us, we have the right to have all of the information that a candidate may present to us and then to use that to make our own judgments and decisions. The problem, of course, is how we are supposed to know, when the technology becomes so sophisticated, or when it becomes very difficult to validate something as truth or lies. Before, it was a little easier. Somebody would make a false statement, like “I voted every single time when I was in Congress in favor of X, Y, or Z funding or restrictions,” and then you could just go back to the data and find out if that was true or false, versus the doctoring of images or the creation of false images that give that same kind of impression but are a lot harder to go back and fact-check.

Preet Bharara:

Should we talk about the Alvarez case? Yeah, so there’s a case that’s pretty cool where Congress tried to take away protection for certain kinds of lies in a very narrow context. It’s called U.S. v. Alvarez and it involves an elected official in California. Do you want to tell folks what that case is about?

Nita Farahany:

Sure. So it was in 2007, I think, that Xavier Alvarez was a member of the Three Valleys Water District board of directors, and speaking there about his background, he said that he was a retired Marine of 25 years and that he had been awarded the Congressional Medal of Honor. But actually he had never received the Congressional Medal of Honor, and he had never even served in the U.S. armed forces. The Stolen Valor Act makes it a crime to falsely claim receipt of these kinds of decorations or medals.

Preet Bharara:

So there’s a law that says you can’t lie about these things, presumably because it was an insult to people who had received them and to people who served in our military. He gets prosecuted, but the case is challenged based on the premise that political speech is protected, even if it’s lying political speech. It goes all the way up to the Supreme Court, and Alvarez wins. Why does he win?

Nita Farahany:

Their argument was that content-based restrictions on speech are always subject to strict scrutiny, so held to the highest standard of review, which means that they’re almost always going to fail. They’re going to be invalid except under really, really rare circumstances where the government has a compelling interest that would justify that kind of limitation on speech. And one of the more interesting inroads into all of this will be the extent to which AI-generated content is speech. A generative AI image, which is not a real person, maybe not even a real voice, does that constitute a speaker with protected speech interests under the First Amendment? There was a case recently out of California where there was a very wide-reaching preliminary injunction, and the judge in that case went pretty far to say that all code, basically, is speech, which would have implications in this area as well. If everything that is software-generated is also speech, then presumably it would also have First Amendment protections as political speech. I’m not quite sure that’s right, though. I think it’s going to be interesting to see that area evolve, and to see who the speaker is when what we’re talking about is a fake generated image that is not actually a candidate, not a real voice, and not even content that is spoken by any person. So who has First Amendment rights? I’m pretty sure we’re not going to give generative AI First Amendment rights, but maybe we will.

Preet Bharara:

That’s a good foundation, I think, for understanding how protective our system is, how protective our courts are, of political speech, even deceptive political speech. The next question is, who even has the authority and jurisdiction to police these things? The natural policing agency, you would think, is the Federal Election Commission, the FEC, which, I will say without meaning any particular disparagement to that agency, is often toothless. It’s hard for it to act because it’s divided politically. So it’s tough as an initial matter, but the FEC, you might think, could be appealed to by a candidate like Kapitalista, who’s claiming that there’s deceptive advertising going on on the part of her opponent.

Kylie Kapitalista:

Are you kidding me? There’s really nothing the FEC can do. What about the FTC?

Preet Bharara:

The FTC, as people might realize and appreciate, the Federal Trade Commission, does this on a regular basis; this is their bread and butter. They call out false advertising, but they don’t deal with political advertising. So what’s the obstacle to the FEC, parallel to its sister agency the FTC, just sort of policing and monitoring whether people are telling the truth or not in political ads?

Nita Farahany:

I think probably the biggest obstacle is what the FEC is really charged to do, and it isn’t to police the content of advertisements, at least not as it’s been interpreted until now. It’s primarily been procedural. So they’re looking at things like public funding of presidential campaigns and elections, or finance disclosures, or contribution limits. These are procedural issues that govern campaigns, rather than substantively trying to evaluate the content of political advertisements.

Preet Bharara:

People often say the antidote for bad speech or hate speech or false speech is more speech, but if those things can’t be identified as false because of the advent of this enterprising technology, then you can’t fix it through that mechanism of more speech.

Nita Farahany:

More of it may just create more noise and more confusion rather than more clarity, which is what you would hope for with more speech, countering false speech with truthful speech.

Preet Bharara:

We’ll be right back with more AI on Trial after this.

Okay, so Nita, so far we’ve established that political speech is special and accorded great deference, and it takes a lot to police it. So let’s see how that plays out in our hypothetical case. We’ve got all these allegations flooding the media that Benjamin Barrister has been issuing deepfakes, and things are finally starting to look a bit shaky for him. Kapitalista begins creeping up in the polls, but then some very unfavorable footage hits in a big way. Kapitalista is at a campaign stop. A mom is getting ready to hand her baby over to the candidate for a classic warm and fuzzy photo op, but Kapitalista has her eye on a tech VIP over the mom’s shoulder. And as you see in the video, as Kapitalista makes her way toward the billionaire, she elbows the mom out of the way so harshly that the mom drops her baby on the asphalt.

Woman in Baby Drop Video:

Oh my God, my baby. Seriously.

Preet Bharara:

And all this is captured on video. The media has a field day.

NPR Newscaster:

If Kylie Kapitalista does that to this mother’s baby, imagine what she’ll do to our country.

Angry Pundit 01:

She says she believes in the future, but she can’t even take care of our children.

Preet Bharara:

It’s not looking good for Kapitalista because of this video, until, days later, the allegations surface, you guessed it, that this video was doctored too.

NPR Newscaster:

Breaking news tonight. The Kapitalista campaign insists that the so-called “baby drop” video currently trending on social media is a deepfake. A spokesperson for the campaign said they’re demanding an investigation.

Preet Bharara:

So the hypo is ratcheting up. Now we’ve got a candidate who’s accused of undermining his opponent by releasing a fake video that puts his rival in a bad light. In a case like this, Kapitalista might well have grounds for a defamation suit, which, as you know, is an aspect of the law that we’re going to get into in the next episode. So here I think we should revisit the idea of why the Federal Election Commission, with its mandate, can’t police this.

Nita Farahany:

Sure. So first of all, I’d say we don’t know exactly how all of it’s going to play out, but the FEC has already looked at this issue to see whether it can govern AI in campaign ads without greater jurisdiction, and the jury’s still out on whether the FEC’s authority extends beyond the procedural to get at the content of the ads themselves. The question is, is the FEC well positioned to be regulating the content, figuring out the truth or falsity of it, or anything like that?

Preet Bharara:

How is this different in our hypothetical from lying about the other guy’s record? That happens all the time, and that’s not policed. That’s policed by the same principle that I mentioned earlier, which is you get your message out, you buy counter-advertising, you call your opponent out for lying about your record, and you talk about your own record, and maybe you lie about the other guy’s record. Why do we care more, and it seems like we do and we’re more worried about it, why do we care more about this kind of lying and deception than the kind of lying and deception that’s been a part of our politics forever?

Nita Farahany:

Because of the difficulty, if not, for the average person, the near impossibility, of validating an image and telling whether it’s real or fake. And that can have a profound impact on how people think about a candidate and ultimately how they vote.

Preet Bharara:

Here’s a question I bet people have. What are the laws on the books right now, if any, that prohibit circulating fake AI generated political content?

Nita Farahany:

That’s a good question. There’s a law in California that bars political candidates from distributing deceptive audio or visual media about an opposing candidate, but there are some limits to that statute; for instance, the law only covers misinformation spread within the 60 days before the election. There’s also something like that in Texas. Neither of these statutes has been tested, but we’re starting to see other states pass laws that require disclosure of the use of AI in political ads, with varying penalties when that doesn’t happen. And there are also some state laws on deepfakes, particularly aimed at preventing deepfake pornography, that might be applicable here too. But federal law is lagging behind a bit. There’s a statute, 52 U.S.C. § 30124, that prohibits a candidate or an agent of a candidate from misrepresenting their authority to speak on behalf of another candidate in a manner that’s damaging to that candidate. But it isn’t clear whether it can be proven that spreading a deepfake would be considered speaking on behalf of an opposing candidate, and this statute has also never been used in connection with AI. So the extent to which it will work remains to be seen.

Preet Bharara:

And then of course, Senator Amy Klobuchar has introduced a bill that has received some bipartisan support. It’s called the Protect Elections from Deceptive AI Act, pretty good name. And that piece of legislation would ban the use of AI to generate false depictions of political candidates in ads. So what do you think about that proposal? Do you think it would cover our hypo and similar issues?

Nita Farahany:

I think trying to put into place liability and norms is a good thing. I think it’s going to be very hard to police, though, both in terms of whom you give the authority to and given how quickly and how easily these images and videos can be generated. All of this is to be tested over time, but the technology is evolving so quickly that some of the legislative proposals may also have a hard time keeping up with what’s actually being used in election cycles. It’s hard to imagine exactly how laws would actually stop the spread of deepfakes. It’s easier to imagine how liability would attach once a deepfake has been identified. But if what we’re worried about is changing people’s minds in time to influence their votes, then given how long it might take to actually detect and then prosecute the use of a generative AI image or video deepfake, we have a significant time gap. I think our best bet is to have laws that serve a deterrent function, where there would be serious consequences if a deepfake is later discovered.

Preet Bharara:

We should point out once again that though the technology is proliferating very quickly, is hard to keep up with, and is more profoundly creative, technologically proficient, and faster than anything we’ve seen before, these kinds of things are not new. There’s the famous case of John Kerry in the 2004 election, where a photo was circulated showing him together with Jane Fonda on a stage at an anti-Vietnam War rally in the seventies. It turns out that never happened. He was not on stage with Jane Fonda, but that photo circulated fairly widely and probably cost John Kerry some votes with respect to people who didn’t appreciate that appearance that never happened.

Nita Farahany:

I think that’s right. A lot of the seeds of this have been around for a long time, as with cropping images to make it look like a crowd is much larger than it actually was. But part of what you couldn’t do before, and can do with much greater precision and in a much more convincing way now, is make minute changes that would be undetectable. There have been a bunch of studies showing that tiny changes, to the corner of the mouth, to the way a person smiles, opening their eyes up in ways that are a little bit different, or changing the dilation of their pupils, can invoke trust in the onlooker in ways that weren’t possible before. That extends to changing the tone of a voice, to literally creating entire videos that are nearly impossible for people to tell are fake. That’s different. I think it goes beyond just a difference in degree from what was happening with John Kerry and Jane Fonda supposedly on a stage.

Preet Bharara:

Let’s test the limits of what Barrister or Kapitalista can do with respect to the other. So going back to the incident of the fake baby drop video, how would something like what Senator Amy Klobuchar is proposing work, or would it not be workable?

Nita Farahany:

So I think probably the most promising set of developments that I’ve seen in this space, and also what’s included within the language of what she’s put forward, is the idea of moving toward greater transparency when AI is used in ads. So in particular, having something like a badge that says generative AI was used in this image, or the number of different tech companies that have joined together to watermark images that are created on their platforms by generative AI. There’s a lot of really interesting research showing that with traditional advertisements, if there’s a banner that says “this is an advertisement,” its effectiveness on the consumer goes down substantially; we are able to safeguard ourselves against a lot of mental manipulation with that kind of transparency. The more transparency we have, with labeling and badges that say this was created with generative AI, and the more watermarking there is to trace the provenance of or changes to an image, the easier it will be for people to safeguard against this kind of mental manipulation. And there’s a ton of great neuroscience showing the effect of being able to safeguard yourself in this context. That’s where I see the most hope right now, rather than the FEC spending a lot of time trying to look at each image and develop an expertise to validate it without the help of that ecosystem.

Preet Bharara:

Is there another possibility also in the political realm? Not technology, not FEC enforcement, but voluntary agreements in the same way that we have voluntary agreements about debates.

Nita Farahany:

I think there needs to be a set of voluntary agreements that everybody signs onto, and it needs to be pretty sophisticated. One of the challenges with some of the legislation that’s proposed right now is that it focuses on things that are created by generative AI, which doesn’t cover the huge category of content manipulated by AI but not created by AI. Coming up with those agreements, while being thoughtful about the different kinds of technologies that can already be used and may emerge and about the basic principles that everybody is agreeing to, would go, I think, a long way toward helping politicians, but especially helping voters, not be deceived by what’s likely coming.

Preet Bharara:

We’ve used relatively tame examples in our hypotheticals, but you can imagine much more nefarious things that might not only cause people not to want to vote for your opponent, but also may engender violence, right? You can have deepfakes showing people doing things or saying things that are incendiary in a way that, as we’ve seen, has caused people to engage in violence, at the Capitol on January 6th and on other occasions as well. So that, to me, as a practical matter, is another reason why we need to be caring about this a lot.

Nita Farahany:

Now there you might have a little bit more of an inroad for regulating, because to the extent that it’s incendiary in those ways, toward inciting violence, it would fall within some of the exceptions to First Amendment protections. I mean, there’s a tiny set of cases that will survive that First Amendment scrutiny, but that’s the kind of example that likely would. You don’t have a right to put out incendiary videos that are meant to incite violence.

Preet Bharara:

Well, what’s also interesting, in the legal analysis and also just thinking about the policy in that Alvarez case, is that what the Court was considering is what harm was coming from these lies about winning the Congressional Medal of Honor and these other things. And the Court found that in the political arena, or in the context in which they were deciding the case, there was not a lot of harm. I guess one question people might ask is, are we underestimating the level of harm to our democracy by being as naive as we are, and treating as benignly as we do, false political speech of the kind that can be gotten across with deepfakes?

Nita Farahany:

The question is, on the margins, does it make a difference? And when our elections are so close that they come down to nail-biting endings at the national level, a few voters here and there in different key jurisdictions can really lead to differences in outcomes. And there are people who are influenced by what they see, who are trying to figure out the facts or trying to decide things based on what politicians say. And it may be that it’s enough to make a difference.

Preet Bharara:

Look, there’s all sorts of things you can work into the hypothetical on the day of voting, if you’re not in an early-voting state and if the technology permits it, done by someone other than the candidate. By the way, most of the worst things that are done in politics are not done by the candidates’ own campaigns. They’re done by third parties who are hard to punish and harder to identify. And you could imagine putting on social media a concession speech by your opponent early in the day, or a news anchor reciting exit polls showing that one of the candidates has a runaway lead. And as we know from earlier times in our own democracy, people are less likely to go vote if they think their vote is not going to count. And at the margins, in neck-and-neck races like we have with Kapitalista and Barrister, that can make a difference, and it’s very hard to do anything about.

Nita Farahany:

No, I think that’s right. I mean, I was just thinking about voting acts and voting rights and whether any of this would frustrate those. Does putting out something that says it’s already been decided, or that there’s been a concession speech, or what if you put out something that says these polling stations have actually been closed, there was flooding or something like that, something that would actually be designed to interfere with people going out and casting their vote? Then you might have regulations, because you might say, well, maybe that’s really meant to intimidate or to threaten or to coerce a person, to interfere with their right to vote. Those are hard cases to win. They’re very hard to prove.

Preet Bharara:

Now the truth is we’ve already seen this kind of thing actually happening, around the 2024 New Hampshire primary.

Biden Robocall Tape:

A bunch of malarkey. We know the value of voting Democratic when our votes count.

Preet Bharara:

So a political consultant has admitted to creating robocalls from a Biden soundalike to highlight the need for AI regulation. That’s what he claims his motivation was. So our hypo is not really far from reality at all.

Biden Robocall Tape:

It’s important that you save your vote for the November election.

Preet Bharara:

And speaking of the November election, here’s a question I have. Are there ways for voters to assess whether something is real or fake? Are there tells that we can look for?

Nita Farahany:

Yeah, I mean, for now there are some things people have pointed out, like if you look at the hands of people in these images, most generative AI doesn’t get the hands quite right yet. They look a little bit distorted, or they have too many fingers or too few fingers. Sometimes what happens with a deepfake-generated image or face of a person is that the lighting is off. So if you look and there’s something weird about the face of the main person in a video, and it’s either a little bit blurry or the lighting is off, because the video that was imported or used to construct the generated image had different lighting when it was filmed, those kinds of things can help you tell. So if something feels a little off, trust that instinct; it may in fact be. On top of that, there are a lot of promising advances in things like the watermarking of images, and I think the more that platforms and tech companies can commit themselves to this, especially for the election cycle, the more likely it is that we as consumers and voters will have the ability to use other technology that’s emerging to say, okay, well, what does that mean?

What does it tell us about the likelihood that this is true or false? And I’d say, don’t spread things on social media if they don’t come from a good source. Go back and figure out who it was that shared it to begin with, and start to have the expectation that if it’s from somebody you’ve never heard of and they look sketchy, maybe it’s actually designed to spread false information or misinformation or a deepfake.

Preet Bharara:

Yeah, I mean, in our hypothetical, we have a fairly simple, straightforward, binary scenario. We have two political opponents who, in our hypo, are doing things on their own within the campaigns. We talked about the fact that third parties do some of the worst things in terms of ethics and electoral politics, but it’s even worse than that in some ways. We have foreign adversaries who have been shown before to try to disrupt our elections, and you can imagine all sorts of third parties, not even within the United States, but outside the United States. So it is important to think about these technologies, the watermarking, and people using their common sense as well. No law, from the FEC or otherwise, is going to eliminate the problem wholesale, even if it were permitted to be enacted.

Nita Farahany:

That’s right. That means, for the average voter, we have to assume that it’s not just what we hear or what’s said in debates that needs to be fact-checked and that we should take with a grain of salt; images, videos, voices that we hear, we have to start to treat as suspect until they’re validated.

Preet Bharara:

Right. And so Nita, of course, what’s really at stake here beyond just the outcome of any particular election, is the erosion of trust in our fundamental institutions, whether it’s the ones that safeguard our elections or actually any other institution that we have to count on in the democratic society.

Nita Farahany:

Yeah, I so agree. I mean, I think what we’re seeing right now is in many ways a frightening erosion of trust in academia and media and governments, such that people are talking about living in a post-truth world, and that’s a very, very hard place to be. I think one of our pathways forward is really going to have to be transparency, to regain trust and confidence in what we’re seeing and what we’re reading.

Preet Bharara:

And so needed. Toward that end, in the spirit of transparency and full disclosure, we should make another disclosure, this one about our hypothetical candidate, Kylie Kapitalista.

Kylie Kapitalista:

At a time like this, America doesn’t need leaders who are stuck in old ways. The old ways aren’t working.

Preet Bharara:

So guess what? We used AI to create Kapitalista’s voice, the newscasters you heard on this episode, and the mom in the notorious “baby drop” video. For Kapitalista, we fed the AI voices of real-life, well-known figures from politics and culture. I think you can recognize some of the voices in there. Tweet at us with the hashtag #AIonTrial to share your guesses, or write to us at letters@cafe.com. Before we go, one quick update. After we recorded this episode, Senators Amy Klobuchar and Lisa Murkowski introduced bipartisan legislation that would require disclaimers on political ads created or substantially altered by AI. We’ll be watching for further developments on this bill. Stay tuned. On the next episode of our Stay Tuned miniseries, AI on Trial, the final victim of AI malfeasance is me. Someone trains an AI to sound like me and act like me, and then launches a copycat podcast called Stay Tuned With Bot Bharara. What legal recourse do I have? Find out next Monday on the final episode of AI on Trial.

If you like what we do, rate and review the show on Apple Podcasts or wherever you listen. Every positive review helps new listeners find the show. Send me your questions about news, politics, and justice. Tweet them to me at @PreetBharara with the hashtag #AskPreet. You can also now reach me on Threads, or you can call and leave me a message at 669-247-7338. That’s 669-24-PREET. Or you can send an email to letters@cafe.com. Stay Tuned is presented by CAFE and the Vox Media Podcast Network. The executive producer is Tamara Sepper. The audio producers for AI on Trial are Matthew Billy and Nat Weiner, who also composed our music. The editorial producer is Jake Kaplan. Lissa Soep is the editor of the miniseries. Marty McCarthy is the voice of hypothetical Senate hopeful Benjamin Barrister, and of course, Nita Farahany is our honored guest for all three episodes. Special thanks to Art Chung. I’m your host, Preet Bharara. Stay tuned.