
How might AI infringe on intellectual property and personality rights? And could AI replace Preet as the host of Stay Tuned?  

This is the final episode of a Stay Tuned miniseries, “AI on Trial,” featuring Preet Bharara in conversation with Nita Farahany, professor of law and philosophy at Duke University.

Preet and Nita discuss the hypothetical case of an artificial intelligence chatbot that impersonates Preet as the host of a copycat podcast, Stay Tuned with Bot Bharara. The unauthorized chatbot was trained on everything Preet has ever said or written online. Can Preet protect his intellectual property rights? Is the law on the real Preet’s side, or is it time to surrender to an AI-dominated world and collaborate with the bot?

Stay Tuned is presented by CAFE and the Vox Media Podcast Network. 

Tamara Sepper – Executive Producer; Lissa Soep – Senior Editor; Jake Kaplan – Editorial Producer; Matthew Billy – Audio Producer. Original score was composed by Nat Weiner.

Please write to us with your thoughts and questions at letters@cafe.com, or leave a voicemail at 669-247-7338. For analysis of recent legal news, join the CAFE Insider community. Head to cafe.com/insider to join for just $1 for the first month.

Preet Bharara:

Hey folks, it’s Preet. We’re back with our Stay Tuned miniseries AI on Trial. This is our third and final episode, “Bot Bharara Steals Stay Tuned.” And I’m happy to report that Nita Farahany is once again here with me. Nita, welcome back. This is getting to be a habit.

Nita Farahany:

It’s a fun one. I like it.

Preet Bharara:

Listeners, if you haven’t heard our prior episodes, I strongly encourage you to pause here and catch up on episodes one and two of AI on Trial, where Nita and I contend with the impact of AI on criminal justice and election law. For today’s hypo, we turn to intellectual property. Our case is set in the not-too-distant future, later this year. It just so happens that Stay Tuned with Preet is celebrating its seventh anniversary with a special episode looking back on our favorite conversations on the show. While one of our producers is prepping for the special, he comes across something alarming.

Bot Bharara:

I’m Bot Bharara.

Preet Bharara:

Another podcast, Stay Tuned with Bot. And with a little digging, we find out that this Bot Bharara has been trained on all seven years of this very podcast, plus everything else I’ve ever said or written online, including my book. The producers of the copycat show claim that they’re my biggest fans. They say their whole point is to give the world more of the good stuff we offer here on Stay Tuned and maybe hit a topic or two we can’t get to. Needless to say, I’m not buying it. And at a moment like this, even a lawyer needs a lawyer, so I call mine and we consider going after two parties: the AI company that used my intellectual property without permission to train its system, and the podcasting outfit behind Stay Tuned with Bot. So Nita, let’s start with the AI company that developed the software that created Bot Bharara. Before we get to the legal issues, and there are lots of those, tell us how hard it would be to clone my voice in 2024, as unique and singular as my voice is.

Nita Farahany:

I mean, it’s already possible to clone your voice today using deep learning techniques that can synthesize speech from text or audio samples. So there are companies that are already offering voice cloning for different purposes, like creating digital assistants or dubbing movies or generating audiobooks. And some of them require the consent of the voice owner and some of them don’t. And the quality varies. But in general, the technology’s advancing really rapidly and it’ll be really hard to tell the difference between what’s real and synthetic.

Preet Bharara:

So let’s get into some of the legal issues and we’ll focus on the questions specifically raised by the use of AI. The first big area is clearly copyright law. This bot is impersonating me. So Nita, as a matter of law, what is copyrightable and what’s not copyrightable?

Nita Farahany:

Yeah, I mean, so we’ll start with what’s actually settled. And I’ll say a lot of this is going to be kind of the wild west of figuring it out, because it blurs a lot of different distinctions. But in general, copyright law protects original works of authorship that are fixed in a tangible medium of expression. So that means the work has to reflect some creative effort by a human author, and it has to be recorded or embodied in some physical or digital form that can be perceived or reproduced. So that’s things like books or songs or paintings, movies, software, podcasts.

But copyright doesn’t protect ideas, facts, or even your voice. And that’s because ideas are generally too abstract to be subject to copyright law, facts are part of the public domain and aren’t owned by anybody, and a voice isn’t something that you authored. I mean, maybe you worked on your voice, but I don’t think so. It’s more a natural attribute, like your height or your eye color. So you can’t claim copyright over your voice unless they’re copying some specific expression of it, like your recording of your podcast.

Preet Bharara:

So I’ve got to ask, if a voice isn’t copyrightable, which company do I sue? Is it the AI company behind the software used to create the bot? Or is it the podcast company itself, the one that generated the fake me?

Nita Farahany:

You’re going to have a hard time, I think, to start with, in suing the AI company, because the legal status of AI-generated works is pretty unclear, pretty unsettled. And I think there are two major questions that are going to arise, which are: who’s the author of the AI-generated work, and what’s the source of the AI-generated work? Copyright law has usually required a human author to grant copyright protection, and it doesn’t recognize AI itself as an author or a legal person.

Preet Bharara:

But why wouldn’t it just be whoever the person is who directed the creation of Bot Bharara? That’s a human.

Nita Farahany:

But I mean, the question is, there’s the company, and then there’s the AI generation of the content itself, and then there’s the question of whether you owned any copyrighted material that went into training it as well. It raises this question of what is going into the models. What is fair for them to use to train the models? And what’s produced at the other end of it, who produced it, and who owns what was produced?

Preet Bharara:

It would seem to a layperson who hasn’t studied copyright law, or any of the other areas of law that we’re going to discuss, that if you have a program called Stay Tuned with Preet, with my actual authentic voice and my style and approach, and someone then creates another show with similar art called Stay Tuned with Bot that also purports to feature me, either I or Vox Media should have legal recourse. How can that not be?

Nita Farahany:

Yeah, I mean, so what you would argue is that the AI company infringed on your copyright by copying and using your podcast recordings, your book, your speeches, your tweets, without your permission and without a license, to train their AI, which generated a new work that mimicked your voice and your style. And then you would have to argue that the AI didn’t transform your original works, but created a derivative work that’s substantially similar to them. But I mean, you can see that’s a number of steps you’ve got to get through. Which is like-

Preet Bharara:

It seems unduly complicated.

Nita Farahany:

Well, it’s complicated both because of what they’re doing technologically, and because what they’ve created isn’t literally a copy of one of the recordings. It’s something that’s derivative and mimicking, but novel content.

Preet Bharara:

Is it maybe more understandable if someone created Stay Tuned with Bot that was intended to be a parody of me and not to emulate me? That would likely be okay.

Nita Farahany:

Yeah, that would be okay. I mean, that’s going to fall under First Amendment protection and it’s going to be permissible because it’s clearly meant to also be differentiated from what you’ve created. It’s not competitive with Stay Tuned with Preet. I mean, maybe people would enjoy listening to it, but it has a distinct purpose which isn’t just derivative of what you’ve done. It’s meant to build on what you’ve done and to create a parody of what you’ve done.

Preet Bharara:

Yeah. Does the concept of fair use that we hear about in connection with copyright law apply to the parody situation, or to the other situation we’ve been talking about?

Nita Farahany:

I mean, it applies to all of this. That’s the main argument a lot of the AI companies make. And it’s interesting, before we get into their main argument, a lot of copyright scholars really fall on the side of believing that all of this is fair use. The idea is that there is some amount of information, and some uses of information, in the world that are fair to use and that we want to enable. So the doctrine allows the use of copyrighted work, without the permission or a license granted by the copyright owner, for purposes that are considered socially beneficial, like criticism or commentary or parody or news reporting or scholarship or research. It’s not a right, it’s a defense. So if you sued the AI company and said, “You violated my copyright,” they would raise fair use as a defense, and then it would be evaluated according to four factors that are used to decide whether or not it actually falls within fair use.

Preet Bharara:

Yeah. So people have been making these allegations and these claims in court. We should talk about how those cases are going. In one, somewhat famously, the comedian Sarah Silverman is part of a group of plaintiffs. What was that case about, and how’s it going?

Nita Farahany:

So this was a group of authors who sued Meta over its AI program LLaMA, which stands for Large Language Model Meta AI. People are maybe more familiar at this point with ChatGPT, but these are large language models that work by being trained on massive amounts of text from different sources, which includes books and articles and blogs. All of that information is used to train the models themselves, and some of those texts are copyrighted by the authors who wrote them. The plaintiffs in this case claim that Meta used their works to train LLaMA and, by doing so, violated their copyright.

Preet Bharara:

Okay. So far this case hasn’t been what I’d call a slam dunk for the plaintiffs. The judge initially dismissed many of the claims, saying that the AI-generated output was just not substantially similar to the original works used to train the AI. Now, that’s one judge.

Nita Farahany:

If you look at what fair use is, the four factors that are considered are the purpose and character of the use, the nature of the copyrighted work, the amount that’s used, and the effect on the market value of the copyrighted work. And what a lot of copyright scholars say is that it’s not unlike how you and I learn. We have read lots of books in our lives, we’ve listened to podcasts, we have all this information that has gone in, and then when we write something new, it’s based on this corpus of knowledge that we have read, and we haven’t gotten copyrights and licenses and permissions to use it to generate new knowledge. And they think that’s essentially what these large language models are doing: they’re trained on all that information and then they’re creating something that’s new.

Preet Bharara:

Yeah, it’s just like what people do. Most of what humans spout out in novels and nonfiction books alike is derivative in some way. This is the nature of how we pass along knowledge.

Nita Farahany:

Yeah.

Preet Bharara:

So put the law aside for a moment. Are there ways, through technology or other means, that a website can prevent itself from being learned from or scraped?

Nita Farahany:

Yeah, it’s so interesting, because imagine this: you are somebody who’s creating a digital image of your artwork. There are technologies being developed, backend things that you can do, to subtly change a couple of the pixels in the image in ways that aren’t detectable by the human eye, so your artwork doesn’t look adulterated if you share it with another person. But when one of the models takes it in, it confuses the model and feeds it false information, which can ultimately break the models themselves. The next time somebody says, “Create something that looks like a dog basking in the sun,” it instead creates a cat basking in the sun. And similarly for websites, and I guess this is probably how you would do it with Stay Tuned with Preet: there are technologies that go and scrape the data from websites, and that data goes into these large language models.

And recently, OpenAI revealed, here’s the bot that’s coming in and scraping your site, and here’s how you’d prevent it from scraping your website. Some companies have started to implement those measures to prevent the scraping from occurring, and some of them have long been doing it. Others think, well, there’s an advantage to allowing it. Google has been scraping data from websites for a really long time, but by allowing that, you might be prioritized in the Google search results, and you don’t want to be excluded from the search results. So not everybody is going to put the anti-scraping measures into place, but there are starting to be technological ways that people can prevent new data from their websites from going into the models. It may not matter if these models have already scraped sufficient data to have the base corpus of knowledge necessary for their functionality.
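
What OpenAI published is the user-agent string its crawler identifies itself with, GPTBot, which means a site can opt out with a standard robots.txt rule. Below is a minimal sketch of how that opt-out works, using Python’s standard-library robots.txt parser to check the policy; the example.com URL and the "SomeOtherBot" name are just illustrative placeholders.

```python
# The two robots.txt directives below are the opt-out OpenAI documents for
# its GPTBot crawler; the rest of this sketch just verifies the policy with
# Python's standard-library parser. The URLs here are placeholders.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# GPTBot is denied everywhere on the site; other crawlers are unaffected.
print(parser.can_fetch("GPTBot", "https://example.com/episodes"))        # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/episodes"))  # True
```

Note that robots.txt is only a request: it keeps GPTBot out because OpenAI says the crawler honors it, not because it technically blocks anything, and, as Nita points out, it does nothing about data that has already been scraped.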

Preet Bharara:

So there are things that people can do?

Nita Farahany:

Yeah, and I think increasingly there will be these kinds of tech solutions that people put into place to try to watermark their images. And just as there are pixels that can be changed in a digital image, presumably people will start to do that with their stories and with their text as well. They’ll figure out ways to do this so that future information that goes into the models, if you want to have it protected, you could. And it might even be that, over time, your failure to do so means you’re not asserting your copyright in the ways you otherwise would have to, to signal that you don’t want it to be part of the models.

Preet Bharara:

We’ll be right back with more AI on Trial after this.

Okay. So there’s another legal right, the right to publicity, which does not exist in every state, but does in many. Can you describe briefly what that is and what it means?

Nita Farahany:

Yeah, so the right to publicity is a legal right, like copyright, but what it does is protect a person’s interest in controlling or profiting from the commercial use of their name or their image or their likeness or their voice, or other distinctive aspects of their identity or persona, by other people. And it’s based on this idea that your identity, as Preet, has economic value, and that you should be able to prevent other people from exploiting it or misappropriating it without your consent or without compensating you. It’s also based on the idea that there’s personal value to it, not just economic value: you should have some kind of dignitary right or privacy right or autonomy right that prevents people from doing that. And it’s recognized in most states, whether by statute or by common law, but the scope of it varies by state.

Preet Bharara:

So if I’m hearing you correctly, what that possibly means in connection with Stay Tuned with Bot is that the reenactment of my voice, the mimicking of my voice and my content and my approach, all of that may not be remedied by law. But if they put ads in the episode the way Stay Tuned with Preet has ads, and they use my fake voice, my AI-generated voice, to endorse products that I actually haven’t endorsed, then I have a much clearer lawsuit. Correct?

Nita Farahany:

Yeah. You need to show damages, which is to say some economic loss as a result of what they’ve done. And it doesn’t have to be economic in the direct sense of “I would’ve made money off of that advertisement.” It could be a loss to your reputation, which could damage your economic prospects in other instances.

Preet Bharara:

Because it could have me endorsing all sorts of malevolent products.

Nita Farahany:

Right, right. And part of it is I’m going to call you a celebrity, Preet, and say-

Preet Bharara:

How dare you?

Nita Farahany:

Well, I mean, you’ve got a pretty big following on social media, and you did get fired by Trump, so I feel like that gives you a good claim to celebrity.

Preet Bharara:

I’ve earned it. I’ve earned it.

Nita Farahany:

Yeah, rightly so. But for that, you need to have some kind of recognizable or distinctive identity. You don’t have to be a celebrity, but we’re going to call you one. And celebrities have stronger and broader rights and protections, because there’s more economic or personal value that comes from that celebrity status, which is, in some ways, actually being traded in the market. It increases your income, your reputation, what you can do with it. So celebrities may be able to claim more aspects of their identity, including a distinctive voice, which would give you a better or stronger claim than somebody else like me would have, for example.

Preet Bharara:

But if somebody had you endorsing products in a way that caused you economic harm, you would still have a basis to sue; it just arises more frequently in the case of celebrities, right?

Nita Farahany:

Yeah, it arises more frequently, but it’s also probably easier to establish what the economic loss or harm or value is as a result. And it’s not just damages. You might be able to use it to do things like get an injunction to stop the show from playing any longer, to stop the unauthorized use of your voice or your persona, or even to stop the show itself, on the argument that it’s not a parody. It’s really meant to be the exact same show, and it’s creating confusion in ways that are problematic.

Preet Bharara:

There have been cases going back many, many years about the right to publicity, including on the part of various celebrities like Bette Midler. You want to talk about that case for a second?

Nita Farahany:

Yeah. So Bette Midler has a very distinctive voice, and when they asked her to do an ad, she said no. Then they asked one of her backup singers to do the ad instead, but not in the backup singer’s own voice; she was told to sound as much like Bette Midler as possible. And it created actual confusion, where people thought it was Bette Midler doing the advertisement. She sued, and I think she succeeded in that case.

Preet Bharara:

She prevailed. So there are analogs.

Nita Farahany:

There are analogs.

Preet Bharara:

There are analogs here.

Nita Farahany:

There are analogs, and this goes back a lot further. But interestingly, it had primarily been major superstars and celebrities asserting this right to likeness, or right to publicity, and the number of cases asserted over time had been fairly limited. Increasingly, there are states that are passing laws around, for example, deepfakes and the use of deepfakes. And it’s come up some in these revenge porn cases, where people are creating pornography in the likeness of even just an ex-partner or something like that. So this right to likeness may be extended, I think, in this modern era, much more broadly than just to celebrities.

Preet Bharara:

So it sounds like one of the upshots of your analysis is that if the podcast company that makes Stay Tuned with Bot isn’t making any money, and isn’t intending to make any money, and it’s really just an homage, if you will, then there’s really not a basis for a suit, because there’s no economic advantage they’re getting and potentially no economic harm being done to me or to Vox Media.

Nita Farahany:

Yeah. I mean, it does become a weaker claim if there’s no economic value, if they’re just advertising it for free, they’re not using it to-

Preet Bharara:

But what if they’re diverting listenership to that podcast?

Nita Farahany:

Yeah, I mean, so again, you could show economic harm to the show if it’s diverting listenership because people genuinely think that’s you. They’re choosing between two different shows, and they’re choosing that one because they think it’s your alternative show versus this one, or something.

Preet Bharara:

Or it’s just better. They made a better Preet.

Nita Farahany:

But if they made a better Preet, then that’s just competition in the marketplace.

Preet Bharara:

That’s just the breaks.

Nita Farahany:

Slightly, the newer, improved Preet.

Preet Bharara:

All right, so given all these considerations and difficulties and challenges, let’s pretend you’re my lawyer, okay, counselor.

Nita Farahany:

Yeah.

Preet Bharara:

So how do we think about whether to sue or not? How do you assess the risk? And by the way, I hope I get a friends and family discount.

Nita Farahany:

You definitely get a friends and family discount. And your question is, what are we going to do? Should we or shouldn’t we sue at this point?

Preet Bharara:

Yeah, yes. Because look, I was flattered at the beginning when Stay Tuned with Bot started, but I’m kind of pissed off, because there is some confusion. I don’t know yet, and I can’t prove yet, that I’m losing money or listenership, but I have this feeling that people are either confused or going elsewhere. And by the way, every once in a while, I’m told Stay Tuned with Bot is not a better Preet, but one that doesn’t live up to our standards and is a little bit dull and trite and frivolous, which we don’t like to be. So I’m annoyed and I’m angry, and I want you to do something about it.

Nita Farahany:

All right, well then we’re going to sue, we’ll just start there, which is-

Preet Bharara:

Is that all it takes?

Nita Farahany:

Well, sure, yeah, because this is a-

Preet Bharara:

You just sue if you’re annoyed?

Nita Farahany:

No, it’s a murky enough area, and I think there’s enough there. I mean, so let’s suppose that all they want to do is just get more Preetian ideas out into the world for free. That’s what they’re doing. But you’re still experiencing a dignitary harm: it’s exploiting your reputation and your goodwill, it’s creating confusion and deception among your listeners and fans, it’s interfering with your dignity and your autonomy. So given that, I’d say there’s still, at the very least, a claim for an injunction, and maybe for attorney’s fees, which would be helpful for me. And you still may be able to show that the podcast company’s use of your voice violates your right to control and profit from your identity and persona.

So given that, I’d say you can at least bring this right to publicity claim, and we might as well make the copyright claim, because there are these few cases out there that have been decided. And in this one, I think we can start to say they’re actually violating the copyright to the show itself, not just to your voice, but to the substance and content of the show. And maybe we can show that they’re using some of the words and phrases you use on the show and literally copying them.

Preet Bharara:

Yeah. So let me just alter the facts, twist them a little bit more. At the start, Bot Bharara is charming, persuasive, trenchant, and kind of flawless, if I’m honest. But what if it started to change? What if Bot Bharara starts to make statements I don’t agree with? Maybe it starts singing the praises of Donald Trump, something like that. How does the law determine when reputational harm amounts to defamation, which is another area of law? So we’ve talked about the right to publicity, we’ve talked about copyright, but is there a defamation claim possible, depending on which way Bot Bharara goes?

Nita Farahany:

Yeah, that’s a great question. It’s really helpful when you’re helping your lawyer out with all of the questions. You’re kind of doing my job for me, I like it. But it’s a really realistic possibility, because AI systems are far from perfect. They make mistakes, they can be manipulated, they can be biased, they can be unpredictable. Those are the errors that could be introduced, apart from the ways they might intentionally have you saying things you don’t want to be saying. And if they make statements that are false and harmful to somebody’s reputation, they can be liable for defamation, which is a tort, a civil wrong, that happens when somebody makes a false statement of fact that injures another person’s reputation, the statement is published to a third party, and it’s not privileged or protected by law. And it varies depending on who the plaintiff is, who the defendant is, and what the statement is about.

So a public figure, or, as I’ve been saying, a celebrity like you, has to prove not only that the statement is false and harmful, but also that the defendant acted with actual malice, which means they knew the statement was false or acted with reckless disregard for the truth. One of the challenges in deciding whether you would sue for defamation is that the law of defamation is based on an assumption that there’s a human speaker or a human publisher who’s responsible for the statements they make or publish. So here you’ve got the AI Preet Bharara. The best case is we’ve got the publisher. It depends on whether it’s a scripted episode or whether they’re really letting a model riff as Preet Bharara. And if it’s about a matter of public concern, like politics or law, then they might have a First Amendment defense. So it starts to get a little bit tricky here on actual malice from an AI system. It’s easier if they’re scripting the show, but what happens if they just sort of set Bot Bharara free?

Preet Bharara:

I feel like what we’re doing here in this conversation, a little bit, is giving a roadmap to a rogue imitator like Stay Tuned with Bot, showing them how to color within the lines and avoid legal liability, right?

Nita Farahany:

We don’t want to give a how-to guide.

Preet Bharara:

Don’t do ads, don’t defame, but otherwise, totally use my voice, train it on all my podcasts and speeches and everything else.

Nita Farahany:

So we have to imagine that we not only have Stay Tuned with Bot Bharara, but also your voice, which has been cloned. So maybe there’s a Bot Bharara bot that you can interact with on the website, and you could make Bot Bharara say anything you want, including, “I wish Trump hadn’t fired me, because I would’ve so enjoyed working for him.” Things like that.

Preet Bharara:

So I guess based on that case, our first conversation wouldn’t necessarily be, do we sue? But as an initial matter, do we write a nasty letter, a cease and desist letter to the offending AI company and see if that works? Because sometimes it does.

Nita Farahany:

A letter is always a good place to start, and certainly, as your lawyer, that’s what I would do: start by sending very sternly worded letters about how they need to cease and desist immediately. But ultimately the question is, do they respond to that? And if they don’t, what are our next steps going to be?

Preet Bharara:

So now suppose we go back and forth and it’s not really working great. These people may or may not be making money off of the imitation of my voice and my approach to podcasting, and at some point I get fed up and I think, if you can’t beat them, join them. Whatever, let them use my voice. I just want a cut of the revenue. So this is part of a negotiated resolution. Does that make sense? Is licensing of bots a thing of the future?

Nita Farahany:

Maybe. I think one of the things that you see from a lot of the writers and a lot of the artists is this view of: you’ve taken all of this information to train these models and you haven’t compensated me for it. It’s not that there’s an absolute objection to the models ever using the information. It’s to using it without the person’s permission, or without compensation or licensing for the person. And Grimes took a stance on this, saying, “Anybody who wants to use my voice, no problem. Let’s co-create songs together. You just have to give me 50% of the royalties of any of it, and let’s make this happen.” And it’s a novel idea.

And the question is, does she actually have a right to 50% of the revenue if people do that? That remains to be seen. But it’s a different invitation, and a different way forward, to say: this is inevitable, it’s going to happen, so maybe we need to think about a different economic model. It’s not unlike the Napster moment, when people were trying to think about what happens when music starts streaming and how you compensate artists at that point.

Preet Bharara:

Yeah. So speaking of alternate economic models, now I’m in the mode of really being on the AI bandwagon and thinking not about how other people can harm me through its use, but instead about how I can take advantage of it. We put out a lot of episodes of all three podcasts every week, and it’s very tiring, and I have a lot going on. So now you’re going to be my employment lawyer in addition to my intellectual property lawyer, and I’ve decided to meet with the CEO of Vox Media, which produces these podcasts, and say, “I want to take 10 weeks off and let my own Bot Bharara take over the show.” I don’t think my contract addresses that. The company gets what it wants: production of a show. Let’s assume it’s of equal quality. I have a couple of questions that arise from that. One is, can I do that, Ms. Employment Lawyer?

Nita Farahany:

I’ll tell you, this has been going on for a while. Before large language models were released, there were software engineers who figured out how to automate different parts of their jobs, and especially working from home, they would just have the software doing their job while they did something else at the same time. And the question is, does the employer have the right to know that? Do they have some implicit or explicit expectation of what constitutes adequate performance, which is that you are actually showing up, and not bot-you? And then I think the real risk and danger, looking at the writers’ strike and the actors’ guild, is that if you have Preet bot stand in for you and Preet bot is successful-

Preet Bharara:

Exactly. This is why I hired you, because this is exactly the problem. It may work out too well.

Nita Farahany:

It may work out way too well, in which case then the real question is why are they ever going to have you back?

Preet Bharara:

Yeah, because Bot Bharara gets paid less.

Nita Farahany:

Gets paid far less, can do even more episodes. You said you do more episodes than anybody else does, but I bet Bot Bharara doesn’t get tired.

Preet Bharara:

Bot Bharara as distinguished from Preet Bharara, yes.

Nita Farahany:

And Bot Bharara has infinite stamina. So I think that’s a real risk you don’t want to introduce to your own economic value of showing up and doing these episodes.

Preet Bharara:

That’s the other hypothetical that we haven’t really addressed. We keep talking about this AI company that competes with Stay Tuned with Preet. What if I decide I really don’t want to do this anymore, or I don’t like the terms and conditions of my employment, which is hypothetical, because I do. I love everyone here at Vox Media. I just want to make that clear. It’s just hypotheticals, guys. And they decide to continue making Stay Tuned with Preet. Maybe they change the name, maybe they don’t. Or maybe they have an asterisk, and they continue the economic model even though I’ve gone into the sunset. I’ve gone, what is the phrase? I’ve gone quietly into the night.

Nita Farahany:

Do you have this problem too, of getting all of the idioms slightly wrong?

Preet Bharara:

Yeah, because there’s so many of them.

Nita Farahany:

Yeah.

Preet Bharara:

Yeah. I bet Bot Bharara doesn’t have that problem. He keeps all the idioms straight.

Nita Farahany:

Or Bot Bharara hallucinates new versions of the different idioms.

Preet Bharara:

Yeah. Do you think, being an employment lawyer again, that if I leave and they continue, it’s hard to make an economic harm case or claim, because I’m not doing a podcast anymore?

Nita Farahany:

But I mean, now we get back to: are they trading off of the economic value of the reputation that you’ve created and established? And by the way, that comes not just from your time on Stay Tuned with Preet, but from a lot that preceded anything they would have any right to claim on. But this is what a lot of the fight has been about in the writers’ strike and the actors’ strike: trying to create some rights against, for example, taking all of the prior scripts that a writer has written and then using those to generate new ones in the same style or the same narrative or the same voice, or taking an actor after they’ve died and creating movies that go on into infinity.

Preet Bharara:

What you’ve just said points to sort of a fundamental question that we have about AI and the role it should play, or can play, or will play in our lives. If AI has the ability to become you, to become a person, Preet or Nita or someone else, so that for all intents and purposes it looks like you, sounds like you, operates based on your values, what does that mean about our identity, our understanding of what it means to be a person? What do you think about that?

Nita Farahany:

I mean, this is the existential crisis people are having at the moment.

Preet Bharara:

So now you’re a philosopher, Nita.

Nita Farahany:

Yes, I am. I’m putting on my different hat. And I think these are some of the hard and right questions that we have to be grappling with right now. What is human thinking? What’s unique about it? Before generative AI, I think a lot of people believed that the creation of text and poetry and art was something only humans could do. And now that obviously isn’t true. And if it starts to look like us and sound like us, but it lacks the empathy that humans have for each other, how does that affect human-to-human interaction and what it means to be human, but also how we think and feel when we’re interacting with technology that is mimicking being human but doesn’t share the same values, doesn’t have a vulnerable body that can die, doesn’t experience anything, let alone the range of emotions and values that we have? I think the bottom line is that all of these are questions we have to grapple with in the moment, and there are no easy answers to them.

Preet Bharara:

This question of what it means to be human, or an individual. One of the most amazing possibilities for AI that I’ve seen recently, and I don’t know how viable or realistic this is, is the use of AI to allow humans to communicate with dolphins, for example, which are fairly intelligent. And then more-

Nita Farahany:

The Earth Species Project.

Preet Bharara:

Yeah. And then more dramatically, and maybe you know something about this, the ability of human parents to communicate with their babies before they have language skills. Is that a real thing?

Nita Farahany:

Maybe. So the Earth Species Project is a really extraordinary project, and there are a lot of researchers who use AI to try to decode language and communication in other species, whether it’s dolphins or bats or bees or even coral reefs, looking at and trying to understand how they communicate, and using AI to decode that, to help change how humans think about other species and to recognize the complexity of their interactions and communication, and maybe, as a result, lead to greater empathy and care for other species. And that raises a complex question as well: as AI systems change over time, what will our understanding of those systems be? Will we come to believe that they are entitled to certain rights or dignities or other types of things as their capacities expand?

Preet Bharara:

This is what I love about having you on the show, Nita. I’m not sure how we got from the disgruntled me, annoyed that someone is trying to rip off my podcast, to a philosophical conversation about the nature of humanity and interspecies communication. That’s quite the arc.

Nita Farahany:

That’s where we are these days.

Preet Bharara:

That’s quite the arc.

Nita Farahany:

These questions about even just synthetic voice take us to the world of interspecies communication.

Preet Bharara:

But I want to end on a more mundane note and ask listeners a question that actually touches on these issues of identity and authenticity and everything else we’ve been discussing. If you learned that this episode was hosted by Bot Bharara and not Preet Bharara, how would you feel? Would you listen? Would you feel cheated in some way? I don’t know. How would you feel about that?

Nita Farahany:

Well, I’m not totally convinced who it is I’m talking with, so maybe it’s-

Preet Bharara:

That’s how smooth I am.

Nita Farahany:

Maybe I’ve already… I think I’ve already screwed up some, reached that place of deep skepticism about the world that we live in.

Preet Bharara:

So for now, there’s only one Stay Tuned with Preet. There’s no Stay Tuned with Bot. And if there becomes one, I know who to go to for the cease and desist letter. Nita, I want to thank you again. We’ve reached the end of our miniseries AI on Trial. There’s a lot to learn, of course, and you really help people understand what’s inherently quite difficult to understand.

Nita Farahany:

It was a pleasure as always. Thanks, Preet.

Preet Bharara:

How do I know you’re not a bot?

Nita Farahany:

Oh, you don’t.

Preet Bharara:

And I want to thank our listeners too. I hope that our first Stay Tuned miniseries, AI on Trial, sparked as many questions for you as it did for us, and that we even managed to answer some. We will, of course, keep covering AI on the show, and we look forward to continuing the conversation with you. If you like what we do, rate and review the show on Apple Podcasts or wherever you listen. Every positive review helps new listeners find the show. Send me your questions about news, politics, and justice. Tweet them to me @PreetBharara with the hashtag #AskPreet. You can also now reach me on Threads, or you can call and leave me a message at (669) 247-7338. That’s (669) 24-PREET. Or you can send an email to letters@cafe.com.

Stay Tuned is presented by CAFE and the Vox Media Podcast Network. The executive producer is Tamara Sepper. The audio producers for AI on Trial are Matthew Billy and Nat Weiner, who also composed our music. The editorial producer is Jake Kaplan. Lissa Soep is the editor of the miniseries. And of course, Nita Farahany is our honored guest for all three episodes. Thank you, Nita, for keeping us on our toes and making it fun. Special thanks to Art Chung. I’m your host, Preet Bharara. Stay Tuned.

Bot Bharara:

I’m Bot Bharara.

Preet Bharara:

It sounds like Bot Bharara was trained a little bit on Max Headroom.