January 30, 2024 · 18 mins

This week we talk about robo-Biden, fake Swift images, and ElevenLabs.

We also discuss copyright, AI George Carlin, and deepfakes.

Recommended Book: Debt: The First 5,000 Years by David Graeber

Transcript

The hosts of a podcast called Dudesy are facing a lawsuit after they made a video that seems to show the late comedian George Carlin performing a new routine.

The duo claimed they created the video using AI tools, training an algorithm on five decades' worth of Carlin's material in order to generate a likeness of his face and body and voice, and his jokes; they claimed everything in this video, which they called "George Carlin: I'm Glad I'm Dead," was the product of AI tools.

The lawsuit was filed by Carlin's estate, which alleges the hosts infringed on its copyright in Carlin's works, and that they illegally made use of and profited from his name and likeness.

They asked that the judge force the Dudesy hosts to pull and destroy the video and its associated audio, and to prevent them from using Carlin's works and likeness and name in the future.

After the lawsuit was announced, a spokesperson for Dudesy backtracked on prior claims, saying that the writing in the faux-Carlin routine wasn't produced by AI but by one of the human hosts, and thus the claim of copyright violation wasn't legit: while the jokes may have been inspired by Carlin's work, they weren't generated by software that used his work as raw training material, as originally claimed—which arguably could have represented an act of copyright violation.

This is an interesting case in part because if the podcasters who created this fake Carlin and fake Carlin routine were to be successfully sued for the use of Carlin's likeness and name, but not for copyright issues related to his work, that would suggest that the main danger faced by AI companies that are gobbling up intellectual property left and right, scraping books and the web and all sorts of video and audio services for raw training materials, is the way in which they're acquiring and using this media, not the use of the media itself.

If they could somehow claim their models are inspired by these existing writings and recordings and such, they could then lean on the argument that their work is basically the same as an author reading a bunch of other authors' books, and then writing their own book—which is inspired by those other works, but not, typically anyway, infringing in any legal sense.

The caveat offered by the AI used to impersonate Carlin at the beginning of the show is interesting, too, as it said, outright, that it's not Carlin and that it's merely impersonating him like a human comedian doing their best impression of Carlin.

In practice, that means listening to all of Carlin's material and mimicking his voice and cadence and inflections and the way he tells stories and builds up to punchlines and everything else; if a human performer were doing an impression of Carlin, they would basically do the same thing, they just probably wouldn't do it as seamlessly as a modern AI system capable of producing jokes and generating images and videos and audio can manage.

This raises the question, then, of whether there would be an issue if this AI comedy set wasn't claiming to feature George Carlin: what if they had said it was a show featuring Porge Narlin, instead? Or Fred Robertson? Where is the line drawn, and to what degree does the legal concept of Fair Use, in the US at least, come into play here?

What I'd like to talk about today are a few other examples of AI-based imitation that have been in the news lately, and the implications they may have, legally and culturally, and in some cases psychologically, as well.

There's a tech startup called ElevenLabs that's generally considered to be one of the bigger players in the world of AI-based text-to-voice capabilities, including the capacity to mimic a real person's voice.

What that means in practice is that for a relatively low monthly fee you can type something into a box and have one of the company's templated voice personas read that text for you, or you can submit your own audio: a small sample gets you a rapidly produced, decent reflection of that voice reading your text, while a larger sample lets the company take a somewhat more hands-on approach, creating a more convincing version of the same for you, which you can then leverage in the future, making that voice say whatever you like.

The implications of this sort of tech are broad, and they range from use-cases that are potentially quite useful for people like me—I've been experimenting with these sorts of tools for ages, and I'm looking forward to the day when I can take a week off from recording if I'm sick or just want a break, these tools allowing me to
