Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Ida (00:00):
All right, let's unpack this: artificial intelligence.
It's not just a buzzword anymore, is it?
It's everywhere.
Allan (00:07):
Everywhere.
Ida (00:07):
Making our lives easier,
sometimes funnier, and well,
increasingly it's making art.
Allan (00:12):
Right.
Ida (00:13):
But here's where it gets
really interesting and, frankly,
a bit bewildering.
Who actually owns that art and, you know, who owns the data it's trained on in the first place?
Allan (00:24):
That's the billion dollar
question, or maybe trillion.
Ida (00:27):
Exactly. Today, we're diving deep into the fascinating and often contentious world where AI meets intellectual property.
You've sent us a stack of sources and our mission, basically, is to cut through the jargon.
Allan (00:40):
Yeah, there's a lot of it.
Ida (00:41):
And navigate this legal and ethical tightrope walk that comes with this revolutionary tech.
Allan (00:48):
And what's fascinating here, I think, is that AI isn't just expanding, it's genuinely creating entirely new categories of property.
Ida (00:54):
Right.
Allan (00:55):
And challenging legal concepts that have been around for, well, centuries.
Ida (00:59):
Yeah.
Allan (00:59):
We're talking about an industry projected to add a staggering $13 trillion to the global economy by 2030.
$13 trillion, wow. Yeah.
So naturally, when there's that much value at stake, the question of who gets what becomes absolutely critical and surprisingly complex.
Ida (01:17):
Absolutely, and it's not
just for the tech giants or big
corporations, right?
This affects everyone, totally. Independent artists, musicians, coders, designers, even you, the curious listener, as you navigate this new digital landscape.
We're here to give you the core basics, you know, what you need to understand how creations are being protected, or maybe not protected, in the age of AI.
Mm-hmm. Speaking of this revolutionary tech, the sheer
(01:40):
scale of the AI boom is staggering, isn't it?
Allan (01:43):
It really is.
Ida (01:44):
Our sources show global investment in AI startups just exploded, like, from $1.3 billion in 2010.
Allan (01:51):
Peanuts almost compared
to now.
Ida (01:52):
To over $40 billion in 2018. And the patents, the number of AI-related patent applications.
Allan (01:59):
Oh yeah, the patent
offices are busy.
Ida (02:00):
The US Patent and Trademark Office has published over 27,000 since 2017.
And get this: 16,000 of those flooded in during just the last 18 months.
Allan (02:10):
Wow, that's acceleration.
Ida (02:11):
That's a lot of innovation.
Allan (02:13):
Indeed, and it's not just about AI as an invention itself. AI is also becoming an incredibly powerful tool for IP development.
Ida (02:21):
OK, explain that bit.
Allan (02:22):
A tool?
Yeah, think about it.
Pharmaceutical companies are using AI in drug discovery, speeding up processes that used to take, you know, decades.
Ida (02:30):
Right, finding new molecules and things.
Allan (02:32):
Exactly. Advertisers are leveraging AI to create content at lightning speed. So this means AI is generating valuable new outputs, drugs, campaigns, whatever, and even making incremental improvements to its own algorithms. All potentially valuable intellectual property.
Ida (02:51):
It sounds like the ultimate
creative assistant then.
Allan (02:53):
In some ways yeah.
Ida (02:54):
But here's the rub: if AI is generating so much value, what does this mean for ownership, especially when, as we know, intangibles like IP represented, what, 84% of S&P 500 company value back in 2018?
Allan (03:06):
A huge chunk.
Ida (03:07):
It sounds like we're in a
bit of a legal wild west out
there.
Allan (03:10):
You've really hit on the core challenge there. How do we protect that value when the legal landscape is, well, it's still very much evolving?
Still catching up?
Definitely. Like WIPO, that's the World Intellectual Property Organization, the European Patent Office, the EPO, the USPTO here in the US, the US Copyright Office, they're all actively
(03:32):
examining these fundamental issues.
Ida (03:33):
Like what specifically?
Allan (03:35):
Well, questions of AI inventorship. Can an AI invent something? What's actually eligible for a patent anymore? Data privacy issues, which are huge, and, of course, AI-related copyright.
It's an area ripe for new strategies and, as we'll definitely see, a lot of legal debate.
Ida (03:50):
Okay, that discussion about protecting value raises, I think, a fascinating question about the source of that value itself.
If AI is creating so much, canit truly be called an artist?
Allan (04:02):
The artist question.
Ida (04:03):
Yeah, and if so, who owns that creation?
Because, let's be honest, we've all seen those incredible AI-generated images, or maybe heard AI-composed music.
Who's the actual author?
Allan (04:12):
Well, the general consensus, particularly in the US, is a pretty firm no.
No, really? Yeah.
The US Copyright Office has explicitly stated that works not created by a human author are simply not eligible for copyright protection.
Ida (04:24):
Okay.
Allan (04:25):
They even point to cases
like the now infamous monkey
selfie.
Ida (04:30):
Oh, the monkey selfie.
Allan (04:31):
That photo taken by a monkey, to illustrate that non-humans cannot be copyright holders. The core principle is human authorship.
Human authorship?
Based on original works of authorship. Needs a human.
Ida (04:44):
The monkey selfie.
That's a classic.
So okay, if a robot paints a masterpiece, it's just public domain, free for all?
Allan (04:51):
Pretty much yeah.
Ida (04:52):
No credit for the silicon Picasso. Doesn't that feel a bit old-fashioned, especially with how sophisticated AI is getting? Are we clinging to an outdated definition of creator?
Allan (05:03):
Well, that's the debate, isn't it? But legally, precisely, at least in the US right now. However, it gets nuanced very quickly.
Ida (05:10):
There's always nuance.
Allan (05:11):
Always. The US Copyright Office has clarified that if a human exercises sufficient originality.
Sufficient originality? OK.
In selecting the inputs, or maybe editing the AI's output, then the human-driven creative expression in that final work can be copyrighted.
Ida (05:26):
So the human touch matters.
Allan (05:28):
It seems so.
For example, there was a piece titled A Single Piece of American Cheese.
Ida (05:33):
Okay, catchy.
Allan (05:34):
It became the first visual artwork composed, they say, solely of AI outputs to actually receive a copyright. That was back in January 2025.
Ida (05:44):
How did that work?
Allan (05:46):
It was based on the human selection, arrangement, and coordination involved in the creative process. Not the AI's autonomous generation, but the human curation, if you like.
Ida (05:55):
So it's not the AI's creation itself, but the human's guidance of the AI that counts. Like the AI is just a really fancy paintbrush.
Allan (06:02):
That's one way to view it, yeah. A very, very fancy paintbrush. But the Copyright Office still maintains that works where the expressive elements are determined by a machine remain uncopyrightable.
Ida (06:13):
Okay, so it's a fine line.
Allan (06:14):
A very fine line, and it mirrors the patent side too. The USPTO similarly codified restrictions in February 2024, confirming human inventors must always be named.
Right.
That followed the Thaler v Perlmutter ruling in August 2023. Stephen Thaler's AI program, DABUS, was denied inventorship.
Ida (06:33):
So, safe to say, AI as the inventor didn't happen.
Allan (06:35):
Didn't happen in the US. But, interestingly, other places are, well, they're painting in different shades of gray.
Ida (06:42):
Like where.
Allan (06:44):
The UK's Copyright, Designs and Patents Act from way back in 1988 says the author of a computer-generated work is the person making the arrangements necessary for the creation. A bit broader, perhaps.
Ida (06:55):
Hmm, arrangements necessary
.
Allan (06:57):
And China's Beijing Internet Court actually recognized copyright in AI-generated images back in November 2023.
Ida (07:04):
Really.
Allan (07:06):
So it's truly a global patchwork right now. Everyone's figuring it out.
Ida (07:08):
Speaking of what AIs can
create, that brings us to how
they even learn to do that.
It's not magic.
Allan (07:14):
Definitely not magic.
Ida (07:14):
They're fed massive amounts of data, and a lot of that data, well, it's copyrighted stuff, human-created stuff.
Allan (07:21):
The training data issue.
Ida (07:23):
This is where the lawsuits really start flying, right?
Allan (07:25):
Absolutely.
This is a huge area of conflict.
Deep learning models, basically, they scrape enormous amounts of media from the internet.
Ida (07:33):
Scrape? Sounds illicit.
Allan (07:35):
Well, it means automatically collecting it. Think of it like a super-fast librarian scanning millions of books and images, but instead of reading, it's converting text and visuals into numbers, like a unique digital fingerprint for each piece, just to identify patterns.
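(To make that "text into numbers" idea a bit more concrete, here is a minimal Python sketch of the general concept. It is purely illustrative: the toy word-to-ID vocabulary and the hash-based fingerprint are assumptions for this example, not how any particular model's training pipeline actually works.)

```python
import hashlib

def tokenize(text, vocab):
    """Map each word to an integer ID -- a crude stand-in for what a real tokenizer does."""
    return [vocab.setdefault(word, len(vocab)) for word in text.lower().split()]

def fingerprint(text):
    """Derive a short, stable numeric fingerprint for a piece of content."""
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    return int(digest[:12], 16)  # first 48 bits of the hash as an integer

vocab = {}
sample = "The quick brown fox jumps over the lazy dog"
print(tokenize(sample, vocab))  # e.g. [0, 1, 2, 3, 4, 5, 0, 6, 7]
print(fingerprint(sample))      # one large integer standing in for the whole piece
```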
Ida (07:50):
Okay.
Allan (07:51):
But that process necessarily involves making copies of copyrighted works, millions, maybe billions of copies.
Ida (07:57):
Right. Copying is usually a no-no in copyright.
Allan (08:00):
Exactly. So it raises the fundamental legal question: does this infringe the copyright holder's exclusive right to reproduce their work, or does it fall under fair use exceptions?
Ida (08:12):
Fair use. Ah, that's the big legal defense we hear about, isn't it? Like if you use a tiny bit of something for a review, it's okay. Does that logic apply here?
Allan (08:21):
It's a lot more complex than that. It's a four-factor legal defense in the US.
Ida (08:24):
Yeah.
Allan (08:25):
Definitely not a simple
loophole.
Ida (08:26):
Okay, four factors.
Allan (08:27):
Yeah, courts weigh the purpose and character of the use, the nature of the copyrighted work, how much was taken, and the effect on the market for the original. And traditionally, AI developers argued that training AI models is fair use because it's transformative, meaning it changes the original work into something new, it doesn't just copy it, and it's limited in how it uses the content.
Ida (08:40):
Makes sense kind of.
Allan (08:42):
Some compared it to cases like Google Books. Remember that? Yeah, scanning copyrighted books was found to be fair use.
Ida (08:48):
I do remember that yeah.
Allan (08:49):
But critics are really pushing back now, saying hold on, this is fundamentally different.
Judge Vince Chhabria, in a case involving Meta and OpenAI, put it quite bluntly. He basically said you have companies using copyright-protected material to create a product that is capable of producing an infinite number of competing products.
Ida (09:11):
Competing products yeah.
Allan (09:13):
You are dramatically changing the market for that person's work, and you're saying that you don't even have to pay a license to that person. I just don't understand how that could be fair use.
Ida (09:22):
Wow, that's a powerful statement from a judge, kind of a mic-drop moment in court.
Allan (09:27):
It really resonated.
Ida (09:28):
Does that sentiment reflect a broader legal shift we're seeing? Because that sounds like a game changer, and we have seen some big legal fireworks lately.
Allan (09:37):
It certainly seems to, yes. In February 2025, a federal court sided with Thomson Reuters against Ross Intelligence. The court ruled that Ross's AI, and it wasn't even generative AI, its use of Westlaw headnotes was not fair use.
Ida (09:55):
Why?
Because it was building a directly competing product.
Allan (09:56):
Directly competing.
That seems key, very key.
And even more significantly, in August 2025, the AI company Anthropic agreed to pay, wait for it, $1.5 billion to settle a class action lawsuit with authors.
$1.5 billion?
Billion, with a B. Largest publicly reported copyright recovery in history.
Ida (10:14):
Whoa.
What was the core issue there?
Allan (10:17):
Well, the judge basically affirmed that using legally purchased books for training was fair use.
Okay, fine.
Okay, but using unlicensed works from data sets like the Pile, which is this huge, messy collection of internet text, much of it scraped without clear permission?
Ida (10:30):
The scraping again.
Allan (10:31):
That was not fair use, and that led to that massive settlement.
Ida (10:33):
Wow.
So buying books, getting licenses, fine. Scraping from shadow libraries or these massive unlicensed data sets, big, big no-no.
Allan (10:41):
Got it.
That seems to be the emergingline.
Ida (10:43):
Yeah, this settlement, $1.5 billion, that's huge. You know, some people might look at that and say, if every image, every text snippet had to be compensated, wouldn't that just bankrupt AI companies?
Allan (10:55):
This is the argument from
the developers.
Yeah.
Ida (10:57):
While others might argue the value of any single image in a giant data set is, like, centi-pennies, negligible.
Allan (11:03):
Right, the scale argument.
Ida (11:05):
But clearly that Anthropic settlement indicates a significant shift, at least in that case.
So what's the scene like outside the US? Are they all grappling with this too?
Allan (11:14):
Oh, absolutely. Different approaches, but definitely grappling. In the EU they have the 2019 Digital Single Market Directive. It provides text and data mining, TDM, exceptions to copyright infringement.
TDM exceptions? Yes. And the EU AI Act, adopted in 2024, clarified this covers AI data collection, but copyright
(11:34):
holders can opt out.
Ida (11:35):
They can say no, don't
train on my stuff.
Allan (11:37):
Potentially yes, and,
crucially, providers of general
purpose AI models will need topublish detailed summaries of
their training content by August2025.
Transparency, ah, transparency.
That's interesting.
The UK has proposed somethingsimilar.
India, on the other hand,currently lacks specific
provisions, which is leading toongoing legal battles, like in
(11:58):
the ANI v OpenAI case.
Ida (12:00):
Right, so different paths,
same destination, maybe.
Allan (12:04):
Or maybe different
destinations.
It really does sound like everyone's trying to figure out how to dance this legal tango. It's complicated.
Ida (12:10):
Okay, so the input side,
the training data, clearly
contentious, huge legal battles.
But what about the output?
Allan (12:18):
The stuff the AI makes.
Ida (12:19):
Yeah. If an AI is fed all this copyrighted material, what happens if it then generates something that looks, well, suspiciously familiar itself? Like it basically spits out a near copy of something it saw during training.
Allan (12:32):
Yeah, that can happen. It's a phenomenon called memorization or sometimes overfitting.
Ida (12:36):
Memorization.
Allan (12:37):
Essentially, the AI has learned the training data too well, almost like a student who can perfectly recite a textbook page but doesn't really understand the concepts.
Ida (12:45):
Right. Rote learning.
Allan (12:46):
Exactly. While AI developers generally consider it an undesired outcome, they want generalization, not just copying, these deep learning models can replicate items pretty closely from their training set.
Ida (12:58):
And that's a legal risk.
Allan (12:59):
A significant copyright risk, yes. Because under US law, to prove infringement, a plaintiff needs to show their work was actually copied and that the AI output is substantially similar. Memorization could tick both boxes.
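(As a rough illustration of why memorization is detectable at all, here is a minimal Python sketch, a toy, assumption-laden example rather than any court's or lab's actual test, that flags when a generated string shares a long verbatim run of words with a known training snippet.)

```python
def longest_common_run(generated: str, training: str, min_words: int = 8) -> str:
    """Return the longest run of consecutive words shared verbatim (empty if shorter than min_words)."""
    gen, train = generated.lower().split(), training.lower().split()
    best = []
    for i in range(len(gen)):
        for j in range(len(train)):
            k = 0
            while i + k < len(gen) and j + k < len(train) and gen[i + k] == train[j + k]:
                k += 1
            if k > len(best):
                best = gen[i:i + k]
    return " ".join(best) if len(best) >= min_words else ""

# Toy check: a generated sentence that reproduces part of a "training" passage.
training_snippet = "it was the best of times it was the worst of times it was the age of wisdom"
generated_output = "the model wrote it was the best of times it was the worst of times indeed"
overlap = longest_common_run(generated_output, training_snippet)
print(overlap or "no long verbatim overlap")  # prints the 12-word memorized run
```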
Ida (13:13):
So if an AI generates an image that looks almost exactly like, say, a famous photograph or painting from its training data, that's a problem.
Allan (13:22):
Potentially, yes, a big problem, but it gets even more complex with something called style imitation.
Ida (13:27):
Style imitation.
Allan (13:28):
Yeah, generative models are quite adept at imitating the distinct style of particular artists. You know, our sources highlight how an AI could generate an astronaut riding a horse in the style of Picasso.
Ida (13:39):
Right, I've seen prompts
like that.
Allan (13:41):
But here's the thing: an artist's overall style, generally speaking, is not subject to copyright protection.
Ida (13:47):
Oh, really? Just the specific work?
Allan (13:48):
Generally, yes, the expression, not the underlying style or idea. This has led to really strong feelings among artists who feel like their entire creative identity, their style, is being sort of stolen or diluted.
Ida (14:02):
I can see why they'd feel
that way.
Allan (14:03):
Absolutely. While proponents argue that humans also learn and are influenced by existing art styles. That's how art evolves. It's a very thorny debate.
Ida (14:12):
Oh, so if it copies the vibe, the style, but not the actual painting itself, legally that might be okay.
Allan (14:18):
Legally, it's much harder
to challenge.
Ida (14:20):
Yes, that feels like a very, very fine line for artists and AI users alike. It seems it's not just about what the law says, is it? It's about what society feels is right or fair.
Exactly.
It's a very human problem for an artificial intelligence. Are these ethical considerations actually starting to influence the technology itself?
Allan (14:38):
That's the current legal nuance, yes, style versus expression. But you're right, there are ethical lines starting to be drawn, partly pushed by public feeling.
Ida (14:46):
Like what.
Allan (14:46):
Well, in March 2025, ChatGPT actually placed limits on users generating images in the style of living artists like Hayao Miyazaki.
Ida (14:55):
The Studio Ghibli director.
Why?
Allan (14:58):
There was this whole Ghiblification trend online, making everything look like his style. It sparked controversy, partly because Miyazaki himself has been very publicly negative about AI art.
Ida (15:10):
Ah, OK, so the platform responded to the artist's feelings in the public discussion.
Allan (15:16):
It seems so. It shows a growing recognition that ethical considerations, public sentiment and the creative community's concerns are pushing beyond just strict legal interpretations. Companies are starting to listen, maybe.
Ida (15:28):
Interesting. This brings us, then, to the big picture. What actually happens when AI-generated content floods the market? Does it ultimately help us, maybe more choice, lower prices, or does it hurt the human creators it learned from by effectively competing with them?
Allan (15:42):
Well, there's a new study by Samuel Goldberg and H Tai Lam that gives us some pretty striking insights into exactly that question.
Oh good, some data.
Yes, they examined an online marketplace, a real one, that, starting in December 2022, began allowing AI-generated images to compete directly alongside human-made ones.
Ida (16:02):
Crucially, they had to be labeled as AI-generated.
Okay, labeled, and what did they find? Was it like a robot uprising for artists? Did the humans get pushed out?
Allan (16:09):
The results were pretty stark. The total number of images for sale on the platform skyrocketed, up 78% per month.
Wow.
And that increase was almost exclusively driven by generative AI.
Ida (16:22):
Okay, more stuff available,
but what about the artists?
Allan (16:25):
And here's the kicker: the number of non-AI artists, human artists, on the platform actually dropped by 23%.
Ida (16:33):
Ouch. Okay.
Allan (16:35):
And while total sales across the platform increased by 39%, more transactions overall.
Right.
Purchases of non-AI human-made images actually dropped.
Ida (16:44):
Ah, so the AI stuff wasn't just adding to the pie, it was taking slices from the humans.
Allan (16:48):
That strongly suggests that AI-generated images are not just a supplement but direct substitutes for human-generated ones, at least in this marketplace context.
Ida (16:56):
So for consumers, maybe good news, more choice, maybe better quality, pushed by competition.
Potentially yes.
But for human artists, yeah, yeah, not so great. Sounds like a difficult market shift.
Allan (17:06):
Exactly. The study concluded that generative AI is likely to crowd out non-GenAI firms and goods. Good news for buyers, perhaps, but tricky, as they put it, for producers.
Ida (17:17):
Tricky seems like an
understatement.
Allan (17:19):
And they flagged a real policy concern about markets being completely dominated by AI, effectively squeezing humans out.
Right.
And this research, importantly, directly supports the argument that AI outputs are substitutes for human-created inputs. That's a central point in many ongoing lawsuits, including the big one, the New York Times case against OpenAI.
Ida (17:41):
Connecting the dots. So what's the solution here? I mean, do we try and ban AI art? That seems unlikely.
Allan (17:47):
The researchers don't recommend a ban, no, but they talk about ensuring equitable access to GenAI technologies for non-AI artists.
Ida (17:53):
Equitable access?
Allan (17:55):
How?
Ida (17:56):
It points to the need for new frameworks, new regulations. Creative Commons, for example, is actively exploring how its licenses can maybe support generative AI while still respecting human creators.
Trying to find a balance?
Exactly, and they acknowledge that ethical concerns go way beyond just copyright law.
Allan (18:14):
It touches on privacy, consent, bias, all those wider economic impacts we just talked about.
Ida (18:20):
It's a whole ecosystem of
issues.
Allan (18:22):
It really is, and if we connect this to the bigger picture again, the EU AI Act, adopted June 2024, is the world's first comprehensive AI law, a major step.
Ida (18:33):
What does it aim to do?
Allan (18:34):
It aims for safe, transparent, non-discriminatory AI.
Safe, transparent, non-discriminatory AI.
And, specifically on this point, it requires generative AI to disclose that content is AI-generated, like in the study.
Ida (18:45):
The labeling.
Allan (18:46):
And remember, publish
summaries of copyrighted
training data by August 2025.
Ida (18:55):
Those are crucial first steps towards greater transparency and maybe some accountability. It really is like a whole new economic and artistic paradigm shift unfolding right before our eyes.
Allan (18:59):
It feels that way,
doesn't it?
Ida (19:01):
So, wrapping this up, what does this all mean for you, our listener? Whether you're a creator grappling with these questions, a consumer enjoying AI-generated content, or just someone fascinated by how quickly tech is changing everything, it's clear that this intersection of AI and intellectual property is, well, incredibly complex. It's constantly evolving and it has significant real-world
(19:21):
impacts already.
Allan (19:22):
Indeed. From that fundamental question of human authorship we started with, to the intricate dance of fair use and AI training, and these very real economic shifts we're seeing in creative markets. This deep dive really shows us that our legal and ethical frameworks are frankly struggling to keep pace, playing catch-up. Definitely, the core challenge seems to be balancing this
(19:43):
incredible pace of innovation with the crucial need for protection and fair compensation for human creativity.
Ida (19:50):
Yeah, it's not just about the law on paper, is it? It's about fostering a future where AI genuinely augments human potential, rather than maybe diminishing it or replacing it entirely. That's the hope, and the question of who gets compensated and how, well, it's far from settled, but cases like that massive Anthropic settlement are certainly setting new impactful precedents. Things are moving.
Allan (20:11):
They really are. And maybe this raises an important question for all of us to think about, as AI continues to evolve its creative capabilities, and it will, for sure. How will we collectively define and truly value originality and authorship in a world where machines can mimic, synthesize and generate content at scales we've never seen before?
Ida (20:33):
A question to ponder indeed. Will future masterpieces come with a human signature, or maybe just a string of code? That's definitely some food for thought until our next deep dive.