
March 2, 2025 90 mins


Show notes:

3:00 David Newhoff - question of authorship

7:15 Peter Wasilko

9:00 Andres Guadamuz - blog post on AI copyright authorship

10:30 China’s focus on “intellectual achievement” 

12:20 Section 9(3) of the UK's Copyright, Designs and Patents Act 1988 

13:00 Emily Gould - whether copyright is fit for purpose 

13:30 UK joint evidence session on the future of AI and copyright law 

17:15 Newhoff - use of an artist’s style 

18:40 Wasilko - an artist’s training of a model with its own work

20:15 musician's post-stroke release of a song generated from a model trained on his own prior work

21:00 Salles Bruins' question on definition of intellect 

25:40 Ankit Sahni - China's protection

28:30 Sahni - India’s position on creativity falls in the middle 

29:00 Ankit Sahni - RAGHAV output “Suryast” 

33:45 Ankit Sahni - protection of AI-assisted works by China’s courts 

35:00 Wasilko - hypothetical of photographing sunsets on VR headsets

36:50 Ankit Sahni - USCO’s case by case basis

37:50 Newhoff - what is actually protectable against infringement

39:30 Sarony decision: looking at human choices used to create photos

41:00 Newhoff - ‘authorship by adoption’ is a “bridge too far”

42:15 Salles Bruins - question about training in Wasilko’s hypothetical

43:10 Wasilko - the real “bridge too far” is requiring a license to “learn” from works

48:00 Stanford’s CodeX Group - talk on product JudgeAI 

50:30 Andres - human creativity exists irrespective of copyright 

52:00 Salles Bruins - copyright is a tool to enable artists to profit 

53:30 Kritika Sahni - defining intellect dependent on AI context 

54:50 Ankit Sahni - sui generis system of registration 

58:45 Gould - applying a right like copyright to AI output is “tough” to get right

1:02:00 Guadamuz - Ukraine’s sui generis right for AI works 

1:03:45 Jason Jean - defining intellect 

1:08:50 Newhoff - unconvinced that it’s a “sui generis question”

1:09:30 Wasilko - whether inputting human work makes model “assistive”

1:13:00 question of global copyright approach

1:17:15 what is the end game?

Please share your comments and/or questions at stephanie@warfareofartandlaw.com

Music by Toulme.

To hear more episodes, please visit Warfare of Art and Law podcast's website.

To leave questions or comments about this or other episodes of the podcast and/or for information about joining the 2ND Saturday discussion on art, culture and justice, please message me at stephanie@warfareofartandlaw.com.

Thanks so much for listening!

© Stephanie Drawdy [2025]


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
To say that I'm not feeling very sanguine about developing decent cooperation and handshakes between some of the major developers and the creative community.

Speaker 2 (00:13):
I'll believe it when I see it, but I'm not seeing it right now.
Welcome to Warfare of Art and Law, the podcast that focuses on how justice does or doesn't play out when art and law overlap.
Hi everyone, it's Stephanie, and that was David Newhoff, co-founder of RightsClick and author of Who Invented Oscar

(00:35):
Wilde? The Photograph at the Center of Modern American Copyright.
What follows is a recording of a recent Second Saturday online gathering that focused on AI and IP.
The panel of guests that convened to discuss this topic, in addition to David, included Emily Gould, the Institute of

(00:55):
Art and Law's Assistant Director; Dr Andres Guadamuz, a reader in intellectual property law at University of Sussex and the editor-in-chief of the Journal of World Intellectual Property; Ankit Sahni, an AI and IP enthusiast and the co-creator of Suryast; Kritika Sahni, an IP attorney and partner at the India-based

(01:18):
Sahni Associates; and Stefania Salles Bruins, an artist and attorney.
As you'll hear, this panel brought an international perspective to the issues raised by AI and IP, including the question of authorship and whether tried and true principles are sufficient for generative AI or whether a sui

(01:39):
generis right should be created.
Welcome to Warfare of Art and Law and Second Saturday.
Thank you so much for all being here.
A special thanks to our panel guests, and we'll begin with Emily Gould, the Institute of Art and Law's Assistant Director.

Speaker 3 (01:57):
Hi, Stephanie.
Thank you so much, pleasure.

Speaker 2 (02:00):
Dr Andres Guadamuz, a reader in IP law at University of Sussex and the editor-in-chief of the Journal of World Intellectual Property.

Speaker 4 (02:09):
Thanks for having me.

Speaker 2 (02:11):
Ankit Sahni, AI and IP enthusiast and the co-creator of Suryast.

Speaker 5 (02:17):
It's a pleasure, thank you.

Speaker 2 (02:19):
Kritika Sahni, an IP attorney and partner at the India-based Sahni Associates.
Thank you for having me, Stephanie.
Thank you. David Newhoff, co-founder of RightsClick and the author of Who Invented Oscar Wilde? The Photograph at the Center of Modern American Copyright.

Speaker 1 (02:40):
Thank you, thanks very much for having me.

Speaker 2 (02:42):
Stefania Salles Bruins, artist and attorney.

Speaker 7 (02:47):
Thanks for having me, hi, everyone.

Speaker 2 (02:50):
And I am Stephanie Drawdy, also an artist and attorney, and we will begin with the question of authorship, and that's a question that David Newhoff had honed in on.
So, David, would you like to begin?

Speaker 1 (03:05):
Sure.
Thanks so much for letting me kick this off.
So you know, AI and the question of authorship is one of the key challenges, I think, that's facing a lot of people, you know, both artists and the IT community.
For those who are following it, the Copyright Office very

(03:27):
recently, just about a week ago, released its second report on this very question of whether or not works are copyrightable when AI is used.
I can say from my perspective that I generally agree with the Office's perspective.
That, number one, certainly under US law, human authorship is

(03:47):
required in the expression that is produced in order for copyright to attach at all.
That use of assistive tools, for example, that might change grammar or help you with color correction, things like that that have been around a long time, that this should not actually

(04:09):
affect copyright protection.
When you mix generative AI with human authorship and you combine those two, then we get into, you know, what is kind of the question, that

(04:46):
expression created purely by generative AI should not be protected.
But this obviously opens up questions that will get into maybe some new complicated areas in discovery processing in courts.
But that's where I think we are as far as authorship, at the moment, at least under US law.

Speaker 2 (05:01):
Thank you, David, and when I was looking through the part two, actually part one and part two, of the US Copyright Office report, I was pleased to see that you were referenced in many of the footnotes.

Speaker 1 (05:14):
Thank you.
Yeah, I was pleased as well.

Speaker 2 (05:17):
Good to be heard.
And on that, the part two, talking about extending copyright protection to generative AI works: one of the points that you made I thought was really interesting was that at a certain point, the application of copyright law itself might become irrelevant and/or unconstitutional.

Speaker 1 (05:38):
Sure.
So you know, what I was thinking about there, of course, is that that's pulled from a larger paragraph where, if we start just arbitrarily saying anything that exists as a work of expression, simply by manifesting, is protected under copyright.
This opens up the possibility that, say, Company A and Company B are generating works, and if everything there is protected

(06:03):
under copyright, it raises all sorts of questions as to whether or not either one could arguably defend its own rights against the other.
And that's what I was thinking about in terms of it becoming irrelevant at a certain point.
Who has standing?
Are all works ultimately independently created if AI A is unaware of the work of AI B?
Right, it opens up kind

(06:27):
of a can of worms in that regard, and one way to avoid that, of course, is that we decide that, no, that work that is purely generated by machines is simply not protected, that we maintain the human authorship doctrine.
That's what I was getting at in that paragraph.

Speaker 2 (06:47):
Yeah, and to me this relates closely to a point that Stefania raised about the question of how we define and look at intellect.
So if anyone wants to make a comment about authorship or anything that David has just said, please jump in, and otherwise I might just shift the conversation a little bit to

(07:12):
Stefania for her thoughts about it.
And someone, who I'll quickly go to, has raised their hand: Peter, I believe.
Go ahead with your question or comment.

Speaker 8 (07:19):
Okay, I just want to comment on how difficult it is to use a generative AI system to achieve a result that I already have in my head.
There are so many models, so many individual parameters.
That combination has a mind-boggling number of possible permutations of settings.
Before the generative AI starts spitting out images, out of

(07:39):
which I'm going to select maybe one out of 50 or 60 that come up, I'll be doing extensive research across websites to choose which model might give me the best chance of getting the kind of image that I'm working towards.
So eventually, by the time I get it, yes, the generative AI did the image.
I don't have the artistic ability to express it, but I already had the kind of image it produced in my head and went

(08:00):
through a tremendous amount of effort and cross-comparison and experimentation in order to achieve that result.
And I wonder whether an artist who documented that process might be able to come back in and use that as justification to be able to achieve a copyright on the AI artwork.
And has anyone looked at that problem?
Has anyone tried to come in and say, okay, here was my process

(08:20):
and how I was able to get to the end result, and here's how much time and effort went into tweaking parameters of the model, and here's how many times I ran the model in order to get this final output image?
And is that enough to be able to assert human authorship for copyright purposes?

Speaker 2 (08:37):
I completely relate on many of the points that you're grappling with: the idea that sweat of the brow is not going to be a factor and yet, on a case-by-case basis, looking at all of the points that you just raised and the amount of different prompts or modifications and things like that that go into it, how we're going to go forward with a

(08:59):
case-by-case analysis of which work is or is not copyrightable.
Is there anyone else who had thoughts?

Speaker 4 (09:07):
I think it's quite interesting to contrast what's been happening in the United States with sort of what's been happening elsewhere.
Now, funnily enough, I just published yesterday, because I was preparing for this, I decided, okay, I'm just going to write down a lot of my ideas and just publish the blog post, mostly

(09:27):
about what's been happening elsewhere, and it's interesting what we are seeing emerge.
First, lots of things have been happening very quickly.
It's just under three years that generative AI sort of

(09:49):
exploded into everyone's attention, and sort of the idea that everyone had is, okay, none of this is copyrightable, of course, and I think that a lot of this conversation was rightly led by the US Copyright Office.
But what has been happening, interestingly, is that China, particularly, has now had four cases: one denied copyrightability and

(10:10):
three have accepted some form of works generated by AI to copyright protection.
And what is common in those cases?
We've had one that predates sort of the generative AI explosion and two that have happened more recently, 2023 and

(10:34):
2024.
What has happened is that the courts have looked at what the human intervention has been.
So Peter was making a good point about this, as anyone that has worked with image generators, but also with text prompting, to get any good results, knows.

(10:57):
To get slop is very easy, AI slop is extremely easy, and now we're seeing this all of the time.
But what has been happening is that for you to actually get anything good, and I would like to think of myself as at least having achieved some merit in this, is that you have to do a

(11:20):
lot of work and you have to do experimental prompts and lots of choice of outputs.
And that takes us to what is protectable in other countries, particularly what we see in Europe and in the UK, as intellectual creation that reflects the personality of the author and their free and creative choices.

(11:41):
And in China, what has happened is that the courts have decided, okay, what you have done, all of these complicated prompts, but also the selection, it is what they call intellectual achievement, and so at least three decisions have now declared images that were generated with Midjourney to actually have copyright.

(12:03):
Now, I've always been very keen on for us to think, okay, we have to look at the basics of copyrightability.
What's happening in the UK is an entirely different thing.
We're probably going to lose Section 9(3), the famous or

(12:28):
infamous Section 9(3), depending on which side of the aisle you're sitting on.
So what has been happening, I think, with the US Copyright Office, and also with everything else that has been going on, is a recognition that most of these things are tools and, just like Photoshop, just like photography, just like word

(12:49):
processing, etc.
These technical tools are going to produce, sometimes, works that are protected by copyright and some things that are not.
Sorry, I went on too long.

Speaker 3 (13:00):
That's great, Andres.
So I think already what we're talking about is bringing us on to the wider question of whether copyright is sort of fit for purpose in the whole AI environment and whether we can apply the long-standing, tried and tested rules to the

(13:21):
new environment.
And I was listening just the other day to a really interesting joint evidence session of the Culture, Media and Sport and the Science, Innovation and Technology, I think they're called, committees, who were

(13:42):
looking at all of these questions, because at the moment, most of you probably know this, we're in the midst of a government consultation in the UK all about this very issue, AI and copyright, and so they're gathering evidence and they were talking to both AI companies, particularly sort of young AI companies, smaller businesses, on the one side and then members

(14:07):
of the creative industries on the other, and what came across quite strongly, I felt, there were mainly people in the publishing industry, they were quite confident that actually copyright law and those basic principles, you know, as they have stood in the UK since

(14:27):
sort of 1709 with the famous Statute of Anne, the first copyright statute, those principles are actually doing okay, and the balance that copyright is seeking to create and maintain between those rights of control of the authors and then the accessibility for everyone else,

(14:51):
they were actually appropriate and would work in the world of AI.
But they felt that at the moment there are just sort of flagrant infringements of those copyright rules, and that what needs to happen is sort of better enforcement and

(15:12):
transparency.
Because what came across to me was that there was a real sense that the control, you know, the control that, as copyright owners, they should have, was actually just being, you know, taken from them, because they simply don't know when their data and how their

(15:36):
data is being used by AI developers, and that sort of lack of knowledge means that they can't really do much about it, and they just feel that, you know, the control that copyright is supposed to give them is sort of being, you know, shifted from under their feet.
But they feel that, you know, if copyright were properly enforced, then actually it is still a relevant and, you know,

(16:00):
appropriate tool for the protection of creators.

Speaker 2 (16:05):
Thank you, Emily and Andres.
Everyone else, please feel free to jump in.

Speaker 1 (16:13):
I would only add, on that last point about copyright being fit for purpose, it's important because obviously we switched a little bit there from using AI as a tool to create works versus the question of mass infringement for the development of AI tools.
And they're two completely different questions, of course.
And the first question, of machine training and

(16:37):
infringement, is certainly, in the States, working its way through about 25 cases at this point or more, and we'll see where that goes.
But you know, is copyright fit to that purpose?
Absolutely.
I would also say that it's still fit for the questions that we're dealing with in terms of, once we start to use AI as tools, can it address this question of copyrightability using it as

(17:01):
tools?
So I don't see any real reason for a new regime per se.

Speaker 2 (17:06):
David, I was re-listening to our interview from, I believe, last year, and we, I think, at that point had touched on this issue of style as well.
It's raised by the US Copyright Office recently in, I believe, their Part 1 report.
Since we had touched on this before and I know you have opinions about that, would you like to raise any thoughts on

(17:29):
that?

Speaker 1 (17:30):
Sure, I mean, the Copyright Office's first report was on use of likeness.
They were really addressing something that's technically outside copyright law, which is right of publicity questions, which is being raised in a different question, in a different light right now.
You know, I don't think the view has changed all that much

(17:51):
in the sense that copyright historically does not protect style.
However, generative AI certainly gives me a new opportunity to say, oh, you know, make me a work in the style of Stephanie, and certainly creates a new potential threat in the marketplace for your work.

(18:12):
But it starts to cross over into other areas of law, like fraud and, again, what we traditionally call right of publicity.
I don't know that necessarily copyright will address that question, but there are already new proposals, new legislative proposals, to address the question.

(18:32):
We've got a ways to go to get there, I think, but I think it's outside copyright law for the most part.

Speaker 2 (18:40):
And, Peter, did you have another thought or question?

Speaker 8 (18:43):
Yeah, I wonder if we could maybe try to make a trade dress kind of an argument for that.
And also I wonder if it would make a difference if an artist was using a model that was trained exclusively on their own prior art, so that they'd be generating it, but the model itself was trained exclusively on your paintings or your photography and then produced more in the same style, again of

(19:07):
your work, but that's because it was trained entirely on your original work product.

Speaker 1 (19:12):
So I'll answer.
I don't want to hog the room, but I mean, certainly, if you use your own work product to train your own AI, there's no question of having infringed anyone else's work to get there.
It doesn't necessarily address the question of whether anything new that comes out is protectable, that you've engaged

(19:32):
in any kind of authorship under the law, and, as you alluded to earlier, the amount of work, that sweat of the brow, is supposed to be discounted under copyright law.
And so the question, and we'll see how it turns out in Allen v.
Perlmutter, a different Allen, because that case specifically

(19:56):
addresses the volume and complexity of prompts that he used to generate that work, which he claims to have been, you know, his own mental conception, but we'll see what the court says about that particular case.
He's going to be the first one answering that question.

Speaker 2 (20:13):
And also, on that point, the idea, and this has been touched on by the Copyright Office, that individuals who, for example, they name a certain musician who'd had a stroke and he could no longer perform, and he was able to recently release a song, after, I think, a decade, based off of his own prior work, and creating a new song, and that to me raises such

(20:38):
a compelling reason why you should perhaps consider certain case-by-case exceptions where you could extend copyright.
So, unless there are any other points to be raised, Stefania, on this topic of intellect, we talked about how China had

(20:59):
discussed this intellectual achievement concept, the question of how we define intellect or intelligence.
I just thought it was such an interesting point that you would raise, and so then I went and looked at the definition, you know, the different dictionary definitions for intelligence, and one that I thought does not help the case

(21:19):
for human authorship: the power of knowing, as distinguished from the power to feel or to will, and to have the capacity for rational thought, especially when highly developed.
Just different definitions like that that have nothing to do with the human aspect and could totally just be put into the

(21:42):
machine realm as these tools develop.
So I'm not sure if that's what you were thinking of, but go ahead and give your perspective on this question.

Speaker 7 (21:51):
Yeah, thanks, I actually raised the question out of interest to hear all the experts' positions on it.
It was more of a, my input is the question.
I don't have a position to take yet, other than, as far as I know, these bots don't yet have what the standard definition of

(22:15):
intellect is, even according to the programmers who make the bots, which I think is funny that we leave that out of the discussion.
We talk about the person who uses the prompts, we talk about the artists whose work is used by the bots, but we never talk about the coders who created the actual function that can do all this stuff that the prompter inputs.

(22:37):
Yeah, so basically, I'd love to hear people's perspective on intellect, and that comment about the decision in China was very interesting, so maybe if we can pick up from there, that would be fun.

Speaker 2 (22:51):
Thank you, so anyone.

Speaker 4 (22:54):
Just perhaps briefly, on the question of the programmers and the developers of all these tools: most of these have been created by employees of companies, so work for hire may or may not apply, depending on where you

(23:15):
are sitting, but for the most part, these companies are not interested in having any ownership of the outputs that the tools generate.
I think that there are very interesting, good reasons for that.
If they claimed ownership, then I think that they would be

(23:38):
opening themselves to more liability, potentially.
But yeah, so we come to an issue of this being about platform liability and intermediary liability and all of those wonderful problems for those of us that also have internet law hats.
I'm a big internet regulation nerd.
I can talk for hours about platform regulation and intermediary

(24:03):
liability, but yeah, that's a different question.
I think that the companies are not interested at all in claiming the outputs.

Speaker 8 (24:12):
Yeah, perhaps more to the point, the minute a company tried to claim the outputs, absolutely no one would use their product.
From a market perspective, it would obliterate the marketability of their product, so no one's going to go there.

Speaker 7 (24:23):
Yeah, I meant more as a sort of intellectual, academic exercise of, if we think of the true meaning of copyright and why it exists and the whole purpose of it, where would it lie from that perspective?
Yeah, from a commercial standpoint, I understand, it's a moot point.

(24:45):
But I think that, you know, there's so many aspects to this problem and sometimes I think we as a whole, globally, we lose sight of what the whole purpose of this was.
And I think that going back to the basics, like the intellectual aspect of intellectual property rights, it

(25:06):
seems redundant to mention it, but I think we do lose sight of it, and I don't think those machines are intellectual yet.
I guess this is where my belief comes in: the amount of time you spent putting prompts in, no matter if you spent 10 years consistently, I don't think that would rise

(25:27):
to the level of intellect.
Personal opinion.

Speaker 5 (25:31):
Yeah, just to respond to what Stefania is saying, you know, the original question, and then also the China part.
I mean, with our experience of the Suryast case, of course, you know, I'll try to bring in a bit of that.
On the Chinese cases first, I think, with the courts there, of course, that system doesn't match any other jurisdiction, or

(25:54):
at least, from what I see, the kind of representation we have today amongst the panelists or the attendees, that legal system is completely different, and as far as I'm concerned, I'm hardly an expert in Chinese law, least of all Chinese IP law.
But what I understood in my interactions with some representatives of

(26:14):
their, I don't even know how it's organized, but some sort of a trade department that had reached out to us for comment, and just general off-the-record discussions about the Suryast matter and where we are at in different jurisdictions.
So one of the things that they seemed to be interested in was, they kind of draw a distinction

(26:36):
between the effort that the human puts into drafting the prompts and the ability of a human to predict, also connecting this with what Peter said earlier, to be able to predict what output they'll be able to see, and the lengthier or the

(26:57):
longer or greater number of lines of prompts you put in.
They seem to believe that on a particular system or a particular popular generative AI program, such as Midjourney, for instance, if you work with it enough you'd be able to, with several hundred lines of prompts, safely predict or

(27:18):
presume with some amount of proximity what the output would look like.
So they're looking at protecting two aspects: the prompts as literary works, not connected with the output by itself, and then the output on a case-to-case basis.
Perhaps not the output itself, but, in the most recent cases I

(27:42):
understand, what the Beijing court protected was an image which was one part of output, but then it was used and then edited further and there was a derivative that was created out of that.
So I think that, doing both the things, they want to protect the prompts, substantive prompts, not general simple

(28:04):
prompts, as literary works, and then the corresponding output, or output by itself, provided there are some more additions or editing that humans make to it, as something which a human has taken the assistance of a machine to create and thus having met the level of

(28:29):
creativity that that legal system expects.
I mean, from our experience, Kritika and I, for instance, are basically based in India, and the Indian law is very close to, as Emily was saying earlier, we're based on common law and we're very closely based on UK law.
So in a recent landmark judgment of the Indian Supreme

(28:52):
Court, what has come out and what has been established is that India, it seems, if there were two ends of the spectrum, sweat of the brow and modicum of creativity as two different ends, India kind of falls in between.
So it doesn't require a higher degree of creativity, at least

(29:12):
not as high as the US, perhaps a Canada-like situation, or even lower, but definitely more than sweat of the brow.
So what happened in the Suryast case was, and this is basically how we argued it, we said, look, A, just to clarify, it wasn't a generative AI program, it was something that we got developed as a work for

(29:36):
hire.
And, to Stefania's comments again, we thought it would be fun to name the tool after the developer, because we thought, well, for academic reasons, there's going to be a lot of debate about us, because we decided to name ourselves as the human co-author as well as the owner.
There would be a lot of debate about the pictures, the images,

(30:01):
the style images that we used, but the programmer would not feature anywhere.
So the name RAGHAV is basically the programmer's name, and then we kind of spent a couple of hours making it neat.
It's kind of stupid, but we ended up having some kind of an expanded form and then kind of reverse engineered an

(30:22):
abbreviation out of it.
So RAGHAV stands for Robust Artificially Intelligent Graphics and Arts Visualizer, and Raghav Gupta is the name of the person who programmed it on a work for hire contract.
And one of our first arguments to the office action report received from the Copyright Office was, they said, how are you

(30:43):
claiming ownership?
Forget authorship first.
Tell us, how are you the owner?
So we had to establish that, well, basically on first principles of property law, and also first principles of corporate law, basically saying, look, this is a work for hire product, and since we own the tool, we also own everything that the tool

(31:06):
produces.
Sort of like having a tree in your garden: if it's an apple tree, any and all apples that fall off it are basically yours.
So that was the argument to establish ownership.
And from there, coming back to the point of China and the level of creativity, so in Indian court, I mean, and before the

(31:29):
Indian Copyright Office, what we tried to argue was, look, you, as well as other more advanced systems that require a high amount of creativity, recognize the mere clicking of a camera button and the production of a picture as an artistic work or, in certain cases, a separate category called photographs or whatever.
And so what we're doing in this process, this is not a

(31:51):
generative AI tool, it's a style transfer tool, and what it does is it takes an input image, it takes a style image, and it allows you to manipulate or decide the amount of style that gets transferred from the style image to the input image.
So we said, look, if the first step, which is the input

(32:12):
image, by itself is enough to grant me copyright, which it is, because the input image was an image that we clicked off our regular phone on our house's terrace, of a sunset, literally, and that's what Suryast means in Hindi, it means sunset.
What we're doing is a plus, it is more steps

(32:34):
and more creativity.
Therefore, after that, because I'm, number one, making the creative decision, choosing what sort of style image would do well with a sunset scene, and in doing so we decided to go with Van Gogh's Starry Night for more reasons than one, not just because Van Gogh is my favourite artist, but also not

(32:57):
to let the IP office fall back on a moot objection and try and avoid the main question, the main elephant in the room, which was, we wanted to go with a style image which was in the public domain so that they couldn't fall back on that and then refuse it.
We were kind of trying to force them to address the point of how would this work.

(33:17):
So essentially we chose Starry Night, then we chose the amount of style that got transferred from Starry Night to the input image, and then came out the output image, which we call Suryast.
So what we tried to argue was, this definitely goes beyond the minimum amount of creativity that a human being is required

(33:41):
to put in to produce a work.
It is original because it doesn't match any other work which is out there, nor is anyone claiming that it does, and therefore that's basically where and in what direction we went.
And we believe from observation that the courts in

(34:01):
China are trying to arrive at that kind of situation, and at a very rudimentary level, I don't know how much sense this makes, but at a very simple level, it appears that they seem to have assumed that if they were to give protection to AI-assisted works, that would

(34:22):
encourage more investment in terms of tech companies wanting to have development in China onshore, especially with the ready supply of GPUs that they seem to have.
They're a country that is rich in natural silicon and therefore they're aces in making all of those chips and everything.

(34:46):
Same with Taiwan, but we know the political story behind that.
But this is what we understand, and those are my views on this aspect.
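For context, the style-transfer setup Ankit describes (an input image, a style image, and a dial for how much style carries over) is typically implemented as a weighted loss. Below is a minimal, hypothetical Python sketch of that idea; it is not RAGHAV's actual code, and feature_extractor is an assumed stand-in for any pretrained network that returns feature maps.

# Hypothetical sketch of the style-transfer idea described above, not RAGHAV's actual code.
# A feature_extractor(img) returning a list of (c, h, w) feature maps is assumed.
import numpy as np

def gram_matrix(features):
    # Channel-to-channel correlations, a common proxy for "style".
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (c * h * w)

def style_transfer_loss(output_feats, content_feats, style_feats, style_weight):
    # The output image is optimized to stay close to the input (content) image
    # while matching the style image's statistics; style_weight plays the role
    # of the user's "amount of style transferred" dial.
    content_loss = sum(np.mean((o - c) ** 2) for o, c in zip(output_feats, content_feats))
    style_loss = sum(np.mean((gram_matrix(o) - gram_matrix(s)) ** 2)
                     for o, s in zip(output_feats, style_feats))
    return content_loss + style_weight * style_loss

Raising style_weight pushes the output toward the style image's textures (Starry Night, in the Suryast example); lowering it keeps the result closer to the original sunset photograph.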

Speaker 8 (34:57):
Yeah, the copyrightability of a photo taken with a one-click camera by a non-expert photographer seems, to my mind, to argue in favor of allowing the generative AI tool operator to get copyright in the resulting image.
Now let's posit that technology has advanced dramatically, so now, instead of it taking 20 minutes or 2 minutes or 1 minute or even

(35:22):
30 seconds to generate a new image, say I put in a general prompt, I want to see sunsets, and every 15 seconds or every 2 seconds, every 1 second, we have a system that's able to pop up another sunset.
And I stand there with my regular camera as sunsets are flashing across my screen, and then I click the button on the sunset that I like.
Now, because I used a camera to take a

(35:45):
photograph of the screen, which has generated a large number of sunsets, I waited until the sunset on the screen matched, which is analogous to my going outside and waiting for the sun to get into the perfect position, with the clouds and birds flapping by, for the picture I want to take with my regular camera.
Then I take a picture with my regular camera of the screen, which again has thousands of these flashing by in just a continuous wave, maybe even multiple images at the same time.

(36:07):
So I'm in a VR headset and I'm holding a virtual camera inside my VR headset and I have the equivalent of 16 or 17 4K panels open, all simultaneously showing different sunsets, and I turn to the one that I want and I click that button and now I've got my picture of the sunset.
It was generated by the AI, but my process is so directly

(36:29):
analogous to a regular photograph, and it might even be using a regular camera, it just happens to be a computer screen in the background that was caught with the image of my regular camera.
I think we should be awarding copyright protection to that.

Speaker 1 (36:42):
All right, I'm going to respond.

Speaker 7 (36:45):
I think first of all.

Speaker 5 (36:46):
I think, Peter, with that imagery you took me back to Star Trek days.
It was almost like me sitting in the cockpit on the commander's seat.
No, but I think I really appreciate your question.
I think this is really valid and I will just leave it at that and look forward to hearing other people's views.

(37:07):
Except, I think there's a very important part that comes in the second report of the US Copyright Office, the second report that they've published, which says, we will protect works that have been created with the assistance of AI, but we'll have to see on a case-to-case basis what will get protected

(37:28):
and what will not, depending on how much human effort has gone into creating what was created.
That seemed to be their answer to what you said, but I think it's valid.
In terms of Alice in Wonderland, I remember a quote: it gets curiouser and curiouser from here, I think.
Yeah.

Speaker 1 (37:47):
So I mean, this conversation just generated multiple different questions and doctrines, including two completely separate sides of the equation.
Right, one is about copyrightability and, in the US, registering the copyright, which of course we have that formality where other countries do not.
So that's one question.

(38:11):
And then the other one is, in the works that you're alluding to, let's say, you know, your combination of the sunset and Van Gogh, for example, what triggers in my mind is, okay, now what's defensible?
You go into a court of law.
I infringe your work.
You know, you sue me for copyright infringement, and now

(38:33):
we start down the road of, okay, what in your work is actually your expression, at least under US law, that is defensible there.
So immediately I'm excluding anything that came from Van Gogh, that's not yours, and I start stripping away.
And that's really one of the key questions in all of this, right, is what is actually defensible expression in the

(38:56):
work?
If you imagine, not just copyright attaching at the point of creation or at the moment of registration, but what could you defend in a court of law, which is obviously the whole point of having copyright protection in the first place.
If you can't defend it against infringement, why bother with the protection?
So you know, to Peter's comparison, it's been made before

(39:19):
that generative AI is similar to photography.
And yes, photography was one of the first machine challenges to the concept of human authorship.
Right, and you know, in the US anyway, going back to 1884, the Supreme Court very quickly and easily found, look, there are the human choices that were made in the photograph, even though the

(39:41):
defendant in that case tried to challenge the very idea: no, you used a machine, the machine made the image, the human had nothing to do with it.
The court said, no, you're wrong.
Here are the choices, we can define them.
And the plaintiff's attorney had basically provided the court with the language they needed.
It wasn't a very hard case.

(40:02):
But since then, of course, we still have the same challenge sometimes when photographs are the subject matter of a case.
And still, the court will identify what is the human authorship here, what is not protected?
Portraits are a classic, right?

(40:22):
We saw this with the Warhol case.
There's nothing protectable about Prince's face, but there's no question that copyright attaches to Lynn Goldsmith's photograph of Prince's face.
And yes, there's a certain amount of a kind of convenient fiction or metaphysics to that, that we decided that humans

(40:44):
operating cameras, even in a split second, in what might arguably be, you know, an almost arbitrary kind of photograph, you still get copyright in that photograph.
And so you say to yourself, okay, well, if I use generative AI as Peter just described, it's different, you're crossing a new boundary, and the Copyright Office actually addressed this in the report when they talked about

(41:07):
the concept of authorship by adoption.
Right, simply selecting your favorite image, the image that the generative AI created, it created a thousand, and you said, that's the one I had in mind, or that's the one I really like.
That's what they call authorship by adoption.
And it's a bridge too far in a sense, because it's like saying

(41:28):
, well, I found this photograph in a shop, somewhere in an antique store, and I'm going to claim ownership of it.
Or, I found a piece of, it's actually been cited in the copyright compendium, I found a piece of driftwood and it kind of looks sculptural, so I'm going to say it's my sculptural work.
Well, of course it is.
You didn't do anything to create that.

(41:49):
So, you know, I'm sorry if I got lost there.
Like I said, between those two, that exchange there conjured a lot of different doctrinal questions.
I'll stop there and let others take over.

Speaker 2 (42:07):
Thank you, David.
Stefania, did you want to make a point?

Speaker 7 (42:11):
No, I did have a question for Peter, though, on this, and then a follow-up concern, depending on the answer.
But in this universe in the future, where all these images are flashing by, are those images based on existing artwork that is copyright protected, or are they images, say, there's

(42:33):
cameras everywhere just recording sunsets and flashing that towards you?
Because I think, if they are based on copyrighted work, that ultimately, in this universe and in this future of yours, eventually there won't be new work produced, because artists produce work and are able to sustain a living producing work

(42:55):
because of the protections, whether registered or not, because people value true artistic work.
So ultimately, you'd run out of sunsets, right, at least painted ones.
Hopefully the sun keeps coming up and we keep orbiting, hopefully.

Speaker 8 (43:16):
Because when we talk about how the AI is trained, it's not like we actually have a copy of a whole bunch of photographs of sunsets.
Instead, the AI is modeling different features.
It might be, you know, an intensity difference in the light in one quadrant of the photo versus another quadrant.

(43:36):
That might be one feature.
And all of these features are multiplexed in a really, really high-dimensional space, and the purpose of the prompt is to try to map to a location in the high-dimensional space based upon the mapping of the terms in your prompt and correlations between

(43:56):
them.
So you'll have pictures associated with words.
All of those get broken up into strings of numbers and then you look at the proximity within that space.
So there isn't actually a picture represented.
Instead, you have lots of features that are represented inside of the AI model.
So there isn't even really a copy in a traditional copy sense

(44:20):
existing anywhere in this.
It's a matter of learning and, frankly, if we're going to do that, then why shouldn't we be able to attack the human artist who went to a whole bunch of art museums and studied in person every Van Gogh painting?
You've done in your head exactly what that large language model is doing with the photographs

(44:40):
inside the computer system.
And then what we're arguing is, what we really want to assert is that you're not allowed to learn from copyrighted works without having gotten a formal license to be allowed to learn from looking at something, and I think that's really the bridge too far with the use of intellect.
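A toy illustration of the "strings of numbers" and "proximity within that space" that Peter describes; the vectors below are made up for the example and only stand in for a real model's learned embeddings.

# Toy illustration of prompt/concept proximity in an embedding space.
# The vectors are invented for this example, not taken from any real model.
import numpy as np

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend these are learned feature vectors ("strings of numbers") for a few concepts.
embeddings = {
    "sunset over the ocean": np.array([0.9, 0.1, 0.3]),
    "orange sky at dusk": np.array([0.8, 0.2, 0.4]),
    "spreadsheet of invoices": np.array([0.1, 0.9, 0.0]),
}

prompt_vector = np.array([0.85, 0.15, 0.35])  # stand-in embedding of the user's prompt
ranked = sorted(embeddings,
                key=lambda name: cosine_similarity(prompt_vector, embeddings[name]),
                reverse=True)
print(ranked)  # sunset-like concepts rank closest to the prompt

Nearby vectors behave like related concepts, which is the sense in which a prompt can steer generation without any stored copy of a training photograph.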

Speaker 7 (44:57):
Has that artist looked at those works in the museum?
That's the difference.
It's the intellect that allowed for that processing up here.

Speaker 8 (45:07):
Okay, the intellect comes in in reverse engineering the correlation between language terms and where in that high-dimensional design space a given kind of an image is going to lie.
So I've looked at a whole bunch of prompts and the kinds of images they produce, and then inside my head I'm reverse engineering the black box of the AI to figure out where I need my words to

(45:30):
take the system's attractor, to be able to generate the kind of image that I want to come out.
So I see that as the height of intellect, being able to do that process.
Now, I personally prefer good, old-fashioned AI with a symbolic approach, where you can actually use logic and get out how the image was derived through rule application.

(45:52):
I think that eliminates a lot of the problems that we get in this fuzzy neural computing world, and I wish people would take more of a look at old-fashioned AI, certainly given today's hardware too.
That was another thing.
The early first generation of artificial intelligence work was starting to produce some really impressive results, but they basically hit a hardware wall and there wasn't enough RAM in

(46:14):
systems, processors weren't fast enough, so everything just basically dead-ended and we went into the AI winter, and only now, with newer chips and being able to leverage GPUs from the gaming world to be able to do art and interesting applications, have we cracked that.
But if we took that new hardware and applied it to some of the older techniques that had been abandoned by the research

(46:34):
community, and they're still mostly abandoned, because of course now everybody wants to be doing LLM work and it's the new hammer and every single problem is a nail to be bashed with a large language model until you get a result, even when, with a fraction of the compute resources, a traditional AI approach could derive a much more accurate answer without hallucinations.
Oh, the nightmare of hallucinations.

(46:56):
I have a friend, Mark Bernstein, who is a researcher in the hypertext community, and if you ask any large language model about Mark Bernstein, his name appears enough in the literature that it will hallucinate the most insane things.
I have yet to see it pin an actual paper that he wrote and attribute it to him.

(47:17):
Instead, they will come up with paper after paper with plausible titles, plausible journals, plausible citations, all of which are completely bogus, attributing ideas that are the antithesis of everything Mark believes to his writing, and it's just maddening.
You do not have this problem in traditional, conventional symbolic AI.
You do with large language models.

(47:37):
It's a mess and it certainly makes them unreliable and not fit for a lot of purposes.
But again, everybody coming out of school figures, if I want to get a job, I want to get into a large language model, newfangled AI company, and we're going to throw a large language model at that problem.
We had a talk a while back in the CodeX group out of Stanford,

(48:00):
and there was a vendor coming in.
They had a product called Judge AI and they were trying to use a large language model to model the behavior of jurists.
And they put up a scenario in front of us, and they gave us a case of potential litigation in a third-world country against a first-world megacorp for alleged pollution, without any actual

(48:22):
damages being established or evidence on the record of a direct connection.
And they asked the Judge AI what would be a just result.
And then they surveyed the group.
Now, politically, on the call, 85% of the people were strongly in the liberal political camp.
A small fraction were in a conservative camp.

(48:43):
The conservative camp in that case were little Scalias.
We looked at the statute that they flashed up in the example and said, well, obviously the megacorp is going to win here, because there's no demonstration of damages, there's no connection between the allegations on the record before us, so they should lose.

(49:10):
Judge AI concluded that, because of the potential ramifications and the indigenous people that are being affected by the pollution, the just result would be to shift the burden of proof and make the megacorp prove they had done everything possible to prevent potential pollution, even if an actual pollution incident hadn't been demonstrated.
And I looked at this whole thing and said, yes, it was perfectly aligned with the politics of the group on the call, but it had no connection to the logic of how the statute

(49:32):
should be applied in the circumstances, to the facts on the record.
And it's doing politics, it's not doing law.
Can I just?

Speaker 7 (49:41):
Sorry, can I just circle back?
Since you did answer my question, Peter, thank you.
So I think that in that universe, the sunsets were not based on copyrighted work.
They were purely coded effects that you were searching out.
I think that makes a big difference.

Speaker 8 (49:58):
Yeah, well, they were based probably on learning from looking at sunsets and looking at some copyrighted images of sunsets, but all of the learning was just deriving a model of what sunsets are like at an abstract sunset level, not using a sunset from the training data itself.
The training data was only used to derive rules about what are

(50:19):
the qualities that people associate with the word sunset.

Speaker 7 (50:22):
Yeah, I think that makes a difference to me, so
thank you for clarifying that.

Speaker 4 (50:28):
I just wanted to do a very, very quick pushback, Stefania, sorry about this.
I wanted to push back on the idea that people create because of copyright.
I think that human creativity will be alive and kicking well, well, well after all systems of property disappear, going back

(50:51):
to the Star Trek idea where all property is going to disappear.
In the post-scarcity world where you can replicate everything with the press of a button, I think that we've demonstrated, if anything, as a species, that if you give us a couple of things, we'll try to make art, music or

(51:14):
literature out of it.
I know I still write poems, very, very bad ones.
I've never even remotely thought about publishing anything.
Don't get me started on all of my failed artistic endeavors, but I think that a lot of people create and generate.

(51:36):
Human creativity exists irrespective of copyright.

Speaker 7 (51:44):
Oh, I absolutely agree with you, and sorry if I was too brief in what I was trying to say.
Absolutely, humans will continue to create.
What I was trying to say is, I was using copyright to link creating art, like you do, like I do, like a lot of us do, to actually being able to survive in society and make some money.

(52:04):
So copyright is a tool that allows artists to profit and, you know, even for tax purposes, you're okay being a money-losing artist, you're able to be in the red for a long time, because it's something that's valued and copyright allows for that.

(52:26):
That's what I'm saying.
I don't think humans will stop doing it.
I just think that artists will not be able to survive as much from a financial standpoint.

Speaker 4 (52:33):
I completely agree with you.
Yeah, those are two different things.
No one has ever accused me of being an artist, but that's a different question.

Speaker 2 (52:43):
And one point I was just going to raise about the RAGHAV tool, that I know Ankit has said before, is that, as you described it, that tool had less capability than many cameras that are available today, I believe, and so, just going back to the less hyped-up AI tools that are

(53:08):
out there, I don't think RAGHAV falls in that category from your description.
And I was also curious, Kritika, I know you were part of the Suryast project, and did you have anything you wanted to raise about that or anything that's been said today?

Speaker 6 (53:23):
Yes, yes, we agree with Stefania where she said that copyright is about human intellect.
You know, it's all about creativity, you know, innovation and originality.
You know, basically, that needs to be protected, and, in my perspective, you know, the definition of intellect, it

(53:46):
depends on the definition, in the AI context, of what it will be, since the laws are still at a very nascent stage, you know, they are still developing, and I feel every jurisdiction will have their own perspective, like, as in India, like in our jurisdiction, they have accepted, they have

(54:08):
copyrighted our work.
But whether AI-generated work should enjoy, you know, the same protection as human-created works depends on, you know, the definition of intellect in the AI context that will be, you know, actually developed, at whatever stage,

(54:28):
you know, in every jurisdiction.
I feel that the objective of IP law is only to protect, you know, human intellect, you know, their creativity, their originality and, of course, the innovation that actually is, you know, created.

Speaker 2 (54:48):
Thank you, Kritika.
And Ankit, did you have a point?

Speaker 5 (54:51):
I just quickly wanted to again circle back to what Stefania was saying.
I think, you know, this makes, in one sense, a case for a sui generis system to come up, because, at the very least, what it does guarantee is that the works that are created with the assistance of AI will be classified in the copyright

(55:16):
offices in jurisdictions where, of course, you can apply and register.
There are a few jurisdictions, but the US is one of them, Canada is one of them, India and a few others.
But so far as a sui generis system is concerned, in systems where you can apply for registration for copyright, at least in the regulator's records, the copyright office's

(55:36):
records, it'll correctly show as a work that has not been created out of pure human talent but has been created with reliance, and possibly heavy reliance, on machine and, in this case, AI.
And that would be the key, or sort of the starting point,

(55:59):
in ensuring that human creativity is always rewarded, recognized and incentivized higher than an AI-assisted work.
Of course, practically how it pans out we'll have to see, because much of these systems work on self-declarations, very similar to a trademark application.
You claim to be the owner, you claim in certain systems to be

(56:22):
using the trademark since xyz date, and it's really on self-declaration till someone challenges it and then you have to prove it in trial.
But I think the sui generis system is the starting point, because the other option, I'm just asking myself, to my mind the other option is to then disincentivize the adoption of

(56:42):
technology, which would run contrary to how humanity has developed.
We've not remained at cave paintings.
We've gone far ahead with special effects in movies and so on.
Everything, animation and all of these things involve extensive use of technology.

(57:02):
But that doesn't mean these things don't have economic value and they're not given protection.
The simplest example could be, in many jurisdictions, for instance even in India, photographs or other works that rely on technology, for instance sound recordings, they enjoy a much shorter duration of protection, and in

(57:23):
some instances they are also subject to other limitations of scope of protection, such as there are more exceptions that apply to them by the statute, or there would be compulsory licensing sections that would apply to such works.
So I think we first need to look at how an AI-assisted work

(57:47):
or a computer-assisted work could form a separate category of right, and then, from that point, we could of course resume the debate and move forward as we were moving forward.

Speaker 3 (58:03):
That's really interesting.
Sorry, Stefania, I was going to say it's very interesting, and I was thinking, as you were talking, I was going to ask you what kinds of rights you thought might attach to that sui generis right for AI-generated work, in the context partly of the current debates in the UK at the moment about computer-generated works.
And it looks like, as

(58:24):
Andres was saying earlier, it looks like they might disappear, and I think, I mean, I don't think they've been sort of very well used, and I think there are sort of fundamental conundra, really, that come from applying a right a little bit

(58:44):
like copyright, which is based in these ideas of originality and authorship, to something created by machine.
So maybe it could work, that's to be generous, right, but I think it might be quite a tough one to get right.

Speaker 5 (59:00):
Another question that often puzzles me now, since
what david said is, for instance, I related to that um.
We had a nice discussion with auk-based um body.
We were closely with them,called uk music uh, and they've
been, like most of us,particularly perturbed by by all
of these developments and whatI learned them.
I'm not much of a musicenthusiast, as much as I am for

(59:24):
art, for instance, but what Ilearned is they've had, when you
speak about AI in the musiccontext, they said we have this
magic tool called Auto-Tunewhich can make any non-singer
bathroom singers sing, like youcan make it to the Grammys.
And they say look, what is it?
Basically, since we had the kindof studio recording and the

(59:48):
kind of structure that we have,the songs they're recorded in
layers.
Much of the music at times isdigital because it's synthetic
in the sense there's no actualdrums or whatever.
It's almost AI generated.
In fact it is actual drums orwhatever it's almost AI
generated, in fact it is.
And so is autotune as a tool byitself.
And one question that came up

(01:00:12):
was, when Auto-Tune is applied to the voice layer of the song, because there's a bass layer, there's a tune layer, there are several layers, and then there's a vocals layer.
When an Auto-Tune filter is applied, it's AI-powered, and it basically just sort of sharpens the rough edges a bit and gets

(01:00:34):
it in tune with the particular note that you're trying to sing, or you tell the Auto-Tune that this is what it's meant to be, but the singer was off.
So when you look at what the US Copyright Office had to say in our decision, like the Review Board in the Suryast decision, it's there on the Copyright Office website, or even what they say in the report, it's that unless you can discern and separate a

(01:00:57):
human effort from the machine effort, all of it falls into the public domain.
So I asked myself this question: how and under what circumstances can you separate what the Auto-Tune AI did to the vocals layer?
What part of it was their contribution, failing which,

(01:01:25):
arguably, the entire vocal layer of the song will fall into the public domain?
I'm not quite sure, despite the advent of technology and all of it shifting focus from human effort to machine effort and making things easier for people who are less talented to produce.
But I would say throwing everything into the public domain is definitely not the intention of the law either, and that's one of the questions that comes to mind.

Speaker 4 (01:01:45):
Just briefly on that.
First, I'll make perhaps the obvious comment that getting into the Grammys is a very low bar nowadays. I'll just mention that.
Interestingly, going back to sui generis rights, Ukraine has actually implemented a sui generis right for AI works.

(01:02:07):
Nobody knows how that is working.
They implemented it, and don't quote me on this, I think in January 2022, if I remember correctly.
So, besides fighting a war, they managed to change their copyright law.

(01:02:29):
Because of everything that is happening in Ukraine, we just don't know what's happening with that.
But they created it. It's a 15-year right.
It's a registered right, so you have to go to the IP office.

(01:02:51):
I think it's not a copyright office, it's just generally an IP office, and you have to request it, and it's for very limited commercial reuses.
So it's mostly attribution and some commercial rights, but it is also very limited in time.
The answer is, we don't know how that has gone.

(01:03:11):
I've heard from some colleagues from Ukraine last year that it hasn't been applied that much, but that could just be that creators are much busier with other things over there.
But yeah, it is the only country that I know of that has

(01:03:34):
implemented some sui generis right.
Okay.

Speaker 9 (01:03:56):
And Jason, did you have your hand raised?

On human intellect versus what the AI model can create, I thought it more or less boiled down to ego. Everyone has an ego, but a machine doesn't.
The only thing that the machine can do is basically function

(01:04:17):
off of what you give it.
So I don't really think it's creating anything.
If anything, depending on the word prompt that you give it, it'll try to make an amalgam close to the prompt it was given, based on its, I guess, repository of images that it scoured from the web.

(01:04:40):
It made me think back to two instances.
There was an art competition where a piece of AI-generated imagery was entered and it won, because it was also trained off of another well-known artist, and I believe the prompt that was given out was just to make something that's close to the heart.
I thought that's kind of ridiculous, because the machine doesn't have one.

(01:05:01):
How could it? It just interpreted that.
And the second instance was a well-known Polish fine artist.
Someone trained an AI model off of his art style and also tried to impersonate him for money.

(01:05:21):
And even though he tried to bring up the subject of copyright, of you can't do this, you can't just train an AI model off of my works and then try to blend it and merge it together to make this strange amalgam for money, I think that he lost. I didn't read much into it,

(01:05:46):
because I didn't know much about copyright at the time, so all the words didn't really make that much sense to me.
But I don't know, I just feel like, when it comes to defining human intellect, one thing that I always consider is the ego.
Like, what do you want?
As living beings, we're in a constant state of need.
A machine doesn't really need anything.

Speaker 2 (01:06:08):
Thank you. Any comments?

Speaker 8 (01:06:13):
Maybe the artist should be going after them on
trademark grounds.

Speaker 1 (01:06:16):
Yeah, trademark doesn't actually protect that in general, at least under US law. It's a little trickier.
I was going to go back, though, on the subject, because, again, it's important to always distinguish between what we're calling assistive AI versus generative, right?
So Auto-Tune, I would argue, certainly under US doctrine so

(01:06:41):
far, would probably fall under assistive AI and wouldn't really affect the copyrightability of the final song, where you have a lot of expression, you have selection and arrangement, you have all these rationales for why that finished song has copyright protection, and the Auto-Tune shouldn't really affect that.

(01:07:04):
In a similar way, I watched a demo not that long ago of a special effects guy in motion pictures.
He showed how he took a scene of, I don't know, probably 100 frames, and used an AI to interpolate some of the work in between the frames for stuff you wanted to clean up.
Likewise, and in fact the Copyright Office almost

(01:07:25):
explicitly said that that kind of use in a larger work like a motion picture really should not affect the copyrightability of the motion picture.
On the other hand, when you start using generative AI to create expression, that's where we get into the question.
And you know, I personally don't think that it's as novel a

(01:07:47):
set of questions per se from a doctrinal perspective.
I mean, yes, it raises some new challenges in terms of the application of the law.
But I'll give you an example.
Not that long ago, through Rights Click, we helped register some photographs of a sculptural work.
This woman finds old saws, and I mean literal old saws, not the

(01:08:11):
expression old saw, and she turns them into sculptures.
And anybody who knows copyright is going to tell you categorically that the sculptural aspects of that are protectable, but the saw part of it that she found is categorically not protected.
And so we already have a framework for talking about this.
If I present you with a visual work and I disclaim it under a

(01:08:35):
limitation-of-claim doctrine and say, you know, generative AI did X and I did Y, and I'm claiming my copyrightability in the Y part, it's not really a novel area of law per se, and we have a framework for discovery and for litigating all that.
And so I'm not convinced that it's a sui generis question per

(01:08:58):
se.
And as to all of this notion of whether or not generative AIs, as machines, think and feel and have motivation to create art, I'm fairly cynical on that notion as a rule, philosophically and socially.

Speaker 8 (01:09:18):
David, do you think it would make a difference if you're using text-to-image versus image-to-image?
What if I start out by sketching what I want roughly, with my horribly subpar personal artistic ability, and I feed that to a gen-AI system, which then turns my sketch of the room and the setting into something that looks like it was done by

(01:09:39):
Van Gogh, or turns it into what looks like a realistic photograph?
Can I then argue that that's just assistive AI, because it started with the badly drawn hand sketch?
And then is some court going to come in and assess whether I have enough artistic ability to be able to say that that's assistive versus purely generative?

Speaker 1 (01:10:00):
It's an interesting question that you're asking, but personally, you know, to me right now, assistive is fairly clearly defined in that you had a mental conception of what you wanted to produce and you're using certain tools to expedite longstanding processes, you know, tools that have been used forever.

(01:10:21):
Photographers have been using Photoshop for how long, sitting there and changing the values and color values and whatever in their photographs, and if I use a tool to expedite that process because it's learning my style, fine, that doesn't really affect it.
But you know, I think the real question is again going to

(01:10:43):
the Allen v. Perlmutter case.
There, the artist used text to generate a visual, and his claim gets very close to a claim similar to finding copyrightability in photography, right, or cinematography, where he's saying, I prompted it again and again and changed the complexity of my prompts, and I had an image in

(01:11:05):
mind that I wanted to create and I just kept telling it, the same way a director might tell a cinematographer what to do, or even a photographer in a studio might tell everybody how to arrange everything and what they want until they get what they want.
You can see the parallel.
Now, in part, it depends on the integrity of the person making

(01:11:27):
that claim, who says, yeah, this is what I had in mind, this is what I had to get, because, of course, the generative AI, unlike a camera, can produce something on its own without you necessarily telling it to, and that's part of the gray area.
But whether you're using image-to-text or text-to-image, I

(01:11:52):
don't think necessarily matters so much as you being able to make a claim that you've got enough control over the tool to generate something that is copyrighted, copyrightable, forgive me.
And right now, the feeling, at least according to

(01:12:13):
the Office and according to others, is that generative AI is a roll of the dice.
You just don't have that level of control.
You can push it up to a point and claim that, yep, that's what I had in mind, but the machines aren't there.
That may not be correct, the Office may be wrong on that, but

(01:12:34):
that's a technological question as much as it is a legal one, right?

Speaker 8 (01:12:39):
And that's what discovery is for.
Yeah, I think, as newer models have more and more knobs to twist and all, we'll hopefully get closer to having that kind of an outcome.

Speaker 2 (01:12:51):
Unless there are any more points on this, I wonder if we might shift to the idea of a global copyright and the challenges or incentives?

Speaker 4 (01:13:03):
Oh, God, please, no.
Copyright is strictly national.
We can barely keep up with our own countries and, as someone who loves comparative law, you'd be putting me out of a job.

Speaker 3 (01:13:23):
Yeah, I know, I completely agree with Andres and think it's probably pretty unachievable.

Speaker 6 (01:13:35):
Yeah, but then I think we can still consider it.
You know, it's basically crucial because AI technology is borderless in nature, and it has so much potential to impact societies worldwide.

(01:13:57):
AI cannot exist in a vacuum.
I believe that maybe steps towards having a global approach can be taken.
We have WIPO, we have the WTO, these bodies that have actually standardized guidelines for several aspects.
We have treaties, like the Berne Convention, for

(01:14:21):
copyright, so why not for AI?

Speaker 3 (01:14:24):
I completely agree, krishka, that the conversations
are really important and arebeing had, but I think that it's
probably a long way off toactually get sort of global
agreement on all of theprinciples given.
But as an aspiration.

Speaker 4 (01:14:43):
Yes, yes, yeah. Going back to that, I know that WIPO are starting the discussion.
They have been holding these international conversations, mostly online, on different aspects of AI and copyright.

(01:15:04):
I've been participating in a few, and last year, at the last SCCR, the last big copyright gathering at WIPO, they had a big launch.
They are starting to think about this.

(01:15:24):
I know that there are some efforts at WIPO to sort of start thinking about a treaty, but the way things are going, at least at the international level, it's very slow.
It took WIPO the best part of 20 years to produce a treaty

(01:15:44):
on traditional knowledge, and the broadcast treaty, I think, even longer.
It is possible, but I think it would require quite a lot of very difficult politics to try to get some harmonization.

(01:16:05):
So at least I think a treaty would be a good idea.
There are some concepts that need to be harmonized, or better harmonized.
The concept of originality, I mean, we have quite a few concepts represented just in this panel, so it may be

(01:16:30):
possible to start looking at that.
But yeah, at the current stage of international IP politics, that will be very slow, and, as we know, a few months in AI is a decade.

Speaker 3 (01:16:50):
That's exactly what I was going to say.
Who knows where we'll be in 20 or 30 years' time, by which time the treaty might be in progress.

Speaker 4 (01:16:58):
We'll all be using AI agents.
This conversation will be held by our respective AI agents, which will give us notes at the end of the meeting.

Speaker 2 (01:17:12):
One of the other points that I think, Andres, you had suggested is: what is the end game?
And so, as we're closing out our time, I wonder if everyone wants to give their closing thoughts, and perhaps include their thoughts on what the end game is for them, how they think it would be envisioned.

Speaker 4 (01:17:34):
I guess I'll get started on that.
I've been really interested in what the end game is going to be.
I always make jokes that we're going to have five years of strife and then stability will ensue. Stability, stability. Sorry, is this thing on? But anyway, it is possible.

(01:17:56):
I have no idea what is going to happen, but I think it's going to be a combination of, potentially, licensing agreements from large collective societies, collective negotiation bodies.
Publishing appears to be pushing quite heavily towards

(01:18:17):
that, because publishing is dying.
Potentially also some form of legislation. I think exceptions and limitations are going to be part of that, having more clarity for developers.
That is going to take place through legislation.
Don't ask me what that is going to look like.

(01:18:39):
And now that the US government is potentially very heavily pro-AI, we don't know what may come out of there.
And potentially, I think, more technical solutions that make it easier for creators to very

(01:19:05):
clearly say, I don't want my works to be trained on. I know opt-outs are not everyone's favorite solution, but some form of technical standard. Or, I don't know, model poisoning, which doesn't appear to be working that well. So it's just a combination of tech.

(01:19:26):
This is what always happens: a combination of sort of practical licensing, technical solutions as well, and legislation, and potentially the case law as well is going to start going in one direction or the other, and for the moment we can just expect

(01:19:48):
the case law to be all over the place.
So that's sort of where I am.
Eventually we'll have to have a resolution.
We can't have this situation where everyone is suing each other.
That never lasts for longer than four or five years.

Speaker 3 (01:20:07):
Perhaps I'll come in now, because I don't have much more to say. I think Andres has summarised it brilliantly.
I think, and I hope, that human creativity and the products of human authors will continue to be valued, and I think there

(01:20:28):
will be a kind of coming together of what we might call the two sides, so the creative industries and the AI developers, and I think that's already starting to happen through the development of licensing mechanisms and, you know, the integration, I guess, of those two communities.
And I think that maybe sort of

(01:20:57):
intermediary bodies that kind of facilitate, because in a sense the cat is out of the bag and these AI developers already have lots of the data that creators might have wanted to control, and so it's a way of, I think, facilitating a dialogue between those two sides.

(01:21:22):
And I think maybe bodies that somehow curate data, this sort of data brokerage idea, will become profitable businesses.
That will be

(01:21:49):
something that we probably see more of, and more competition within the market, within the AI development market.
We've seen DeepSeek recently, which, I mean, I'm no techie, but it seems to me, from what I've read, to be a bit of a game changer.

Speaker 8 (01:22:00):
So yeah, I think we'll kind of muddle through in terms of the law and cases cropping up over the next five or so years, and eventually things will kind of settle.

Yeah, I think I could see a future where some of the AI, gen-AI, tool makers start partnering with artists, maybe marketing

(01:22:24):
individualized personal models, where it would be a split between the AI company and the artists that they train the customized submodel on, to be able to produce artwork in that artist's style.
Maybe there might be a role for non-fungible tokens in here somehow.
I'm not sure exactly how that would play out, but I think that the ultimate customers want to see the artist be able to make a

(01:22:49):
good, healthy living for themselves.
So, even though anyone can go out and get a poster of the Mona Lisa, the Mona Lisa original has tremendous value, and by the same notion, even though we might be able to find gen-AI versions that look like Stefania's art, she'll still be able to find organizations that will want to patronize her as a

(01:23:12):
human artist, by virtue of her being a human artist who brings that talent to the table and her own perspective behind it, as opposed to just the soulless images that might be similar in style to her work but not actually be her work.
So I think artist trademarks, at that level, and partnerships with some of the AI companies might provide a way to massage

(01:23:32):
the economics, so it becomes a win-win all around.

Speaker 2 (01:23:36):
Anyone else? David?

Speaker 1 (01:23:39):
Sure, I'll be the cynic in the room, which is not news, you know.
In particular, if you ask many of us Americans about the end game right now, we're not really thinking about art.
But in that context, you know, Andres mentioned a moment ago that the US has moved toward an AI-focused,

(01:24:01):
technocentric message, and, you know, as many of us know very well, the large tech companies in the US use very highfalutin language like innovation, and that's a generalization that doesn't have any particular meaning.
And unfortunately, generative AI for making creative work gets

(01:24:26):
rolled into AI for allegedly curing cancer and solving climate change, and all of it just becomes one big conversation that is not really being had, it's just kind of being promoted.
And so, you know, in the context of what Emily said a moment ago about dialogue between, you know, the creative

(01:24:50):
community and the developers: with all due respect, the creative community has been waiting for that dialogue for about 20 years, since the days of Napster and mass piracy, and that dialogue still hasn't really happened.
It kind of sort of did, but not really.
And certainly the kind of conversations that are coming

(01:25:12):
out in Washington, as I say, they're so generalized that, you know, we're just gung-ho on AI as kind of a, you know, moon-landing kind of initiative.
I honestly don't know where creative and copyright discussions fall within that broader mess of incomprehensible

(01:25:34):
dialogue.
I would love to see copyright and artist conversations still happen in Congress that are separate from the noise that's coming out of other parts of Washington, shall we say.
But we'll see.
We're not there yet.
Let's see what comes in the next, you know, several months.

Speaker 2 (01:25:54):
Thank you, David. Ankit?

Speaker 5 (01:25:58):
Yes, I think I'll just summarize using a quick example or anecdote.
Basically, you know, when I was born, my great-grandmother was still alive, and what she used to say was, you know, there's a very famous river in India, and what used to

(01:26:20):
happen in the olden days was that you would often go to the riverside.
It was really beautiful, it still is, and you would have all of these painters lined up. And when the cameras came in, the first word that got out was, hey, this will be the end of all the painters that line up alongside the river.

(01:26:42):
Nobody's going to get either scenes of the river painted or themselves painted and portrayed and carried home.
But then, when the cameras came in, you realized there was always a space on your wall for the pictures, and there always still remained a place for the art.
And what I can say is, hopefully, from this point onwards, there

(01:27:04):
would be a third place for those of us, I mean, I'm personally not someone who has a taste for that, but probably the next generation, Gen Alpha, as they say, will have a taste for AI-assisted or AI-generated art, and they will have a third place on the wall, hopefully considerably smaller than the space occupied by the

(01:27:27):
paintings or the photographs, but it will simply occupy another space on the wall, and each of these will be side by side.
Basically, to sum up and say that human hunger, or demand, for each different kind of art form, fine art form, will remain the

(01:27:51):
same.
If my cultural or art hunger gets satiated by buying fine art paintings, it will not get satisfied by buying photographs that get framed in the fanciest frames or, worse, AI-generated art.
So I think all of these will coexist, as time has witnessed

(01:28:11):
and as time has seen.
And, by the way, we still have as many painters on the banks of that river in India.
Should you come, you will find as many painters there.

Speaker 2 (01:28:22):
Thank you, Ankit. Stefania?

Speaker 7 (01:28:24):
Yeah, I guess my final words will be, and it's more just hope: painting's not going to go anywhere. Painting's here to stay.

Speaker 2 (01:28:33):
Amen, I agree. Kritika, did you have any closing thoughts?

Speaker 6 (01:28:39):
Yes, I think, you know, a balance needs to be created, the right balance where both technology and human creativity will have to coexist, where technology has to be encouraged yet human creativity needs to be rewarded.

Speaker 2 (01:29:02):
Both have to coexist.

There will be links in the show notes to learn more.
If you were intrigued by this podcast, it would be much appreciated if you could leave a rating or review and tag Warfare of Art and Law Podcast.
Until next time, this is Stephanie Drawdy bringing you Warfare of Art and Law.
Thank you so much for listening, and remember, injustice anywhere

(01:29:26):
is a threat to justice everywhere.