
November 24, 2025 46 mins

What if the rules we write today could make tomorrow’s technology more human, safer, and genuinely worth wanting? We sit down with Anna Aseeva, a legal strategist working at the intersection of sustainability, intellectual property, and AI, to map a smarter path for digital innovation that starts with design and ends with systems people trust.

We dig into the significant shifts shaping tech governance right now. Anna explains a practical model for aligning IP and sustainability: protect early to nurture fragile ideas through sandboxes and investment, then open up mature solutions with licensing that shares benefits and safeguards intent. 

This conversation is equally about culture and code. We talk about legal design that reads like plain talk, citizen participation that turns evidence into policy input, and civic apps that could let communities steer platform rules. We cover digital sustainability beyond emissions—lighter websites, greener hosting, and product decisions that fight digital obesity and planned obsolescence. And we don’t shy away from the realities of AI: hallucinated footnotes, invented coauthors, and the simple fixes that come from a careful human in the loop.

If you’re a builder or curious listener who wants technology to serve people and planet, you’ll find clear takeaways: design for sustainability from day one, keep humans in charge of final decisions, protect what’s fragile, open what’s ready, and invite people into the process. 

Subscribe, share with a friend, and tell us: where should human review be non-negotiable?


Check out "Protection for the Inventive Mind" – available now on Amazon in print and Kindle formats.


The views and opinions expressed (by the host and guest(s)) in this podcast are strictly their own and do not necessarily reflect the official policy or position of the entities with which they may be affiliated. This podcast should in no way be construed as promoting or criticizing any particular government policy, institutional position, private interest or commercial entity. Any content provided is for informational and educational purposes only.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
SPEAKER_01 (00:00):
The more people are aware, the more they have information, knowledge, and the more they relate or they can relate to policy, to law, to a technology, the less there is fear and the more there is not only acceptance but also desirability.

SPEAKER_03 (00:27):
Welcome to Intangiblia. Today we have a very interesting episode. We're gonna be talking about Sustainable by Code: Rethinking Tech Governance from IP to AI. We are joined by Anna. She is helping shape the legal foundation of our digital

(00:47):
future. She works at the crossroads of sustainability, technology, and law, asking not just how innovation moves forward, but how it can move wisely, fairly, and with intention. Anna's trajectory is as global as it is interdisciplinary. She has advised on responsible AI governance, helped build

(01:08):
international frameworks on sustainability and innovation, and explored how legal systems can adapt to the realities of blockchain, Web3, and generative AI. Her work spans institutions, sectors, and continents, always guided by a clear ethical compass and a deep understanding of the systems that connect us.

(01:30):
From corporate responsibility to legal design, from dispute resolution to future-facing IP, Anna brings together rigor and imagination. In this episode, we explore how law can be more than just a boundary; it can be a tool for possibility, protection, and purpose.

(01:50):
Because the rules we design today will shape the tech we live with tomorrow. Welcome.
Thank you.
Thanks, Leticia.

unknown (01:59):
Hi.

SPEAKER_03 (02:00):
So, Anna, thank you so much for joining us. Thank you for talking. And I would like to ask you the first breaking-the-ice question. Tell us about yourself and how you landed here in Geneva.

SPEAKER_01 (02:13):
All right, thank you. You've already introduced me quite well, so I will not repeat what you've said. I will give you, let's say, three figures: four, 16, and 20. Four is the number of fields where I'm specialized.

(02:33):
Well, they're all quite interconnected, but firstly, I started in Geneva by doing a degree in international relations at the Graduate Institute. It was HEI de Genève back in the days. I think people know it by both HEI de Genève and the Graduate Institute. So IR, political science, then area studies at the University

(02:54):
of Geneva, law, including a PhD in law from Sciences Po Paris, and I also studied design and business communication. I studied partially web design and web development, and now I'm studying more like the business psychology side. So those are the four fields. Then 16, actually 16 and a half, but let's stick with 16, is the number of startups that I helped

(03:17):
in one way or another. I advised either legally or just administratively or regarding sustainability. And 20 is the number of years of experience, as you mentioned, from the corporate sector to the United Nations to law firms in Brussels, a lot of consulting, NGOs, and now I work

(03:39):
for Digital for Planet, which is an NGO in Zurich, and we work on digital sustainability.

SPEAKER_03 (03:44):
That's a beautiful career.
So you have walked many paths.

SPEAKER_01 (03:48):
Yes, I saw a lot of yeah, let's say continents and
fields and people.

SPEAKER_03 (03:54):
That's amazing.
I'm very happy to have you here.
Um, so let's dive in. You move comfortably between policy, ethics, innovation, and law. What's one thing you believe the legal field should embrace more boldly as we shape the digital age?

SPEAKER_01 (04:15):
Yes.
Today the digital age is this transformation from, let's say, the commodities economy to the digital economy, the fully digital economy, and this leap happened much faster than we thought. Because of, or thanks to, COVID, the pandemic, we were all locked down and we were all online, and so we are now fully in the

(04:36):
digital economy, let's say. And the legal field, as actually other fields, what it needs really in this digital age is interdisciplinarity, but the real one. For example, before coming to consulting and NGOs, I worked for 10 years in academia, from PhD to an associate professor position.

(04:57):
And there, for example, in the legal field, whenever you have one researcher who does social studies for you or data analysis, and it becomes either a qualitative, sorry, or quantitative analysis of something, of legal decisions, of case law, it's like, oh, it's interdisciplinary. I don't think so.

(05:18):
I think really bringing people from hard science, natural science, social science, and humanities, including law, it's really like being stronger together, more aware, and more sustainable. I think this is really important. Like, people who create tech, the tech people, should absolutely work together with people from social science and

(05:40):
humanities, with lawyers, with economists, with sociologists, with natural scientists, and the other way around.

SPEAKER_03 (05:47):
So that you can build from their strengths. So everyone comes with their own expertise and knowledge, and then you can build something better together.

SPEAKER_01 (05:54):
Exactly, as you say, from the strengths, because the difference is not a weakness, it's complementarity. You can complement each other and it's only strength, as you say.

SPEAKER_03 (06:03):
Yeah, perfect.
I love that.
So, you work also on frameworks for responsible AI, platform governance, and emerging tech. What do you think is the most exciting shift happening right now in how we regulate innovation?

SPEAKER_01 (06:21):
How we regulate innovation. I'd say what is really important; I don't know if it's very exciting, but it's exciting. It's definitely a needed one, I'd say: human participation and value-based participation.

(06:42):
What I mean by human participation is human oversight. Because obviously we have human participation, because humans create AI. Human oversight, once everything is created, is relevant for AI, and it's relevant for IP. I'm sorry, but if your AI wrote a book for you on your behalf, or an article or case analysis, anything, there could be plagiarism, and that's also an IP issue, right?

(07:05):
So there should be not only human oversight of the final content of anything, even if it's just text revision, even if it's just a language spell check; less for that, but if it's a bit more than spell check, human oversight and value-based oversight, meaning when we use the work of others, we cite them. It's called intellectual honesty.

(07:27):
So let's continue with that in the AI age and the platforms age, just the same basic values. Yeah, that's our principal difference. Exactly.

SPEAKER_03 (07:39):
We just bring them online. Yeah, of course, it makes sense, because that's how everyone else grows as well. You acknowledge other people's work and then you build on that, and then everyone keeps growing together; no one is left behind.

SPEAKER_01 (07:54):
Exactly, it's beneficial for everyone.

SPEAKER_03 (07:56):
Yes, yes, yes, yes.
In your experience, how can intellectual property and sustainability work hand in hand, especially as more creators, coders, and entrepreneurs want to build with purpose?

SPEAKER_01 (08:13):
You're right, more and more creators and social entrepreneurs and startups build with purpose because, on the one hand, one of the main differences between a startup and just a company, or just a small company, a small and medium enterprise, is having the purpose, a social purpose,

(08:33):
a sustainability purpose, right; that is one of the core differences between a startup and just an SME. So I'd say at this more micro level it is incentivizing through investments as well as through regulation, but also protecting, including IP protection, what startups offer.

(08:55):
Because it's in a way nurturing, right? These small but super smart, unique ideas with purpose; sandboxing. So when they are in the sandbox, we incentivize and protect. However, and that's paradoxical, once they grow, I would go further,

(09:16):
even including regulation, toward promoting open licensing for shared solutions. Once the solution is mature enough, grown up enough, we should share it. So that sounds contradictory, but it's not; it's complementary. We protect and incentivize when we need it, when it's being built.

(09:37):
Yes, yes.
When it's grown, you need to share it with humanity. Open licensing, open source.

SPEAKER_03 (09:44):
Yeah, but the beauty about these kinds of frameworks is that it's not just freely do whatever you want. There are rules in open source, and there are rules on how you let other people into your technology, in ways that it doesn't lose the purpose, or it's not deceiving to the

(10:04):
other people, or it can't be used for something that was not intended.

SPEAKER_01 (10:09):
Exactly.

SPEAKER_03 (10:10):
So it's about being open, but also keeping in mind that there are rules in place that you should follow.

SPEAKER_01 (10:15):
That's the other side of innovation with purpose, also. Yeah. So yeah, we need to protect and incentivize where it's needed, and open where it's needed. For example, we work at Digital for Planet a lot with this Horizon Europe funding; it's a very specific pillar two of Horizon Europe funding, it's research and innovation.

(10:37):
And there, there are a lot of projects on this open source and open licensing, a lot of rules. The European Union is like the champion in regulation, and also, as you said, it makes sense, because you need some rules and limits to protect the solutions, to share them, but also to protect where it can be used to, yeah, inflict harm

(11:00):
on other people. That's why we have a legal system, yes; that's why we have lawyers.

SPEAKER_04 (11:07):
Exactly, we're not that bad.

SPEAKER_01 (11:10):
Let's not kill all the lawyers.

SPEAKER_03 (11:14):
So you advise across jurisdictions.
Have you seen any standout examples of countries or institutions integrating sustainability and digital innovation into law in a forward-thinking way?

SPEAKER_01 (11:31):
That's an excellent question. Maybe you know it; if not, it's worth reading. Stanford University has a center on AI, and they publish this annual index. It's the Stanford University AI Index, something like that. And it's been out 10 days ago or something, maybe two

(11:52):
weeks. I saw it 10 days ago.
And there is an overview, and there is point number four and point number five. Point number four is about innovation in 2025. There is a graph. The US is really still number one; it innovates. China is catching up, and is super close to the US in this

(12:14):
graphic.
They're really very close to the US in terms of innovation. And point number five, the next one after point number four in this Stanford AI Index, is about regulation, AI regulation. And obviously, you have the European Union. So you have the OECD, the European Union, and for a reason. The GDPR, the General Data Protection Regulation of the EU,

(12:37):
has now affected law in more than 100 countries. It's absolutely extraterritorial and it protects, I think, the right values, like privacy; it's about protection. Yeah, it's about consent, it's about, I mean, private data, trust, security. I think the EU AI Act will do something similar, will have

(13:01):
a quite extraterritorial effect, and there are other regulations: the Cyber Resilience Act, the Cybersecurity Act. We'll see how it goes. But the EU regulates a lot in a quite forward-thinking way, especially regarding social sustainability. Social sustainability meaning, yeah, trust, security, human-centered solutions, protection of privacy.

(13:25):
So "Europe regulates" is still true as well.

SPEAKER_03 (13:29):
So, from what I'm getting, every country has their different approaches. One is running the race of innovation and the race of the technology itself, developing the technology, and another one is innovating in the policymaking

(13:50):
and in the regulation making, so that when the technology arrives, the legal system is going to be ready in place.

SPEAKER_01 (13:58):
It is already in place, yes, in the European Union. And the thing is the famous Brussels effect: quite a few laws, such as the GDPR, and I expect more or less the same with the EU AI Act, are quite extraterritorial.

SPEAKER_03 (14:11):
So yeah, because of the way it's impacted all business. And now businesses are global, or at least they have a presence in more than one country. So it's fairly easy to be touched by the GDPR.

SPEAKER_01 (14:27):
Yes, you're totally right, and that's a great example. Beforehand, as we discussed, we had this more, let's say, extractive economy or commodities economy. We had value chains, the global value chains, which were physical. Now in the digital economy, everything is online, so it's much more global than before, even more global than global

(14:49):
value chains; let's say it's global value chains 2.0.

SPEAKER_03 (14:54):
I like that.
So you've championed legal design and systems thinking. What would a truly user-centered, future-proof legal system look like, whether for tech contracts, IP, or AI?

SPEAKER_01 (15:14):
This question has a lot of elements. One element is user-centered, future-proof, yeah, and legal system, and we say legal system for tech in general, because it's AI, it's contracts, platforms. I'd say, when we speak about a good legal system, let's say

(15:34):
future-proof, user-centered, we speak a lot about policy salience. I think before we assess or even dream of this or that policy or legal system being salient, we should think about societal desirability. So for me, user-centered means end users. End users for me means citizens; it's you and I.

(15:58):
So being socially acceptable or accepted is one thing. There have been a lot of socially accepted things since 2020, right? During the pandemic, a lot of things came from the government and society had to accept them. We didn't have much choice. Social desirability is something slightly different,

(16:19):
and I think the user-centered, future-proof legal system should be socially desirable. Two elements of this social desirability, very quickly; I don't want to be super theoretical. A simple example to illustrate that: so, two elements. First, social acceptance.

(16:40):
First, imagine the scale, sorry, the balance. In one scale you have knowledge; in the other scale you have fear. In between, you have information. So the more information you have, the more you have knowledge. The less information you have, the more you have fear. It's not necessarily about a policy or legal system;

(17:00):
it's about anything, like a forest, a sunrise, right? So the goal, I think, and the thing about social acceptance, is to have more real, reliable, simply accessible information about anything: about the technology, about the policy,

(17:22):
about the whole legal system.
So the more you have information, the more you have knowledge, the less you have fear. So you would avoid these situations with burning the masks or, like, destroying the 5G antennas, you know, because people have more knowledge, so less fear. That's the first step. The next step is the right of citizens, really a legally

(17:47):
enforceable right of citizens to information, including the right to actively participate in the making of this information. I know it's a little bit more complicated now than the example with the balance. Let's say, for example, sustainability information to use in lawmaking and in courts. Citizens can provide evidence.

(18:08):
This evidence, handled in a particular way, becomes data; then we use the GDPR and all of that to have this data be ethically correct, ethically sound, etc. And then you can use this evidence in courts, in litigation, in lawmaking. So, for example, the citizens participate quite directly in

(18:29):
the lawmaking, not only through votes; we also know the limits of the democratic system and the voting system, correct? Due to recent events. So here we have a completely different mechanism for how citizens can participate in the making of the information. And coming one step back: the more information, the more knowledge, the less fear, the more the system is future-proof

(18:52):
and user-centered and citizen-centered.

SPEAKER_03 (18:56):
Interesting.
And I like the balance, and it's so true in everything. In everything, the more you understand about a situation, a technology, a process, the more you demystify it. Exactly. And it becomes something that you can really fully

(19:16):
take advantage of or put in your everyday life, and it also can help you to surpass the bias against new developments or against technology.

SPEAKER_01 (19:27):
Exactly.
You can relate to that directly instead of having fear: oh, this is lawyer stuff, oh, this is legal stuff, I don't understand anything because it's complicated. No, there is nothing complicated, especially if you participated in the making of that.

SPEAKER_03 (19:39):
Yeah, of course.
Just join in, start.
If we want digital tools, platforms, and the IP system to support long-term sustainability, where do we begin? Is it education, policy design, or something else?

SPEAKER_01 (19:55):
So actually, everything that you mentioned is super relevant. You mentioned design. Where was that? Education, design, policy, something else. I'd say all these things are important, but in my opinion, we should start with design. You know, this famous sustainability by design; if we really embed this at the, yeah, basically

(20:18):
pre-production level, at the conceptual level, you embed sustainability in the design, then of course you proceed with education, with policy, also with IP. But yes, starting from the design.

SPEAKER_03 (20:34):
Design, because that's what shapes the entire system or process or technology, from what you take into consideration when you're making it.

SPEAKER_01 (20:42):
Yeah, from the very beginning, from upstream. But then the rest is super important as well. In education, starting from school education, for example, to avoid digital obesity, you know, all these digital things; also make kids think about circularity, like changing phones every year, or changing

(21:04):
laptops every year. Every time there is a new one; is that a good thing? So start with education, actually from very young ages.

SPEAKER_03 (21:12):
When they're open to having their minds changed, because, you know, the older you get, the less likely you are to change your mind.

SPEAKER_01 (21:21):
Yeah, used to it. Oh, with my parents, we were used to that: I receive a new iPhone for every birthday. If it's embedded in your childhood memories and they're happy ones, you will never change it. Yeah, yeah, it's hard.

SPEAKER_03 (21:31):
It's because it becomes like a tradition for them, or part of a happy memory that they can emulate again by buying something.

SPEAKER_02 (21:41):
Exactly.

SPEAKER_03 (21:53):
Many tech leaders talk about building responsibly. From your view, what does it really mean to embed responsibility into innovation from the start?

SPEAKER_01 (22:05):
It's very much like a continuation of the previous question and answer, but let's develop it further, because basically the answer to this one is also: from the design on the one hand, and on the other hand, from education from very young ages. But because that was already the discussion in the previous one,

(22:26):
let's add something; let's take an example.
Because you can use this example in any field, but I will give an example of writing, let's say research and innovation writing. Either you write a scientific article or, I don't know, a case note, or a research and innovation proposal, like we

(22:46):
write a lot for the Horizon Europe funding or tenders, anything. Let's say you're doing research and innovation writing; you can use, and everyone is using, AI in this writing. So there are at least three different stages, or let's say three different strands. Strand one is when it's ethically okay-ish and even

(23:10):
recommended. For example, using AI for spell check, grammar, or even translation. That would be ethically okay.

SPEAKER_02 (23:19):
Okay.

SPEAKER_01 (23:19):
Then the second tier is when it's ethically okay-ish, but really subject to control. And the third tier obviously would be ethically not okay. Never use it. No, you can still use it, but it's not recommended at all. And then I also explain, like, the human role and risk,

(23:41):
let's say, mitigation.
And that's an example about the writing, but I guess it applies to anything that you create with AI, and soon there will be news created with AI. We had a discussion on Wednesday with friends who work for a television channel. I will not tell you more than that, but for television.

(24:02):
And, I don't know, five or at maximum 10 years from now, AI will be involved much more in the news making, at every stage. Anyway, coming back to my example: so we have these three tiers, and human oversight, proofreading, control, and decision making, whether to publish it or not, whether to use it or

(24:25):
not, even for the first tier, where it's like just to proofread, is important. So human oversight for any of them, and risk mitigation also for the first tier: just check that everything is okay. For the second one, it's when you, for example, generate footnotes: you give references, just links, and ask it to generate footnotes.

(24:49):
Recently, I asked AI to follow a particular format of footnotes for an article. I already had footnotes in this case. They were done, but in a different format. It was like Oxford something, and it needed to be, like, Harvard; I don't remember exactly.

SPEAKER_03 (25:06):
You needed to change your citation style. Yes, yes, it was the title.

SPEAKER_01 (25:09):
An ACM conference, and they have a particular way of citation. I asked AI to do that, and, maybe just because it was my book, I noticed it did this exactly to my book; it's a single-authored monograph. I refer to it once in the article. AI added another name, like, I don't know, John Smith.

(25:31):
I was puzzled. I wrote to the AI: who is John Smith? Like, oh, it's a co-author of Anna Aseeva. I was like, really? I am Anna Aseeva. So I really discussed it with the AI, and it was shocking to me, because it was not John Smith; it was another name. And apparently the AI found maybe another Anna Aseeva publishing with that person, and it decided to add the name to the book for

(25:53):
no reason. I have no clue. Oh wow, okay. It's just because it's my book that I noticed it. And then I had to reread every footnote to check that the AI didn't add anything. We're second-guessing everything. You're like, what else did you do here? Yes, so I had to go through all the footnotes and check every word in the title. The AI could add something to the title of an article that you

(26:16):
already cited, just in a different format.
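
A minimal sketch of the kind of mechanical check a careful human in the loop could pair with that manual reread; the citation strings and helper functions below are illustrative assumptions, not anything from the episode. It diffs each AI-reformatted footnote against the original reference and surfaces words that appear from nowhere, such as an invented co-author.

def words(citation: str) -> set:
    # Lowercase and keep only alphanumeric runs, so pure style changes
    # (punctuation, reordering) do not trigger false alarms.
    cleaned = ''.join(ch.lower() if ch.isalnum() else ' ' for ch in citation)
    return set(cleaned.split())

def added_words(original: str, reformatted: str) -> list:
    # Words present in the reformatted footnote but absent from the
    # original are candidate hallucinations for a human to review.
    return sorted(words(reformatted) - words(original))

# Hypothetical example: a single-authored monograph gains a co-author.
before = "A. Author, Example Monograph Title (Example Press, 2020)"
after = "A. Author and J. Smith. 2020. Example Monograph Title. Example Press."
print(added_words(before, after))  # ['and', 'j', 'smith'] -- review by hand

The script only flags candidates; deciding whether a flagged name belongs there stays, as the guest insists, a human decision.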

SPEAKER_03 (26:19):
Oh wow, that's scary, because then it can damage the whole research.

SPEAKER_01 (26:24):
Yes, and that was just footnotes.
Yeah, yeah.
Can you imagine if you ask it to summarize something or develop something?

SPEAKER_02 (26:31):
Okay.

SPEAKER_01 (26:32):
And so here the risk mitigation is very different. Either you then go through every word, or you just use a bit less AI. And then, obviously, the non-recommendable third strand is asking AI to write something from scratch. I would not recommend it.

(26:53):
And because we speak about writing, it's writing from scratch, but the same goes for creating anything: news or any content, any good, any service, because now we are in the age of digital goods and digital services. I guess, don't ask AI to create it from scratch. Again, coming back to one of your questions, in the EU, soon

(27:13):
we will have all these, like, tight regulations. Even the EU AI Act speaks a little bit about that, not too much; the Cyber Resilience Act as well, the Cybersecurity Act, but there will be more regulations. Imagine people start asking AI to create something from scratch. Yeah, you will have what I had as a situation with the footnotes, where all the footnotes were already there.

(27:35):
I just needed to reformat them.
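
Her three strands lend themselves to a simple lookup table; here is a minimal sketch (the tier names, strings, and structure are illustrative, not a formal framework from the episode) pairing each use of AI-assisted writing with the human oversight it still requires.

from enum import Enum

class Strand(Enum):
    ASSIST = "spell check, grammar, translation"          # ethically okay-ish
    GENERATE = "formatting footnotes from given sources"  # okay-ish, subject to control
    FROM_SCRATCH = "writing the piece itself"             # not recommended

# The final publish-or-not decision stays human in every strand.
OVERSIGHT = {
    Strand.ASSIST: "human proofreads the final text",
    Strand.GENERATE: "human verifies every generated item against the sources",
    Strand.FROM_SCRATCH: "avoid: plagiarism and hallucination risk",
}

for strand in Strand:
    print(f"{strand.name}: {strand.value} -> {OVERSIGHT[strand]}")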

SPEAKER_03 (27:38):
Yeah, because that's one of the things we have been seeing, especially when we talk about copyright in AI-generated content.

SPEAKER_01 (27:50):
There you go, then you can have plagiarism easily.

SPEAKER_03 (27:53):
And so far, the different authorities have said you can protect your copyright if there's human input and there's a human guiding the whole process. It was not just the AI who did the whole thing.

SPEAKER_01 (28:10):
AI cannot be the author.

SPEAKER_03 (28:11):
Exactly.
So when the human uses the AI as a tool to create, then okay, it is protected. But if you just put in a general prompt and you took whatever the AI gave you, then there's no authorship from the human side, so there's no copyright to be granted.

(28:32):
So that's what we're seeing right now in that development. It's interesting because it connects with that: the human input is very, very important. It's crucial to make sure that it's accurate, that there's no hallucination, that it's not going out of its way and doing things that have nothing to do with it, or creating footnotes, or making references that are out of

(28:54):
place. Or plagiarism, for example. Plagiarism, exactly.

SPEAKER_01 (28:57):
You ask AI: okay, please write me a paragraph or a page on this and that. AI writes it without footnotes, as if it has written it by itself by taking information from one gazillion websites, and in reality, it maybe took this whole page word by word from somebody else's work, which is open source.

(29:18):
And that is plagiarism, and you will then be accused of plagiarism.

SPEAKER_03 (29:22):
Yeah, that's a scary thing. I think no technology should be used unchecked. You always need to have a watchful eye. I use AI very much, especially in the creation of my podcast, and a lot of the tools really help you do the groundwork faster. Yes, but you have to be vigilant; you cannot just

(29:43):
let it run free.

SPEAKER_01 (29:45):
Oversight, revision, and decision making. The final decision should be human. Actually, there is an interesting case. I just read the one-pager. There is now a claim under the preliminary ruling procedure. It's a specific procedure in the European Union.

SPEAKER_03 (30:11):
Okay, so we go into the flash section. I will ask you to pick one. So you have to think fast and pick one. You can explain yourself if you want. And the idea is to give, like, a novel picture of what you think about these specific choices.

(30:33):
Okay, let's start. Rewrite the legal system from scratch, or debug the one we've got?

SPEAKER_01 (30:41):
Debug the one we've got.
For sure.

SPEAKER_03 (30:44):
A future where every citizen has an AI lawyer, or where no one needs one?

SPEAKER_01 (30:53):
I don't choose the first one. An AI lawyer would be too expensive. You would have an AI lawyer and then a physical lawyer who is overseeing what the AI does. If the AI lawyer is for free because it's the free version, it's still the same cost, just for the physical lawyer. So it depends. But I am strongly against an AI-only lawyer, for all the reasons

(31:16):
that we just discussed.
No lawyers? No, I don't think it's a good idea. We would lose jobs, and a lot of my students, I taught in academia for 10 years, like all my students from different countries, would lose jobs. I'd rather say no one needs a lawyer for different reasons: there is no crime, nothing, only happiness. But

(31:38):
yeah, it's a bit like futuristic stuff.

SPEAKER_03 (31:41):
Yeah, hopefully.
Hopefully, there will be a moment when we're all gonna live in harmony and 100% in peace.

SPEAKER_01 (31:49):
Let's hope for that.

SPEAKER_03 (31:51):
So: public policy made in parliaments, or prototyped in hackathons?

SPEAKER_01 (32:00):
I like the prototyped in hackathons one, but I don't think that it's democratic enough.

SPEAKER_02 (32:06):
Okay.

SPEAKER_01 (32:06):
But it very much mirrors the idea. You remember the citizen participation, like in the production of information, including the kind that is used as evidence in lawmaking. So this prototyped in hackathons, I think it's stage one, and then still debated in parliaments as stage two, because parliaments are still the most democratic

(32:28):
institutions, especially in the European Union. I'd say so. I say step one, prototyped in hackathons; step two, made in parliament. I mean not made, but let's say approved, finalized in parliaments. Yes.

SPEAKER_03 (32:43):
Okay, interesting.
A world where every startup has a sustainability clause, or where no startup needs one?

SPEAKER_01 (32:52):
Hmm.
The second one is a nice one, but I think it's really very much futuristic. And the first one is already the reality. So a startup that has a sustainability clause, every startup, yeah, is better.

SPEAKER_03 (33:06):
Teach ethics to machines, or teach systems to listen to people?

SPEAKER_01 (33:14):
Oh, I'd say teach systems to listen to people. Ethics to machines, yeah, very futuristic. And also it's very relative, it's extremely relative. The values are, yeah, there's no absolute in ethics, no ethics as an absolute. So yeah, and something that was relevant

(33:34):
or shocking or horrible 100 years ago is just normal or obsolete today, right?

SPEAKER_03 (33:40):
Yeah, exactly.
Copyrights that expire with cultural relevance, or last as long as human memory?

SPEAKER_01 (33:50):
I'd say copyright might, in maybe five percent of cases, be good to last long, and in all other cases, it lasts as long as it's culturally relevant, and then it expires. I would rather say so.

(34:10):
Yeah, I am quite a bit in favor of open licenses and shared solutions and shared things, but again with some limits, with some protection.

SPEAKER_03 (34:22):
Climate lawsuits argued by AI, or peace treaties negotiated in virtual reality?

SPEAKER_01 (34:30):
Well, peace treaties negotiated in virtual reality would mean virtual treaties in virtual reality. So it would be for a virtual peace, and the war and peace situation, they would all be virtual. It's not real. So I'd say climate lawsuits argued by AI, with human oversight; always more expensive climate lawsuits than now, but

(34:53):
yeah. Because I have to choose one of the two, I prefer the first one over the second one.

SPEAKER_03 (34:57):
I don't know, it's a VR thing. Legal reforms crowdsourced by global citizens, or co-drafted with machines trained on centuries of case law?

SPEAKER_01 (35:11):
Well, here again it's the same. Like you said, you use AI a lot for some basic stuff, and, I'm sorry, especially for legal trainees and legal interns, the machine can be much faster in revising centuries of case law, like tons of case law, right?

(35:33):
So it can help, but it should again maybe be step one, and then crowdsourced by global citizens is awesome. I love it. So I think both, but step one, yeah, is like co-drafted with machines, and then crowdsourced by global citizens. I really like it.

SPEAKER_03 (35:48):
Okay, final flash question: data as a public good, or as personal property with royalties?

SPEAKER_02 (35:57):
Hmm.

SPEAKER_01 (35:58):
Yeah, there was a higher-level discussion about that. I remember at least Katharina Pistor discussed this data as a public good in her The Code of Capital, maybe even people before her. I just remember this work. To be honest, that was quite a theoretical work. In practice, I'd say personal data is not only property,

(36:25):
like there are not only property issues. I mean, it has to be protected, but there's all this, like, security and stuff. I think personal data and sensitive data should be protected as personal property, maybe not with royalties. But then non-personal and non-sensitive data could be, and

(36:47):
maybe should be, a public good or even commons. Data as commons; again, I don't remember if it's Katharina Pistor or somebody else. There are a lot of writings and discussions now about data as the commons, why not?

SPEAKER_03 (37:02):
Okay, interesting. Okay, you can take the paddle. Ah yes, yes, yes, yes. And now we go to the game: true or futuristic. No worries. So, true or futuristic. True, meaning that it's gonna happen soon, it's about to

(37:23):
happen, it's already happening. Or futuristic: not yet, maybe never, you're too far ahead. Okay, let's start. Every digital license will include sustainability terms by default.

SPEAKER_01 (37:37):
Yes.
I think it's already sort of started happening.

SPEAKER_03 (37:42):
Yeah, yeah, it's something that we're seeing, and people are more aware of the footprint of technology.

SPEAKER_01 (37:47):
But also sustainability by design, and not only environmental but also social and economic sustainability. Exactly. I think 10 years from now it could be quite true.

SPEAKER_03 (37:57):
Yeah.
AI agents will negotiate copyright and licensing for creators.

SPEAKER_01 (38:03):
Yes, it could happen.
Not now, but yes.

SPEAKER_03 (38:08):
Platforms will be legally required to report on their algorithms' impact.

SPEAKER_01 (38:14):
True.
I think with the Cyber Resilience Act, the Cybersecurity Act, all that, there are other acts too, including the Digital Services Act in the EU. At least in the EU, this is already becoming true in a way, partially. Yeah, yeah.

SPEAKER_03 (38:30):
IP enforcement will happen through decentralized, community-led systems. I hope. No.

SPEAKER_01 (38:37):
Well, the courts. Futuristic. I mean, yeah, okay. I mean, you mean DAOs, like, decentralized ones.

SPEAKER_03 (38:47):
Yeah, I mean that they will replace the court system. Futuristic.

SPEAKER_04 (38:56):
I need to make some outrageous statement, right? Now you're giving a lot of ideas to PhD students to write about; you can send them these questions, maybe they will get inspired.

SPEAKER_01 (39:09):
Yes, totally.

SPEAKER_03 (39:10):
Open source AI will carry a trust label to ensure ethical deployment.

SPEAKER_01 (39:16):
Again, it's already happening in a way, very partially. We're not there yet, but in some sense and in some fields, yes. For example, at Digital for Planet, we have this one project which is called Certain. It's about certification and regulation of AI, and in a way, I mean, it's not exactly about that, but it's becoming true.

SPEAKER_03 (39:38):
Okay: legal user experience will become a standard field in both tech and law schools.

SPEAKER_01 (39:47):
For law schools, I really hope so. I mean, it's already a reality in some schools. I will not give you the names, I don't want to, but yes, yes, totally true, and it's already becoming reality in some progressive law schools. For the tech schools, I have no idea, but I hope so as well. Very good question. I like it. I love it. Yeah.

SPEAKER_03 (40:07):
Digital sustainability will be measured
and taxed like carbon emissions.

SPEAKER_01 (40:12):
Yes, again, it's already in a way happening, but not that clear cut. But yes, there are all these calculators for websites. You have calculators that can show you how many emissions a website emits, like CO2 emissions. Then you can tell me, okay, there are like a lot of

(40:32):
calculators, and what is the methodology behind them? Of course, the way to go is you first study the methodology behind the calculator and then you trust the calculator, but we are already there.
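
For a sense of what such a calculator does under the hood, here is a minimal sketch in the spirit of public website carbon calculators. The constants are assumptions borrowed from the Sustainable Web Design model (roughly 0.81 kWh per GB transferred, and a world-average grid intensity of roughly 442 gCO2e per kWh), not figures from the episode; as she says, study the methodology before trusting any number.

KWH_PER_GB = 0.81         # assumed energy intensity of data transfer
GRID_G_CO2_PER_KWH = 442  # assumed world-average grid carbon intensity

def grams_co2_per_view(page_weight_mb: float) -> float:
    # Convert page weight to GB, then to energy, then to emissions.
    gigabytes = page_weight_mb / 1024
    return gigabytes * KWH_PER_GB * GRID_G_CO2_PER_KWH

# A 2 MB page comes out around 0.7 g CO2e per view; an eco-designed,
# "sober" 0.5 MB page is roughly a quarter of that. Greener hosting
# effectively lowers the grid-intensity term.
print(round(grams_co2_per_view(2.0), 2))  # ~0.7
print(round(grams_co2_per_view(0.5), 2))  # ~0.17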

SPEAKER_03 (40:43):
Okay, so it's already in the mindset of the people, and not only the mindset; these calculators are in use.

SPEAKER_01 (40:50):
They're tools that are available, yeah. Calculators, and on the other end of the spectrum, I worked with that a lot when I was more like a freelance consultant: eco-design of websites. It's been here for at least the last five years already. Sober websites; it depends which interface you use, like all

(41:10):
these pop-up windows. It's not very complicated, but you can design a very non-heavy and ecologically friendly website. And then there are the hosts and the platforms who use only green electricity, you know, all these things.

SPEAKER_03 (41:26):
Okay, interesting.
Yes, yes.
A global IP commons will emerge to encourage innovation with shared benefits.

SPEAKER_01 (41:36):
I hope.
Yes.
That's a little bit what we discussed regarding data as commons too, yeah.

SPEAKER_03 (41:44):
Yeah.
Consumers will vote directly on platform policy through civic apps. Ah, I like this.

SPEAKER_01 (41:54):
That would be great, right? That would be great, and it's also a little bit like what we said regarding the citizens participating in the making of the information, and then finally of policy and lawmaking. Totally.

SPEAKER_03 (42:06):
And finally: inventorship will be redefined to include collaborative networks, humans and machines.

SPEAKER_01 (42:15):
I think it's already happening. It's already the case. Inventorship? No, as we discussed, in IP only a human being can be the author. That's very important, but let's see how it develops.

SPEAKER_03 (42:28):
Yeah, there's a lot of room there to grow. So, one final question to close the episode. If you could share one message with everyone shaping the future of law, tech, or innovation today, what's the one principle or mindset you hope they carry forward?

SPEAKER_01 (42:52):
I'd say this collective co-creation, bottom-up co-creation. So, something that we already discussed. And a user-centered, meaning citizen-centered, legal system, which is such because it's also co-created by citizens. And here we touched upon, like, citizens giving the evidence,

(43:13):
participating through the evidence in lawmaking and policymaking, and also these consumers voting directly on platform policy through civic apps; it completely rejoins the message. The more people are aware, the more they have information, knowledge, and the more they relate or they can relate to

(43:36):
policy, to law, to a technology, the less there is fear, and the more there is not only acceptance, but also desirability.

SPEAKER_03 (43:46):
Of course. Thank you so much. It's been lovely talking with you. It's been a very interesting, fulfilling experience to see someone who is working in the midst of all these technologies, but who also has a very inclusive and kind of collaborative, or kind of a

(44:09):
community, sense of how technology can develop and how we can form our society with technology.

SPEAKER_01 (44:16):
Yes, that's the message, because, I mean, in digital sustainability, we care about environmental sustainability, which is very often the case: when you say digital sustainability, or sustainability for tech or for digital or for ICT, people immediately think about emissions and the environment. But it's also social and economic sustainability, and they're not separate; they're all together.

(44:39):
For that you need, like, social acceptability and social desirability and socioeconomic rights and equality and all that; so all these socioeconomic and environmental values, yeah.

SPEAKER_03 (44:53):
Yeah.
So the future of technology doesn't have to be cold or chaotic; it can be thoughtful, fair, and full of opportunity if we design it that way. Thank you, Anna, thank you so much, for reminding us that law is not just a set of restrictions; it can be a platform for vision, collaboration, and global

(45:14):
progress. Because when purpose meets innovation, we don't just imagine better systems, we build them.

SPEAKER_01 (45:22):
Yes, absolutely.
Thank you so much, Leticia.
Let's make it happen.

SPEAKER_00 (45:30):
Thank you for listening to Intangiblia, the podcast of intangible law: plain talk about intellectual property. Did you like what we talked about today? Please share with your network. Do you want to learn more about intellectual property? Subscribe now on your favorite podcast player. Follow us on Instagram, Facebook, LinkedIn, and Twitter.

(45:50):
Visit our website www.intangiblia.com. Copyright Leticia Caminero 2020. All rights reserved. This podcast is provided for information purposes only.