
March 6, 2025 • 31 mins

In this episode, Erick Robinson and Dr. Sonali Mishra examine AI's impact on intellectual property rights, privacy, and accountability, featuring landmark cases on AI inventorship and GDPR compliance challenges. The discussion also highlights explainable AI, regulatory efforts to combat bias in algorithms, and real-world examples of AI-assisted innovations.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:35):
Welcome to the AI Law Podcast.
I am Erick Robinson, a partner at Brown Rudnick in Houston.
I am Co-Chair of the firm's Patent Trial and Appeal Board Practice Group.
In addition to being a patent litigator and trial lawyer, I am well-versed not only in the law of AI, but also have deep technical experience in AI and related

(00:58):
technologies.
As always, the views and opinions expressed in this podcast do not necessarily represent those of Brown Rudnick.
This podcast is presented for informational and educational purposes only.
Today, I am here with my friend, fellow attorney, and AI expert, Dr. Sonali Mishra.
Thanks for having me today, Erick!
Sonali, when we talk about AI and

(01:21):
intellectual property, I think we're looking at what is possibly the most revolutionary change to IP law in decades.
I mean, historically, IP law has always hinged on the question of human authorship and inventorship.
But now-
Right, Erick, but the issue is that AI doesn't exactly fit those boxes,

(01:43):
does it?
I mean, if an AI generates a novel invention or a stunning piece of art, who exactly owns that?
Well, that's exactly the crux of the debate right now.
Take copyright law, for instance.
In the U.S., the Copyright Office has started to acknowledge works that include AI-generated content-but only if there's sufficient human involvement.

(02:07):
They're looking for some tangible creative input from a person.
Without it, the work is essentially ineligible for protection.
Which feels, in a way, like the law is scrambling to find middle ground.
Because on the one hand, letting AI outputs go unprotected could discourage innovation.
But on the other hand, giving AI-generated works blanket coverage might

(02:29):
flood the system with claims that lack any real human touch.
Kind of a Catch-22, isn't it?
Exactly.
And patents are even messier.
There's been this growing push for AI to be recognized as co-inventors.
You've got examples like that IBM and MIT collaboration, where an AI played a

(02:51):
critical role in inventing a semiconductor material.
The question is, does it make sense to put an AI on a patent application when it's not... well, legally a person?
But Erick, think about the patents we're potentially losing out on by not adapting.
Imagine all the breakthroughs that could've been recognized if these systems

(03:12):
had a formal way to share credit with their human collaborators.
AI-assisted patents could really change the game for innovation.
That's true, but creating a new legal category for "AI-assisted patents" would be no small feat.
You've got considerations like accountability, licensing, enforcement-
And the ethical side of it, too.

(03:33):
Let's not forget, some of these systems rely on training data that may not have been ethically sourced.
How do we handle fair use laws in cases where AI insights were built on proprietary datasets?
Yeah, the genie's out of the bottle there.
But I think we're also seeing some progress.
Frameworks for licensing AI training data are starting to appear.

(03:58):
The key, I think, will be creating structures that encourage innovation without opening the door to rampant abuse.
You know, we're going to have to rethink so much about fair use if AI keeps growing at this pace.
It's not just about intellectual property anymore.
It's bleeding into privacy, consent-
And liability.

(04:19):
Don't forget that.
So far, IP law is leading the way, but it's only one piece of the puzzle.
Building on how AI is reshaping intellectual property, there's another area we can't ignore-privacy.
AI and GDPR compliance-it's an understatement to say this is a minefield, right?
Yeah, a complete minefield.

(04:41):
AI systems need an almost insatiable amount of data to function effectively.
The problem is, GDPR wasn't designed for systems with this level of complexity in processing personal data.
Exactly!
Think about AI profiling.
These models can map out behaviors, preferences-down to details most people

(05:04):
don't even realize they're giving away.
And honestly, GDPR compliance is becoming... well, harder to navigate with every iteration of these technologies.
You're right.
One major issue is transparency.
GDPR requires organizations to clearly explain how data is being used.
AI systems, especially those relying on neural networks, often function as black boxes.

(05:29):
Explaining decision-making?
Not exactly straightforward.
And the regulators aren't exactly taking it easy either.
Just look at the fines some big companies have faced for lack of transparency.
I mean, if they can't figure it out, what hope does a smaller AI company have?
You see what I mean?
Oh, absolutely.

(05:50):
The disparity is staggering.
Then there's the matter of consent.
Many AI deployments-and I mean, especially those in areas like targeted advertising or financial tools-they... well, they stretch the limits of what people actually understand they're consenting to.
Which is where this concept of "AI-specific consent" could come into

(06:11):
play.
Imagine interactive consent.
Real-time demonstrations of decisions, showing users exactly how their data is being processed.
It's ambitious, sure, but that's what it's going to take to get meaningful consent with these systems.
Ambitious, but also necessary.
Otherwise, we risk undermining the entire premise of data protection laws.

(06:33):
What concerns me is that we're not even scratching the surface on cross-border data transfers.
That's yet another layer of complexity.
And speaking of complexity, let's shift to liability and accountability-a natural next step when we talk about transparency and consent.
The EU's AI Liability Directive is a critical piece of legislation updating

(06:59):
legal frameworks to handle the unique challenges AI introduces when things go wrong.
At its core, it's about assigning responsibility in scenarios where AI systems cause harm.
But the thing is-
Wait, are we talking about the directive that includes the presumption of causality?

(07:20):
Because that's groundbreaking.
If harm occurs, claimants can assume the AI provider was at fault unless the provider can prove otherwise.
That's a huge shift in legal thinking.
Exactly.
And that presumption is what makes this directive so powerful.
In theory, it simplifies things for, say, consumers who might not have the

(07:42):
technical expertise to figure out how or why an AI system malfunctioned.
But, uh, what's your take on how this might play out in practice?
Honestly, it's a double-edged sword.
Sure, it lowers the burden of proof for users.

(08:03):
But it also puts AI companies on edge-forcing them to be ultra-transparent about their systems, which is a great thing for accountability, but-
But it's also risky for innovation, right?
I mean, the more disclosure obligations you pile on, the likelier it is that smaller players might just throw in the towel.
The big tech companies can handle it.

(08:24):
Startups?
Not so much.
Exactly.
And think about how this directive ties into product liability laws.
AI isn't a traditional "product," but its effects-when it goes rogue, like giving unsafe medical advice, for instance-can still lead to serious harm.
Courts are already struggling to fit AI into existing product liability

(08:46):
frameworks.
It's messy.
Messy is right.
You've got these hybrid AI systems that blur the lines between software and product.
The liability directive recognizes that-it urges companies to adopt something like "AI-specific due diligence."
But how enforceable is that in, say, a global marketplace?
Exactly.

(09:13):
And let's not forget that the directive also pushes for algorithmic transparency.
Which, I mean, is vital, but also feels nearly impossible in cases where developers themselves don't fully understand how their AI systems make decisions.
If even they're in the dark, how do you prove anything?
Or defend yourself against

(09:34):
claims.
You know, one workaround could be algorithmic audits-a sort of diagnostic checkup for high-risk AI systems.
But would they hold up in courts when lawsuits come rolling in?
And that's why this directive is so fascinating.
It's progressive, and it closes some critical gaps in consumer protection.

(09:55):
But at the same time, it's creating entirely new legal questions-especially when we look at real-world scenarios.
Like, what happens if your autonomous car goes haywire during an over-the-air update?
Who shoulders the blame there?
Great question.
Is it the car company?
The developer of the autonomous system?

(10:18):
Or the software provider for the update?
These are precisely the kinds of questions that'll keep liability lawyers busy for the next decade.
You know, Erick, speaking of accountability, privacy plays a huge role too.
Managing AI projects in places like New Delhi and Dallas, I've seen how vastly

(10:39):
different attitudes towards privacy can shape expectations and approaches-and that impacts everything from transparency to trust.
Really?
I mean, I know there are cultural differences, but give me an example.
What's something you've run into?
Well, in New Delhi, there's a growing focus on collectivism.
Privacy isn't always seen as an individual right-it's part of the

(11:04):
community's welfare.
Take consent.
People there are more likely to trust institutions to handle their data responsibly, especially if they can see tangible benefits, like healthcare improvements.
It's not perfect, but it feels, um, different from the data paranoia I see here in Dallas.
That's fascinating.

(11:25):
And in Dallas, or broadly in the U.S., it's almost the opposite.
"Paranoia" is a strong word, but there's definitely more skepticism around data usage.
People want control, or at least the feeling of control-
Exactly!
And that's where projects get tricky.
In India, I've worked on AI initiatives where we leaned on implied consent-it was

(11:50):
manageable within the legal framework.
But in Texas, oh, you need explicit consent, clear disclosures, and audits just to stay afloat legally.
The compliance burden-it's huge.
I can see that.
And yet, doesn't that stricter framework give end-users more confidence?
I mean, sure, it's tedious for companies, but aren't we building trust and

(12:14):
safeguarding rights?
Ideally, yes.
But there's a cost.
I've seen smaller startups in Dallas struggle to scale because legal compliance ate all their budgets.
In New Delhi, businesses-especially AI-driven ones-still experiment freely while regulators... you know, play catch-up.
It's a fine line between fostering innovation and protecting

(12:38):
privacy.
Interesting point.
How about enforcement?
India's Data Protection Act has come up so many times in my readings, but is it as rigorous on the ground as GDPR is here?
Not quite.
It's still maturing, but there's momentum.
In fact, the big difference is in penalties.
GDPR fines can be truly debilitating. Remember that French telecom case?

(13:04):
That made global news.
In India, enforcement is often more forgiving, especially for first-time offenses.
It's like they want companies to fix issues, not shut down entirely.
It's cooperative-
Whereas GDPR enforcement feels more like an iron fist.
But cooperation sounds... refreshing, doesn't it?
It is.

(13:27):
But you've got to balance that with effectiveness.
I've worked on cross-border AI projects where this difference created chaos.
Like, having to explain to an Indian partner why user data couldn't meet Europe's stricter localization needs?
Oh, that conversation gets heated fast.
And it probably doesn't help when laws evolve at very different paces.

(13:49):
I imagine global compliance is a logistical nightmare.
You have no idea.
And that's before we even touch on how AI uses data differently across markets.
Let's say you're using a neural network trained on U.S. datasets but deploying it in India.
You're looking at misleading conclusions unless you account for

(14:10):
localized biases.
That goes deeper than privacy-it's about outcomes.
Misalignments in outcomes.
That makes me think-where does one draw the line between adhering to local privacy expectations and maintaining global consistency?
That actually reminds me-balancing local privacy rules and global frameworks is one thing, but what

(14:37):
about the bigger challenge: transparency?
When the system itself is a black box, I mean, even the creators struggle to explain how neural networks make decisions.
How do you regulate something like that?
Exactly!
And that's the heart of the problem, right?
If nobody can explain what an algorithm does, how can you trust it?

(14:58):
Companies are under pressure-transparent systems that still deliver results?
That's a tough balance to strike.
It is.
And that lack of trust is why regulators are stepping in.
Take the EU's AI Act-they're mandating algorithmic impact assessments for high-risk systems.
Developers have to identify and disclose biases, risks, limitations...

(15:23):
it's a lot to manage.
And let's be honest, Erick, most developers don't have the tools-or, frankly, the expertise-for that level of disclosure.
You can't just crack open a deep learning algorithm and explain it like a recipe.
It's complicated.
Right.
That's why Explainable AI, XAI, has gained so much traction recently.

(15:46):
These techniques-whether it's visualizing how a model weighs variables or simplifying outputs into human-readable formats-are designed to bridge that understanding gap.
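To make the feature-attribution idea concrete, here is a minimal sketch in Python: a toy linear credit-scoring model whose output gets translated into a plain-language explanation. The feature names, weights, and threshold are all hypothetical, invented purely for illustration; real XAI tooling handles far more complex models.

```python
# A minimal sketch of one XAI technique: per-decision feature attribution
# for a linear credit model. All features and weights are hypothetical.
import math

# Hypothetical trained weights (positive pushes toward approval).
WEIGHTS = {
    "income_to_debt_ratio": 1.8,
    "years_of_credit_history": 0.6,
    "recent_missed_payments": -2.4,
    "credit_utilization": -1.1,
}
BIAS = -0.5

def explain_decision(applicant):
    """Score an applicant and print each feature's contribution."""
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    score = BIAS + sum(contributions.values())
    approved = 1 / (1 + math.exp(-score)) >= 0.5  # logistic threshold

    print(f"Decision: {'APPROVED' if approved else 'DENIED'}")
    # Sort by absolute impact so the biggest drivers are listed first.
    for name, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        direction = "helped" if c > 0 else "hurt"
        print(f"  {name}: {direction} the application ({c:+.2f})")

explain_decision({
    "income_to_debt_ratio": 0.9,
    "years_of_credit_history": 1.2,
    "recent_missed_payments": 1.0,
    "credit_utilization": 0.8,
})
```

For a linear model like this, weight-times-value contributions are exact; the hard part the speakers are pointing at is producing anything comparably faithful for deep networks.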
But isn't that a little ambitious?
I mean, the black box problem hasn't been solved yet, right?
XAI can only go so far.

(16:07):
There's still a big gap between what's explainable and what makes sense to end-users-or even regulators.
You agree?
Oh, definitely.
And there's a tension here.
Oversimplify, and you risk losing accuracy.
Stick to purely technical explanations, and you alienate the very people the

(16:28):
transparency rules are supposed to help.
It's... a tightrope.
Not to mention the cost!
Smaller companies are at a clear disadvantage.
Developing explainable systems takes resources that startups just don't have.
I mean, how are they supposed to compete with the big players?
Which is a fair point.

(16:49):
But without transparency, these systems are going to face an even bigger barrier: public trust-or rather, the lack of it.
Regulators aren't just passing these requirements to be difficult.
They're trying to ensure accountability in high-stakes applications like healthcare and criminal justice.
Right, but I wonder-how enforceable are these

(17:11):
mandates?
Think about algorithmic impact assessments.
Who's qualified to audit them?
Who decides what's fair or biased?
This isn't as clear-cut as, say, a financial compliance audit.
True.
And verifying algorithmic fairness or accuracy isn't even standard yet.

(17:32):
We're still in the early days of standardized practices for algorithm auditing.
That's why there's this push for more structured frameworks across industries.
Frameworks that... might come too late for most companies.
By the time one sector adopts a standard, AI has already evolved into something entirely different.

(17:52):
It's like regulators are chasing a moving target.
A moving target, yes, but that's no excuse to give up.
Take algorithmic impact assessments-they might not be perfect, but they force conversations about accountability.
They make developers pause and ask, "Are we putting something harmful out

(18:13):
there?"True.
And I've seen tangible benefits inapplying XAI techniques.
Like, when AI explains a credit decision,it builds confidence.
But the real issue?
Who defines success here-developers,users, courts, or regulators?
Because their priorities don't alwaysalign.That's the debate, isn't it?

(18:34):
Explainability isn't the end goal; it's a means to an end.
Regulations like the EU AI Act are trying to make AI safer, fairer, and ultimately, more useful.
But there's still so much work to do.
Speaking of making AI safer and fairer, let's talk about bias.

(18:54):
This is one of those issues that hits the headlines regularly-hiring tools rejecting qualified candidates, loan algorithms discriminating against certain groups.
It's scary how much systemic bias can creep into systems that are supposed to be neutral.
Right.
And it all comes down to the training data, doesn't it?
AI models are only as unbiased as the datasets they're trained on.

(19:18):
If the data reflects societal inequalities, then the system is going to reinforce those same inequalities.
Garbage in, garbage out.
Exactly, and it's not like developers are intentionally embedding bias-most of the time, they're not even aware of it.

(19:36):
But here's the thing, Erick: even when companies recognize bias after deployment, their response is often, well, inadequate.
You've seen those hollow apologies, right?
"We're working to fix it." But what about legal accountability?
You're right to point that out.
Addressing bias has real legal consequences.

(19:58):
Anti-discrimination laws already apply to automated decision-making systems, but enforcement is lagging.
Take the U.S., for example-companies using biased hiring AI could be violating both federal and state anti-discrimination statutes.
And yet, who's monitoring these systems?
And that's where fairness

(20:23):
standards come in.
There's real movement in industries like healthcare and finance to develop standardized metrics for algorithmic fairness.
But Erick, they're so inconsistent!
One sector uses one definition of fairness, another sector uses something else entirely.
At what point do regulators step in and standardize this for everyone?
Well,

(20:45):
they're starting to.
We've seen attempts at sector-specific fairness guidelines, but they're still fragmented.
And frankly, until we get global agreement-which, let's face it, is ambitious-we're going to keep seeing these patchwork approaches.
It doesn't help that debiasing is technically challenging.

(21:08):
AI systems don't just rely on one variable; they process thousands at once.
How do you identify and remove bias in that kind of complexity?
Yeah, that's the technical side.
But let's talk incentives.
Companies aren't going to spend the time or money on debiasing unless they're forced to.
Public pressure helps, sure, but without legal consequences, how many

(21:33):
organizations are actually going to prioritize fairness?
Exactly.
And just look at the initiatives aimed at addressing training data bias-it's a step forward, but slow.
Licensing frameworks for ethically sourced training datasets are gaining traction, but we're nowhere near having those be industry-wide standards.

(21:54):
That means a lot of these AI systems are still being built on skewed foundations.
And it's not just about skewed datasets.
Think about the downstream effects.
A biased algorithm in hiring doesn't just result in unfair practices-it can perpetuate discrimination at scale.
Perfect example being-
Like I was saying, a biased algorithm in hiring can have

(22:18):
massive downstream effects.
Take Amazon's hiring tool, for example-that fiasco where their system systematically discounted resumes from women.
It's such a clear case of how an algorithm designed to be neutral ended up amplifying existing biases instead.
Oh, I remember.
And the root cause?
It was all in the data.

(22:39):
The system was trained on ten years' worth of hiring data that reflected Amazon's historical hiring patterns.
Guess what?
Those patterns were already skewed towards hiring men for technical roles.
So, the algorithm basically learned to prefer male candidates because the data

(22:59):
told it that was the norm.
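A rough sketch of the dynamic being described, using entirely invented data: if historical hiring outcomes are skewed by group, even the simplest possible "model", just estimating hire rates from past records, faithfully reproduces the skew. Nothing here reflects Amazon's actual system.

```python
# A toy sketch of how skewed historical labels get baked into a model.
# The data, groups, and "model" are all invented for illustration; real
# systems are far more complex, but the failure mode is the same.
import random
random.seed(0)

def make_historical_record():
    group = random.choice(["A", "B"])   # stand-in for a proxy attribute
    qualified = random.random() < 0.5
    # Biased historical outcome: qualified group-A candidates were hired
    # 80% of the time, equally qualified group-B candidates only 30%.
    hire_rate = 0.8 if group == "A" else 0.3
    hired = qualified and random.random() < hire_rate
    return group, qualified, hired

records = [make_historical_record() for _ in range(10_000)]

# "Training" here is just estimating P(hired | group) -- the crudest
# possible model, and exactly what a naive learner converges to when a
# proxy attribute is the strongest signal in the data.
for g in ["A", "B"]:
    hires = [hired for group, _, hired in records if group == g]
    print(f"Learned hire rate for group {g}: {sum(hires)/len(hires):.2%}")
# The model now "prefers" group A, reproducing the historical skew.
```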
Right!
And what makes it worse is how long this went unnoticed.
These biases weren't discovered until the tool had been in use for a while.
Erick, doesn't that just scream for more proactive algorithmic testing-before deployment?
Absolutely.

(23:18):
But here's an interesting question: whose fault is it legally?
Is it Amazon for deploying a flawed system?
Or the developers for not catching the bias?
That's the legal gray area we're in right now.
It's not like the algorithm consciously made biased decisions-it simply did what

(23:40):
it was trained to do.
And that's the scary part-these weren't wild edge cases.
This was systemic, baked into the core logic of the tool.
For me, it highlights the need for mandatory fairness audits.
If companies had to prove their AI systems were unbiased before rolling them out, situations like this could be avoided.
It's an appealing idea, but

(24:03):
enforcement would be a nightmare.
Who sets the standards for what "unbiased" even means?
Are we talking about demographic parity?
Statistical thresholds?
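For what "demographic parity" means in practice, here is a minimal sketch of the kind of metric an auditor might compute. The audit data and the flagging threshold are hypothetical, chosen only to show the mechanics.

```python
# A minimal sketch of one fairness metric named here: demographic parity.
# The decisions below are hypothetical; a real audit would run this over
# a model's actual outputs, with agreed-upon thresholds.

def demographic_parity_gap(decisions):
    """Spread in selection rates across groups.

    `decisions` is a list of (group, selected) pairs; a gap near 0
    means all groups are selected at similar rates.
    """
    rates = {}
    for group in {g for g, _ in decisions}:
        outcomes = [sel for g, sel in decisions if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    print(f"Selection rates by group: {rates}")
    return max(rates.values()) - min(rates.values())

# Hypothetical audit sample: (group, was the candidate advanced?)
audit_sample = (
    [("A", True)] * 60 + [("A", False)] * 40
    + [("B", True)] * 35 + [("B", False)] * 65
)

gap = demographic_parity_gap(audit_sample)
# One common (and debated) rule of thumb is the "four-fifths rule":
# flag the system if one group's rate is under 80% of another's.
print(f"Parity gap: {gap:.2f}")
```

The point of the speakers' question stands: the arithmetic is easy, but choosing the metric and the threshold is a policy decision, not a technical one.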
And more importantly, who has the authority to conduct these audits?
Well, we've already seen regulators push for algorithmic impact assessments in

(24:23):
high-risk systems.
Maybe hiring tools should fall under the same category.
After all, they directly affect people's livelihoods.
But it's not just about audits-it's also about accountability.
If an AI system discriminates like this, who's held responsible?
Exactly.
And that's where the legal angle gets tricky.

(24:44):
Do you treat the AI system like a defective product?
Or do you hold the developers accountable under negligence laws?
Courts are still figuring this out, but in the meantime, companies are finding themselves increasingly on the defensive.
And they should be.
Because this isn't just a PR disaster-it's a legal one waiting to

(25:06):
happen.
Anti-discrimination laws clearly apply here, even if the tool's bias was unintentional.
But I wonder, Erick, do you think existing laws are enough to address this?
Not really.
The laws weren't written with AI in mind.
Sure, you can apply anti-discrimination statutes, but most of them were crafted

(25:28):
for human decision-makers, not algorithms.
That's why we need adaptations-policies that directly address AI accountability and transparency in hiring practices.
And transparency is key.
Companies like Amazon can say they've fixed the issue, but we have no way of knowing if those fixes are robust-or if the next hiring tool will make the same

(25:50):
mistakes.
Which brings us back to explainability.
If hiring algorithms are black boxes, solving bias is almost impossible.
True.
And that's where a lot of these systems hit their limits.
Even with explainable AI techniques, it's hard to assure people, much less courts, that systemic bias is fully addressed.

(26:12):
It's not just technical-it's philosophical.
Can AI ever make decisions free of human prejudices?
That's the question, isn't it?
And until we figure that out, regulations will have to keep playing catch-up.

(26:30):
But one thing's for sure: the more prominent these cases become, the harder it'll be for companies to ignore the legal and ethical stakes involved.
You know, Sonali, this whole debate really shows just how much AI is reshaping the legal landscape.
When you think about accountability, transparency, intellectual property-every

(26:53):
area is having to evolve just to keep up with the pace of these advancements.
Right, and what's striking to me is how layered it all is-like, you don't just have one set of challenges.
Each legal question opens the door to five more.
This isn't a case of tweaking a few statutes and calling it a day.
Exactly.
And it's not just about lawmakers.

(27:16):
Judges, attorneys, businesses-everyone's struggling to navigate this landscape.
You take something like AI-assisted patents.
I mean, the idea that an AI might need legal recognition down the line?
That's a game-changer.
It is, but let's not overlook privacy.
AI's hunger for data is reshaping how we think about consent and protection.

(27:39):
We're already seeing frameworks emerge, but... honestly, Erick, do you think they're moving fast enough?
Fast enough?
Hardly.
Especially with global discrepancies in enforcement.
The GDPR might set the gold standard, but compliance hurdles are immense.
And then you've got systems like, uh, the AI Liability Directive trying to tackle

(28:03):
accountability in entirely new ways.
It's a mess, but it's... progress, I suppose.
Progress, yes, but incomplete.
For me, one of the biggest risks lies in bias.
We've seen how algorithms can scale discrimination-like that hiring tool from

(28:26):
Amazon-and the legal system isn't equipped to fully address those failures yet.
No argument there.
And transparency is still the Achilles' heel of AI.
If we can't explain how a system makes decisions, we can't properly challenge-or defend-those decisions in court.
It's a foundational flaw.
But Erick, flaws like this also represent opportunities,

(28:51):
you know?
Transparency, fairness audits, collaborative frameworks-these are areas where law and technology can work together.
If anything, the gaps show us what needs to be fixed next.
True, and I think the next wave of innovation will be as much legal as it is technological.
Specialized AI statutes, industry-specific regulations, globally

(29:16):
standardized frameworks.
It's going to take time, but the groundwork is being laid, piece by piece.
And that's what's exciting about this space.
Sure, there are challenges-massive ones.
But there's also so much potential.
AI is reshaping not just how we work, but how we think about fairness,

(29:37):
accountability, and justice.
The law isn't just adapting; it's evolving right alongside the technology.
Well said.
On that note, I think we've covered a lot of ground today.
From intellectual property quagmires to the tangled mess of bias and beyond.
Sonali, I've got to say, it's always a pleasure having these conversations.

(29:59):
You somehow make legal headaches feel manageable.
Likewise, Erick.
And, hey, here's hoping our listeners are now a little more prepared for the whirlwind that is AI and the law.
A lot to think about, but also a lot to look forward to.
Absolutely.
And that's all for today, folks.
Thanks for joining us, and we'll see you next time.
Until next time, take care!