Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:53):
Welcome to the AI Law Podcast.
I am Erick Robinson, a partner at Brown Rudnick in Houston.
I am Co-Chair of the firm's Patent Trial and Appeal Board Practice Group.
In addition to being a patent litigator and trial lawyer, I am not only well-versed in the law of AI but also have deep technical experience in AI and related
(01:19):
technologies.
As always, the views and opinions expressed in this podcast do not necessarily represent those of Brown Rudnick.
This podcast is presented for informational and educational purposes only.
Today, I am here with AI expert Dr. Sonali Mishra!
Thanks for having me, Erick!
Absolutely!
(01:39):
So, let's talk about AI training-what it actually means and why these systems need oceans of data to work.
I mean, we're talking billions of text files, images, audio clips... everything.
It's like feeding everything ever written to create something that can, uh, learn patterns and replicate human language on steroids.
Exactly.
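To make "learn patterns" concrete: at bottom, training nudges a model to better predict the next token of whatever text it is fed, repeated across the whole corpus. Here is a minimal, hypothetical sketch of a single training step in PyTorch-the toy model and text are stand-ins for illustration, not any particular company's system.

```python
# Minimal, hypothetical sketch of one language-model "training step".
# Real systems repeat this billions of times over web-scale corpora.
import torch
import torch.nn as nn

vocab_size, embed_dim = 256, 64            # toy sizes; real models are far larger
model = nn.Sequential(                     # stand-in for a transformer
    nn.Embedding(vocab_size, embed_dim),
    nn.Linear(embed_dim, vocab_size),
)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

text = b"Whatever text ends up in the corpus, licensed or not."
tokens = torch.tensor(list(text), dtype=torch.long)

inputs, targets = tokens[:-1], tokens[1:]  # task: predict each next byte
logits = model(inputs)                     # (seq_len, vocab_size) scores
loss = loss_fn(logits, targets)            # how badly the model predicted
loss.backward()                            # push weights toward the data's patterns
optimizer.step()
```

The point that matters for the copyright debate: nothing in that loop inspects where `text` came from or what license it carries.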
(02:01):
And this is where it gets messy, right?
Because a lot of that 'ocean of data' is, well, copyrighted.
AI systems can't exactly distinguish between public domain works and copyrighted ones when training.
They just absorb it all.
Well, yeah, they're not picky.
But how did we even get to this point?
I mean, copyright laws have always struggled to keep up with technological
(02:24):
innovation.
You've got things like photocopiers, VCRs, even MP3s pushing the limits.
And now, AI's the next wave.
Right, and unlike VCRs or MP3 players, AI doesn't just copy-it creates.
AI-generated content raises totally new questions about ownership.
Like, if an AI writes an original poem, who owns it?
(02:47):
The programmer?
The user who gave the prompt?
The AI itself?
I think we both know it's not the AI itself.
At least, not until they let robots have legal rights, which... let's hope doesn't happen anytime soon.
But that's the thing-countries across the globe are tackling this issue differently.
The EU, for example, is playing it conservative, pushing for stricter
(03:10):
copyright protections even in AI training datasets.
Meanwhile, you've got countries like Japan or Singapore trying to be more permissive to encourage AI innovation.
And then there's the U.S., where the Wild West approach always seems to dominate until something forces regulation.
OpenAI has been pretty vocal about lobbying for exemptions, though.
(03:33):
And that could reshape the entire framework of copyright law.
And whether those exemptions actually help or hurt creators is still... well, very unclear.
What about the musicians, artists, or journalists whose work ends up in an AI training set?
They might argue this isn't just innovation-it's exploitation.
Ah, the
(03:56):
eternal balance-progress versus protection.
And this isn't the first time we've seen this catch-22 in copyright law.
It's just the stakes are much higher now with AI potentially rewriting the creative and legal playbook.
And OpenAI's role in the U.S. debate might be the tipping point.
(04:17):
But whether it's a game changer or just another chapter in this saga-
Yeah, that remains to be seen.
So, as we were saying, OpenAI's role in this debate could really be the tipping point.
They're pushing hard for copyright exemptions, arguing that without them, the U.S. might lose its edge in the AI race to countries with more flexible copyright
(04:41):
standards.
It's a bold move, but also kind of predictable when you think about it.
It's a fair point.
I mean, when you look at the global landscape, countries like China seem to have far fewer restrictions on training data.
If the U.S. wants to keep up, let alone lead, you could argue this flexibility is critical.
Right.
(05:02):
And OpenAI's not just talking about staying competitive for the sake of it.
They're saying it's crucial for driving AI innovation-things like better language models or breakthroughs in natural language processing.
And innovation, theoretically, benefits everyone, right?
Well, theoretically.
(05:25):
But there's a catch-they're relying on concepts like 'fair use' to make this work.
Fair use is already, uh, pretty murky when it comes to AI training.
Does using copyrighted works in training datasets really fit within those parameters?
It's murky, for sure.
But OpenAI's argument hinges on the idea that AI needs to be trained broadly to
(05:49):
learn patterns at scale, not to replicate creative works directly.
They're walking a fine line between saying, "We're, we're not stealing," and, "We need this data to function."
And the tech industry isn't united on this either.
(06:10):
Some companies back OpenAI's call for looser restrictions, but others-especially the ones with big media divisions-seem to be really wary of carving out these exemptions.
Exactly.
You've got companies on both sides of the equation.
A company producing AI tools might favor these exemptions, while a company
(06:31):
producing, I don't know, blockbuster films or chart-topping songs, they're likely to push back hard.
It's a classic case of conflicting interests within the same industry.
At the same time, OpenAI's messaging feels like a pretty confident gamble.
They're basically betting that the broader economic benefits-better AI
(06:53):
tools, job creation, keeping up with competitors-will outweigh the concerns about, well, copyright harm.
But that's the question, isn't it?
Do these purported benefits justify the potential fallout?
Because if the courts don't buy this justification, it could mean harsher restrictions across the board, not just for OpenAI but for AI development in the
(07:16):
U.S.
as a whole.
So, while OpenAI is presenting these potential benefits as a winning argument, let's switch gears and consider the concerns coming from content creators.
They see their work-whether it's writing, art, music-being absorbed into AI training datasets, and their first reflex is to ask, "What's in it for me?"
Exactly.
(07:40):
And their second question is probably something like, "How do I even stop this if I wanted to?" Copyright law wasn't exactly designed to handle situations where millions of works can be ingested in a split second.
Right.
And for a lot of creators, it feels like they're losing control over their
(08:01):
intellectual property.
Even if the AI isn't reproducing their work verbatim, it's still fundamentally built on their creativity.
That's a hard pill to swallow for some.
Especially when you're talking about industries like publishing or music, where profits are already razor-thin.
Throw in AI, and there's this fear-maybe even a legitimate one-that it could wipe
(08:22):
out jobs or diminish the value of what they create.
It's more than just fear.
Look at the lawsuits already cropping up against AI companies.
Illustrators, musicians-they're saying, "Hey, this isn't just innovation.
This is exploitation." And they're asking courts to weigh in.
And the courts definitely have their hands full.
(08:44):
I mean, how do you measure something like 'transformative use' in the context of AI training?
Is it enough to say, "Well, the AI creates entirely new things, so it's fine"?
Or does it come down to compensation?
Compensation is such a sticking point.
Some people argue for revenue-sharing models-where creators get a cut for their
(09:06):
data being used.
Others are talking about outright licensing agreements.
But those ideas are, well, let's just say they're not exactly popular with the AI companies pushing to minimize restrictions.
Of course they're not.
If you tell companies they need to pay every single content creator included in their training datasets-
Which could be millions-
Right, it could become
(09:29):
financially and logistically impossible.
So instead, they're arguing that the benefits to society, to innovation, outweigh the need for individual compensation.
But that's where the debate keeps circling back.
Because if creators feel they're being exploited with no upside, it's hard to get their buy-in.
And if they manage to push for tighter regulations, the entire AI development
(09:54):
process could hit a major roadblock.
That's the gamble, though, isn't it?
Either create a system that works for everyone-or watch the legal and creative pushback slow down innovation.
The stakes couldn't be higher here.
It's clear that creators are asking tough questions-and demanding answers.
That brings us to the legal backbone of this whole issue: U.S.
(10:18):
copyright law.
It's been evolving for centuries, adapting-sometimes begrudgingly-to new technologies.
With AI, though, it's like the law is scrambling to keep up, especially when we look at how training datasets are being handled.
Right.
Copyright law, at its heart, is about protecting original expression, not ideas
(10:41):
or facts.
That's why the idea of training AI on creative works-it's so legally provocative.
The AI can't steal someone's thought process, but if it trains on their book or artwork...
Exactly.
And then you've got the concept of 'fair use.' Courts have wrestled with this for years-it allows unauthorized use of copyrighted material in certain cases.
(11:06):
It covers things like criticism, commentary, teaching.
But AI training?
That's uncharted territory.
And the 'fair use' test isn't exactly straightforward either.
Judges look at factors like the purpose of the use-is it transformative?
Does it harm the market for the original work?
Training an AI doesn't fall neatly into any of those categories.
Which brings us
(11:30):
to court cases that are shaping this debate.
Take Google Books, from, uh, a few years ago.
Google scanned millions of books and made them searchable-it wasn't just republishing content outright.
The courts ruled in favor of 'fair use', but AI training adds complexity that
(11:54):
wasn't present in that case.
Right.
In Google's case, they weren't creating new books.
But if AI models are generating poems, paintings, or even software based on training data?
That starts to look a lot less like fair use and more like appropriation-or at least, that's what plaintiffs are arguing now.
And that's honestly the crux of the
(12:16):
debate.
We're not just talking artistic works here-it's data from news outlets, scientific papers, product manuals-things AI needs to 'learn' from.
So, should the law carve out explicit exemptions for training data?
And those exemptions would sit outside fair use entirely, right?
Like a new legal category specifically for AI training.
(12:40):
That's an ambitious shift-and definitely a polarizing one.
For businesses leaning heavily into AI, those exemptions could be a game-changer.
And for content creators, it might feel like writing off their rights entirely.
This is why lawmakers are treading so carefully.
It's a balancing act between fostering innovation and protecting individual
(13:03):
creators.
Neither side is gonna get everything they want.
Policy changes here are a double-edged sword.
Stricter protections could stifle AI breakthroughs, but overly broad exemptions might crush creative industries.
Shareholders might love one side more, but courts have to look at the broader
(13:23):
implications.
And speaking of broader implications, it's not just courts taking this on.
Agencies like the U.S.
Copyright Office are reassessing their rules, and Congress is, well, taking its sweet time figuring out if this needs to be legislated at all.
It's not exactly a swift-moving machine.
And it can't move too slowly, can
(13:47):
it?
Because the speed of AI's progress is, well, kind of jaw-dropping.
If the U.S. doesn't establish a workable legal framework soon-
We risk regulatory chaos-or worse, falling behind other countries entirely.
So, if the U.S. doesn't act quickly, we risk falling behind-and that's not just speculation.
(14:09):
Take a look at what other countries are doing.
The EU?
They're building on their Digital Single Market Copyright Directive to craft an AI regime that's, let's just say, not exactly lenient.
It's a more rigid approach compared to what we've seen here, but it's setting a precedent-and fast.
Right.
(14:30):
The EU's directive is pretty strict.
They want AI systems training on copyrighted works to get explicit permission, which...
I mean, sounds good for creators but not great for innovation.
Think about the sheer volume of licensing that would require.
It's a bureaucratic nightmare.
And then there's Japan, where-surprise, surprise-they've taken a much more, uh,
(14:55):
pragmatic approach.
Japan's stance is that as long as the end product doesn't compete directly with the original work, AI training is fair game.
That's interesting, though, because Japan's exemptions are partly why their AI sector is thriving.
They're basically saying, "We're not gonna stop you from innovating, as long
(15:16):
as you aren't outright copying." It feels... well, balanced, doesn't it?
Balanced, perhaps.
But then there's China, where the government's strategy is less about balance and more about brute force.
They've adopted policies that, essentially, prioritize state-driven AI
(15:36):
growth above all else.
Copyright?
It's almost a, a secondary concern.
Right.
And that's partly because China sees AI as a national competitive advantage.
Their copyright laws are flexible when it serves government ambitions, which gives their companies a, let's face it, massive head start.
And that brings us back to the
(16:00):
U.S., where-honestly-it's kind of the Wild West.
We've got a cacophony of policies, court cases, and lobbying efforts but no cohesive strategy.
Meanwhile, you've got countries like Singapore and South Korea, quietly modeling their approach somewhere between Japan and China.
It's funny how far apart
(16:23):
these approaches are.
But at the same time, they're all pushing for influence in global AI governance bodies.
The World Intellectual Property Organization, for example, seems like it's trying to play referee in this chaotic game.
True, but governance isn't easy when the players have such different priorities.
The EU wants stricter rules; China is likely lobbying for standards that align
(16:47):
with its broader policies.
And the U.S.?
Well, we mostly seem focused on holding on to our tech dominance.
It makes you wonder if we'll ever see something like international copyright agreements for AI-kind of like the Berne Convention but tailored for training data.
Wouldn't that be game-changing?
Game-changing, sure.
(17:09):
Realistic?
That's, uh, another story.
You've got creators, lawmakers, and tech giants globally-all pulling in different directions.
And let's not forget, this isn't just about regulation.
It's about competitive advantage.
Speaking of China, what happens if-or maybe
(17:30):
when-they take the lead in AI?
We just touched on how their flexible copyright laws give them an edge, but pair that with their access to massive amounts of data and government-backed initiatives, and it's clear they're setting themselves up to dominate.
Exactly.
(17:50):
And it's not just population size.
They have a massive ecosystem that operates-let's be honest-outside the ethical constraints we see in the West.
AI developers there don't have to worry about lawsuits over data breaches or informed consent.
It's efficient, but ethically... fraught.
"Fraught" is one way to put it.
(18:12):
And then there's the fact that China's AI policies are state-controlled.
Unlike the free-market chaos we've got here, their government steers the ship outright.
They're not just competing against private companies-they're competing as a nation.
And that centralization gives them a huge edge.
(18:33):
State-backed AI projects get unlimited resources and direction.
Compare that to, say, the U.S., where private companies drive progress but also constantly butt heads with regulators and... each other.
Right.
It's like we're playing chess, and they're playing... I don't know, blitzkrieg.
(18:55):
The question is, what does that do to global competition?
If China doesn't just lead technologically but also controls how AI shapes industries worldwide...
It's game over for a lot of industries.
I mean, imagine AI innovation bottlenecked by policies crafted in Beijing.
They'd dominate not just the technology but the global standards for AI.
(19:17):
And for the West, catching up would get exponentially harder.
And it's not just about economics either.
The geopolitical implications are staggering.
A world where China leads in AI could see their values baked into international tech.
Privacy?
Free expression?
(19:38):
Those concepts might take a permanent backseat.
And let's not forget innovation itself.
China's ability to move quickly-due to fewer restrictions-could stifle creativity elsewhere.
Countries still worrying about legal boundaries might struggle to keep up, while China shapes the playing field.
There's also the matter of ethics,
(19:59):
or maybe we should say the lack thereof.
Training AI without privacy considerations or copyright limits might produce results faster, sure.
But what does that cost on a human level?
And can the rest of the world afford to care?
That's the tough part.
You can't outpace a country that views your ethical debates as weaknesses.
(20:23):
It creates this perverse incentive to, well, turn a blind eye.
So, we're left in a bind.
Compete by bending the rules, or stick to them and hope innovation prevails anyway.
Meanwhile, China plows ahead with fewer hindrances.
The global balance of power might depend on how other nations respond to
(20:44):
this.
Alright, before we delve further, let's zoom out for a second.
We've been talking about these global dynamics-China's rapid advancements, ethical challenges, and centralized approach.
Now, one big factor in this race is copyright exemptions.
Let's talk about the ripple effects here.
(21:04):
The first argument that gets thrown around a lot?
Innovation.
Exemptions could unleash a wave of AI development, driving economic growth.
I mean, that's the theory, right?
It is.
And to some extent, it's already happening.
Think about all the industries AI is disrupting-education, healthcare, even
(21:28):
agriculture.
With fewer copyright restrictions, companies could train AI faster, integrate it into tools more effectively, and, well, compete globally.
Exactly.
It's a multiplier effect.
Faster innovation means better tools, better tools mean higher productivity,
(21:48):
and higher productivity fuels economic growth.
Companies save time, businesses scale faster-everyone wins.
Not everyone.
What about the creative industries?
Those jobs that rely on copyright protections to, you know, stay afloat?
If we're saying economic growth at large is the goal, someone's always paying the
(22:11):
price for it.
Fair point.
And that's where the job market gets murky.
On one hand, AI could create entirely new fields, new opportunities.
But on the other hand, the risk of displacement-especially in content creation industries-is very real.
Very real, and very immediate.
(22:33):
AI can churn out marketing copy, generate code, compose music...
If it's cheaper and faster, businesses will opt for that over people.
So, does economic growth offset those job losses?
Or are we just trading human labor for machine efficiency?
Look, the productivity
gains are undeniable.
(22:54):
AI is already improving efficiency across sectors, whether it's automating repetitive tasks or solving problems at scales humans can't even touch.
But the issue is how those gains get distributed.
Does every sector benefit-or just the big players?
Right, and that's where
monopolization comes in, doesn't it?
(23:15):
The companies with the resources to develop advanced AI might just dominate entire industries, leaving smaller businesses-or even governments-struggling to keep up.
The playing field isn't exactly level here.
And the irony is, copyright exemptions could actually cement that dominance.
More data equals more powerful models.
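That "more data equals more powerful models" intuition is usually described as a power law in the scaling-law literature. Here is a hypothetical sketch that borrows the commonly cited L = E + A/N^a + B/D^b shape but plugs in invented constants, just to show how loss keeps falling as the training corpus grows:

```python
# Hypothetical illustration of a data scaling law: model loss falls as a
# power law in dataset size D. Constants are invented for illustration;
# only the functional form follows published scaling-law work.
E, B, beta = 1.7, 410.0, 0.28  # made-up irreducible loss and data-term constants

def loss_from_data(d_tokens: float) -> float:
    return E + B / (d_tokens ** beta)

for d in (1e9, 1e10, 1e11, 1e12):  # 1B -> 1T training tokens
    print(f"{d:>8.0e} tokens -> loss ~ {loss_from_data(d):.3f}")
```

Each tenfold increase in data buys another slice of capability, which is exactly why access to the biggest corpora compounds into a competitive moat.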
(23:38):
And who's got access to the most data?
The tech giants pushing for these exemptions in the first place.
It's a feedback loop.
They get broader exemptions, gain even more competitive advantages, and leave everyone else scrambling to catch up.
How do you balance that with the need to foster innovation overall?
Well, that's
the tightrope policymakers have to walk.
(24:00):
Too much protection stifles AI development; too little risks undermining entire industries and, frankly, public trust.
There's this myth that you can just legislate your way into a perfect compromise, but...
history shows that's more wishful thinking than reality.
And the longer it
(24:21):
takes to strike that balance, the more entrenched the big players become.
It's like trying to play referee after the game's already started-and the score is something like, I don't know, 10-0.
Yeah, except in this game, the referees are still figuring out the rules.
Meanwhile, everyone's arguing about whether the benefits are actually worth
(24:43):
the risks.
And that's not a question with an easy answer.
We were just talking about balance-finding that midpoint between fostering innovation and preventing monopolization.
But let's dig deeper into another challenge that complicates this balance: transparency in AI training.
For starters, no one really knows exactly what data these models are being trained
(25:08):
on-not even the developers half the time.
It's like trying to audit a black box, but the box just spits out Shakespeare one day and bad karaoke lyrics the next.
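For contrast, here is the kind of record that would make such an audit possible: a hypothetical per-document provenance manifest. The field names and source URL below are invented for illustration, not any lab's actual schema.

```python
# Hypothetical training-data manifest entry: the kind of provenance record
# auditors ask for. Field names and the example source are invented.
import hashlib
import json

def manifest_entry(source_url: str, text: str, license_tag: str) -> dict:
    return {
        "source": source_url,
        "sha256": hashlib.sha256(text.encode()).hexdigest(),  # tamper-evident ID
        "license": license_tag,
        "chars": len(text),
    }

entry = manifest_entry("https://example.com/essay", "Some ingested text...", "unknown")
print(json.dumps(entry, indent=2))
```

Keeping one such line per ingested document is technically trivial; the point of the transparency debate is that, today, nothing obliges anyone to keep or publish it.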
Right, and that lack of transparency isn't just a technical headache-it's actively problematic.
Imagine your art or your writing ends up in a training dataset, and no one even
(25:33):
asks you.
Or worse, you only find out after the AI starts mimicking your style.
Feels like a breach of trust, doesn't it?
It does.
And it's compounded by bias.
These models, as advanced as they are, can reflect-and amplify-the biases in their training data.
You feed it skewed perspectives, you get skewed outputs.
(25:56):
And the scary part is, it's tough to spot unless you're looking for it.
And who's really looking for it?
Most companies just wanna ship the next big AI product, not sort through millions of training samples for implicit bias.
But the consequences can be massive.
You know, like misinformation, reinforcing stereotypes...
(26:16):
even ethical dilemmas around AI-generated content being seen as fact when it's just, well, wrong.
Exactly.
Now throw in the ethical dilemma of using copyrighted material without permission.
Even if it's technically 'fair use,' it feels exploitative, especially when the original creators don't see a dime from the process-or even know their work was
(26:42):
used.
It's a moral gray area that could have legal ramifications down the line.
And it goes deeper than just legality, doesn't it?
Let's say the AI generates something totally original-or at least it looks original.
How authentic is that piece if the foundation is someone else's work?
It's like a remix of a remix, but without credit or compensation.
So that's the big
ethical quandary:
(27:08):
How do we balance fostering AI innovation with protecting the integrity of original work?
Because right now, it feels like we're leaning too far towards utility and turning a blind eye to the ethical cost.
Some people are calling for frameworks-basically, ethical guardrails for using data in AI.
(27:30):
Licensing, transparency mandates, even limitations on what types of data can be used.
But creating that kind of standard isn't easy.
Who decides?
And who enforces it?
And even if we could figure that out, would it truly be
enforceable?
The global nature of AI means regulations in one country might not mean anything
(27:53):
elsewhere.
It's an enforcement nightmare, but that doesn't mean the ethical concerns should be ignored, either.
Ignoring them isn't an option.
If we don't address them now, we risk setting a precedent where AI development always comes at the expense of human creators.
That's...
well, kind of dystopian, isn't it?
It is.
(28:15):
And solving it might require bold compromises-or, you know, entirely new approaches to compensation and collaboration for creators.
Building on what we discussed earlier about bold compromises and new approaches for collaboration, one possible solution is implementing compensation models for copyright holders-like licensing agreements.
The idea is pretty straightforward:
(28:38):
AI companies would pay creators to use their works in training datasets.
Sounds simple, doesn't it?
Simple maybe, but in practice?
It's a logistical nightmare.
Licensing millions of individual works isn't exactly efficient.
And let's not forget, when you're dealing with datasets this massive, creators
(29:01):
could get lost in the shuffle completely.
Is there even a fair way to distribute royalties?
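A hypothetical back-of-the-envelope calculation shows why that question is so hard. Every number below is invented for illustration; the orders of magnitude are the point:

```python
# Hypothetical pro-rata royalty math over a web-scale corpus.
# All figures are invented for illustration only.
licensing_pool = 100_000_000       # imagine $100M set aside for creators per year
works_in_corpus = 250_000_000      # individual documents, images, tracks

per_work = licensing_pool / works_in_corpus
print(f"Equal split: ${per_work:.2f} per work per year")        # $0.40

# Weighting by each work's share of training tokens shrinks it further
# for a typical short document in a multi-trillion-token corpus.
corpus_tokens = 3_000_000_000_000  # 3T tokens
doc_tokens = 2_000                 # one typical article
doc_share = doc_tokens / corpus_tokens
print(f"Token-weighted: ${licensing_pool * doc_share:.4f} per year")  # ~$0.07
```

Under assumptions like these, individual payouts are pennies while the administrative machinery to track and pay hundreds of millions of rights holders would be enormous-which is the "lost in the shuffle" worry in concrete terms.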
Yeah, fair point.
But if licensing isn't feasible, what about focusing AI training on public domain works or materials under something like Creative Commons licenses?
It's already cleared for use, and it could sidestep a lot of the legal
(29:24):
headaches entirely.
It could, but there's a catch.
Public domain and Creative Commons materials are limited.
Relying solely on them could potentially stagnate innovation because, well, the dataset wouldn't be as diverse.
AI thrives on variety-the broader the scope, the better the output.
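To picture the "already cleared for use" idea, here is a minimal, hypothetical pre-training filter that keeps only permissively licensed records. The license tags and metadata fields are assumptions for illustration, not any real dataset's schema.

```python
# Hypothetical pre-training filter: keep only works whose license metadata
# marks them as public domain or permissively licensed (tags invented).
ALLOWED_LICENSES = {"public-domain", "cc0", "cc-by", "cc-by-sa"}

corpus = [
    {"id": "doc-001", "license": "cc-by", "text": "..."},
    {"id": "doc-002", "license": "all-rights-reserved", "text": "..."},
    {"id": "doc-003", "license": "public-domain", "text": "..."},
]

cleared = [doc for doc in corpus if doc.get("license") in ALLOWED_LICENSES]
print([doc["id"] for doc in cleared])  # ['doc-001', 'doc-003']
```

The filter itself is one line; the catch just discussed is that `cleared` is a much smaller, less diverse corpus than the open web.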
(29:45):
Right.
So maybe it's about striking a balance-focus on freely available data but expand it responsibly where necessary.
And that's where government regulations might come in, yeah?
Some kind of framework that incentivizes innovation but doesn't completely bulldoze over creators' rights.
That sounds good in theory, but government
(30:10):
intervention can be, um, let's say, slow-moving.
AI's evolving so fast, regulations might end up outdated before they're even implemented.
What about industry-led initiatives instead?
Companies could create their own standards for ethical and fair use-kind of like self-regulation.
Self-regulation is an option, sure.
(30:33):
But history shows it's... inconsistent.
Companies aren't exactly famous for policing themselves when profits are on the line.
And even if a few industry leaders set the bar high, there's nothing stopping smaller players from ignoring those guidelines entirely.
That's fair.
(30:54):
But it doesn't have to be an either-or situation.
Maybe the answer is a combination-a bit of government oversight, a bit of industry responsibility.
Plus, you could foster AI that assists creativity rather than replaces it.
Collaboration over competition, you know?
Collaboration sounds nice, but the
(31:15):
challenge is convincing companies and creators to trust one another when the stakes are this high.
Everyone's worried about losing control of their piece of the pie.
True, but the stakes are precisely why we can't just throw up our hands.
Whether it's licensing, open datasets, regulations, or industry standards,
(31:35):
something's gotta give.
Innovation can't come at the cost of dismantling creative industries.
And it might take a bit of all those approaches to find the middle ground.
Compensation, public interest, ethical assurances-they're not mutually exclusive ideas.
They could work in tandem if, and it's a big if, the stakeholders are willing to
(31:56):
cooperate.
So, if we're talking about finding middle ground-and let's face it, we have to-we're looking at this messy overlap between innovation, copyright, and creativity.
It's clear that no one's happy with the current setup.
Creators feel sidelined, lawmakers are racing to catch up, and AI companies are
(32:19):
just trying to stay ahead of the curve.
How do we bridge that?
Exactly.
And it's not just about laws, right?
It's about values.
Innovation's important, sure, but if we're steamrolling creators or skirting ethics in the process, how much progress is too much?
Right, and the stakes go far
(32:39):
beyond copyright.
We're talking about fundamental questions of ownership, fairness, and trust.
These issues, they're not going away.
In fact, as AI advances, they're just gonna get, well, messier.
That's why businesses, creators, and policymakers need to start preparing now.
(33:00):
Understand the trade-offs, build frameworks, and, honestly, start asking tough questions.
Because the worst-case scenario?
We wait too long to act and find out it's too late to fix.
And for our listeners, whether you're a lawyer, creator, entrepreneur, or just curious about how AI touches every part of life-a debate this complex needs engagement from
(33:23):
everyone.
Stay informed, voice your perspectives, and don't, uh, don't underestimate the influence you can have.
Exactly.
The more we talk about AI, copyright, and innovation, the closer we get to
(33:45):
solutions that, hopefully, work for everyone.
It's a dialogue-an ongoing one-that needs everyone at the table.
And with that, I think we've covered a lot of ground today.
As always, it's been fascinating wading through this legal and ethical maze.
Couldn't agree more.
And thank you to all our listeners for joining us on this journey.
Yeah, we
(34:06):
appreciate you tuning in.
On that note, we'll close out here-until next time, keep asking the big questions and challenging the status quo.
Take care.