Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:01):
Brenda, welcome.
Great to have you on the podcast.
Thanks so much for inviting me.
I'm glad to be here.
Why do we need AI lawyers? Or, maybe asked another way, is there something distinct about the practice of law around AI?
I think there is, obviously, since I claim that I am an AI lawyer.
I obviously hope that there's a need.
(00:21):
It may be that over time, and this may be part of what we talk about as we go, that it stops being sort of its own specific thing, and every lawyer needs to have enough understanding about AI to be able to apply it to their particular specialty.
But at least for now, AI, even though it seems like it's in everything, is not yet ubiquitous in every context in a way that is familiar to most lawyers practicing, you know, the traditional areas, whether corporate law, individual civil law, criminal law,
(00:51):
whatever that might be.
So we, I think, benefit from having some people, my team, other lawyers in other firms, who are really focusing in on what the new risks are that this kind of technology is creating for companies, for individuals, for society.
And what are the ways that we can address those risks by developing best practices, industry standards, federal regulations, and of course, ultimately, legal frameworks and
(01:17):
laws for that.
So yes is the short answer to all of that.
What got you personally to move in this direction?
What was your path to becoming an AI lawyer?
Well, I have a background in privacy, which is also not a centuries-old area of law, more like a decades-old one, with the idea of privacy relative to digital privacy, the
(01:44):
internet.
Obviously, we've had the concept of privacy before computers.
But the generation of data that has happened since the advent of the internet, and then the mobile world, and then the internet of things, has, as I think everyone acknowledges, just really exploded the amount of digital information that is out there, and the idea that much of that is very revealing about us as
(02:09):
people, as families, as groups, as organizations.
And how do we sort of address that? It's created this whole new ability to access information and make predictions and understandings. So privacy law, sort of digital privacy law, came into being. How are we going to put protections around that?
And that's where I started. Not quite 10 years ago now, maybe eight or nine years ago, I was at the Future of Privacy Forum, and we were looking at the collection of data by industry, by application,
(02:42):
and all kinds of different ways and thinking about those risks.
And one of the things people were talking about back then was the era of big data; that was the term people were using.
And the idea was, now that we have all this big data and our processing power is increasing, we now have machine learning.
So what does that mean to all of us?
All of us who are privacy- and data-focused people: what is this AI thing that we're hearing about, and what is that gonna mean to us?
(03:07):
And again, AI had been around a long time, but it was new and different in this digital age.
So I had the opportunity to start really digging into that in the early days, early days relative to our recent timeframe, and think about what risks machine learning and AI create in a digital, data-driven, hopefully privacy-centric approach, and I started building out what that might look like and what that might mean.
(03:32):
And so I sort of drifted away from the privacy focus into the AI focus, and I've been here ever since. So it's been a journey.
Absolutely.
And so I believe it's now six years since you co-founded Luminos.Law as one of the first, if not the first, AI specialist law firms. Let me just first ask you generally: how has the legal landscape around AI changed since then?
(03:55):
Well, just to be clear, I was not one of the co-founders.
I was one of the early partners in it, but it was co-founded very uniquely, and still fairly uniquely, by a partnership between a lawyer and a data scientist. And they brought to us that whole concept, that a needed perspective, a new way of thinking about law in this context, is to have that technical expertise on board
(04:16):
as well.
I joined within a couple of years of that.
And that was our whole approach. All we're doing is looking at the machine learning and AI aspects, and of course, a couple years later, generative AI jumped onto that short list of things we were focused on.
Again, what are the risks?
What are the concerns?
(04:37):
And what are both the technical and the policy and legal ways that we can help companies or individuals or organizations address those risks?
Do you think it will become more common for law firms to partner with data scientists inthat way?
I don't know.
(04:58):
I think it will possibly, and maybe already is, becoming more common for law firms to pull inside the organization people with a certain amount of technical expertise and
capabilities.
So I don't know how many are going to actually pull it up to the partnership level and make that their focus, because, you know, for big law and firms like that, AI is still going to remain
(05:18):
just one of the things they do even if it's across many applications and use cases.
But I do already know others that have the whole data scientist idea, of either contracting with or bringing on board people like that in particular.
So law firms tend to be a little traditional and change a little slowly sometimes.
(05:40):
But ours, our startup firm, was breaking that mold very obviously.
And then ZwillGen, which acquired us recently, has also sort of been a groundbreaker in terms of technology law, generally privacy and security law, and now AI law. So they're more than willing, I think, to look at things in a new and different way and figure out what the future looks like for legal services on things that require this
(06:03):
level of technical understanding.
Yeah, so maybe let me ask it a different way.
What's the benefit that you get as an attorney engaging with data scientists as part of that legal practice?
Well, I get a real daily education, for sure, about a lot of things that are pretty interesting. I'm a technically savvy lawyer, but at the end of the day, I'm still a lawyer.
(06:29):
I'm not a programmer.
I'm not an engineer.
I'm not a data scientist.
I'm not even a statistician.
Not saying that's minimal, just saying that's been around even longer.
And so, down at the very detailed level, I still don't, you know, know how to do those things.
I know more about them than many people do or even than many lawyers do.
(06:50):
But when we want to talk to a company about the risks of certain kinds of models or the risks of certain kinds of predictive systems that they're going to incorporate into their business functions, I can go so far, but there is a point at which their technical people are asking the question: okay, but what about this, or what does that mean to me, or
(07:12):
how do I reach that sort of policy ideal or goal?
And it's really, it's not just helpful, it's invaluable to have the technical people on my side who can actually answer that question.
Who can say, this is what that means; who can, you know, using technical language or statistical analysis or whatever else, say, this is what that means.
(07:33):
So I learn more every day.
I get a lot of expertise from being with them, but I will never, you know, unless I go back to school, I will never be a specialist in any of those areas.
And since the lawyers on the other side of the table have their trained specialists sitting next to them, it's really
(07:55):
almost essential for me to have one too.
How much of the practice today would you say is the kind of situation where it's fairly clear what the requirements are, and the company is just trying to comply with the legal regime, versus no one's really quite sure exactly what the obligations are, and companies have to decide, all right, what's the level of governance that we need to put into place
(08:23):
in the absence of that clarity?
Well, compared to many other areas of law, I would say we fall entirely into the second bucket: there's no clarity, very little consensus, guidance, or certainty. There are emerging agreements, emerging ideas and expectations, you know, NIST has put things out, other places are too, but even at that level, you know, the current
(08:49):
administration has changed focus to some degree on their desired approach to AI.
And so some of what we felt like was some fairly stable guidance from some of the federalagencies has been retracted.
So to whatever extent we even had certain kinds of certainty, based on a relatively short timeframe of two to three to four years of actions by federal regulators, some of that has
(09:17):
either gone or evolved or is up for grabs, so to speak.
And then out in the court system,
There just hasn't been a lot of time for a lot of cases addressing all the different varieties of things to really make their way through.
So we're starting to get some of that: getting some in biometrics in certain places, getting some in other kinds of, you know, truth and representation of AI, in a couple
(09:43):
of, you know, various states that have taken that on, to not let companies misrepresent what AI can or can't do, to be transparent about it.
Obviously, there's a lot of churn in the intellectual property world.
What are the impacts on copyright, and other questions around these new use cases of data to train these large systems?
But there's just, you know, there just hasn't been a lot of time for the questions to even all be well articulated, much less clearly and definitively answered.
(10:10):
And where there is guidance that's come out, it frequently is in such an early stage that it changes.
You know, we have a law in New York City for audits that is sort of under discussion for potential change.
Colorado passed a law that they're already looking at changing. The EU AI Act is out, and if anybody felt like anything was a stake in the ground or the pin in the map, or
(10:31):
whatever the metaphor is, it was that. And now they've also put things back on the table, to, you know, maybe change how they approach things. So, you know, we try to help companies reach: What is reasonable? What is defensible? What is ethical, in some cases?
What can they do to show consistency? Because we know that, just in law generally, if you could show that you had a plan and you followed it, you documented it, like if you had
(10:59):
some sort of structure, that's always better than not.
So we certainly have helped many companies do that.
And again, there are, you know, down at the detail level, there are things that we can help companies do with audits and testing and things like that to try to get some
clarity.
But at that top level, and at the legal compliance level, there just aren't a lot of clear answers yet.
(11:21):
Yeah, so then how do you help companies understand what it is they should be doing, given that it's not necessarily a matter of compliance?
Well, we do look at, again, the things that are well-established frameworks. So things like consumer protection approaches, things like false advertising, things like product safety.
(11:43):
If we're gonna be talking about autonomous vehicles: we have robots, we have other things that have these AI components now, but they exist in the physical world. So we can look and see how product safety has been applied in the past, how expectations have generally been prioritized, things like that.
So we can take the frameworks that exist.
(12:05):
If we're working in a regulated space like healthcare or financial services, we can look at, again, the expectations and ideas behind what regulators have done, what their underlying perspective and approach has been for how they assess new developments in those areas. And we can, you know, make reasonable predictions that this at least will be, again, a good starting point, a good
(12:33):
way to approach or structure things.
In the financial world, for example, they've had model risk management for statistical models, not specifically AI models, for over a decade. And so, you know, there's a lot of precedent in the world we're operating in generally here, in some ways.
And so we can look at that and
(12:54):
what are the expectations and requirements and infrastructure that they've assessed around financial services models, models that do credit scoring, that make credit offers to people, that approve loan applications, that do know-your-customer, anti-money-laundering, fraud detection. All those kinds of things are existing functions that now
(13:19):
have an AI component, and we can look at how they've been handled before and what our reasonable and rational approach is to how that might look under AI, until we get further clarification from regulators or legislators, either in the US or abroad.
What's the hardest thing for companies to do?
(13:42):
Maybe it's too general to answer definitively, but either the legal issue that you find companies struggle with the most, or the step in doing effective AI assessments and governance that's the most challenging.
I don't know if this is specifically a legal issue, but I think the hardest thing for all of us is just to keep up.
(14:03):
Things are changing so fast.
We had, like I said, we had this idea of big data once systems went mobile in the 2008 to 2010 timeframe; like, the smartphone first came out in 2007 or 2008, I think, and the iPad came out in 2010. And so when computers were on a desktop at home, even if they were laptops and pretty portable,
(14:28):
there was a much more limited sphere of interaction data.
Once that went mobile and we all started carrying these things with us everywhere we went, the amount of data that that generated just literally exploded.
And then we started connecting other devices to the internet and more exploded.
So we had that big data, then we started having better processing power. We got to machine learning.
(14:49):
We were only barely starting to get our hands around that, or ideas around that. And it started being integrated into a number of systems in ways we maybe weren't expecting.
And then we had generative AI.
And we've only had that for, you know, really a couple of years.
And now we have agentic AI that we're trying to figure out and keep up with.
(15:09):
So it's just a lot.
And even the technology companies whose whole focus is on these things have a hard time keeping their policies, keeping their training, keeping their user disclosures and interfaces, keeping their safety procedures, building new incident response plans that include AI beyond just traditional security, figuring out what
(15:35):
safety is going to look like.
So keeping up with all of that is just a challenge.
It's really hard.
And that maybe goes back to your very first question of why we need AI lawyers, or AI anything: because you need somebody who's just thinking about this all day to try to kind of keep up with
what even the new challenges are, much less the answers to them.
(15:58):
What are the most important elements of doing an effective AI audit or assessment?
Well, to back up a step: before you do an audit or an assessment, I think the most effective thing is to have some kind of formalized AI governance.
So even if you're not a technology firm, even if AI is not a part of your outward-facing, customer-facing product, you are probably using AI.
(16:24):
You might be using it in your HR function.
You might be using it just in the tools that your employees use to do their job.
But there's probably some AI in there somewhere. So you need some kind of oversight and functional framework that tells you how you are using AI and where the places are that it creates risks. And if nothing else, your vendors are using AI, and you're having to screen for what AI services
(16:47):
am I being offered and how do I keep myself safe, either technically or contractually, in those relationships.
So putting a little bit of time and attention and resources to that, at the appropriate level for different companies, is the first step.
A part of that attention is figuring out where, and to what extent, you are responsible for the functioning and the performance of certain systems, such that you should be testing
(17:14):
them, auditing them, red-teaming them, monitoring them in some other way.
So definitely that is a key point, but it has to be a part of a larger governance and oversight function.
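To make that inventory idea a bit more concrete, here is a minimal sketch, in Python, of what one entry in that kind of AI-use inventory might capture. Everything in it is a hypothetical illustration: the fields, the example systems, and the needs_audit rule of thumb are assumptions drawn from the discussion above, not a standard schema.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    """One entry in a company's AI inventory: what the system is, who
    provides it, and whether we answer for its outcomes or a vendor does."""
    name: str                      # e.g., "resume screening assistant"
    business_function: str         # e.g., "HR"
    vendor: str | None             # None means built and run in-house
    we_control_outcomes: bool      # are we the owner-operator?
    risk_areas: list[str] = field(default_factory=list)  # e.g., ["employment"]

    def needs_audit(self) -> bool:
        # Rule of thumb from the discussion: take the deep dive (testing,
        # auditing, red-teaming, monitoring) where you are responsible for
        # outcomes and the use touches a higher-risk area; otherwise lean
        # on contractual and vendor-screening protections.
        return self.we_control_outcomes and bool(self.risk_areas)

# Hypothetical inventory entries.
inventory = [
    AIUseCase("resume screener", "HR", vendor="AcmeHR",
              we_control_outcomes=False, risk_areas=["employment"]),
    AIUseCase("credit offer model", "lending", vendor=None,
              we_control_outcomes=True, risk_areas=["financial services"]),
]

for uc in inventory:
    plan = "audit/test/monitor" if uc.needs_audit() else "vendor screening/contract terms"
    print(f"{uc.name}: {plan}")
```

The point of even a toy structure like this is the triage the speaker describes next: a small number of owner-operated, higher-risk models get the deep-dive audit, while everything else gets handled at the vendor and contract level.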
No, it's a very good point, and I can certainly see that.
But then once the company gets to that point of saying, we've got AI governance and we realize that we need to have some sort of systematic testing, whether it's called an audit
(17:44):
or not, there are lots of different tools out there, lots of different vendors out there, lots of different things that people call an audit. So are there things that you think about in terms of what would
make this something serious and effective?
Well, again, probably one of the first threshold questions at that point is: what do we need to audit?
(18:08):
What are the systems that we're using that create the most risk?
Because there's just not time, people, and money to test and look and track everything. And so every company is going to have to determine: where are my highest risk factors? Is it dealing with, you know, people's individual rights, like employment,
(18:29):
financial services, healthcare, education, whatever some of the highly regulated spaces might be.
And if so, those are, you know, almost by definition gonna be some of the places where the risk is higher; i.e., that's why the EU AI Act focuses on those kinds of things.
And then within that, again, what are the functions that we're actually responsible for and in control of?
(18:52):
Is it a vendor-provided service that we just wanna make sure we have some sort of liability protection against?
Or is it something that we are actually responsible for?
And if so, then that's where you take the deep dive and you say: okay, if it's my model, or I'm the owner-operator of a model in a way that makes me responsible for the outcomes,
what are those outcomes and how do I need to do a risk assessment?
(19:15):
Do testing?
What standards am I gonna apply?
What thresholds am I gonna establish?
What's the frequency of my oversight gonna be?
And
to what extent am I doing that in response to potential liability or some other kind of responsibility, to make sure I'm balancing that approach appropriately.
(19:39):
So at the end of the day, that will mean that there are probably a couple of models at any given company that they wanna do a model audit for, to figure out: is it operating correctly? There's an accuracy performance measure; there are also measures for bias; there are measures for safety.
And, you know, just to sort of maybe approach the elephant in the room a little bit.
(20:01):
When I say bias, I'm not just talking about anti-discrimination or protected classes or civil rights or anything like that. Bias is just unequal operation on people in different groups that can be identified by a set of characteristics.
So you don't want that from a business standpoint in many cases.
You don't want bias against people regionally, perhaps, or, you know, by some other characterization or categorization.
(20:24):
If it's not part of the product or service you're offering that it's de facto going to have that sort of uneven attraction and impact for people, you want to figure out and make sure that you're not inadvertently biased against some group or category in a way that, from a business standpoint, is bad for you.
(20:47):
And then, of course, obviously, hopefully for reasons of human rights and other civil liberties and things like that, you also have those kinds of oversights. But it is a functional question from a business point of view regardless.
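To make that definition of bias as unequal operation concrete, here is a minimal sketch of the kind of group-disparity check a model audit might include. The inputs are hypothetical (favorable/unfavorable decisions plus a business-relevant group label such as region), and the 0.8 threshold is purely illustrative, not a legal standard.

```python
from collections import defaultdict

def selection_rates(decisions, groups):
    """Per-group rate of favorable outcomes (e.g., approvals).

    decisions: iterable of 0/1 model outputs (1 = favorable)
    groups:    iterable of group labels, aligned with decisions
    """
    totals, favorable = defaultdict(int), defaultdict(int)
    for d, g in zip(decisions, groups):
        totals[g] += 1
        favorable[g] += d
    return {g: favorable[g] / totals[g] for g in totals}

def disparity_ratio(rates):
    """Lowest group rate divided by highest; 1.0 means perfectly even operation."""
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi if hi else 1.0

# Hypothetical audit data: approval decisions grouped by sales region.
decisions = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0]
regions = ["north"] * 4 + ["south"] * 4 + ["west"] * 4

rates = selection_rates(decisions, regions)   # {'north': 0.75, 'south': 0.5, 'west': 0.5}
ratio = disparity_ratio(rates)                # 0.5 / 0.75, about 0.67
if ratio < 0.8:                               # illustrative threshold only
    print(f"flag for review: uneven operation across regions (ratio {ratio:.2f})")
```

A fuller audit would layer the accuracy and safety measures mentioned above alongside a check like this, with the thresholds and review frequency set in the governance plan the company documented up front.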
Yeah, and as you mentioned, the Trump administration has come in and really pulled back on a lot of the prior Biden administration's efforts on AI, and has taken a very deregulatory
(21:14):
tone.
What are you seeing in terms of how companies are responding to that, in terms of how they're, on their own, thinking about AI governance?
So obviously, our clients to some degree are a self-selected group of people who have identified this as a priority for them.
(21:35):
So they are probably biased.
They may not be representative of the large number of companies that there are out there.
But at least to the extent that I work with companies who are making these decisions, again, many of them see this framed as a business question, a responsible-business operational question, not just a legal compliance question.
(21:57):
There's a lot of overlap, and some aspects might be one or the other, but you want to be able to know that your system is doing what you need it to do in the way that you want it
to do it.
So if you're making credit offers, or you're offering, you know, health diagnoses, or whatever it is that you're doing, and you're now doing it with AI, you want to be
(22:19):
able to verify that the way you're doing it with AI is actually better than what you were doing before, or whatever your other alternatives might be.
And you wanna make sure that it is actually working and getting you the benefit that you probably chose to use AI in order to achieve.
And if you can't demonstrate that, even for yourselves internally, you're sort of operating in the dark.
(22:41):
And then hopefully you can demonstrate that internally, and then obviously your business partners, your third-party vendors, your customers, and regulators are all gonna wanna have a little bit of a stake in: can you demonstrate that you're doing something well?
And then of course, at the end of the day, hopefully, if you're doing it better than the next companies, you know, that's your competitive advantage as well.
(23:05):
We have certainly done audits and certification kinds of things for companies, because they want to be able to show it to potential business partners and say: look, we have a model that does X; lots of people have a model that does X. But we had ours tested, and it does X really well, or it does X to this reliable, dependable, consistent standard.
And we can show that to you.
And maybe the other companies don't have that or can't have that.
(23:26):
So, you know, there's a lot of reasons for it.
And you mentioned a couple of times the European Union AI Act.
We have a model from privacy with GDPR coming out of Europe, and there was a Brussels effect, either directly or indirectly.
It was a template.
Multinationals were complying, even if they weren't based in Europe.
(23:49):
It seems like that's not entirely the pattern that is happening with the AI Act. But I'm curious about your thoughts on its implications globally.
If Europe is not basically setting the tone in the same way, how do we get to some sort of global understandings that companies can respond to, in terms of what the basic legal
(24:18):
regime should be?
Well, I do just want to clarify that this is just my opinion.
If you asked 10 other people on 10 other, you know, episodes of your podcast, you'd probably get 10 other opinions, and all of them would have some, you know, probably defensible rationale behind them.
But I think it really has. I would say we're not at the end of the chain yet, in the sense that the Brussels effect hasn't put us in a here's-how-we're-going-to-operate sort of state.
(24:44):
But it certainly has framed the conversation.
You know, they were the first ones to put something out that was comprehensive, you know, that applied to AI across industries and contexts and use cases.
And so the way that they approached it, if nothing else, the fact that they are using this risk-based approach, prioritizing things that have high risk, prioritizing impacts on
(25:06):
individuals and their human rights: they're now the first people to take, you know, legislative action to lay something down and say, this is the way we're gonna do things, and here's how we're looking at creating those standards.
And they haven't really gotten to all of the details yet about how they're gonna carry out some of those ideals and ideas that are in the act.
(25:29):
They're starting to get some of that guidance out, and now we're gonna have more discussion, I think, around some of it.
But I think, you know, they have already had the Brussels effect, in the fact that nobody else is gonna be able to do a comprehensive law that isn't gonna be compared to the EU AI Act. So it might be the same, it might be different, but it is by definition going to be compared against that. And so, you know, again, if nothing else, they've put a framework out there.
(25:52):
But I do think, to your point, it isn't yet the GDPR in the sense of creating this sort of standardized baseline and operating standard that ripples out beyond just the areas of scope of the law, partly because the law hasn't even gone into effect entirely yet. I mean, parts of it have, but it hasn't fully taken effect yet.
(26:13):
And there isn't all of the guidance necessary to really implement it yet.
And there is a lot of pushback.
I mean, it's hard to make a comprehensive law that affects so much across so many different, you know, aspects of our life and world. And so, you know, it was, what, two, three, four years in the making just to get it to that point.
(26:33):
And generative AI kind of came up in the middle, and they had to really scramble and regroup to incorporate that.
And there was a lot of change in that last year to get it across the finish line.
So the fact that it's still being, you know, seen as somewhat malleable or open to discussion, I think, is reasonable.
Like I said, I think the biggest challenge is just keeping up, and trying to pass a law in a system that takes years to pass and then implement a law is always going to be
(27:02):
playing a little bit of catch up at least for a while.
So we'll see.
We'll see what happens with it.
But, you know, they've taken it on and they've put their mark down, and we're all going to have to deal with that now, even if that mark changes somewhat.
At least they have a legislature that seems to be able to pass things with some regularity, unlike the Congress in the United States.
(27:26):
And we will see there as well. We don't yet have comprehensive federal privacy legislation, and we've been talking about that for a decade, so I'm not going to hold my breath on comprehensive AI legislation.
I'm also not entirely sure that comprehensive AI legislation is really the only or the best answer, or that it's the final answer, because I think AI is going to be working its
(27:50):
way into so many different aspects.
Like I said at the beginning,
we may not need an AI-only focus forever, because eventually people are going to be back to whatever the focus of their specific operations is, and understanding AI and how it impacts that.
It may be that while we need some general, high-level expectations and guidance around anything that incorporates these autonomous learning systems, at the end of the
(28:17):
day, that will never be enough in and of itself.
We're always going to have to have finance-specific, retail-specific, consumer-specific versions of that to really make it have any real value.
How is AI, oh, sorry, no, go ahead, yeah, certainly. Yeah, I've just got one or two more questions. Yeah, no problem.
(28:39):
How is AI affecting your own legal practice?
Well, obviously, since my practice is entirely focused on AI, it impacts it pretty directly in terms of the content of my work.
But to your point about, like, as a law firm: I would say I don't have firsthand knowledge of what other law firms are doing, but I would say we probably are interested in
(29:01):
experimenting and trying things out as much as anybody.
I certainly use ChatGPT a lot.
I do not have it generate briefs for me such that I'm going to have false citations.
First of all, I'm not a litigator, but you know.
The litigators in my firm do not do that, thankfully.
Sure.
I'm surprised at how many of these examples we are seeing, though. The first couple, it was fairly early on, and it was like, okay, people don't get it. But these examples keep coming up, of these hallucinated citations.
(29:28):
Yeah, no, I am surprised in one way, but at the same time, I think if nothing else it's such a great example of how influential, I'm trying not to use the word manipulative, how influential the interaction with these systems can be, because they are designed to be believable, to be certain, to be convincing.
(29:57):
They're not built to be true, but they are built to be convincing.
And so the idea that we continue to preach, over and over and over again: be cautious, double-check specifics, don't trust outputs.
It's literally on the bottom of your screen.
If you're using any of the big commercial LLMs: this system may be wrong, check your results.
And yet, as people, we are just
(30:25):
too easy to trust and relax into something, because it feels right and it sounds right and it just seems so confident.
And again, I use these systems and I know to check things and it's still hard.
It's still hard to do that.
I'll feed in some prompts and get some output, and get something that I think, based on the fact that it's something I have a lot of knowledge of already, is a good output, a
(30:51):
good overview. But I'll still send it on,
because a lot of times it has more technical stuff in it too.
I'll send it to one of those data scientists we were talking about earlier.
saying, like: before I, you know, use this as a conversation starter with a client or use this as a way of framing something, is this right?
Is it representing the technical aspects correctly? Like, it feels right to me, but do you think so?
(31:13):
And almost inevitably, they'll be like, well, not quite.
There's this and there's that.
And so it always makes a great starting point. It's always great for kind of getting things rolling, but
It's never the end point.
It's never the final product.
You have to go back and put in the real expertise and edits and, I don't want to call it spin, the adjustment, into how it really needs to be for accuracy. Especially as a lawyer, if
(31:42):
I'm giving people advice, I have to be confident that everything in it is right, to the extent that I can validate and ascertain that it's right.
And it isn't, it isn't ever completely right.
But it's hard not to be, it's hard not to take it at face value.
Yep.
And how else do you see AI changing legal practice? In the sense of, you know, where's the dividing line between the things that turn out to be easily automated and the
(32:13):
things where there's still real value in having a human, who may be augmented, of course, by the technology, but is still unique?
I mean, you know, it's great for some kinds of research.
And so for legal research: you know, I went to law school after, you know, this is sort of my second career.
So I went to law school, you know, more recently than some of my peers.
(32:35):
And when I was in law school, things were already automated in, sort of, like Westlaw; you could already do searches. And I remember learning how to go find things off shelves in the library and thinking I would never have survived if I had had to be a lawyer, you know, 20 years before I became a
lawyer.
I don't know that I would have survived.
Whereas the lawyers who did learn that way didn't trust, and were uncomfortable with, the search functions.
(32:56):
But I was used to automated things and searching, and understanding how search functions work and what to be cautious about.
And it was just an immensely beneficial tool for me.
And AI has taken that to the next step.
It can provide additional analysis across multiple input sources, as opposed to just case law, for example, or it can give you
(33:18):
additional sort of multi-layer insights or conclusions, or connect things together that you might not have thought of; or you can think of it, and then it can go out and connect it together in a way that you wouldn't have been able to do necessarily, certainly not as quickly as it can.
So, you know, it offers a lot of things for that.
It's just, again, you always have to be careful that you're double-checking everything that it's doing, and you're verifying and validating the things that it says it's pulling
(33:46):
from or pulling on.
But I think it could be a really useful tool for that. I'm not a litigator, but again, I understand that there are some really great tools to speed things up or make things easier, more consistent. Document review and discovery could be a lot more efficient and comprehensive.
(34:07):
There's the old movie cliche of somebody carrying in boxes and boxes and boxes of discovery, because they know they're going to overwhelm the opposition, who won't have enough time for people to go through everything.
Now with an AI, you can.
And again, you're going to have to double-check and validate some things, but still, you're going to get hugely further down that road than you could have otherwise.
(34:29):
So, just all kinds of things that I think it will do and has the potential to do.
Great.
Well, I think we are out of time.
Brenda, let me say thank you. It's been a real pleasure to speak with you.
Great, thanks so much, I enjoyed it.