Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:18):
Hello, and welcome everyone to yet another edition of DRI's Cybersecurity and Data Privacy
Podcast. I'm one of your hosts, Chris Flurry. I'm an attorney
with Ellis & Winters out of Raleigh, North Carolina, and I gotta tell you,
even though it's starting to feel like fall and college football is about to
get started, it's still like one hundred degrees outside here in Raleigh, so
it's pretty crazy. But you know, I'm ready and excited for another
(00:43):
episode. Jeremy, how's everything going for you? I was in Birmingham earlier
this week where it was one hundred and five, so ninety-five, no problem.
I hear that. I grew up in Alabama, so I know all about
one hundred and five and, you know, one hundred and forty percent humidity,
so I have felt it. Today we've got a really cool guest,
(01:04):
Afam Okiki. He's the founder and an intellectual property and technology
attorney at AIO Legal PLLC. That's out of Houston, Texas. Afam holds
a master's in Business Administration, and he's also certified. Afam's practice areas
include IP law, technology, entertainment, sports, data privacy, cybersecurity,
(01:29):
and he counsels clients from all over the world, from early-stage
startups to established companies, on a variety of technology issues related to artificial intelligence,
machine learning, blockchain, cryptocurrency, NFTs, the metaverse, all that
stuff on the bleeding edge of the law. Afam, you know,
brings a unique blend of business acumen and technical expertise to assist his
(01:53):
clients in navigating these unique legal complexities, mitigating risk, and achieving their strategic
objectives. And today, well, Afam is here and he's going to
talk to us a little bit about one of the hot areas of the law,
and that's artificial intelligence. Afam, how you doing? Tell us a
little bit about you. All right, thank you for the introduction. You
(02:15):
know, I'm well. Yesterday was actually my birthday, so happy thirtieth birthday
to me. Happy birthday to you. Yeah. So, the way
that artificial intelligence has been growing all over the Internet and in the world,
(02:37):
it's now become a concern that we need to figure out how to
adapt to these changes in artificial intelligence. So in this
podcast, I'm happy to speak more about artificial intelligence and just how it affects society
(03:00):
as a whole. Well, we're certainly, you know, we'd love to have
you here and happy to talk about that topic. One thing I want to
lead out with, because I always like to lead out with a big, obvious
question, because that's where I am: when we talk about artificial intelligence,
that phrase, artificial intelligence, AI, what do we mean? Like, what are
(03:21):
we talking about when we say artificial intelligence? Right. So the general
definition of artificial intelligence is that it simulates human intelligence and completes certain cognitive functions, like
memory processing, learning, particular tasks, things of that nature. And
(03:47):
what's fascinating about artificial intelligence is not just the general basis of artificial intelligence,
but its subsets, and that includes machine learning. The underpinning technology behind machine
learning is based on neural networks, or deep neural networks, under another subset
(04:14):
of deep learning. And these various subsets make AI substantially stronger
than actual human intelligence, just based on how it can learn from the
data that is presented to certain AI models. And I know for a lot
(04:36):
of us, when we think of AI now, we're thinking of ChatGPT,
but it really goes back further than that, doesn't it? Oh, of
course. You know, artificial intelligence is not something relatively new, and it's not
even in its nascent stages right now. Artificial intelligence was actually introduced in nineteen
(04:59):
fifty-six by John McCarthy, but there hasn't been that much governance behind it.
And now, with the introduction of generative AI with ChatGPT and
things like Google Bard or Meta's LLaMA, these generative AI learning models
(05:19):
now make AI even more powerful than it was decades ago. So when we
use that phrase AI, I mean, would we be accurate in talking about lots
of other things that are, you know, smarter and better than I am,
like, you know, word autocomplete, or Excel doing formulas? Are
(05:44):
these also AI? Or, you know, is there something in the technical aspect
of how it works that makes that definition apply differently? So, artificial intelligence.
That's a great question, by the way. So artificial intelligence is not based
off of a set algorithm. So, for example, if we have a
(06:11):
chatbot where they are expecting a certain response, for example, you say yes
or no, then you get a message back depending on whether you responded yes
or no. That is just basic data processing. But with
artificial intelligence, it incorporates a consciousness, in the sense that it provides answers that
(06:36):
are appropriate to the circumstance. And it also models its language after human
communication, such that it's not some robot that you're talking to, or, for
example, an Excel spreadsheet. It's not just two plus two, but it's
also two plus two, and then where can we go with two plus two?
(07:00):
And what else do you need fromme after I given you two plus
two? That sounds really useful becauseI often don't know the answer to that
question. And so as we thinkabout broader incorporation of AI and chat GPT,
what are some risks for folks asthey incorporate that into their daily businesses?
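[Editor's note: for readers who want the scripted-chatbot versus generative-model distinction discussed above made concrete, here is a minimal Python sketch. All function names and responses are invented for illustration; no real chatbot product or model API is being shown.]

```python
# Toy contrast: a scripted chatbot versus a generative model.
# Everything here is illustrative; no real chatbot or model API is used.

def rule_based_reply(user_input: str) -> str:
    """A scripted chatbot: a fixed lookup table, nothing learned."""
    responses = {
        "yes": "Great, let's proceed.",
        "no": "Okay, maybe next time.",
    }
    return responses.get(user_input.strip().lower(), "Please answer yes or no.")

def generative_reply_stub(prompt: str) -> str:
    """Stand-in for a generative model. A real model samples from a
    learned probability distribution over next words, conditioned on
    the prompt, so its answers are open-ended rather than canned."""
    return f"(model-composed answer to: {prompt})"

print(rule_based_reply("yes"))                        # fixed, scripted answer
print(rule_based_reply("where can we go with 2+2?"))  # falls outside the script
print(generative_reply_stub("What else can we do after 2 + 2?"))
```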
(07:27):
Great question. Businesses now are trying to utilize automation and optimization with
AI, right? We have HR using ATS, applicant tracking systems, to
review resumes. Beyond that limited example, there are other examples that also bring
(07:51):
about certain risks. For example, there's risk of bias; there's data privacy
risk for the personal data that's collected, based off the
data that the AI learning model analyzes; and there's also, you know, intellectual
(08:11):
property concerns, several privacy or transparency concerns. Given that, if the AI language
model processes without any human involvement, then
that incorporates something where you have
(08:37):
to understand exactly where that information is coming from. Right? So, if
you are solely relying on an AI model, or a generative AI model, then there's
a multitude of risks that you run into. We have a recent example;
(09:00):
it's not in the business sense, but more so in the legal arena,
where there was a New York lawyer who used ChatGPT to file legal
documents that cited cases that didn't exist. This is just one recent example, and
there are other examples in Texas and Illinois and even in D.C.
(09:22):
And these states actually implemented their own court mandates where they require attorneys to add
disclosures to their legal documents if they're utilizing AI. That's exactly right.
I've seen in some local courts, I think in Texas, where there's a rule
(09:45):
that you have to certify that you're not using ChatGPT to conduct your research.
And in the New York example that you mentioned, I think there were
some hallucinations that ChatGPT kicked out. For our viewers and listeners here,
can you tell us: what is a hallucination in the ChatGPT context?
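[Editor's note: the fabricated-citation problem raised here can be made concrete with a toy verification step in Python: before filing, every AI-supplied citation is checked against an authoritative source. The lookup set below is a stand-in invented for illustration; real verification would mean checking Westlaw, Lexis, or the court record itself.]

```python
# Toy sketch: never file AI-generated citations without verifying them
# against an authoritative source. This "database" is a stand-in.

VERIFIED_CASES = {
    "Marbury v. Madison",           # real case
    "Brown v. Board of Education",  # real case
}

def check_citations(citations):
    """Return the citations that could NOT be verified."""
    return [c for c in citations if c not in VERIFIED_CASES]

# "Okiki v. Falcone" stands in for a hallucinated case, as in the episode.
draft_brief = ["Marbury v. Madison", "Okiki v. Falcone"]
unverified = check_citations(draft_brief)

if unverified:
    print("Do not file; unverified citations:", unverified)
```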
(10:09):
Of course. A hallucination is essentially ChatGPT trying to emulate real
citations. So, for example, in the New York case, if
the real case is, for example, Marbury versus Madison, the attorney
(10:33):
actually used ChatGPT, which produced a fake case, for example Okiki versus Falcone.
Though it is emulated to look similar to actual citations, it's completely wrong.
And that's one of the issues with ChatGPT: the information
(10:54):
that it provides is not completely accurate. So the duty is on the
person who's using generative AI models, like ChatGPT or Google Bard,
to do their further research and go back and make sure that the information is
correct, that the law is correct, that the citations are correct, if they're going
(11:18):
to use that for their research. But if they are using it for their
research, I will say this from a data privacy perspective: you have
to be mindful of the kind of data that you are inputting into
these prompts, and make sure that you're not providing any confidential information that would bring
(11:41):
up attorney-client privilege, where you waive that privilege by providing confidential information to a
third party. And we've seen cases, or there are cases in development,
against OpenAI under copyright law for using copyrighted material to train their
(12:05):
AI model, to best serve the needs of whoever
is using ChatGPT. So if you need an article written, or you
need a reference to a book, or a summary of a book, well, on
the back end of ChatGPT, they already have a list of copyrightable
work. Now, the question is: is this fair use? I think that's
(12:31):
left for the courts to determine. In my position, I believe it's all
dependent on how much human involvement is used with these AI, or generative AI, models,
how much creative thinking is used on the part of the human to actually implement
something that is really new and that is not robotic.
(12:56):
I can tell, and certain people can tell, if something was written
by ChatGPT, and that's also something that we need more awareness
on: people learning, figuring out, okay, how do we analyze something that
(13:16):
is generated by AI versus something that's generated by a human. What do y'all
think about that? So, Afam, a couple of weeks ago I
was at a trial academy, and Bob Christy, who's an attorney out of Seattle,
and he's a huge proponent of technology in the law, and particularly the use
of technology in the courtroom, anyway, he was really excited by the ChatGPT process,
(13:39):
and so Bob put on this presentation where, basically, what he did is he
took just a couple of sentences, basic sentences, about the background of a
case, put it into ChatGPT, and ChatGPT provided like a full-on
opening statement. And then Bob took that opening statement, put it into
(14:00):
another AI system to generate an opening statement as narrated by Atticus Finch,
to include like the full face thing and all that kind of stuff. And
it was a really cool demonstration of, I think, you know, uses
of these technologies in the legal field that we may not think about as lawyers.
You know, you might think about, like, well, you know,
could I use it to help in legal research, or could I use it to
(14:22):
help in drafting, you know, a contract or something like that, but it's
totally different to think about: can this thing, you know, present an opening
statement? And so my question is, what do you see, from where you're sitting
today, what do you see as, like, kind of the limits of the
use of these technologies in the legal practice, and what do you see as
the big risks that lawyers need to think about, as maybe on their
(14:48):
own initiative, or maybe as they're demanded by clients to use these technologies more and
more? Yes, to the first part of your question: certain
research that we do, or certain research software, for example, like Westlaw or Lexis,
(15:11):
they already have some sort of AI component in them. You know, whenever
we research, there are times that we may input into the search
box something that is privileged, right? And that's one of the issues
(15:33):
that that that that attorneys deal withwhenever they are are trying to use AI
to help with their legal research.But and just in saying that in itself,
is that nothing is new is underthe sun. Everyone is adapting to
(15:56):
new ways of doing tasks that initially required human intelligence, but now
people can work more efficiently utilizing AI. So, to that
end, while we're utilizing this technology, it's important that there are some type
(16:18):
of guidelines or safe practices. So, like, one of the main points is
not entering personal or private information, right? But then again, you have to also
be mindful of the privacy policies of the particular generative models that you're using, to
understand what kind of data they're collecting and when they delete the data
(16:41):
off of their servers. So the three main criteria are: one, be mindful of
the personal data that you input into any type of generative AI model. The
second one is to review it, to make sure that the
(17:02):
information that you receive is accurate and that it pertains to your case. And the
third is transparency. Some clients, when they receive something... some people
may be reluctant to let them know that they used artificial intelligence to conduct their
research. But I think it comes to the extent where people's lives are at
(17:29):
stake. I think that, as courts have implemented required disclosures at the court
level, I think that's important. But outside of that context, I think
it depends on the circumstance. And be cautious about citing Okiki versus Falcone, unless
it's about this podcast, and that's okay. You know, can
(17:51):
you talk a little bit about maybe what you've seen, or what you would
expect to see, on a little bit of that governance or regulatory front, whether
it's from the federal government or the states? I think we've talked a little bit
about, you know, how courts are reacting, but what about from the
broader perspective? Actually, the federal government, they haven't released any laws.
There are currently no federal laws on artificial intelligence, but there is
(18:15):
some guidance, I believe. Last year, the United States released the
AI Bill of Rights, and under that AI Bill of Rights, it provides
several considerations for people who utilize artificial intelligence in their daily lives, or even businesses,
(18:37):
on how they can have safe use of AI, but also understanding the risks
presented by the use of AI. And artificial intelligence is huge in data privacy and
cybersecurity, primarily because of data collection and the data that's used to make the
(19:00):
learning models better. So with the AI Bill of Rights, they also address
these concerns related to data privacy, related to transparency, and also related to just basic human
rights. People may be subjected to biases based on
(19:23):
how the AI is trained to analyze the data that's input to it,
so as to have much safer practices and be much more ethical. Because then we have
another accountability issue, right? So, who is responsible?
(19:48):
Is it the AI model, or is it the human? So the AI Bill
of Rights provides guidance on all those key concepts. I wish I could go
into more detail on the Bill of Rights, and I don't have it in
front of me, but it does definitely help guide not only federal legislation
(20:10):
but also state legislation. You mentioned bias, which I think is an important
consideration with all of this. I saw a news story about a non-Caucasian
woman who took a picture of herself and asked an AI program to make her
picture LinkedIn-ready, or something like that, and the AI took the picture
(20:36):
and the result was a very Caucasian-looking woman. So it certainly seems like
there's a lot of opportunity for bias that needs to be addressed on the AI
side. Have you seen anything about efforts to try and control those biases?
I know you mentioned the Bill of Rights, but what's being done to try
(20:59):
and address that bias before we take off running using this new AI technology?
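[Editor's note: one established way practitioners screen for this kind of bias in automated hiring tools such as ATS is the "four-fifths rule" from US employment-discrimination analysis: if one group's selection rate falls below 80% of another's, the tool is flagged for review. A toy Python sketch with invented numbers:]

```python
# Toy "four-fifths rule" check for an automated resume screen (e.g., an ATS).
# Outcomes are invented: 1 = advanced past the screen, 0 = rejected.

def selection_rate(outcomes):
    return sum(outcomes) / len(outcomes)

def impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher; values below
    0.8 are conventionally flagged as possible disparate impact."""
    low, high = sorted([selection_rate(group_a), selection_rate(group_b)])
    return low / high

group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # 6/8 = 75% selected
group_b = [1, 0, 0, 0, 1, 0, 0, 1]  # 3/8 = 37.5% selected

ratio = impact_ratio(group_a, group_b)
print(f"impact ratio = {ratio:.2f}")  # 0.50, below the 0.8 threshold
if ratio < 0.8:
    print("Flag this screen for human review.")
```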
I haven't seen any specific instance, but I briefly came across it earlier
this year, where there was an instance in which they needed to understand the
(21:26):
data that's being collected, to then understand how the AI model is
training on that data and learning from that data, to output whatever result
may come from the data set that they collect. So, in the instance
of, for example, I mentioned ATS, that's a perfect example of bias based
(21:51):
on AI. There's been a study, I think it was definitely earlier this
year, where they said that ATS is actually biased, just based off
of how the algorithm tries to analyze the information that it sees on
(22:14):
a resume. And so then people try to game the ATS algorithm in certain
ways, and it just becomes a cyclical loop. That's why human involvement, again, is
so important when using AI, because if we just let AI roam free,
then whatever biases are present in the algorithm will be exacerbated. Well,
(22:42):
Afam, this has been incredibly interesting, on a really cutting-edge topic.
We really appreciate your time with us today. For the folks who are listening,
how can they get in touch with you? Of course, they can
find me on my website. My website is www.aiolegal.com.
(23:03):
You can also send me an email at afam@aiolegal.com. Thank you, Afam,
so much, and we'll make sure that we also get those ways
to get in touch with you down in our show notes, for anybody that, you know,
doesn't have their typing fingers going while they happen to be listening to us. Afam,
this has been a fascinating conversation, and I hope, for everyone that's
listening, that you enjoyed this as well.
(23:27):
And if you'd like to know more about our podcast, or all the great things
that DRI has to offer, please take a moment and
go visit dri.org. Thank you so much.