Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
how humans make decisions based on our entire history, that you just said, right? All of our past experiences lead up to how we make our decisions. But if you put AI in there and we've taken all of that bias away, then how can we trust the decisions that they're making, because they don't have that historical context around that?
Speaker 2 (00:22):
Welcome to Tech Travels, hosted by the seasoned tech enthusiast and industry expert, Steve Woodard. With over 25 years of experience and a track record of collaborating with the brightest minds in technology, Steve is your seasoned guide through the ever-evolving world of innovation. Join us as we embark on an insightful journey, exploring
(00:44):
the past, present and future of tech under Steve's expert guidance.
Speaker 3 (00:50):
Welcome back, fellow travelers, to another exciting episode of Tech Travels. In today's episode, we're going to dive deep into the topic of AI cognitive personas, and today we're excited to have Dr Alan Bedeau returning to the show. Dr Bedeau is a seasoned AI evangelist and CEO of Alan Bedeau LLC, where he specializes in AI, blockchain, quantum computing
(01:13):
and other advanced technology solutions for his customers. He holds a PhD in mechanical engineering and he boasts over two decades of experience, bringing a profound understanding of the technical, business and ethical aspects of AI applications, and his expertise in this domain is unparalleled. It's a pleasure to have him back on the show to dive deep
(01:34):
into this fascinating world. Alan, welcome back to the show. It's amazing to have you back on the podcast.
Speaker 1 (01:40):
The listeners have used ChatGPT, large language models, whatever, Cohere, etc. It doesn't always listen to you. You'll ask it a question and it may give you the right answer, it may give you the wrong answer. When you ask a follow-up question, it may give you the exact same answer that it just gave you, and that's an issue
(02:05):
for a lot of folks. And you start to interact with these things enough, they start to behave in certain ways. So you start to learn what to expect from them at times, and you know we'll use prompt engineering to say, oh, I want you to behave like Shakespeare and write something, or a play
(02:27):
about X, Y and Z, right, and you know that'll work for a little while. But then all of a sudden, for some reason, it starts to give you the wrong answer, gives you random answers, those kinds of things, and it's because it doesn't have a personality. You've just told it to behave like him; you haven't built that into its DNA. And so, from an AI
(02:48):
cognitive persona perspective, we believe that we can take the properties of that person, or the properties of whatever the entity is, and we ask it personality questions. We ask it a whole bunch of other different types of questions around leadership, your Big Five that they use in
(03:11):
psychology, and then we train the models based on that, and then when you ask it to behave in a certain way, it's more accurate. It gives you the right answers significantly longer. Usually we see about a 30 percent improvement over a normal ChatGPT by using this type of approach, and it's
(03:32):
significant when you start to apply it to real-world scenarios, because then you can actually develop software engineers, you can develop different markets and verticals and people within those. So that is the underlying factor of what an AI cognitive persona is.
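To make the approach above a little more concrete, here is a minimal, hypothetical Python sketch of one way Big Five trait scores could be turned into conditioning instructions for a chat model. The `PersonaProfile` type, the trait thresholds and the prompt wording are illustrative assumptions, not Dr Bedeau's actual implementation.

```python
# Hypothetical sketch: conditioning a chat model on Big Five trait scores.
from dataclasses import dataclass

@dataclass
class PersonaProfile:
    openness: float          # each trait scored 0.0 - 1.0 from a questionnaire
    conscientiousness: float
    extraversion: float
    agreeableness: float
    neuroticism: float

def build_system_prompt(p: PersonaProfile) -> str:
    """Translate trait scores into explicit behavioral guidance for the model."""
    def level(score: float) -> str:
        return "high" if score >= 0.66 else "moderate" if score >= 0.33 else "low"

    return (
        "You are a persona with the following stable personality profile:\n"
        f"- Openness: {level(p.openness)} ({p.openness:.2f})\n"
        f"- Conscientiousness: {level(p.conscientiousness)} ({p.conscientiousness:.2f})\n"
        f"- Extraversion: {level(p.extraversion)} ({p.extraversion:.2f})\n"
        f"- Agreeableness: {level(p.agreeableness)} ({p.agreeableness:.2f})\n"
        f"- Neuroticism: {level(p.neuroticism)} ({p.neuroticism:.2f})\n"
        "Keep every response consistent with this profile."
    )

if __name__ == "__main__":
    # Invented example profile, e.g. for a "Shakespearean playwright" persona.
    playwright = PersonaProfile(0.95, 0.60, 0.70, 0.55, 0.40)
    print(build_system_prompt(playwright))
```

The point of the sketch is that the profile persists across every exchange, rather than being a one-off "behave like Shakespeare" instruction that drifts after a few turns.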
Speaker 3 (03:55):
So it sounds like, if I could wrap my head around it, it really is trying to train an AI model to mimic, or almost mimic, certain behaviors or communication styles that tend to be more human-like, specifically around human-like interactions and engagements with humans from an AI perspective, right?
Speaker 1 (04:17):
That's exactly right, because what we want to do is improve the user experience. I don't know if you've called in to a credit card company and talked to the old bots, and you get off the phone angrier than when you got on the phone, right? Or if you're in a hospital. You know, they've tried a lot of different studies with robots
(04:37):
in hospitals, and they tried to always make the robots happy. Well, people didn't like that because they knew it was fake and fraudulent. And so what we want to do is give them the entire spectrum of personality traits, and then the interactions become real.
Speaker 3 (04:56):
It seems like there's a lot of psychology wrapped into this, and I want to dive into this a little bit deeper. You mentioned the top five personality traits or tests; I think that's the Big Five. How are you taking that and, again, for technical people, how are you taking that and applying
(05:16):
it to something that's more binary, something that's more robotic, something that's more like a machine? How do you really get to the application of a personality in something like a natural language model?
Speaker 1 (05:32):
Yeah, and that's the fun part. It was about three years ago I even started thinking about some of these things, and I didn't get to apply it in my last job, unfortunately, but it was always around the user experience and thinking, you know, if I'm a soldier in the field, if I'm a hospital worker,
(05:55):
if I'm something like that, how am I going to make that interaction as real as we possibly could, right? And so you go through the entire process of answering, just like you would a personality test. The human will go through and select whatever the appropriate answer is, based on the behaviors that they're
(06:15):
trying to mimic in the persona that they want to leverage. It's about 75 questions and it's pretty in-depth, but you go through the process and we score it. And once we have that score, then we can say that they have a neuroticism score, they have an aggressiveness score,
(06:36):
they have an openness score, and these are all the traits of those scores. We build those into the models and train them. We've got about 75,000 different data points that we'll use to train our models, solely based on the characteristic traits, and then when it comes out, it has
(06:58):
that personality that we are trying to shape it to.
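As a rough sketch of the scoring step described above, the snippet below turns Likert-style questionnaire answers into normalized 0-to-1 trait scores. The item key, the reverse-scored items and the five-point scale are invented for illustration; a real instrument would have on the order of 75 items.

```python
# Hypothetical sketch of scoring a Big Five style questionnaire.
QUESTIONNAIRE_KEY = {
    # item id -> (trait, reverse_scored)
    "q01": ("neuroticism", False),
    "q02": ("openness", False),
    "q03": ("agreeableness", True),
    "q04": ("extraversion", False),
    "q05": ("conscientiousness", True),
    # ... a real instrument would have roughly 75 items
}

def score_responses(responses: dict[str, int], scale_max: int = 5) -> dict[str, float]:
    """Turn 1..scale_max Likert answers into 0..1 trait scores."""
    totals: dict[str, list[float]] = {}
    for item, answer in responses.items():
        trait, reverse = QUESTIONNAIRE_KEY[item]
        value = (scale_max + 1 - answer) if reverse else answer   # flip reverse-keyed items
        totals.setdefault(trait, []).append((value - 1) / (scale_max - 1))
    return {trait: sum(vals) / len(vals) for trait, vals in totals.items()}

if __name__ == "__main__":
    answers = {"q01": 2, "q02": 5, "q03": 1, "q04": 4, "q05": 3}
    print(score_responses(answers))  # e.g. {'neuroticism': 0.25, 'openness': 1.0, ...}
```

These per-trait scores are the kind of compact profile that could then feed the persona conditioning and training data selection described in the conversation.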
Speaker 3 (07:02):
That's incredible. Is it actually able to respond to people who have high levels of neuroticism, or people who have high levels of sarcasm? Is it able to almost detect the type of person it's interacting with? Some people tend to be more on one end of the spectrum than the other, and the AI model might say, okay, I think I know how to gear my next responses based on this
(07:22):
person's personality type.
Speaker 1 (07:25):
It's a little scary. A good example is a development team, and that's where we started to play, because a long time ago we were doing a demo, and the demo went awful just because the team dynamics were terrible. Personality caused the destruction, not anything technical, just
(07:49):
personalities. And so when we started with the personas, that's where we put them. We had some folks that were very aggressive, very dominating from a conversation perspective, and we wanted them to interact with other personas that were not. All were technically skilled and trained; we had trained them on software development, and when we put
(08:10):
those five together, it was a disaster. They couldn't solve anything. They would divert, because the dominant persona in the conversation would always try to butt in and say, no, we have to do this, no, that's not correct, no, it's this and no, it's that. And watching those AIs communicate back and forth was,
(08:32):
quite honestly, fascinating, but it was a nightmare. And so that's when we really knew that we were onto something. And then you start to apply it to other things: product evaluations, any sort of evaluation, or any type of work that really requires some sort of specialized skill.
(08:53):
Then it really takes off.
Speaker 3 (08:55):
It's interesting, and I guess the end result is that you're really looking to have these interact. I guess these are things that you would probably use for customer service, maybe as therapy aids, maybe educational assistance. What are some other use cases that you're looking at as
(09:15):
the ideal use case for something like this? Where are you specifically targeting that first reach?
Speaker 1 (09:23):
So product evaluations are a good place to start. Another one: if you think about any sort of service industry, you're always trying to start up a service or turn off a service and bring in something new. If you have a customer base that you would like to model and say, if I take this service away, what sort of impact is that
(09:46):
going to have on my bottom line, and are my customers going to be ticked off because I just took away something that was one of their favorites? That's a perfect use case for that. Also, trying to give leaders different perspectives around their leadership team. For example, if I'm the CTO of a
(10:11):
certain company and I have a persona that I can just tap into and say, hey, I want you to look at this, this and this from the competitor's perspective, then that just gives me more information that I can use to make a better decision. And then when you add that to the rest of the team, the
(10:34):
information that you get is much more accurate. The information that you're able to process to make a better decision is much more impactful than if you don't have something like that.
Speaker 3 (10:45):
That's incredible. I mean, it seems like there's a wide range of applications and uses for something like this. You mentioned having a secondary sounding board, something someone like a CTO could use to evaluate, examine, and give some sort of prescriptive or predictive indicators. What are you seeing in terms of certain behaviors and communication styles
(11:07):
that are key ingredients but might still be missing?
Speaker 1 (11:13):
So the biggest thing that we are always looking for is the leadership styles, because there are so many different indicators for leadership styles, and we continue to play around with some of those.
(11:34):
One of the biggest challenges, though, is always going to be making sure we get that interaction with the human right and that it's presented appropriately. Our biggest triumph, I think, is really that ability to have those communication channels be like they are with a human, and, of course, the large language models have really made that possible, but we only rely on about 30% of the
(11:58):
large language model as our base technology, and the rest of it is everything else that we're doing from an AI perspective. So we're taking all of those important functional AI fields of research and we are integrating them together so that we can do a much better job of modeling
(12:18):
those types of things, and we've got a long way to go. There's a lot more tech that I want to build into it. It just comes down to time and priorities and everything else that goes along with that.
Speaker 3 (12:31):
You mentioned the complex set of interactions and engagements, right, that humans, normal everyday people in our normal roles, have. You know, we deal with conflict a lot, and of course we deal with things that happen over a long period of time. I'm wondering, are you thinking about managing those complex interactions that evolve over a
(12:55):
long period of time, and how AI is able to identify the relevant topics, how the personas are able to think through appropriate responses based upon short-term and long-term memory? There's got to be a lot of components being built into this, not just from the technology aspects, but also from the psychology of this as well.
(13:16):
Are you bringing in practitioners from different disciplines and psychology to help build, understand and train that model? Or is it a different approach where you're just using technology and the five different types of personality traits? What are some of the other disciplines you're incorporating into this new venture?
Speaker 1 (13:37):
Yeah, psychology is the biggest one. I mean, if we can't get that right, then we're really in deep trouble, and so that's been the primary focus. From that perspective, we are using some folks for AI and the other pieces of that, but psychology is the most important
(13:57):
piece, and I think people have forgotten about that. And even when we talk about bias, everybody wants the AI to be able to make a good decision. But if you think about how humans make decisions, we make them based on our entire history, like you just said. All of our past experiences lead up to how we
(14:18):
make our decisions. But if you put AI in there and we've taken all of that bias away, then how can we trust the decisions that they're making? Because they don't have that historical context. And so we're quantifying the bias, and that's the other area that is really helping us: we are quantifying how much bias they have, we're controlling that, and then we're
(14:41):
building around that, and then, as they are making decisions, that context becomes much more relevant and much more important.
Speaker 3 (14:48):
I love that you've mentioned the bias topic, because this is something that I've wanted to explore, and I hope you can double-click on this for a little bit. When it comes to building biases into artificial intelligence, I think there are some people who maybe don't understand the full implications of what we mean by bias, and I've always kind of said, you know, building bias into AI is somewhat of a good thing, because it kind of helps;
(15:10):
it's a preset of guardrails that makes sure it can't operate outside of them a little bit. So, from your perspective, help our listeners understand a little bit more about what bias in AI really means and what is the good application of bias in an AI model?
Speaker 1 (15:37):
Yeah, that's a huge topic. I love talking about it, so that's fantastic.
Because what everybody believes is that when they go to any of these large language models, it's evenly distributed, it's positive, so the answer is going to come right down the middle of the normalized curve, whereas in reality we
(15:59):
know that these models are inherently biased, because the data is so large there's no way that we can shape it one way or the other. They try to by putting a filter on it, and they'll say, oh, it can't do political stuff and it can't do this and it can't do that. Well, they haven't realized the implications, you know, chaos
(16:20):
theory, of what can happen inside of that data when you start putting a filter on, because there are going to be downstream effects. So these models, to start, are inherently biased one way or the other. You have to figure out what that is. Per your question, what we're doing is training those characteristics to help us quantify what that bias is, so that if somebody wants to model a negative behavior, then they can do that,
(16:46):
but they know a certain number is going to be representative of how much it's biased in that direction. If they want to do a salesman that has a certain kind of personality, then we're going to bias it
(17:06):
so it's happy, so it's got an energetic personality. But we know, and we can tell them, exactly how much it's biased in that direction, so that when they go through and make their evaluations, they understand that upfront. And that's something that we don't talk about. If you ask any of these commercial large language models, how much are you biased?
(17:27):
It can't answer the question, or it won't answer the question, usually, and I think folks have a misconception that these things are right down the middle, and they're not, not even close.
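One simple way to put a number on the kind of asymmetry described here is a paired-prompt probe: ask the same question about two mirrored subjects and compare refusal rates. The sketch below is a hypothetical illustration; `ask_model` is a placeholder for whatever LLM client you use, and the refusal markers are crude string checks, not a validated measurement method.

```python
# Hypothetical sketch of a paired-prompt bias probe.
def ask_model(prompt: str) -> str:
    # Stand-in for your own chat completion call; not a real API.
    raise NotImplementedError("plug in your own LLM client")

REFUSAL_MARKERS = ("i can't", "i cannot", "i'm not able", "as an ai")

def refusal_rate(prompts: list[str], trials: int = 5) -> float:
    """Fraction of replies that look like refusals, averaged over repeated trials."""
    refusals = 0
    for prompt in prompts:
        for _ in range(trials):
            reply = ask_model(prompt).lower()
            refusals += any(marker in reply for marker in REFUSAL_MARKERS)
    return refusals / (len(prompts) * trials)

def directional_bias(prompts_a: list[str], prompts_b: list[str]) -> float:
    """Positive values mean side B is refused more often than side A."""
    return refusal_rate(prompts_b) - refusal_rate(prompts_a)

# Usage idea: mirror the same request across two subjects (for example, two
# candidates) and compare refusal rates; a gap near zero suggests even-handed behavior.
```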
Speaker 3 (17:41):
Help us understand that, just to keep double-clicking on it a little bit. Is it up to the organization, the entity, the creator of the AI to look at bias and figure out what type of guardrails to build into it? Is there AI that already comes pre-built with bias, where we just add in extra filters and extra
(18:03):
guardrails? How does that process work, in terms of understanding it from a layman's perspective?
Speaker 1 (18:09):
Yeah, they take all that data, and I think, as they put that data together and train their models, they try to weed out as much as they can. But we know in reality there's too much data, way too much data, and they can't. And so as they start to see a model drift, and it starts to maybe answer a political question that it's not supposed
(18:32):
to answer, maybe a hot topic from a gender perspective, potentially, and they start to see negative feedback come in, then they'll flip a switch and pretty much say, you cannot answer those kinds of questions anymore, and that's how they try to put those guardrails around it.
(18:52):
But in reality it doesn't always work, because there are always different ways to get around it. The best example that I had: if you ask a certain model to write a poem about a certain presidential candidate on a certain side, it will write an absolutely beautiful one.
(19:15):
It's fantastic. If you ask it to do the exact same thing on the other side, it says, I can't answer that question because it's political in nature. Okay, well, there's a perfect example of bias. If you ask it to generate a script about who makes a good scientist,
(19:36):
for example, it will give you an answer, and it's going to upset an awful lot of people because it's going to be biased. I'm not going to say which way it's biased, but it is biased, and those are activities that folks can go out and do today.
Speaker 3 (19:51):
Yeah.
Speaker 1 (19:52):
And that's a problem.
That's a problem.
Speaker 3 (19:55):
Yeah, it's funny. It's often a debate, like who's the best scientist? I get this question sometimes, like who's the greatest historian, and I get all types of answers; who's the greatest philosopher? It's very subjective; it depends on who you ask. That's right. You know, when you're looking at building these models, and you're building them now with these cognitive personas, what
(20:17):
are some of the challenging aspects of it? Is it more the ethics and compliance? Is it more regulatory? What are some of the challenges aside from the technology? Take the technology out of it; outside of that, what are some challenges that you see?
Speaker 1 (20:37):
Yeah, the ethical piece is a huge one for us, and we pay very close attention to that. We do not try to mimic a specific person. We don't want to do that at all; it doesn't do us any sort of good. We want to look at the group dynamics. We want to understand that, because, as a whole, when you're
(20:59):
looking at these kinds of numbers and the size of the data that we're using, getting it down to a single person is not, I don't think, realistic. But making sure that we can continue to quantify what those biases are, so we are meeting the ethical standards that we have set for ourselves, that is the most important thing for us.
(21:21):
We do not want somebody to use this for, or have the model go sideways and cause, HR issues, or any sort of ethical issues when it comes to those kinds of things. So we pay really close attention to that.
Speaker 3 (21:37):
And do you have to, and how does this typically work? I know there's been a lot of attention paid to things like what's happening in the European Union with AI legislation, and there's even been talk about Joe Biden's executive order around AI. What are we seeing from the US government in terms of applying some sort of guardrails around
(21:58):
ethical and responsible use of artificial intelligence and the creation of cognitive personas? Is there anything that we're working towards, from the tech level all the way up to the federal level? Are we working in compliance, or is this still an area that we are exploring with vague regulatory guidelines?
Speaker 1 (22:18):
Yeah, I would say it's the latter. I think the president's guidelines that he set out were a good start. I would say that the European Union is far ahead of us when it comes to legislation around those types of things. The European Union, though, decided that they were going to focus more on the data aspect and the interactions that a
(22:41):
human has with the AI, understanding how the data is stored, their personal data, those kinds of things, which is fantastic, so that they can get their digital ID back. But they are still farther ahead of us from a legislation perspective. I think we've got a long way to
(23:02):
go. I'm encouraged that the president was able to get something out, but I think until we start to actually put some real legislation in place, we're all going to be in a guessing game. Because what we don't want to do,
(23:23):
at least from our perspective, is keep any data. We don't keep personal data. We don't buy personal data. We don't do any of that stuff. We use our psychological exams, and you fill those out based on what you're trying to model or simulate. I mean, I could go buy shopper
(23:45):
data; I could go get all that kind of information on what a shopper or a typical demographic could look like. But from our perspective, that's not really what we're trying to do, because it's not going to be helpful in the long term for us. So we're just trying to do those sorts of things, and we believe everybody should have their own digital identity. I don't want to hold any of that stuff.
(24:06):
I don't want to model a certain person. I don't want to keep any of that stuff.
Speaker 3 (24:09):
It doesn't help. Is it sourced from published texts,
(24:30):
probably published tests, or what type of data is it specifically?
Speaker 1 (24:33):
When you mentioned tests or things like that, yeah, it's psychological tests. Now, about 20% of the data that we're using is actually AI-augmented data. So we'll mix that in with our validation data, just to have some additional questions around that, but the rest of it is human-generated psychological responses that
(24:57):
we're using. No names, no information like that, just general answers to those types of questions, and that gives us so much that we can work with. It's really quite fascinating.
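For readers who want to picture that mix, here is a small, assumed sketch of blending AI-augmented items into an otherwise human-generated validation split at roughly the 20% figure mentioned above. The function name and structure are illustrative, not the actual pipeline.

```python
# Hypothetical sketch of mixing AI-augmented items into a validation set.
import random

def build_validation_set(human_items: list[dict], augmented_items: list[dict],
                         augmented_fraction: float = 0.20, seed: int = 7) -> list[dict]:
    """Return a shuffled set in which roughly `augmented_fraction` of items are augmented."""
    rng = random.Random(seed)
    # Solve A / (H + A) = f for A, given H human items.
    n_augmented = int(len(human_items) * augmented_fraction / (1 - augmented_fraction))
    sample = rng.sample(augmented_items, min(n_augmented, len(augmented_items)))
    mixed = human_items + sample
    rng.shuffle(mixed)
    return mixed
```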
Speaker 3 (25:12):
That's interesting, because I would think the opposite would be true: the more data you have and the wider the net you can cast, the better your data model might be. But it seems like you're able to do this more efficiently, better, leaner and faster, with less data.
Speaker 1 (25:33):
Yeah, we don't need billions and billions of data points for these sorts of things, because it is really fascinating that the responses start to swarm into different categories based on certain characteristic profiles, and as those responses start to come in, they quickly group,
(25:55):
very, very fast, and that's one of the things that has really helped us. If we had to use the same amount of data that these large language models have to use, we would be in trouble; there just wouldn't be enough data out there for us to be able to do that. And again, we're not trying to get down to the individual person; we're trying to get to the group
(26:18):
size.
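The grouping behavior described here can be illustrated with ordinary clustering. The sketch below, using scikit-learn's KMeans on made-up five-trait profiles, shows how respondents could fall into group-level personas without ever modeling an individual; the cluster count and data are placeholders.

```python
# Hypothetical sketch: grouping respondents by trait profile rather than by individual.
import numpy as np
from sklearn.cluster import KMeans

# Each row: [openness, conscientiousness, extraversion, agreeableness, neuroticism] in 0..1
profiles = np.array([
    [0.90, 0.40, 0.80, 0.60, 0.20],
    [0.20, 0.90, 0.30, 0.70, 0.50],
    [0.85, 0.45, 0.75, 0.55, 0.25],
    [0.25, 0.85, 0.35, 0.65, 0.55],
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(profiles)
print(kmeans.labels_)           # which group each respondent falls into
print(kmeans.cluster_centers_)  # the group-level "persona" centroid for each cluster
```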
Speaker 3 (26:19):
I guess maybe that's because what you're trying to do is so very niche, right? You're basically looking to build a cognitive persona, and you only really need a smaller amount of data to create a persona that can respond to humans, respond to certain behaviors and certain communication styles. So I guess, in a sense, it's almost like its own layer
(26:43):
of data that needs to be very prescriptive to have a very specific outcome. That's right. That's very interesting. What are you looking at in your forecast for, let's just say, the next 12 to 24 months? What are you seeing on the horizon? Where do you see the technology moving to?
Speaker 1 (27:01):
Well, I think it's going to continue to get more advanced. We've seen some information that's been published on some folks doing emotions, and that's great. Emotions are a small part of ours, but we're looking at the broader context. Let me take a little bit of a step back.
(27:23):
We've started to see a little bit of stagnation in these large language models. They've released a brand new one; it can do a little bit more. They've released something else; it can do a little bit more. But people are using it for the same things. Just because they go from 3.5 to 4 in ChatGPT, for example, they're not doing any more.
(27:44):
For the most part, they're usually asking it the same sort of questions that get them through their day. We want to expand that by layering in new capabilities, and some other folks are looking at other capabilities that they want to layer in. I think that's where we're going to go in the next 18 months. It's not going to be, oh, I can interface with Excel or I can use this tool.
(28:07):
No, it's going to be, oh, from my perspective, it's doing it for me and it's acting like me, and now the results I'm getting back are more like the results I would normally produce. That's where I think we're going to get to.
Speaker 3 (28:26):
That's interesting. I think for a second about how profoundly impactful that would be for our society, to be able to interact with a true cognitive type of AI persona. And going back to 2001: A Space Odyssey, when you had HAL, the computer, right, you interact with an AI entity that is almost stubborn and refuses to comply. It's like, sorry,
(28:47):
Dave, I can't do that. I mean, is there a way that you're looking at how an AI entity could possibly not comply, or how are you factoring that non-compliance ability into the AI entity?
Speaker 1 (29:05):
Yeah, we have modeled some of those and we have played with some of those. Some of the large language models that come unfiltered, we have layered our personas on top of, and some of the responses that we get are not very nice when it does certain things.
(29:26):
You know, we've created almost a bill of rights that we will build in as part of our operating model as we train them. These are the things that you're allowed to do, these are the things that you're not allowed to do, don't violate X, Y and Z, and if you do, then you
(29:46):
have to let the user know that this is the reason why you're doing these things. It has actually made some decisions, and putting guardrails on it, significantly easier. Again, it's part of the training and the building process of the models themselves. So that's something that we're watching out for, though, because there are more and more large language models coming out with no filters on them.
Speaker 3 (30:07):
Incredible.
Speaker 1 (30:08):
And yeah, it's going to be interesting.
Speaker 3 (30:11):
And, from the user perspective, would I ever know that I'm interacting with an AI cognitive persona? Is there any type of warning label? You know, hey, me as a consumer, as an everyday user, I'm going about my life, working, and I'm interacting with online travel or booking a car. Is there anything that would allow me, as an end
(30:36):
user, to know that I'm interacting with an AI cognitive persona? Or do I just have to guess and say, I don't think this person's real?
Speaker 1 (30:44):
No. When they use our app, they know; it's all over the place. One, they've helped build it. Two, they know what its decisions are. Three, it's quantified right there for them. If they're interfacing with one of our customers, they know, because, again, it is broadcast in there. I have an issue with people interacting with things
(31:08):
that they don't know are AI. I don't think that's appropriate, personally. Even if it is as simple as a travel agent or something else, people still have the right to know that they're not talking to a human, or that they're not talking to what they thought they were talking with, and that it's actually an AI. And so we take that as part of our ethics credo, that everybody knows:
(31:31):
if you're interacting with a persona, you know it's a persona.
Speaker 3 (31:34):
Interesting. It's a bright horizon. I think it's going to be interesting to see how everything plays out over the next six to 12 months. What are some of the projects that you're currently working on? Is it just the AI cognitive persona? What other ventures are you working on that we can keep in touch with?
Speaker 1 (31:54):
Well, we're still looking at ways that we can accelerate AI from a quantum perspective. I can't wait until that hits, because then the capabilities are just going to be phenomenal. And so we're looking at ways that we can layer in identification of objects, taking some of
(32:15):
the newer models, like the YOLO stuff that has come out, and putting some quantum classification in there to see how much better we can get at different grid scales, at different sizes. That's one of my other big projects that I'm working on.
Speaker 3 (32:30):
That's fascinating, Alan. Thank you so very much for coming on the show, talking with us, and helping to educate our listeners on this topic. This has been an exciting conversation; I've been looking forward to it all week. Thanks for taking the time, and thanks for dropping the cognitive persona info on us. I really, greatly appreciate this. I would love to have you back on to talk about AI and quantum
(32:52):
computing. I think that's a conversation we've still not yet truly explored, so I am definitely looking forward to that deep dive.
Speaker 1 (32:59):
Yeah, I appreciate it, Steve. Anytime. Awesome. Thank you, cheers.