Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Jeannie Walters (00:09):
It's the
Experience Action podcast.
I'm Jeannie Walters and I'm soglad that you are here.
I am glad that you are here forthis episode because we are
diving in to all things about AIand CX in the news, and we are
so lucky to do this on this CXPulse Check episode with my very
(00:31):
special co-host, Ovetta Sampson.
Ovetta, it's so good to see you.
I've known Ovetta for a millionyears and you are one of the
first people who really tappedinto the power of AI and also
the warnings about it, what wecan do, some of the things that
we need to be aware of.
So I'm really excited about ourconversation.
(00:53):
Will you please tell ouraudience all about you and your
brilliance?
Ovetta Sampson (00:58):
Oh, wow, I don't
know about that, but I have
longevity.
I think it would be.
Jeannie Walters (01:05):
Oh, that's a
good word, that's a good word.
Ovetta Sampson (01:07):
So I don't know
if we could call it brilliant,
but definitely in the game for awhile.
I started doing dabbling inartificial intelligence back
when I was in grad school atDePaul University (go Blue
Demons) in 2013.
And then went on to start acareer in design, artificial
(01:34):
intelligence products design, atIDEO Chicago in 2016.
And just left Google, where Iled a design team that built
their machine learning, AIplatforms and tooling for
generative AI in February, andnow I'm the owner and founder of
(01:54):
RightAI.
It's a consultancy that helpsCEOs and enterprise businesses
implement AI in a safe andefficient way, and I have been
studying and researching andcreating AI products for the
last oh my Lord 10 years.
(02:16):
So, yeah, longevity, I think.
Jeannie Walters (02:19):
There you go, I
love it.
I love it.
Ovetta Sampson (02:21):
In AI, which, by
the way, is saying something.
Of course, now everybody claimsto be an AI expert.
Jeannie Walters (02:29):
It's true, it's
true, and so I am really
excited about this conversationbecause it is everywhere, right,
you can barely have aprofessional conversation
without somebody bringing it up,and it's really an interesting
time conversation.
I feel like we're at the tip ofthe iceberg for a lot of things
, because I feel like people arefalling into the camp of, you
(02:52):
know, this is so new andexciting and shiny, and look at
all the things that it can do,or they're like the robots are
coming and we're all doomed, andso I really kind of subscribed
to the idea that we aresomewhere in the middle of that.
And one of the things that youtalk about a lot is how
important it is to be thoughtfuland intentional about how we
(03:15):
actually implement these things,and that aligns with what we
talk about, with customerexperience that we have to be
proactive and intentional andthoughtful about it, because
it's going to happen whether youtalk about it or not, right?
So why not own it and why notdo it the right way?
So I pulled a couple of funarticles for us to discuss here.
(03:37):
So the first one this is fromCX Today and the headline is a
contact center chatbot inventscompany policies, now customers
want out.
And this article was reallycompelling because I think this
isn't the first kind of hiccupwe've seen with chatbots, but
(03:57):
this one they were, you know,inventing things like refund and
rebate and things that were notavailable, and so customers
were hearing about expectationsabout promises that essentially
weren't going to be fulfilled,and now they're coming back and
saying, well, this is a mess andeverything else.
So I'm just curious, likechatbots are probably one of the
(04:20):
first places that we really sawAI implemented, especially in
the customer experience world,and so what are some of the
things that leaders need toconsider as they're introducing
AI chatbots, as they're goingthrough this, and what will this
be a warning for them about?
Ovetta Sampson (04:37):
Yeah, I think.
I saw, I read that, and I knowthat that a lot of your
listeners and people who aretuning in have heard about
generative AI, I'm assuming.
And that's when AI modelsgenerate or even create new text
(04:59):
content, anything that peoplecan engage with nowadays, and
chatbots once the purview ofAlexa, Alexa, Alexa have now
turned into.
Jeannie Walters (05:15):
You just turned
on a bunch of speakers.
I don't know if you.
Ovetta Sampson (05:17):
Right, Right.
Like now.
I know I turned on a bunch ofspeakers where you had to repeat
yourself.
You had to, like frustratinglytalk to them because of
generative AI.
They can have conversations andseem as if they understand us
and can engage with us on analmost imperceptible human level
(05:38):
.
Right, Turing Tests, and forthose of you who are not tech
savvy, the Blade Runner test,right.
So these chatbots seem as ifthey're human, but the problem
is they're not, and the problemis they're not even close to
(05:58):
understanding us.
What they are are probabilisticsystems who base their responses
on the probability of what theysay is correct.
So when you ask a chatbot aquestion, what that chatbot is
doing is going through all thetraining data that has been
(06:21):
inputted into the model systemby a human, labeled by a human
and described by a human to thechatbot what it is, and then
doing a statistical probabilitymodel to say what you're asking
is once a response in thisoutcome, and because it's a
(06:44):
probabilistic system, it will bewrong.
And so chatbots will give youwhat they're calling the
industry hallucinations and whatwould call lies.
It's not because it'sintentionally trying to deceive
you, but it is because of howit's built and how it works.
(07:04):
So companies who decide to usethese chatbots if they decide to
set it and forget it, they willget problems like this.
And what's going to happen a laGoogle and its first release of
Gemini, giving you Indian andblack popes right.
What's going to happen isyou're going to have to go back
(07:27):
through your system and figureout why your chatbot made that
mistake.
The problem is, your customersmay not give you that grace
period, so you need to installrigorous testing, and even to
the point of what I it's calledadversarial testing in the
industry, but I'm calling itMike Tyson proof right.
(07:50):
It needs to be aggressive.
You need to beat up your AIchatbot to ensure that it
doesn't give you these waywardoutcomes.
Jeannie Walters (08:01):
Yep, and I
think, with this story in
particular, one of the thingsthat struck me was like it's one
thing if you hear from onecustomer right, it's one thing
if you hear from like that's ared flag, that's something you
can say oh, what's going on.
It's another one you hear from abunch, and so that tells me,
Ovetta Sampson (08:22):
I also think
your example is an example of AI
exacerbating an underlyingcustomer experience issue.
So if you read some of thecomments of the people on that
Reddit thread who said that theywere leaving and stuff, it
wasn't just what the chatbot wassaying.
It was this overall view thatthe company was trying to corner
(08:44):
the market on them, and thenthe chatbot's new policy was
just another list and examplesof that company not treating
them well.
Jeannie Walters (09:00):
You are
absolutely right.
Ovetta Sampson (09:01):
So I think if
you add AI to bad customer
service, you will get outsidecustomer service issues, right.
Well, it all goes with anoutcome.
But adding AI will give you anoutside scaled customer service
problem.
Jeannie Walters (09:22):
And you know
what we talk about a lot is that
every step of the customerjourney is either building or
eroding trust.
Ovetta Sampson (09:29):
Correct.
Jeannie Walters (09:30):
And so if
they've already eroded all that
trust through other experiencesand then this happens, that is
absolutely something.
But I think one of the thingsthat I wish more customer
experience leaders would kind ofembrace is this idea that, to
your point, you can't set it andforget it.
You need to keep oversight onit, you need to keep testing it.
(09:50):
Because it evolves.
That's the whole idea is thatit learns and generates and
evolves, and so if it's evolvingin the wrong direction, you
need to be aware of that prettyquickly, because then you can
put a pause on it and figure itout.
If it goes completely rogue,like that's hard, you can't put
that genie back in the bottle,right?
Ovetta Sampson (10:11):
So yeah, and I
think I when I speak about AI
and especially generative AI,because it seems magical.
It does seem as if you havejust hired a, a new employee who
you do not have to train andyou do not have to educate and
(10:31):
you do not have to do all theinvestment that you had to do in
another employee who's alreadyready to take your company, your
company policies, your businessand deliver that to customers
scot-free.
The problem is that newemployee you hired has the human
(10:57):
engagement quotient or theemotional intelligence of a
toddler.
Right and no offense totoddlers.
Jeannie Walters (11:07):
They're little
tyrants.
Let's be honest.
Ovetta Sampson (11:09):
They are right.
They have no conscience, right.
Their moral code has not beenset.
We know from research that thatcomes around age seven, right?
So if you're talking aboutsomebody two to five, two to six
, they're all about themselves,right, and all about what you
give them, right?
And generative AI model systemsare a lot like that.
(11:33):
It's not that they don'tperceive their environment, that
they don't understand, thatthey can't learn, all those good
things, but they also aren'twielding a lot of emotional
intelligence at all, and I won'tsay a lot, at all.
And they have no moral code.
So right and wrong meansnothing to them, right?
(11:55):
And so you can't just unleashthat sociopathic machine like
toddler onto their customerswithout a handler, right?
Jeannie Walters (12:07):
Somebody is
writing a Hollywood script based
on that right now,
Ovetta Sampson (12:09):
Right, right.
You know that movie Baby, right.
Like it's just a machine.
I mean, WALL-E was cute, rightLike it's like oh, that's the
robot we want.
No, that's not what we'retalking about.
WALL-E had some emotionalintelligence, right?
Jeannie Walters (12:25):
I do love
WALL-E.
Ovetta Sampson (12:31):
Yeah, you can't
just unleash that on your
customers and be like, oh, handsoff, and so what it requires is
treating your customers as apartnership in this journey.
So that means you need to offerwidespread feedback loops in
many different ways so that yourcustomers can tell you if your
new employee is behaving right.
Jeannie Walters (12:54):
That's right.
I love that.
And I think the other thing isknowing you know, like every so
often, have your customerservice reps all ask the same
question of the bot and see whathappens Like are you getting
consistent answers, Are yougetting the right information?
All those little like spotchecks, those can help a lot.
(13:14):
If people just know that it'snot, you can't just release this
and hope for the best.
So we've talked abouthallucinations, which is a big
one.
The other thing that I knowyou're passionate about is this
idea that when we talk about AIand we talk about bots and all
these things, I think this is agreat segue.
(13:36):
You know you just talked aboutthey don't have a moral code,
they don't know what's right orwrong, and yet sometimes it
feels that way because they showbias right.
So this headline really caughtme.
So this is from iotforallcomand the headline is when
algorithms deny loans, thefraught fight to purge bias from
(13:59):
AI.
And what really struck me aboutthis is this is about, you know
, the loans that people applyfor for homes, for cars, for
their education, and the biaswas basically that certain
marginalized groups were justnot getting the loans, even
though they might have had thesame or similar loan application
(14:20):
information.
So this one, I think, issomething that, to your point,
like I'm sure the peopleinvolved probably looked at this
and thought what?
This isn't who we are, thisisn't our moral code, this is
not who we want to be, and yetthis is what was happening.
So how do you advise leaders,advise organizations, about this
(14:42):
bias that might be in therethat they don't even know about?
Ovetta Sampson (14:45):
Yeah, and this
is really, this is kind of like
the heart of my work, right,like this is the heart of why I
started my company and why I getup, why I lay awake at night
and why I get up in the morning.
Right.
Jeannie Walters (15:01):
I relate to
that.
Ovetta Sampson (15:08):
the reason why
I'm on this earth, I think, and
the reason why I consult withcompanies.
A AI is neither artificial orintelligence.
I say that all the time.
It's not artificial and it'snot intelligent.
Despite what you read in theNew York Times, let me not go
(15:28):
there.
On this prediction of what theycall artificial general
intelligence All BS.
It is not even .
close.
So that's why I like to saySkynet, not yet right.
(16:14):
The reason is that until yet,AI starts functioning on its own
data, that they're modelsystems.
So these model that it createsby itself not by humans, we are
still in control of training theartificial intelligence to give
us outcomes, right.
So these are model systems.
They're not entities.
They're not people.
They're not personas.
They're not any of that.
They're are creating thesemodels that are underlying the
trained on currently the wholeof the internet, so OpenAI,
Google Gemini, Meta, Alanda, allof those big tech companies who
are generative AI evolution wehave right now train these
models on, basically a now,souped up search engine of
(16:39):
everything that you can find onthe internet.
The problem with that is whoputs stuff on the internet?
Jeannie Walters (16:46):
Right.
Ovetta Sampson (16:47):
Right, like
people with access to computers,
people with access to copiousamounts of Wi-Fi access, people
who spend more time on there,people who all those kinds of
things, and the people who don'tput stuff on the internet are
philosophers, academics orethicists.
(17:09):
You know anyone who are like outthere living their lives, and
so we have an overproliferationof Real Housewives versus you
know somebody talking aboutPlato.
So when you put all thattogether, not that Plato is not
on the internet, but when youput all that together your model
system is being trained on thezeitgeist of culture, but also
(17:32):
all of the human, what I calltrauma, that imbues culture, and
that's all the isms, first ofall, sexism, racism, homophobia,
transphobia, all of thosethings.
The internet follows thezeitgeist because that's what
(17:52):
it's trained, that's what isinputted into it, and then the
models are trained on that andthat is the input for models.
So what you have are models.
It's kind of like I like to dothe visualization of the fifth
element, when the young ladylooks at she's discovered and
(18:14):
then she looks at the TV andthen decides I don't want to
save humanity because of all thewars and all the bad things
that humans can do.
Well, all the good things arethere, but also all the bad
things.
And the model system doesn'tknow the difference.
So what it does is itrecognizes patterns.
And if it recognized that moremen have mathematics degrees,
(18:38):
degrees in math, or more womenare nurses, then when your model
system, when you go to train itto recruit employees, it will
just decide hey, I'm going to gowith the pattern that I've seen
, that I've been trained on.
All the men with, with all theapplications with male sounding
(19:03):
voices or male sounding names,then I'll just put them in the
to go pile right, the pile ofhiring.
It doesn't know that that'swrong, it doesn't know that
that's discriminatory, it has nosense of that.
So you have to teach the modelsystem.
So what I say is that you haveto detrain these model systems
(19:26):
to get biases or mitigate thosebiases.
Jeannie Walters (19:29):
Wow, and it
makes me think of the first
couple of times I wasexperimenting with image
generation, with AI.
I wanted a picture of, I said,a diverse set of American
customers waiting in line and itcame back with the weirdest
(19:50):
combination of people, becausediverse meant everybody was
black in the line and theylooked like they were like
african tribal people and I wasjust like what is happening?
because I and then I was like.
Ovetta Sampson (20:07):
So it's mixing
indigenous which first people,
Jeannie Walters (20:09):
Yeah
Ovetta Sampson (20:10):
with diversity,
which is the stereotype of race.
Jeannie Walters (20:14):
It was so
bizarre
Ovetta Sampson (20:15):
Yeah
Jeannie Walters (20:16):
I couldn't ever
like no matter what I prompted,
and then I started feeling like, oh my god, what am I?
Am I a part of this?
This is awful, and I thinkthat's what you know, that.
But it kind of struck mebecause I'm like you know what
the more that humans engage, themore that they'll also
recognize this, the more thatthey'll start seeing this as
what it is.
And even that article that wetalked about, the headline that
(20:39):
I talked about, I mean, there'srecognition that this is
happening, and so my hope isthat, to your point, as they
develop these tools, as they putthese things out, they've
already done their homework,they've already taken the time
to detrain, as you say, or atleast say here's what to look
for and here's what we'll doabout it, so that people feel
(21:01):
empowered to do something.
Ovetta Sampson (21:02):
Yeah, this is
the bulk of my company, where I
work with companies to what Icall de-risk their AI
implementation, so de-risk thechance that their AI product
will have human engagement risks.
(21:23):
Mitigate those risks.
I will never say eliminate them, because it doesn't matter how
you train, how you test, how youproduce or how vigilant you are
.
I think with these AI systems,because they're probabilistic
and because they're stochastic,which means they're dynamic,
that you're not the only onethat owns and controls their
(21:47):
learning patterns.
It is virtually impossible toextract bias and inequity from
these systems, but you canminimize those bias and
inequitable outcomes.
You can do that.
I think anybody who says thatthey can totally eliminate bias
(22:10):
from your model system isselling you a bunch of rocks,
because it's not true, because Awe actually don't know how
biased these systems are untilwe start testing them right, and
so I think it's reallyimpossible to say that you've
eliminated this risk.
Jeannie Walters (22:29):
Yeah, and I
mean frankly, you can't get bias
out of humanity, you can't like
Ovetta Sampson (22:34):
Right.
And we're the blueprint, right.
So these models aren't justkind of like coming out of the
blue deciding what they want todo, right, Like we're the
blueprint for it.
Do I think there will come aday that they will?
Sure, but I also think that, orI hope, that if that day ever
comes, it won't be in mylifetime, but if that day ever
(22:57):
comes, they won't make thechoices that we make and that
we're not the blueprintholistically right.
Because holistically theculture of history and who we
are, there's a lot of greatthings, but also, you know, we
tend to have some bad outcomesas well in human history.
Jeannie Walters (23:22):
Well, I love
all this talk about AI because I
think it's so important andit's one of those things again
that it's happening whetheryou're thinking about it or not,
so it's better to think aboutit, it's better to be
intentional about it and to makethose decisions early on.
I also know that you love totravel,
Ovetta Sampson (23:41):
Yes
Jeannie Walters (23:42):
and you just
came back from another one of
your many adventures and so Iwanted to.
This is a little bit differentfor our last few minutes
together, but I thought this wasa fun way to think about the
customer experience too.
So this headline is frombreaking travel news and the
headline is Singapore Airlineselevates customer experience
(24:03):
with $45 million investment inShanghai Airport, and the part
that stood out to me about thisis it was really an investment
in their lounges for theirpremium customers.
And what I like about thinkingabout this is that a lot of
times when we talk to differentclients about their customer
(24:25):
experience, they kind of narrowin what that journey is.
They think it's like okay whenthey buy the product or when
they walk in or this, but whenyou think about the travel
experience and how much that'schanged in the last several
years.
I'm a frequent traveler, I getto the airport earlier now
because I can't trust thatthings are going to go well
(24:48):
right, like I just can't trustit.
I used to.
My goal used to be going toO'Hare and never sitting down.
Like I had it timed soperfectly that I would walk
through security, I would get tothe gate and I would board.
Now I just can't handle itbecause things are so different.
They're boarding so muchearlier.
They are, you know, requiringdifferent levels of security
(25:11):
sometimes or they don't havethings open or whatever.
So the airport experience hasbecome more important to
travelers, and that's what mademe kind of have a light bulb
about this, because you thinkabout the journey and what this
could do.
Well, this might prompt peopleto say, well, it's a little more
expensive, but they have such agreat lounge, right.
(25:31):
So, as a frequent traveleryourself, as an adventure
traveler, what's your take onthe airport journey as well?
Ovetta Sampson (25:42):
Yeah, you know I
don't think I'm the right
person to ask, because I loveairports and I love hotels and I
love airport lounges.
I just love to just go intodifferent airport lounges and
kind of like compare and I spendso much time at airports that
I'm like the opposite.
I definitely get to the airportway earlier than I need to.
(26:04):
It allows me to kind of likejust relax and get in.
If I was going to do work oremails or anything, I can do it
in the lounge, whatever.
Jeannie Walters (26:13):
Uh-huh.
Ovetta Sampson (26:14):
I spent most of
my time in airport lounges
because, uh, I'm a Unitedcustomer, so I go to their
lounge.
But I do see, the first time Iever saw kind of like the.
This was even before chat GPTcame out.
I was in the United airportlounge and this little robot was
(26:35):
going around collecting dishes,right, and it had like a sound
and I even took pictures becausepeople kept looking at it.
They were like what's going on?
And it was like I was like I'mlike when I write the book, this
will be the zeitgeist momentwhere I realized that the thing
that I have been studying in theobscure, dark corners of my
(26:56):
career has now gone mainstream.
Right, there was this robot thatwas like going around the
United lounge and just peoplekind of like would hesitate and
they would like approach it andthen like, and then it would
kind of like go two steps back,right, cause it has this program
, proximity alert, right.
(27:17):
And so it was like thishesitation dance like, where
like people are like what do Ido with this, right?
Except for the kids didn't care, they were just like all over
the robot, right, and so it was.
It was my.
I remember distinctly thinking,and this was like two, two or
three years ago.
I remember distinctly thinkingthis is is this is an indicator
(27:41):
that companies are going toadopt and scale these types of
automated technologies in theireveryday um customer service
offering right and and I and Ithought about the workers who,
whose job it was to clean up,working with this robot.
(28:01):
I thought about us as travelersengaging with this robot and the
robot going around in circlesbecause its proximity alert was
just going off the rails.
Jeannie Walters (28:12):
There was
nowhere to go.
Ovetta Sampson (28:14):
This is AI,
right.
They decided to create thisrobot to make the lounge
experience faster, moreefficient, all this kind of
stuff.
The workers don't know what todo with this robot and the robot
itself is just getting trippedup and I'm like, yep, this is
going to be us for the next fiveto 10 years.
This is what it's going to looklike.
And then pretty soon we'll haverobots taking at TSA and we'll
(28:38):
have you know we already havebiometrics through clear, like I
do clear, and so they just lookat my eyes and I'm like I'm
there.
Now TSA has it, where you cando touchless right.
So all of these little thingsare going to be integrated into
our travel experience that youwon't notice it.
Jeannie Walters (28:58):
Yeah, yeah,
yeah,
Ovetta Sampson (28:59):
You won't notice
it.
You'll just be like oh, I justgot to my gate in 30 minutes and
you could have just been on apart I tried to avoid and that
is stepping into the robot orstepping into the airport and
then going through security andthen getting and doing all that
stuff until I just get to theplace where I feel stable,
(29:21):
peaceful and waiting to get onthe plane.
Jeannie Walters (29:25):
Yep.
Ovetta Sampson (29:25):
Right,
Jeannie Walters (29:26):
Here's to more
peaceful travel.
Ovetta Sampson (29:29):
That's going to
be automated.
You're not going to have tolike worry about that, because
companies are going to realizethat that's where and to me,
this is the key of AI.
It's not that whether AI shouldbe there or whether it
shouldn't be there.
It should be, the questionshould be is it applicable or
(29:49):
desirable?
You, and any traveler, wouldlove AI between the door to the
airport and getting throughsecurity.
Wow, because that's thehorrible part of the experience.
It's the most horrible part ofthe experience, and if you can
use AI to make that experiencenot horrible, then AI is worth
(30:12):
it.
Jeannie Walters (30:13):
Yes!
Ovetta Sampson (30:13):
But when you
start putting AI in things that
you shouldn't mess with, like Iwant to order my own drink, or
why do I have to go to a kioskand then I order and then the
order's wrong and then now I gotto go talk to a person and they
don't know what I ordered.
Then I got to tell them what Iordered.
Like all of that that's justaggravating travelers or anybody
(30:34):
.
So you want to use AI in theworst parts of the experience,
and if you use it there in theworst part of the experience, I
guarantee you will be successful.
Jeannie Walters (30:47):
What a
conversation.
I knew we would have so muchfun talking about all of this.
Thank you for being here, andif our audience wants to know
more about you, reach out.
I think you have a specialoffer, so how can they reach out
to you?
Ovetta Sampson (31:01):
Yes, so I do
what I call the CEO Lunch, and
that is where I sit downone-to-one with CEOs to kind of
give them a roadmap to how tonavigate AI implementation.
So I'm running a special onthat half off, as well as any
half off of any of my one-on-oneconsulting.
That's coaching for designers,researchers and marketers who
(31:26):
are wanting to know more aboutwhat they can do to add AI to
their careers.
There's coaching and one-on-onesessions for folks who want to
de-risk their AI engagement and,of course, the CEO lunch that I
have one-on-one.
I'm running that special rightnow for all of Jeannie's
(31:49):
listeners.
So, oh wait, I forgot thecoupon code for that.
Jeannie Walters (31:57):
We'll have it
in the show notes.
So if you're interested in that, that will be in the show notes
for this episode.
Ovetta Sampson (32:05):
Make sure that
it's for your listeners and your
listeners only.
Jeannie Walters (32:09):
Awesome.
Well, thank you so much forbeing here.
It's always so fun to talk toyou.
Keep up the great work.
I think what you're doing is someaningful and important right
now.
Ovetta Sampson (32:16):
I appreciate you
.
Jeannie Walters (32:22):
We are just at
the starting line with all of
this stuff, so it's cool to seewhere we're going and I feel
better knowing that you're atthe starting line with us.
Ovetta Sampson (32:31):
Yes, yes.
I appreciate you.
Jeannie Walters (32:31):
Thank you so
much.
All right, and thank youeverybody.
Thank you for listening andwatching and being a part of
this Experience Action podcast.
I love these CX Pulse Checkepisodes.
If you have suggestions onguests or topics that we should
cover, please let us know.
And, as always, I am here foryour questions.
Don't forget you can leave me avoicemail at askjeannievip and
(32:56):
I might answer your question onthis very podcast.
Do not be shy.
We want to hear from you.
Thanks again to Ovetta, thankyou to all of you and we will
talk to you soon.