
August 5, 2025 37 mins

AI isn't something you turn on like flipping a switch. Chances are, your team is already using it. That chatbot pilot you ran, the sentiment analysis humming along in your stack: those are AI at work. The challenge isn't adoption, it's making it work for you instead of against you.

This week, I’m joined by Jen McCorkle, a data-driven leader with 25+ years of experience in revenue generation and analytics, for a grounded, tactical conversation about how AI is reshaping customer support from the inside out and what you can do to lead the change. We unpack the alphabet soup (LLMs, GPT, ML, agentic AI), share practical use cases, and get real about the risks, biases, and blind spots that come with these tools.

If you’re ready to stop reacting to AI and start directing it, you need to give this episode a listen.

What we get into:

  • Why your AI tool might be hallucinating… and how to spot it
  • The problem with sentiment analysis (and what to do instead)
  • How to start learning prompt engineering without taking a class
  • Guardrails, audits, and smart pilots before you scale
  • The rise of AI governance, and how support leaders can get ahead

Whether AI feels like magic, mayhem, or just another Monday, this episode will give you the clarity to stop reacting and the confidence to start directing.

Keep Exploring:

📬 Subscribe for weekly tactical tips → Get Weekly Tactical CX and Support Ops Tips

🔍 Follow Jen McCorkle on LinkedIn for more insights → Jen McCorkle on LinkedIn

🎙 Keep listening → More Episodes of Live Chat with Jen Weaver

🗣 Follow Jen for more CX conversations → Jen Weaver on LinkedIn

🤖 Sponsored by Supportman: https://supportman.io


Episode Time Stamps:

0:00 Why AI Feels So Overwhelming

3:17 Generative vs. Agentic AI Explained

6:50 AI Use Cases That Actually Work

10:45 Real-Time Agent Support with AI

13:30 The Flaws in Sentiment Analysis

17:04 Hallucinations, Bias & Bad Data

20:30 Building AI Literacy on Your Team

23:40 Always Pilot Before You Scale

27:15 Human QA for AI Systems

32:42 Will AI Replace Support Teams?




Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Don't let it fake its empathy. If the chatbot says, I am so sorry you've gone through that, I can understand, I'm thinking, like, no, you don't. You've just been programmed to say that. I'm looking for that empathy in the human. I don't necessarily want the empathy in an AI.

Speaker 2 (00:16):
Welcome to Live Chat with Jen Weaver, the podcast where top support professionals unpack the tools and tactics behind exceptional customer experiences. In this episode, finally, we're demystifying AI for support leaders. My guest, Jen McCorkle, takes us beyond the buzzwords and into

(00:36):
practical, no-fluff strategies to harness AI in your support team. She is an amazing support leader who has tons of experience with data and AI. We talk about how to start smart with chatbots, how to keep AI tools honest, and how to build the skills your agents need in this AI world that we're in now.

(00:59):
If you've felt lost in the AI alphabet soup, or wondered how to integrate AI without losing the human touch, whether that's in your contact center, in your support queue or in your own workflow, this episode is for you. Before we get started, though, our QA tool, Supportman, is what makes this podcast possible. So if you're listening to this

(01:22):
podcast, head over to the YouTube link in the show notes to get a glimpse. Supportman sends real-time QA from Intercom to Slack, with daily threads, weekly charts and done-for-you AI-powered conversation evaluations. It makes it so much easier to QA Intercom conversations right

(01:45):
where your team is already spending their day in Slack. All right, on to today's episode. As we all know, AI is in the headlines all the time right now. We haven't done a ton of episodes on AI, but today I'm here with Jen McCorkle to share some baseline information for customer support leaders who are awash in this new world.

(02:07):
Would you go ahead and just let me know a little bit more about what you do, how you came to this point in your career, and how people can get in touch with you?

Speaker 1 (02:16):
Yeah, absolutely. Thanks, Jen, and it's a pleasure to be on the podcast with you. I'm really excited to share some information about AI and what our customer support teams can do to really leverage it. I am a data-driven leadership expert, and I help organizations and people get comfortable with AI and data to make better decisions. And leadership is not just about the gut and the human and

(02:37):
the emotional element, but it's a lot of using information to tell a story and make decisions. And that information can be both quantitative, like with data and numbers, or qualitative, which is more subjective, about the human element side of leadership. So I really work with leaders across all industries and all functions inside of a corporation to get ahead.

Speaker 2 (02:59):
That's fantastic. I'm so glad you're here. We have a lot of content that you've prepared, a lot of really interesting stuff to talk about, so I want to just jump right into it. Can you give us kind of an intro to why AI matters, amidst all the noise about it? What does this mean for customer support leaders?

Speaker 1 (03:19):
Yeah, it is a lot of noise. And what we say with data is, if data isn't helping you make a decision, it is noise, and AI is the same thing. If AI isn't helping you, it's noise. And AI is a toolbox. It's not a brain, it's really a tool that we use. You know, especially with all of this noise, it's really hard to kind of get that picture of what can it do

(03:39):
for me. And I think a lot of people, they say AI and they're really talking about generative AI, or ChatGPT kinds of things that generate predictive text. So that's what it is. GPT is generative predictive text. So you remember a while ago, like, yeah, I remember a long time ago, when you first started texting on your phone and it would start to pop up the words? Oh yeah, that's generative

(04:02):
predictive text. Okay.

Speaker 2 (04:04):
But that wasn't based on AI necessarily.

Speaker 1 (04:07):
Oh, yeah, absolutely.

Speaker 2 (04:09):
Oh, okay, I had no idea.
Yeah, okay.
So when you say AI, what's the simplest way to explain kind of the GPT, the LLMs, all the alphabet soup, for support teams that maybe aren't super technical?

Speaker 1 (04:22):
That's a great question, because AI is artificial intelligence, and it's in many, many flavors. So we talk about GPT, which is generative predictive text. It's basically a large language model that's been trained on infinite amounts of data to help this AI algorithm understand how to think like a human. And that's what AI does, is it?

(04:43):
How do you think like a human? So some of the flavors are, and we hear agentic AI a lot. It's a really big buzzword right now. And agentic AI are tools that act. So this is like trigger workflows, auto-close tickets,

(05:03):
automated ticket routing based on intent. Those are agentic things. They act, whereas generative AI, or GPT, generates content. And what we've seen is we've moved from that predict-the-next-word in my text on my phone to being able to write articles and books and generate art and video and audio. So it's really moved so quickly. I mean, it's like every six months we're making a major jump

(05:24):
in what AI is capable of doing.

Speaker 2 (05:29):
Some of these things, that's kind of in the future, maybe with the voice. But what we're seeing now with AI in customer support is definitely chatbots. I've heard more buzz, though, about how AI can help us internally with our operations on support teams. What are some of the use cases you've seen that maybe work really well?

Speaker 1 (05:49):
Obviously, there's the agentic AI and the routing of tickets and so forth, and generative AI being able to write scripts or write FAQs, or generate those kinds of things that it would have taken hours and hours and hours of human time to go through and look at all of these calls and figure out what are the most common kinds of questions or concerns we're getting, and then what was

(06:10):
the thing that people understood the best, what was the best response. That's how AI can really speed that up and generate; that is, taking massive amounts of data and then being able to say, here's the summary of what we found and the next best response. Next best task is another thing for agents: as an agent is on a call, being able to pop up right on the

(06:31):
screen in the tool, this is your next best action to take. So, for example, I've worked with call centers in telecom, and one of the next best actions we had been working on is, as you're talking with Jen Weaver and she's talking a little bit about her telecom needs, what's the next best promotion that would be perfect for you to help you save money, drive higher

(06:51):
customer satisfaction, get you embedded in adopting all of the different products and services that we offer.

Speaker 2 (06:57):
So that's an AI thing, being able to pop that up for an agent so that it's right there. Yeah, and as somebody who's been on chat and phone as the customer support agent, and as a customer, it's very hard to think through that kind of data. What tier is the customer at? What are their needs?

(07:17):
What is all this data? While I'm talking to them, right, it's like I don't have that many streams in my brain. So just being able to have something compute that for me does make a lot of sense. But then the AI, whatever it is, is not speaking to the customer. It's presenting it to me as the human and making me better.

Speaker 1 (07:37):
Absolutely. And you know, Jen, I've been in data and analytics roles since 1996. I'm coming up on 30 years of a career in analytics and data, I know, right? And we were doing artificial intelligence and machine learning back in the 90s in colleges. This has been around for decades. And I think one of the

(07:57):
things that I would stress for anybody who's watching that might be in the analytics team, the BI team, the business intelligence team, any of the data science teams, is go sit and watch your agents. Do your ride-alongs with your agents and watch them on the phone. What has surprised me the most, especially in very, very large data centers or very, very large call centers, is you've

(08:20):
got agents with five, six things going on on two different screens, and they're writing this up while they're checking an address and they're doing these different things, and it is amazing the kind of multitasking these agents are doing. Yeah, and as a customer, I'm not seeing that, and so I'm just thinking, there's such a delay, why is it taking them so long,

(08:40):
exactly? And, as an analytics person, we don't always understand why. We put this algorithm together, we're popping this next best action, we're giving you what you want. Why aren't you using it? And when you go sit down and watch, you're like, oh yeah, this needs to be much simpler than I made it.

Speaker 2 (08:56):
I think a lot of call center employees definitely feel like the tools and support that they receive are not necessarily tailored the way they would be if someone had watched them actually work.

Speaker 1 (09:07):
Yeah, exactly. So let's talk a little bit about TLAs, three-letter acronyms. I started my career at IBM, and everything was a TLA, a three-letter acronym, at IBM. But let's talk about that, because I think it's important, as somebody in a support role or maybe somebody that's not technical, to be able to identify what kind of AI somebody is talking about.

(09:27):
So we talked about agentic AI, and we talked about GPT, generative predictive text, where the most popular ones right now are ChatGPT and Claude. What else are we looking at? Yeah, so NLP. There's actually two different kinds of NLP. There's natural language processing, and then there's the other one, which is neurolinguistic programming.

(09:50):
Two different things, same acronym.

Speaker 2 (09:53):
Great, that makes it easier.

Speaker 1 (09:55):
I remember when I was working with a company that scales educational content and materials, and working with the marketing team, I said something about NLP, and this person says, you know, I actually have a master's degree in NLP. I'm like, you do? She was the email content specialist, and so in her job, neuro-linguistic programming was

(10:16):
very important. And then we realized, it's funny, we had two NLPs that were very, very different. But natural language processing, this is what we use to help AI understand things and speak like a human. So this is understanding my order didn't arrive, and AI understanding what that meant. Having a voice AI recognizing

(10:40):
I want to cancel my subscription, and being able to recognize that I want to cancel my subscription is to be routed here. So that's how we use NLP to make GPT think and understand like a human. LLMs are large language models, so this is massive amounts of content that GPTs are trained on.

(11:00):
So I think what's interesting is, with ChatGPT, when they were training the model at OpenAI, they were adding things like books and websites and lots of different content in there to train it. But it was also trained on Wikipedia and Reddit, because Wikipedia is the largest source of human-generated content, and it's important that it learns how to talk like a human.

(11:23):
So that's why they use Reddit and Wikipedia, to understand how humans, the everyday human, would write and type and talk.

Speaker 2 (11:30):
Yeah, with not-perfect grammar, with some inconsistent capitalization, and just the syntax, probably things we don't even think about as human beings. But it's probably impossible to separate the syntax, the grammar, the way that people write, from the content. So you're feeding it both the content and the style,

(11:51):
and then it's using that information. Is that related to what we talk about as hallucinations? It's a term I've heard.

Speaker 1 (12:01):
Maybe we're getting off track? Yeah, no, hallucinations and bias are definitely there. And think about it as a human. We hallucinate things sometimes, by having a missed memory, or, you know, I didn't remember it that way, and I think I'm correct until I find out I'm not correct.

Speaker 2 (12:18):
Yeah, or if I'm confident that I know a fact, until that's challenged, I still am confident about it, even if it's incorrect.

Speaker 1 (12:27):
Right, and so that's kind of what we talk about with hallucinations: sometimes AI, ChatGPT, generates things that don't exist. For example, I did some data visualizations for a company that I'm consulting with, and I fed them into ChatGPT to see how well it could summarize the visualizations that I created and generate content like an article or a

(12:49):
summarization for an executive. And as I was checking the data, every single data point was wrong. Every single data point that it saw on the chart was wrong. So I would do things like 34%, it put 39. I don't know if it's just the way that it processed the image in its little brain, or if it was trying to subtract things

(13:09):
. I have no idea where it got it, so you have to be very careful.
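
The kind of check described here, reading every AI-reported figure back against the source data, is easy to script. A minimal sketch in Python; the metric names and numbers below are invented for illustration:

```python
# Compare AI-extracted figures against the source data behind a chart.
# Both dictionaries are invented examples; in practice the "truth" side
# comes from whatever system produced the visualization.
truth = {"Q1 churn": 34, "Q2 churn": 28, "CSAT": 91}
ai_summary = {"Q1 churn": 39, "Q2 churn": 28, "CSAT": 91}

def find_mismatches(truth, extracted):
    """Return every metric where the AI-reported value differs from the source."""
    return {
        metric: (truth[metric], extracted.get(metric))
        for metric in truth
        if extracted.get(metric) != truth[metric]
    }

for metric, (expected, reported) in find_mismatches(truth, ai_summary).items():
    print(f"{metric}: source says {expected}, AI summary says {reported}")
```

Anything the function returns goes to a human before the summary ships, which is exactly the double-checking described in the anecdote.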

Speaker 2 (13:12):
What did you do with that? Did you go in and manually edit those?

Speaker 1 (13:16):
I highlighted everything and showed them to the client. This is why we have job security. This is my job security. You can't just feed it in, and that's what I was finding: people will feed it in and they're like, this is great, it's a great summary, and they never checked it. So think of your AI tools, your collaborative tools that support

(13:38):
you, think of them as somebody who just started on the job, and you've just got to double-check everything until you're really, really confident that it's doing it right. Oh, the other thing we talk about is AI and ML. That's the other acronym that I wanted to talk to you about. ML stands for machine learning, and machine learning is how we program AI to think like a human. So it's training models, testing models, having

(14:02):
it do the predictive analytics, creating these algorithms and so forth. So we talk about AI and ML, machine learning, and we talk about AI as ChatGPT, and we talk about AI as agentic AI. And so when somebody says AI, it's important to ask them, what kind of AI are you talking?

Speaker 2 (14:18):
about LLM, GPT, ML, different kinds of AI.

Speaker 1 (14:24):
Yeah, for me, as a data scientist myself, when I talk about AI and ML, I'm talking about creating predictive algorithms that will predict customer churn, or creating algorithms that will identify agents that might be able to close tickets versus not close tickets, or upsell versus not upsell. So when I think about it, I'm thinking about an algorithm I'm developing that's going to

(14:44):
predict something that we could then take an action on. So it's a prescriptive kind of output.

Speaker 2 (14:51):
Yeah, and what I'm hearing you say repeatedly is that it's about taking actions.

Speaker 1 (14:55):
If it's helping you make a decision, if it's helping you take an action, that's where it's really important. And that's the intersection of what I do with data-driven leadership: it's not just about the data and the analytics, but the AI, because we use AI every day to get information. So how do I take my data and my analytics and everything I'm doing in AI and blend it together so that I am the

(15:16):
strongest leader I can be? How can I future-proof my career by using these tools collaboratively?

Speaker 2 (15:22):
Yeah, so I would love to dig into that more. Of all that alphabet, or all the tools that are available, what flavor do you see as the low-hanging fruit that any support organization or support leader can adopt pretty quickly?

Speaker 1 (15:42):
Yeah, I'd be surprised if people aren't using chatbots. I use one on my website, and I'm a solopreneur, a one-person company, and I use it on my website a lot just to automate those quick things that are coming in that you can quickly answer. It's that low-hanging fruit, that tier one kind of stuff like password resets, using your chatbots to do those kinds of things.

Speaker 2 (16:01):
And that has an immediate ROI. When we implemented a chatbot, it took, I don't know, like 70% of our conversations, which for our support team was an immediate relief of hours spent that we're now putting toward more valuable stuff.

Speaker 1 (16:17):
Yeah. Knowledge bases and FAQs: using your AI tools to research and find out what kinds of conversations our agents are having, and how can we put together the best knowledge base and the best FAQ for our customers, and being able to get that back on the website. Or big successes that teams you've worked with have run into, maybe because of the

(16:46):
way the AI was set up, or maybe as a lesson for the rest of us? Yeah, that's a really great question. You know, thinking about it in terms of call centers, one of my, I shouldn't say favorite, but one of those things that kind of makes you go hmm a little bit, is when we look at sentiment analysis. So when you do a sentiment analysis, what's happening is AI

(17:06):
or machine learning algorithms are looking at the words, the transcript words, to see where there are positive, negative or neutral words, and then, based on a scoring algorithm, giving things negative scores or positive scores and looking for phrases and words. It then is going to come up with a sentiment number, and that sentiment number will then translate into positive,

(17:28):
negative or neutral. And I think a lot of agents have seen that: was this a positive sentiment or a negative sentiment? So they see the positive, negative and neutral, but they don't always understand how that was generated, how that was actually derived in the algorithm. And so, as we were working on these things, we would report out as an analytics team that an agent had particularly low sentiment, and then they

(17:52):
would go listen to the calls, you know, supes would go listen to the calls, and then find out, like, this is fine, what's going on? So as we started looking through it, we found two things that were really interesting. The sentiment analysis currently is trained on transcribed information, so it's not getting tone and inflection. So if you're sarcastic and you made a sarcastic comment, it might come off as very negative in the call, but it was actually

(18:13):
sort of a thing that you were using to build a rapport relationship quickly with your customer. So it doesn't do tone and inflection. But what was really interesting is the word cancel. I'm working with the same telecom company that I worked with. Cancel comes in as a negative.
Speaker 2 (18:29):
Cancel my subscription. I don't like you anymore.

Speaker 1 (18:32):
Yeah, exactly. So if you're working with, like, sales and retention analytics, and somebody says cancel, that's a negative thing. But for troubleshooting and repair, cancel can be a good thing. Somebody that had an appointment for a tech to come out to their home, and the agent was able to resolve their problem on the call: would you like me to

(18:52):
cancel your appointment, since I fixed your issue? Cancel was coming up as negative when you thought it was a positive customer outcome. A lot of these tools aren't strong enough to bifurcate the algorithm to say, if it's this kind of a call, think of sentiment this way, and if it's that kind of call, think of it that way. So a lot of these tools are what we call black boxes.
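
The word-level scoring described above can be sketched in a few lines. This is a toy illustration, not any vendor's actual algorithm; the lexicon and weights are invented, and it reproduces exactly the "cancel" failure described here:

```python
# Toy lexicon-based sentiment scorer: look up each word's weight,
# sum the weights, map the total to a label. The lexicon is invented.
LEXICON = {
    "great": 1, "thanks": 1, "fixed": 1, "perfect": 1,
    "cancel": -1, "broken": -1, "frustrated": -1, "angry": -1,
}

def sentiment(transcript: str) -> str:
    # Strip basic punctuation so "cancel," still matches the lexicon.
    words = transcript.lower().replace(",", " ").replace(".", " ").replace("?", " ").split()
    score = sum(LEXICON.get(w, 0) for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

# A successful repair call: the agent fixed the issue, so cancelling the
# tech visit is good news, but "cancel" appearing twice outweighs "fixed".
call = "I fixed your issue, would you like me to cancel the appointment? Yes, please cancel it"
print(sentiment(call))  # negative, despite a happy customer
```

Splitting the lexicon by call type, so "cancel" scores negative on retention calls but neutral on repair calls, is the bifurcation many black-box tools can't do.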

Speaker 2 (19:11):
If you hear somebody say black box in relationship to AI, it means something's happening behind the scenes that you don't know, and nobody knows, right? Like, even the developers of that tool put stuff in and get stuff out, and in between, no one knows what's going on. That's exactly right. Yeah, that makes total sense. And so we were talking about inputs and outputs.

(19:31):
Have you seen support teams input bad data, train it on bad content? What does that look like, I guess, and how do we avoid that?

Speaker 1 (19:41):
When I started my career at IBM, I was working with outsourcing clients, and one of the things we would outsource would be help desks and technical support for their customers, so very large companies. They would take over their desktop support and help desks so that their customers were getting better support from IBM. We worked with those agents back in those days, and we would add these forms so that they could ask the customer these questions. And what we found is a very common agent behavior is

(20:05):
they pick the first thing on the drop-down, because they've just got to go faster, or push to get on the next call, right? And it's funny, because it happens today. In 30 years, that agent behavior hasn't changed.

Speaker 2 (20:16):
When you're measured on the number of calls you do per hour, it disincentivizes you from creating good data.

Speaker 1 (20:23):
That's correct. And so that's why it's really important as a data team to understand all of those kinds of pressures that are coming in and how they can skew the data, and to be able to set it up correctly. So one of the companies that I was working with previously was getting ready to start doing AI-generated text and notes from the call, so that the agent didn't have to do post-call work to stick it in.

(20:44):
And so it was a little bit more of an accurate way of getting the right information, not having to rely on the agent, and put the agent where they really are the most valuable, which is connecting with their customers and making those relationships with our customers. So yeah, it's a garbage in, garbage out thing. If you have garbage coming into your machine learning algorithm, you'll have garbage coming out.

Speaker 2 (21:02):
What's a guardrail that you would recommend a support team use to try to help prevent an AI agent, or even an internal ops tool, from hallucinating the value away?

Speaker 1 (21:17):
I think especially for support teams, as AI starts to replace those tier one sort of low-hanging, repeatable tasks, in that kind of a case where we see these things being automated from the tier one support, repurpose that time into having them check and validate the outputs from the

(21:39):
algorithms. So if the customer says this and the chatbot recommends this action, get those logs and make sure that they're being checked by a human, to ensure that the decision your AI is making is the right decision.
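
One lightweight way to run that human check is to queue every low-confidence bot decision for review, plus a random sample of the confident ones. A sketch, assuming log records shaped like the dictionaries below; the field names are invented, and the real ones depend on your chatbot platform:

```python
import random

# Hypothetical log records; real fields depend on your chatbot platform.
logs = [
    {"id": 1, "customer_msg": "My order didn't arrive", "bot_action": "send_tracking_link", "confidence": 0.92},
    {"id": 2, "customer_msg": "Cancel my subscription", "bot_action": "close_ticket", "confidence": 0.41},
    {"id": 3, "customer_msg": "Reset my password", "bot_action": "send_reset_email", "confidence": 0.97},
]

def pick_for_review(logs, sample_rate=0.1, min_confidence=0.6, seed=None):
    """Queue every low-confidence decision for a human, plus a random
    sample of the rest, so confident answers still get spot-checked."""
    rng = random.Random(seed)
    low_conf = [l for l in logs if l["confidence"] < min_confidence]
    rest = [l for l in logs if l["confidence"] >= min_confidence]
    k = max(1, int(len(rest) * sample_rate)) if rest else 0
    return low_conf + rng.sample(rest, k)

for entry in pick_for_review(logs, seed=42):
    print(entry["id"], entry["bot_action"])
```

Whatever the reviewers flag goes back to whoever tunes the bot, which is the feedback loop described a little later in the conversation.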

Speaker 2 (21:53):
So this brings up a whole new career for AI babysitters, basically, and I don't think we have all those roles and titles defined. I think it's emerging, and maybe I'm getting ahead of myself a little, but we could talk about how does a customer support leader future-proof their career so that they're building these

(22:16):
AI skills that they need to then have these roles?

Speaker 1 (22:19):
And this applies to everybody: those of us who are not using AI will be replaced by those who are. And I don't think that AI is going to replace people. I think it is going to reshape our roles. As you said, these tools are still becoming part of our culture. We're redefining: what does this all mean? How does it all look? And again, I'm speaking from what I know today. This could all be different tomorrow. But again, those

(22:43):
simple, low-level tasks can be automated, and that's where we can then have our agents going through and checking the outcomes.

Speaker 2 (22:52):
Customers can tell when bots are faking empathy. So we maybe should build bots to be bots and not try to pretend, because more and more customers are aware of when they're talking to a bot. But also, customer and public attitudes about AI are shifting rapidly,

(23:13):
and so some people are beginning to trust it more, some people are beginning to trust it less, so we have these varied attitudes that customers are coming at it with. How do we prevent ourselves from overtrusting the AI to just do whatever it's going to do? How do we calibrate those chatbots or those tools for a changing customer perspective?

Speaker 1 (23:30):
I think you hit the nail on the head. The first thing is, don't let it fake its empathy. If the chatbot says, I am so sorry you've gone through that, I can understand, I'm thinking, like, no, you don't. You've just been programmed to say that.

Speaker 2 (23:44):
Right, and that's annoying and unnecessary.

Speaker 1 (23:47):
Yeah, it would be like if I was talking to an agent that said, I am so sorry this happened to you, let me fix your problem, and there's no empathy in that voice. I'm looking for that empathy in the human. I don't necessarily want the empathy in an AI. You're not trying to be a person, you're trying to be an AI. Just be an AI.

(24:07):
The biggest thing is understanding how something is trained and knowing where it stops being a human, because it just tells me I've got a great idea. When I ask it to find my blind spots, it will, so you have to make sure that you are prompting it to do that. So, thinking about your chatbots, and setting up a chatbot from a customer support perspective: making sure that

(24:29):
you're being transparent, that this is a chatbot, here's what I can do, here's what I can't do, but giving people the opportunity to opt out of that chatbot experience and talk to a human. Our aging communities hate AI and computers. They want to pick up the phone, dial the number and talk to a person.

Speaker 2 (24:46):
I can't imagine wanting to call a person, as a millennial, but my parents want to.

Speaker 1 (24:51):
Yeah, and my grandmother, she's 90 and she doesn't get on the phone anymore, but as of a few years ago, she would get very, very angry when she would have to go through the IVR or the automated kind of things, and she would not use a chatbot to save her life. So giving people the opportunity to opt out of your chatbot to get to a customer service agent is, I think,

(25:12):
really important in understanding what does our customer want. And sometimes we try to automate it because we think about how it's going to save us time and how it's going to reduce our time per call and our first call response and all that kind of stuff. We're thinking about it from the metrics perspective and the KPI perspective, but we're not thinking about what does our customer want. And that's really the most important thing: when

(25:33):
we talk about CX, we forget what the customer wants.

Speaker 2 (25:37):
Yeah, and I think, especially in the world of tech, we're very accustomed to working with not just AI but various computers and tools and IVRs, whereas I think we lose touch a little bit with your average customer who maybe doesn't sit at a computer all day long and maybe has very little trust or proficiency. Not even older folks, but just

(26:01):
anyone. I tend to think everybody works with computers, right, but coming back to reality, not everyone does.

Speaker 1 (26:07):
Not everybody does. That's very true. And that's where the governance is what's really important: that you, as a customer support team, have governance put in place.
Speaker 2 (26:17):
So tell me more about that.
Do you mean, like, in the future my title might be customer AI governance technician or something like that?

Speaker 1 (26:28):
Yeah, specialist, somebody that is part of that QA process.
How do you do QA and QC for your chatbots and your AI responses? It's having that human in the loop to make sure that they're catching those errors. As somebody that's programmed AI, I love when somebody tells me that something's doing something wrong, because I've set it up the best way I can, and the more information I get from

(26:50):
the humans about where it's failing, the better I can be as a developer to have it catch up.

Speaker 2 (26:59):
If I wanted to prepare myself for a future, if I'm a support leader and maybe I want to move into AI governance, are there courses you recommend, or ways to become more AI literate and move in that direction?

Speaker 1 (27:11):
Seems like there's a course for everything out there, and honestly, you don't need them all. I think it's really understanding the tools that you're using in your support groups, understanding how they're being implemented and how they're being scaled in your organization, and then, maybe it's less about taking the class and more about learning how to ask. So asking for things like, I'd like to work with you on tuning your AI

(27:36):
algorithms or tuning your AI models. I want to be a part of the QA process. That's the hands-on kind of stuff.

Speaker 2 (27:43):
That's really helpful, because I've also heard this advice, and I think it's good advice unrelated to AI: become an expert on what your team uses. If your team uses Zendesk, become a Zendesk expert, right? Get whatever certification you can. If your team uses Intercom, really dig into Fin and how that works. It's sort of a variation on that old saying my mom used to

(28:03):
say: do what you can, where you are, with what you have, right? And then you can learn a lot from learning a specific tool. Totally, and these tools are changing every day, so get used to using them.

Speaker 1 (28:16):
Get a chat GPT account or a cloud account and
ask it.
Here's what my job is, and I'm worried. Content engineering
is a big thing. I'm worried about what's going to happen to
my job. Give me five ways I can future-proof my role as a
tier one support person, or a tier three, or a supervisor,
or a leader. Help me, you know, identify things I can do
today to

(28:39):
start taking action.

Speaker 2 (28:40):
That's great, I love that. And you used this term that was
brand new to me not long ago, prompt engineering, and I've
kind of waded into this world. But can you tell me a little
bit more about what that is?

Speaker 1 (28:56):
Prompt engineering is how you ask AI to do something without
introducing...

Speaker 2 (29:04):
Your own bias. Prompt engineering is a technique you can
learn, where I'm giving it prompts that challenge my own
biases.

Speaker 1 (29:16):
Do you remember we talked about garbage in, garbage out a
little while ago? About how, if you feed your algorithm bad
data, you get bad algorithms out? This is what prompt
engineering is. If you give it a very simple prompt, I don't
want to call it a garbage prompt, but a prompt that isn't
really what you want, it's going to give you what you asked
for. So prompt engineering is going a level deeper. I'm
going to be very transparent and kind of

(29:37):
vulnerable right now: I am horrible at shopping for clothes.
I don't like shopping. I walk into the store and I look
around and I'm like, okay, there are all of these things,
and I don't like it. Can't do it. So if I asked my ChatGPT
tool to recommend some outfits that would look nice on
screen, that are in a certain price

(30:01):
range, and this is my color palette, like I like blues and
greens and blacks, right? It's going to give you a whole
list. But is that really what you want?
So when I did my prompt, and I literally, seriously,
actually did this, I told it: I hate shopping, I don't know
what anything is called, and I don't accessorize.

(30:21):
I prioritize comfort over fashion or fit. I want things that
look nice, that stand up well to being washed, that will
last. Okay, now I'm up to like eight things. I'm telling it
what I want it to project about me, being confident in me
and what I'm asking it to help me do. I want it to be a bit
trustful, and I need it to tell me where

(30:45):
everything is and what it is called. Now, it's interesting,
because it gave me this stuff and I'm like, this is
fantastic, I love it. But every link I clicked on was wrong.

Speaker 2 (30:55):
I've had that experience too.

Speaker 1 (30:56):
But I asked it to describe the piece of clothing that it's
recommending so that I can go do my own searches, and it
helped me so much.
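Jen's shopping prompt illustrates a general pattern you can practice: state the goal, then layer on constraints, personal context, and the output format you want. A minimal sketch in Python (the helper and field names here are purely illustrative, not from any particular tool):

```python
def build_prompt(goal, constraints, context, output_format):
    """Assemble a layered prompt: goal first, then constraints,
    personal context, and the shape of the answer you want."""
    sections = [f"Goal: {goal}"]
    if constraints:
        sections.append("Constraints:\n" +
                        "\n".join(f"- {c}" for c in constraints))
    if context:
        sections.append("About me:\n" +
                        "\n".join(f"- {c}" for c in context))
    if output_format:
        sections.append(f"Answer format: {output_format}")
    # Blank lines between sections keep each layer distinct.
    return "\n\n".join(sections)

prompt = build_prompt(
    goal="Recommend outfits that look nice on camera",
    constraints=["machine-washable", "no accessories",
                 "comfort over fashion"],
    context=["I hate shopping", "I don't know what anything is called",
             "my palette is blues, greens, and blacks"],
    output_format="describe each piece so I can search for it myself",
)
print(prompt)
```

Each layer removes a way for the model to guess wrong, which is what Jen's eight-item prompt was doing by hand.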

Speaker 2 (31:06):
Yeah, you got specific, and it was useful for translating
and giving you things to search, even if the actual links
didn't work. I did this with shoes not long ago and I was
like, none of these are even available, they're all out of
stock, like, what? It doesn't give you the right links, but
it does give you information for you to use. That's really
interesting. Back to kind of the big question that might be

(31:28):
on my mind, and hopefully people are thinking about this:
will AI replace us as support people?

Speaker 1 (31:38):
I think it's going to replace some of the work we do. I
think that AI is replacing 70 to 80% of common FAQ tickets.
They're handling it in a chatbot world, or other ways that
customers can log in and do things on their own. Routine
order status checking: you don't need to call an agent to
find out where your thing is; it can actually be sent to
you.

(31:59):
Your tracking logs, all that kind of stuff, you can log in
and do that. Chatbots are perfect for that. And then we've
already talked about things like password resets, anything
that's really, really simple, that can be automated. But
there are places where the human element is essential. As we
talked a little bit about relationship-building: if a device
fails, if something is not working, AI can't navigate that
or build the trust.

(32:20):
If something stopped working, or something upset a customer,
what about, and I mean this never, never happens in the CS
world, somebody calling in angry because they've called in
five times? AI is not going to be able to handle that
customer, the one that's had it, that called in five times
and isn't getting their resolution. AI will probably not be

(32:42):
able to handle that. It'll probably just tick that person
off a little bit more. Then there's tier three level
support, the stuff where you really need a person to
troubleshoot.
What were we talking about when I was talking with the
developer? Somebody asked ChatGPT to develop code, and the
code was beautiful, it was perfect, it was laid out
correctly.

(33:03):
When they actually implemented it, it didn't work. None of
the code worked. So I think that's where we see a lot of
these things, where it looks like it can do the work, but
you really need the human to be able to troubleshoot it,
implement it, fix it. You also have things where, and this
probably never has happened to you, somebody says something,
but that's not what they meant. As an agent, you can tease
those things out, like: is this

(33:26):
really what you meant? Is this really what you were looking
for?
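The division of labor Jen describes, bots for routine lookups and humans for trust and troubleshooting, can be written down as explicit routing rules. A hedged sketch (the intent names and thresholds here are invented for illustration, not from any real product):

```python
# Intents simple enough to automate end-to-end.
BOT_SAFE_INTENTS = {"order_status", "password_reset",
                    "tracking_info", "faq"}

def route(intent, prior_contacts, sentiment_score):
    """Decide whether a contact goes to the chatbot or a human.

    prior_contacts: how many times this customer already reached out
    sentiment_score: -1.0 (angry) .. 1.0 (happy), from any
        upstream model (with all the caveats sentiment carries)
    """
    # A customer on their fifth call about the same issue should
    # never be bounced back to a bot.
    if prior_contacts >= 3:
        return "human"
    # Strongly negative sentiment needs a person to rebuild trust.
    if sentiment_score < -0.5:
        return "human"
    if intent in BOT_SAFE_INTENTS:
        return "bot"
    # Anything ambiguous or complex defaults to a human.
    return "human"

print(route("order_status", prior_contacts=0, sentiment_score=0.2))
print(route("order_status", prior_contacts=5, sentiment_score=0.2))
```

The point of making the rules explicit is that they become auditable: you can see, and change, exactly when the bot is allowed to answer.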

Speaker 2 (33:29):
Like if there's a typo in a customer's reply: all along
they've been talking about one thing, and then they have one
reply that's like, "I do not want a refund," but the "not"
was an accidental typo. They do want a refund. I can gather
from context, from the rest of the conversation, that they
do, and check in about that. Yeah, complex emotional
problems.

Speaker 1 (33:47):
People are still going to need to do that. And then,
obviously, we talked a little bit about agents needing to
supervise AI and handle those trust moments, being part of
that loop where humans are moving up the value chain and
letting AI do the stuff down here. We're moving up the value
chain to where we can provide better value, handling these
complicated, highly charged, highly emotional kinds of
interactions with our customers.

Speaker 2 (34:09):
If you were advising a support leader setting up AI
governance today, right now, what are the first three steps
on your checklist?

Speaker 1 (34:18):
So if I'm setting up AI and I'm getting ready to scale it or
adopt it inside of my organization, the first thing that I
want to do is ask for transparency from the vendors that are
creating these tools, or from your internal IT or
development teams that are creating them. Ask for the
transparency: how is this being trained? What data is it
being trained on? Sometimes we don't know what we don't
know, and there might be a

(34:40):
better data set to train it on. So ask those kinds of
questions, and present it as: I want to help, I want to make
this the best tool we could possibly use to automate things
and make our customers happy and keep our agents happy. Ask
for ways to audit the decision logs, and ask vendors for
ways to do explainability.

(35:02):
How can I explain how this is working to other people in our
organization? Then guardrails: setting up those guardrails
for CX, so, company policies. Sometimes it gets it wrong,
especially if you've had two or three different policy
changes; sometimes the AI will go get the old one. So make
sure that things like warranty and refund policies are the

(35:24):
most up-to-date, and do that kind of check. Then building AI
literacy for your agents: helping your agents understand how
to use AI on the front lines and how to collaborate with
their AI tools. When should I use AI-directed content? When
should I use ChatGPT or a general tool? Teach them how LLMs
work, what a large language

(35:44):
model is and how it works, so they understand the risks and
they don't blindly trust the outputs. And then pilot first,
before you scale. I think a lot of times we make the
mistake, because this tool comes in and the vendor says it's
going to fix every problem for us, and we implement it at
scale. Start with pilots. Start with certain sites or
certain kinds of calls, you know,

(36:07):
if you're a multi-site organization, or if you're a group
that gets multiple kinds of calls coming in, start with one
of those segments of call center agents to ensure that it's
working correctly. Pilots are perfect. And involve your
agents in the outputs and the tuning and the QA process that
we talked about, and help them get involved as well.
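One of the guardrails above, catching answers grounded in a stale policy version, pairs naturally with the decision-log auditing Jen mentions. A minimal sketch, where the version table, field names, and escalation behavior are all assumptions for illustration:

```python
from datetime import datetime, timezone

# Maintained by the CX team as policies change.
CURRENT_POLICY_VERSIONS = {"refund": 3, "warranty": 2}

# In practice this would be persistent, queryable storage.
audit_log = []

def answer_with_guardrails(topic, source_version, draft_answer):
    """Release an AI-drafted answer only if it was grounded in
    the current policy version; log every decision for audit."""
    current = CURRENT_POLICY_VERSIONS.get(topic)
    ok = current is not None and source_version == current
    audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "topic": topic,
        "source_version": source_version,
        "current_version": current,
        "released": ok,
    })
    if ok:
        return draft_answer
    return "ESCALATE: answer was grounded in an outdated policy"

print(answer_with_guardrails(
    "refund", 3, "Refunds are available within 30 days."))
print(answer_with_guardrails(
    "refund", 2, "Refunds are available within 14 days."))
```

The audit log is what makes the explainability conversation with vendors concrete: for any released answer, you can show which policy version it was grounded in and who approved the check.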

Speaker 2 (36:25):
It seems like that's a great opportunity to use that
percentage of agent time that's not in the queue, because
it's good not to overload agents or specialists with too
much queue time. So they can do some AI training and AI
governance a little on the side. Cool, great. Well, is there
anything else? I think we've covered everything.

(36:45):
I mean, so much. I almost feel like this could be two
episodes that we just talked through.

Speaker 1 (36:50):
Yeah, you know, I think the biggest thing, the last thing I
would say to the CS and CX leaders that might be listening
to this, is: AI is already in your support stack. You just
need to learn to lead with it.

Speaker 2 (36:59):
Yeah, I love that: learn to lead with it. Fantastic. Well,
thank you for being here.

Speaker 1 (37:04):
This was so much fun.
Thank you for having me.
This was really fun.
Thanks, yeah.

Speaker 2 (37:09):
Huge thanks to Jen for being here and educating us on AI. I
hope you're leaving with clear, actionable steps to bring AI
into your support operations, or to better manage the AI
tools that are already being implemented, without
sacrificing empathy or accuracy. If you enjoyed this
conversation, please do

(37:29):
subscribe to catch the next episode wherever you're
listening or watching, and if you know another support
leader facing AI overwhelm, please do pass this along.
Thanks for listening, and we'll see you next time.