Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Welcome to the deep dive.
Okay. So you've sent us quite a pile
of sources on, well, a really fast shift
happening in enterprise software.
Our job, as always, is to give you
that strategic roadmap, you know, the core knowledge
so you walk away informed, but hopefully not
overwhelmed.
And today, we're digging deep into something pretty
revolutionary,
(00:21):
integrating the GPT three API
into customer support. Yeah. We're really talking about
moving past those, frankly,
irritating old chatbots,
the rule based ones Oh, yeah. Those. And
stepping into the era of AI that can
actually, you know, converse. That's the big promise,
isn't it? Moving from something that just causes
frustration to something that, well, actually solves problems.
(00:43):
The sources lay out a blueprint for using
this tech. And it's not just theory, that
blueprint. Yeah. It's measurable. Yep. We found solid
evidence that doing this right, integrating GPT three
properly, can cut customer service response times by,
get this, up to 70%.
70%? Wow. Yeah. And it boosts CSAT scores,
customer satisfaction
significantly. Plus, and this is crucial, it makes
(01:04):
your human support agents more productive. Okay. That's
a lot to unpack. The mechanics and the
actual return on investment. Exactly. So let's start
right there with the,
the core AI advantage. Uh-huh. Because I think
for anyone who's only experienced those old, you
know, keyword driven bots,
the difference GPT three brings is
(01:25):
well, it's huge.
Transformative.
It really is. Conventional chatbots, they just operate
on a strict decision tree. Right. They hunt
for keywords Mhmm. Follow a preset script. And
if the customer says something even a little
bit different And it breaks. Yeah. You get
stuck in that loop. I'm sorry. I didn't
understand that. We've all been there. We have.
But what's the actual operational cost?
(01:46):
For someone listening who runs a large call
center, what does that failure cost them? It's
high. Every time that bot fails, it escalates
straight to a human agent. Right. Which means
that agent is now spending valuable time on
a simple problem the AI should have handled.
GPT three aims to stop that.
It uses advanced natural language processing (NLP)
to grasp context,
intent,
(02:07):
even subtle nuances. Not just keywords. Not just
keywords. Okay. Give us that example from the
sources. The one that really shows this leap
in understanding context because that's the moment you
realize, okay, this isn't just a better bot.
It's a different kind of system.
Yeah. The sources had a great one. A
complex scenario
where a customer asks, how can I reset
my password if I also can't access my
(02:27):
recovery email?
Okay. The double bind? Exactly. A legacy bot
would probably force them down one path or
the other. Yeah. You know, password help or
email help treats them separately. But the real
problem is the link between them. You need
a solution that sees that dependency.
And GPT three gets that connection. It understands
the combined problem and can suggest integrated solutions,
(02:49):
like maybe using security questions or a different
verification method entirely. Which avoids that customer frustration
loop. Hugely.
And it handles those complex tier one issues
without needing a human right away. So beyond
just being smarter, we we get the sort
of cold hard benefits of efficiency and scale.
Like, it runs 24/7.
(03:10):
Never sleeps. That nonstop availability alone is a
big deal. It gets rid of the need
for expensive after hours shift coverage. Right. And
we saw reports of
cost reductions for businesses using GPT three ranging
from 30 up to 50%.
30 to 50%? That's substantial.
It is. And it scales seamlessly. You get
the same quality of response whether you're handling
(03:31):
10 requests a day or 10,000.
Which is crucial for handling those unexpected
spikes in volume. Absolutely critical. Okay. But consistency
under load is one thing.
The other big hurdle or maybe perception
is that automation feels
robotic,
impersonal.
How does this AI scale personalization,
(03:52):
not just efficiency?
This is where it gets really interesting. First
off, it remembers the conversation history. So the
customer doesn't have to repeat Exactly. No more
starting from scratch every single time. It knows
what you talked about five minutes ago. Okay.
That alone makes it feel less transactional, more
like an actual conversation. It does. And it
goes further. It can analyze sentiment. It picks
up on the customer's emotional tone from their
(04:14):
language. So it knows if someone's upset. Yeah.
And it can adapt its response,
show empathy if they're upset, maybe mirror their
excitement if they're happy about something. Plus, it
can tailor its language style to match the
brand's voice, you know, technical, formal, or super
casual. That level of customization, though, suggests you
can't just, like, plug it in and walk
away.
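The sentiment-and-tone adaptation described a moment ago could be wired up along these lines. This is a minimal sketch: the keyword lists are a crude stand-in for a real sentiment model, and the function names and tone instructions are illustrative assumptions, not anything from the sources.

```python
# Minimal sketch: pick a response style based on detected sentiment.
# The keyword lists below are a toy stand-in for a real sentiment model,
# and the tone instructions stand in for a brand style guide.

NEGATIVE_CUES = {"frustrated", "angry", "broken", "refund", "terrible"}
POSITIVE_CUES = {"love", "great", "thanks", "awesome", "excited"}

def detect_sentiment(message: str) -> str:
    """Crude keyword-based stand-in for a real sentiment classifier."""
    words = set(message.lower().split())
    if words & NEGATIVE_CUES:
        return "negative"
    if words & POSITIVE_CUES:
        return "positive"
    return "neutral"

def system_prompt_for(sentiment: str, brand_voice: str = "friendly") -> str:
    """Build the system prompt that steers the model's tone."""
    tone = {
        "negative": "Acknowledge the frustration, apologize, and be concise.",
        "positive": "Mirror the customer's enthusiasm.",
        "neutral": "Be clear and helpful.",
    }[sentiment]
    return f"You are a {brand_voice} support assistant. {tone}"
```

In a real deployment the sentiment label would come from the model itself or a dedicated classifier; the point is only that the detected emotion selects the system prompt before the reply is generated.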
The sources really hammered this point about needing
(04:36):
good instructions.
Prompt engineering
sounds like a whole new skill. It's probably
the most practical takeaway for anyone looking to
implement this. You can't just feed the AI
a vague question like, how do returns work?
You'll get a generic answer
or maybe a wrong one. Probably generic may
be inaccurate. Yeah.
Success really hinges on crafting detailed prompts,
(04:57):
instructions that tell the AI not just what
information to give, but how to phrase it,
what tone to use, and critically,
what exceptions or details to include. And there
was a really good example of that structure.
Right. Instead of just how do returns work,
a good prompt would be something like, explain
our standard thirty day return policy using a
friendly and helpful tone.
(05:19):
Make sure to mention the specific exception for
electronics, which only have a fourteen day return
window. Ah, okay.
That level of detail. That specificity
ensures the answer is accurate, it's on brand,
and it covers the necessary compliance points all
at once. Okay. Moving beyond just answering questions,
the sources also talked about the AI acting
(05:39):
like a like a smart traffic cop for
incoming requests. Intelligent routing. Yeah. Exactly. Like a
digital bouncer or a triage nurse.
GPT three can analyze the content of a
new ticket or chat, figure out what kind
of problem it is, and even gauge how
urgent it seems automatically.
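The triage idea could be sketched like this. In practice the classification would come from the model itself; the categories, keyword lists, and priorities below are assumptions for illustration only.

```python
# Sketch of ticket triage: classify an incoming message and assign a
# priority. A keyword table stands in for the model's classification here.

ROUTES = {
    "billing": {"keywords": {"charged", "refund", "invoice", "payment"}, "priority": "high"},
    "account": {"keywords": {"password", "login", "locked"}, "priority": "medium"},
}

def route_ticket(text: str) -> tuple[str, str]:
    """Return (team, priority) for an incoming support message."""
    words = set(text.lower().split())
    for team, cfg in ROUTES.items():
        if words & cfg["keywords"]:
            return team, cfg["priority"]
    return "general", "normal"
```

So "help, my credit card was charged twice" lands in the high-priority billing queue, while a routine settings question falls through to general support.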
So no more waiting through complex phone menus
or waiting for a human to read and
(05:59):
sort every single email. That's the goal. It
cuts out that initial sorting step. So if
someone types, help, my credit card was charged
twice, the AI flags that. Instantly. It sees
charged twice, recognizes it as a high priority
billing issue, and routes it straight to the
finance or billing team versus something routine like.
Like, how do I change my notification settings?
That gets tagged as a regular priority, maybe
(06:21):
routed to general support or self-service docs. Critical
stuff gets seen fast. Routine stuff doesn't block
the emergency lane. Makes sense. And the last
piece on scale and capability,
global support.
This is a huge pain point for multinationals.
Right? Hiring native speakers for every single language.
Incredibly expensive and difficult, but GPT three has
(06:43):
pretty sophisticated multilingual skills. It can translate support
chats in real time. And not just the
major European languages? No. The sources specifically called
out its ability to handle complex language as
well, mentioning Japanese and Ukrainian as examples while
keeping the technical terms accurate. And these aren't
just clunky literal machine translations like we sometimes
(07:03):
see. That's the key difference. It's designed to
maintain the conversational tone, the idioms, the technical
jargon. It sounds much more natural. So you
could theoretically
have your English speaking support team effectively handle
chats in Japanese or Spanish. In theory, yes.
It drastically expands your reach without needing to
hire specialists in every single language. That's a
(07:24):
massive advantage. Huge.
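The "preserve tone and technical terms" instruction from that multilingual discussion could be baked into the prompt itself. A sketch, where the exact wording of the instruction is an assumption rather than a documented feature:

```python
# Sketch of a translation prompt that asks the model to keep the
# conversational tone and leave listed technical terms untranslated.
# The instruction wording is illustrative.

def translation_prompt(text: str, target_language: str, glossary: list[str]) -> str:
    keep = ", ".join(glossary)
    return (
        f"Translate the following support message into {target_language}. "
        f"Preserve the conversational tone and keep these technical terms "
        f"unchanged: {keep}.\n\n{text}"
    )

p = translation_prompt("Your API key has expired.", "Japanese", ["API key"])
```

Passing a glossary this way is one pragmatic option for keeping brand and product terms consistent across languages.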
Okay. Let's shift gears a bit. Let's talk
strategy security.
The big concerns
and, you know, the the actual results people
are seeing, the nuts and bolts. Right. Implementation
first. It's actually way more accessible now than
people might think. We're deep into the low
code, no code era. Meaning you don't need
a massive
(07:45):
custom software build. Not necessarily.
There are platforms with direct integrations.
The sources mentioned Zendesk AI as one example,
integrating this capability into existing CRM and support
systems.
That makes getting started much smoother. But you
still need a plan. Right? You can't just
flip a switch for everyone overnight. Definitely not.
The clear advice is to start small. A
pilot program. Maybe handle just 15 or 20%
(08:08):
of inquiries initially. Test the waters. Exactly. Focus
on common, fairly simple questions first. Get your
baseline metrics, see how it performs, then gradually
expand the scope. Okay. And cost.
This pay as you go thing based on
tokens, that can sound a bit abstract compared
to a flat subscription.
How should listeners think about the actual expense?
(08:28):
Yeah. A token
is roughly a word or maybe a part
of a word depending on the language. Okay.
The sources showed a typical support interaction like
a full back and forth Mhmm. Might use
somewhere between five hundred and fifteen hundred tokens.
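As a back-of-envelope model of that pricing, the arithmetic looks like this. The per-token rate below is an assumed illustrative figure, not a published price:

```python
# Back-of-envelope cost model for per-interaction token pricing.
# The rate is an assumption for illustration, not a quoted price.

ASSUMED_PRICE_PER_1K_TOKENS = 0.002  # USD per 1,000 tokens, illustrative

def interaction_cost(tokens: int, price_per_1k: float = ASSUMED_PRICE_PER_1K_TOKENS) -> float:
    """Estimated cost in USD for one support interaction."""
    return tokens / 1000 * price_per_1k

# A 500-1,500 token interaction at this rate costs fractions of a cent:
low, high = interaction_cost(500), interaction_cost(1500)
```

Even at the top of that range, the marginal cost per conversation stays well under a cent at the assumed rate.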
And what does that translate to in dollars
and cents? Or I guess, just cents. Usually,
just a few cents per interaction at current
rates. Yeah. It's tiny on a per customer
(08:50):
basis. So when you compare that marginal cost
to the fully loaded cost of a human
agent's time salary benefits
overhead. The ROI becomes pretty clear pretty quickly,
often immediate. That is compelling. But now the
elephant in the room Yeah. Security. We're potentially
dealing with sensitive customer info,
GDPR, HIPAA,
CCPA. These aren't optional. How do you handle
(09:10):
that? You absolutely cannot just send raw sensitive
data to a third party AI. That's a
nonstarter. Okay. So what's the fix? The smart
strategy is designing your system to avoid sending
personally identifiable information (PII)
to the external API. So you keep the
sensitive stuff, credit card number, Social Security numbers
on your own secure systems? Precisely. You process
(09:32):
or store the sensitive PII internally in your
existing secure environment.
When you need the AI's help for, say,
summarizing a conversation or drafting a reply,
you send the data, but use placeholders for
the sensitive bits. Like "credit card number" instead
of the actual number? Exactly like that. It
creates a secure boundary. The AI gets the
(09:52):
context it needs without ever seeing the critical
raw data. That sounds crucial. Is setting up
that placeholder system complex? It needs careful thought
during the initial setup, the architecture phase. But
once it's built right, it's far more secure
than trying to clean up a data exposure
later.
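The placeholder approach could be sketched like this: strip sensitive values before the text leaves your systems, and keep a local mapping so the real values never reach the external API. The two regex patterns are illustrative only and nowhere near exhaustive.

```python
# Sketch of PII redaction before an external API call. Real values are
# swapped for labeled placeholders; the mapping stays on your own systems.
# These two patterns are illustrative, not a complete PII detector.
import re

PATTERNS = {
    "credit card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> tuple[str, dict[str, list[str]]]:
    """Replace sensitive values with placeholders; return text + mapping."""
    found: dict[str, list[str]] = {}
    for label, pattern in PATTERNS.items():
        for match in pattern.findall(text):
            found.setdefault(label, []).append(match)
            text = text.replace(match, f"[{label}]")
    return text, found
```

The AI still gets enough context to summarize or draft a reply, but only ever sees `[credit card number]`, never the digits.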
And you absolutely need safety nets too. Right?
What if the AI just isn't sure? Yes.
Essential.
(10:12):
You need a clear, easy way to escalate
to a human.
If the AI's confidence score for an answer
is low or if the issue is flagged
as particularly sensitive or complex,
it needs to seamlessly hand off. With a
smooth transition message. Yeah. Something like, to make
sure you get the most accurate help, I'm
connecting you with one of our human experts
now. Mhmm. That maintains trust and avoids frustration.
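That confidence-gated handoff could be sketched as a simple decision function. The threshold value and the shape of the AI result are assumptions for illustration:

```python
# Sketch of a confidence-gated human handoff. The 0.75 threshold and the
# (answer, confidence, sensitive) inputs are illustrative assumptions.

HANDOFF_MESSAGE = (
    "To make sure you get the most accurate help, "
    "I'm connecting you with one of our human experts now."
)

def choose_reply(ai_answer: str, confidence: float, sensitive: bool,
                 threshold: float = 0.75) -> tuple[str, bool]:
    """Return (message, escalated). Escalate on low confidence or a sensitive flag."""
    if sensitive or confidence < threshold:
        return HANDOFF_MESSAGE, True
    return ai_answer, False
```

Sensitive or complex issues escalate no matter how confident the model is; everything else only escalates when confidence drops below the threshold.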
(10:35):
Okay. Let's look at proof.
Real world results.
Shopify was the first big case study mentioned.
What did they find? Shopify used a blended
approach.
They aimed the AI at their most frequent
questions, setting up a store, payment issues, that
kind of thing. But with humans still involved?
Yes. With human oversight ready to jump in.
And the results were striking.
(10:56):
A 70%
drop in their average first response time. 70%
faster. Yeah. Wow. And
interestingly, their customer satisfaction, their CSAT scores actually
went up by 12%.
Right. That really pushes back against the idea
that faster means lower quality. It suggests customers
value getting the right answer quickly, maybe more
(11:16):
than anything else. Seems so. The AI delivered
on speed and accuracy for those common issues.
Then the second study was Zendesk. They took
a slightly different approach. How so? They positioned
the AI more as an assistant for the
human agents working behind the scenes. Like a
copilot. Exactly. The AI would suggest
possible responses to the agent. It could summarize
long email chains or chat transcripts instantly,
(11:39):
pull up relevant help articles in seconds. Taking
away the grunt work for the agent? Precisely.
And that boosted
agent productivity
by 25%. It also improved the accuracy of
the final resolutions because the agent had all
the relevant info right there instantly. Okay. That
flows perfectly into measuring the impact and looking
ahead. If someone listening implements this,
(12:01):
what are the absolute key metrics they need
to track? Well, initial response time, IRT, is
often the most dramatic. We just heard Shopify
cut theirs by 70%.
What were typical numbers before? Before GPT three,
many companies were looking at average first response
times around, say, three point five hours, sometimes
longer. Three and a half hours wait. Okay.
(12:21):
And after. The sources showed drops to averages,
like, forty five seconds. Hours down to seconds.
That fundamentally changes the customer experience.
No more waiting forever. It eliminates a huge
source of churn right there. Then there's resolution
rate. The percentage of issues solved on the
first contact. Yep.
That saw massive jumps too, from maybe 23% first
(12:41):
contact resolution
up to 67%
in some cases. Wow. That's a huge efficiency
gain. Fewer follow ups, less back and forth.
Monumental gain. And for the agents, we saw
efficiency improvements reported between 30 and 50%.
They're freed up from those repetitive simple questions.
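The headline metrics just mentioned, first response time and first-contact resolution, could be computed from ticket logs roughly like this. The record field names are assumptions about how tickets might be stored:

```python
# Sketch of computing the two headline support metrics from raw ticket
# records. The field names ("contacts", "resolved", "first_response_s")
# are illustrative assumptions about the logging schema.

def first_contact_resolution_rate(tickets: list[dict]) -> float:
    """Share of tickets resolved without any follow-up contact."""
    resolved_first = sum(1 for t in tickets if t["contacts"] == 1 and t["resolved"])
    return resolved_first / len(tickets)

def avg_first_response_seconds(tickets: list[dict]) -> float:
    """Mean time to first response, in seconds."""
    return sum(t["first_response_s"] for t in tickets) / len(tickets)

tickets = [
    {"contacts": 1, "resolved": True, "first_response_s": 40},
    {"contacts": 3, "resolved": True, "first_response_s": 50},
]
fcr = first_contact_resolution_rate(tickets)
```

Tracking these before the pilot gives you the baseline to measure the before-and-after jumps the case studies reported.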
And that's the qualitative shift too. Right?
Taking away the boring stuff. Agents actually report
(13:03):
higher job satisfaction. They do. Because now they
can focus their energy on the complex challenging
problems
where their human expertise
really shines. They get to develop deeper skills.
Their role changes. It moves from just delivering
service to more, like, strategic problem solving. Which
is exactly where you want your expert human
talent focused. So thinking about the future Yeah.
(13:25):
What's next? What comes after these current GPT
three integrations?
Well, the sources are pointing strongly towards multimodal
AI. Meaning AI that understands more than just
text. Exactly.
Systems that can process images, maybe even videos.
Imagine a customer sending a photo of a
broken part or a screen recording of a
software bug they're seeing. And the AI can
(13:46):
analyze that visual information directly. That's powerful. Instant
visual troubleshooting. Yeah. Yeah. Plus, we're seeing much
better voice enabled support becoming feasible.
Really natural
conversational
spoken
interactions with AI. Blurring the lines even further
between automated and human help. Pretty much. Okay.
So let's wrap up this deep dive.
The big picture is GPT three integration
(14:08):
really can reshape customer support. It makes it
faster. It makes it cheaper.
And
maybe surprisingly, it can make it more personalized.
Oh, yeah. But the strategy is everything. It
has to be that blend. Mhmm. AI handling
the volume and speed, but with crucial human
oversight, especially around security, complex cases, and just
final quality checks. The tech is powerful, but
(14:29):
the planning determines success. Absolutely. Especially getting that
prompt engineering right and building the secure architecture
from day one, that's where the real work
lies. Which brings us to a final thought
for you, the listener, to take away from
this. If AI is getting so good at
handling, let's say, 90% of the routine questions,
the information lookups, the summaries,
(14:49):
if it removes that foundational workload,
what's the next critical skill your human support
agents absolutely must develop? Yeah. What do they
need to do to stay indispensable and maybe
even start driving strategic improvements back into the
product or the business itself? Because their role
seems to be shifting, doesn't it, from just
responding to service requests. To becoming strategic influencers,
(15:11):
using their insights from those complex edge cases
to actually make the product better. That's the
new frontier for support professionals.