Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Jim Iyoob (00:00):
You gotta be careful about this fricking narrative that's out there, that it's eating all the jobs. It's a lot more complicated than that, and I think the problem is nobody wants to admit it.
Galen Low (00:10):
Some of these companies are taking away jobs that maybe they shouldn't have created in the first place.
Jim Iyoob (00:14):
It's not your decision to tell me how to contact you if I'm buying your product. I can buy your product from anywhere. So the only thing that differentiates you from anybody else is the customer experience.
Manu Dwievedi (00:24):
When you think about implementing AI, you don't implement AI to replace people. You implement AI to make things better for your customers.
Jim Iyoob (00:33):
Everybody's trying to automate everything. Bull crap. Pick three things that keep you up at night. Start there. You should see an ROI in 30 to 45 days... if you do it properly.
Galen Low (00:49):
Welcome to The Digital Project Manager podcast — the show that helps delivery leaders work smarter, deliver faster, and lead better in the age of AI. I'm Galen, and every week we dive into real world strategies, new tools, proven frameworks, and the occasional war story from the project front lines. Whether you're steering massive transformation projects, wrangling AI workflows, or just trying to keep the chaos under control,
(01:09):
you're in the right place.
Today, we are talking about large scale AI transformation projects and what we can learn from what goes right and what goes wrong. In particular, we're gonna be zeroing in on the world of customer experience, where customer contact centers everywhere are trying to incorporate the right level of AI into the customer and agent experience.
With me today are two experts with deep expertise in both
(01:32):
AI and change management in the CX world — Jim Iyoob and Manu Dwievedi. Jim is the President of ETS Labs and the Chief Customer Officer at Etech Global Services. He's my go-to person when it comes to pragmatic, straight shooting AI innovation that leads to operational excellence, and he's also a prolific author as well as a member of the advisory board for our sister publication, The CX Lead.
(01:55):
Manu is the Assistant Vice President of ETS Labs and the co-host of Etech's podcast, where he does deep dive explorations into AI and CX with respected experts in the field. He's also the person that I call when I get asked about enterprise scale change management challenges in the AI transformation space. Jim, Manu — thanks for being with me here today.
Jim Iyoob (02:13):
Thanks so much for having us.
Manu Dwievedi (02:15):
Thank you.
Galen Low (02:16):
I love digging
into our conversations.
I've known you guys for a little while. I expect us to zig and zag a little bit in interesting ways, and that's totally cool. But just so our listeners know, here's the roadmap that I put together today. I was mostly excited because you two, you're in this space, you're deep in the CX AI transformation space.
There's been a lot of headlines in the news lately
(02:37):
about big companies trying to incorporate AI into their customer experience. Some have a good outcome, some have a bad outcome. So what I wanted to do is I wanted to start us off and just address the big elephant in the room about AI call center transformation projects. And then I wanted to just get a few levels deeper in the headlines, about like what is actually happening in these headlines.
And then I just kind of wanted to zoom out, just use CX as a
(02:59):
lens to talk about three things. I wanted to talk about, kinda like, the state of AI-human hybrid business models today. I know this is like a big topic, but I don't wanna just talk about the tech. I wanna talk about, like, the cultural impact as well, of like what you're seeing boots on the ground. Then I'd like to hear from you about, yeah, change management best practices around any project that's implementing AI related change.
And lastly, I'd like to get your thoughts on how like an
(03:21):
individual department head or project leader should approach AI-led change of this nature, especially if they're like already mid-flight in it, and how they can help make it do more good than harm. How does that sound to you two?
Jim Iyoob (03:34):
That sounds awesome.
It's gonna be a good one today.
Galen Low (03:36):
Well, let's start with the hot question, and here's the way I see it. Even outside of the world of customer experience, a lot of us have seen these headlines, like the one about Klarna replacing almost all of its human call center agents with AI, only to like rehire most of their call center staff after a few months. Other examples, just like that. So here's the question. When we hear about layoffs and roles that actually appear to
(03:58):
be getting replaced by AI as a result of a big strategic pivot, how much of this is exactly what it looks like on the surface, and how much is perhaps like more complicated than it seems?
Jim Iyoob (04:08):
Yeah, I'll probably get in all sorts of trouble for this, but in 2019, I spoke in Montego Bay, Jamaica and basically said, AI is not replacing your people. But like you, you really gotta think about this differently. You gotta be careful about this fricking narrative that's out there, that it's eating all the jobs. I mean, it's a lot more complicated than that.
(04:29):
And I think the problem is nobody wants to admit it. So if we look at it from a holistic viewpoint, most companies overcorrected when we had this pandemic, so basically people were showing up for work 'cause they were all remote. Let's be clear. So let's, hey, people are showing up for work, let's hire more people. You know? And really what's happening is a lot of people are just
(04:50):
right sizing right now, 'cause you hired so many people. And now they're saying, we're pivoting to AI, which is why I'm getting rid of people. That's not really true. It's really not strategic when you think about it. They didn't plan properly, and now you're reacting to what the market is telling you.
You know, I can tell you, Etech, as a company, we've
(05:11):
grown jobs, even while deploying AI everywhere. What AI is doing now, in my opinion, is really it's productivity, to get people to do things better, faster, but it's enabling them to do it. What it's allowing us to do, though, is give more skilled
(05:31):
people more roles, right?
And I think the companies that are using these mass layoffs are using that because they're basically saying, Hey, it's not me, it's AI. That's why I'm laying off these people, and I just think it's wrong. AI is always gonna need people. We talk about it all the time. Human in the loop. AI is only as smart as the person that programmed it.
(05:53):
So guess what? You need people to do that. And that's really how I'm looking at it, versus what everybody else is saying about it. Everybody's just blaming AI for why I am downsizing. And by the way, the reason you hear these stories about bringing people back is 'cause they didn't know how to deploy it in the first place. So instead of taking that customer experience, you've
(06:13):
pissed off half your customers. So now guess what? You gotta pivot back because you didn't do it right in the first place, which I'm sure we're gonna talk about today with change management.
Galen Low (06:20):
I actually, I really like that perspective of like, the cause and effect is a longer period of time than we're looking at. And also that notion that AI is being used as like an excuse. And as you were saying that, I'm like, I can see how that works up and down, right? You're like, you've got your shareholders, and they're like, why aren't you 10-Xing shareholder value by replacing everyone with AI? And then when you're looking down, you're
(06:42):
like, okay, well, also I need to justify some of the changes that we're making, whereas it is actually a correction. I remember it was early 2020, and like I was part of a massive layoff at a large consultancy, and part of it was, yeah, lockdown and like the pandemic and everything. But then, you know, when I got down to it, they're like, actually, we were planning this for months. It was just the circumstances that made us pull the trigger
(07:03):
a bit sooner than we meant to, than we had planned to, but it was always the plan, and I was
like, oh, that's interesting.
And I suspect like, I'm glad that like we're talking about this, 'cause I could see how that correction would take place, right? It's like, oh, well, a lot of the contact centers didn't even think that they could do remote call center at the beginning of lockdown, and then suddenly it worked, and then you, I could just, I could imagine it perfectly.
(07:24):
Okay, you're getting like higher call volume. It was, hire more people. It's actually easy now. We don't even have to like get like workstations set up, like they're remote. And actually this is like a correction of that.
I think that's fascinating.
Manu Dwievedi (07:34):
Absolutely, Galen. When you think about it, Watson, and just like you said about stakeholders specifically, right? What's the easiest story to tell? Is it that, Hey, I had easy money, I hired a bunch of people, and I don't need them now? Or is it that, with AI, I don't need more people now? So it's, people are actually trying to just use AI as a scapegoat.
Galen Low (07:53):
And that's like, it's interesting about the headlines, and like what I learn about news media more and more every day is that it's political, right? Like the stuff that's in the headlines is political. It's going both ways. It's you're thinking about the stock or the investors, and you're also thinking about messaging to the broader population of what's happening in the world. And sometimes that gets muddled. And our takeaway is, oh my gosh, AI is taking all the jobs, even
(08:17):
though it said it wouldn't. Whereas the answer is, some of these companies are taking away jobs that maybe they shouldn't have created in the first place, and they're gonna use AI as the excuse, which is fascinating.
I wanted to zoom out maybe a little bit, because folks in my network and folks who listen to the show, a lot of them are being asked to reimagine parts of their business through the lens of AI.
(08:37):
Like that's the sort of directive that they're given. But when they receive those marching orders, it's like pretty vague. It's more vague and like just directional than a plan. And a lot of them are sort of left scratching their heads about where to start. I was just, you know, thinking about contact centers and, you know, AI assist, and what the technology's doing there.
(08:57):
And I have been in the space for a little while, or I've touched it at least when it was kind of not super AI, it was just like chat bots, right? It was like, how do we incorporate this and like create like more of an omni-channel customer experience that doesn't require like massive staffing changes? But CX has always been sort of re-imagining parts of this business around technology. For folks who are like just getting started, when trying to
(09:19):
imagine or reimagine a core part of a business operation using AI, what's the first thing that an organization actually should do before entering a planning and implementation phase, so that when people receive it, they're not like, what do I do with this directive?
Jim Iyoob (09:33):
Yeah, it's a great question. So I'll tell you this. All you CEOs out there listening, and all you people that are bosses telling 'em to automate everything. First of all, I've had this quote since 2003: it's not your decision to tell me how to contact you if I'm buying your product, it's mine. So number one, stop trying to automate everything, because you're gonna just upset me and I'm gonna leave you.
(09:53):
In a global marketplace, I can buy your product from anywhere. So the only thing that differentiates you from anybody else is the customer experience. And if you destroy that, I'm gone. So that's the first thing I would say. Number two, if you're on Kindle, by the way, here's your guide. It's free on Kindle. This is my latest publication, AI in the Contact Center.
(10:14):
That's a good step-by-step to tell you what to do. In the planning phase, I think where people get it wrong is they think they have to automate everything, and that is so far from the truth. You're sitting on the largest data set on the planet, which is your interactions. If you actually analyze those interactions, you
(10:35):
can get low hanging fruit that can be automated. You might lose a few people, you'll improve your customer experience, or at least keep it the same, but give customers what they want. That's my 2 cents on it. And Manu is much smarter than me, as we all know. He can probably give you a lot more on that, but that's just my, common sense is what I call it, approach to it,
(10:58):
where every software company, I mean, they all came to me. They came to me: I can remove all your people.
Oh, please. Like, you're an idiot. And the funny part about it is the people that are telling you you can remove all your people never took a call a day in their life, never worked in a call center. And they're clueless on how it actually operates.
(11:18):
Manu, I'll pass it to you.
Manu Dwievedi (11:19):
Galen, every time this happens, like a customer comes to us and says, Hey, I want to automate everything, because there are a lot of things that are wrong within whatever is set up currently, I go back to something that Jim has always taught us. He said, if your process is broken, AI is not gonna fix it. AI is an efficiency tool. Its automation is not gonna fix something
(11:40):
that is already broken. And I think that's inherently what's wrong with our approach. When you zoom out and when you think about implementing AI, you don't implement AI to replace people. You implement AI to make things better for your customers.
Do I want my customers to wait for 10 minutes in a queue to be able to get a small, simple answer like, Hey,
(12:02):
what's the balance on my card? Or password reset, right? Once you understand that you are automating because you want to solve your customer's problem, improve CX, the entire thing changes. Now you're not thinking from that negation mindset that, Hey, I want to get rid of 200 people and that's why I'm gonna automate. So one thing that people can learn is go back, get
(12:24):
brutally honest with yourself. Understand how I can help improve my customer experience, and then see where you can drive that efficiency, which will result in a better sentiment.
Galen Low (12:36):
I love that. And like, there we go again with the idea that AI is just this sort of like, trigger for a thought that we should have been having, or have been having, for years. When you go to process design, like, yes, we, you know, we want to become more efficient. We don't wanna scale something that's broken. And to that same point, it's like, well, I hear you, Jim, about like the outreach. Like, I get it too.
(12:57):
Right? It's like, and you can tell that it's like the generic ones. It's like, as a CCO at, you know, Etech, I'm assuming that you'd like to increase profit margins by, you know, a hundred percent month over month, and I can help. And you're like, no, actually that's not my goal. That's not my goal. Because my goal is to improve customer experience everywhere, by delighting the end user, not by dictating what the
(13:21):
end user should experience.
And it's funny, because there's this lag in every industry, right? CX and others. Where, and I come from a world of like, you know, user-centric design, mostly digital, where that was sort of a big flip. Like, you know, when I started, in like just before the, like, the 2010s, where it was like, wait a minute. It's not about just shoving an experience down our users' throats. It's actually like, think about what they want.
(13:41):
And then I'm thinking about CX and like, you know, that the classic contact center experience where you're on hold, the music is there. They're trying to route you so that you kind of get lost in the system so that maybe you'll drop off, and then we won't have so much call volume. And it's like, oh, great.
Like, that wasn't the goal. The goal wasn't to have fewer calls hitting agents. It was to be able to like actually delight the
(14:03):
customer, retain them, do good for your brand, and do good for your people. It's like the reasons to redesign, the reasons to redesign are important. I guess if I were to boil it down, then I'd be like, a great place to start is not, what can AI replace in our chain? A great place to start is like, what's broken that we need to fix, or what's working that we need to leave alone, and like, what is the actual end goal for the
(14:26):
people we're trying to serve? Whether that's, you know, internal staff or customers or, you know, maybe shareholders, I don't know. But it's, we kinda have to kind of take it back to the studs and figure out what the goal is, not kind of just do automation and AI for automation and AI's sake.
Jim Iyoob (14:42):
Agreed.
Absolutely.
Galen Low (14:44):
I wonder if I could circle back around to the first question, because like, I think a lot of people are being told like a very binary story about either AI is taking over your job, or AI is your helpful intern, and I suspect that like the reality is somewhere in between. I do think that like in the CX space, a lot of folks are actually ahead of other industries, and you're really
(15:05):
close to some of these transformations in that space. So I thought I'd ask you, like, what is the state of human-AI hybrid operations in contact centers around the world right now? And like, what is working well, and where does it have some room to grow? And just to make it even more complicated, like, what are some of the implications you see for other industries?
Jim Iyoob (15:22):
So I can tell you firsthand, there was a big, even at Etech, we have 4,000 people in three countries. When we introduced it, everybody's like, it's gonna replace me. You know, you're trying to get rid of me, I'm not gonna help. I don't wanna support you, because your end goal is to fire me, which is farthest from the truth. Here's what we found.
(15:43):
Once you teach them that, my goal is to make your job better, more strategic, then you'll see the benefits.
So as a live example, when you get that one call, two calls a day. They're complex. Let's be honest. You learned it in training.
(16:04):
You've taken 50,000 calls, and this is one that you got training on two years ago. Would it make sense if I give you an assistant to help you with that question that you get once every six months? They listen to that, because that's what I'm trying to do. Now, should I focus that on automation? No, 'cause you only get five of those a year, right?
(16:24):
But let's help the agent with that, and then when you show them that part of it. And then the low hanging fruit, let's just call it password reset for an example, that should be automated.
Let's be clear, right? So there are companies that still don't have it automated, which is fine. But when you think about that, when I take that volume away
(16:49):
from you, agent, what it's gonna do is it's gonna let us go to our customer and say, Hey, now that we saved you 30% of your call volume, what other things are you doing internally right now that can be outsourced? I can skill my agent up now to actually handle more complex interactions. Fraud prevention, fraud detection, things I know you're doing internally that can be outsourced for less money.
(17:11):
And that's really the messaging that we're doing that seems to be working. But everybody's afraid, because of, I guess they watch YouTube. I'm replacing all your jobs. And you have to get over that, to say what's in it for them. My goal is not to replace you. My goal is to augment you and then skill you up.
(17:33):
So to give you a live example, Manu was an agent 10 years ago. He's an AVP today.
Galen Low (17:39):
You yourself were an agent. You started on the floor.
Jim Iyoob (17:41):
I had green screens, so nobody remembers what those are, but like, think about it. If Manu didn't learn more things, he'd never be sitting in an AVP role now. He challenged himself to learn new technologies, embrace them, and learn what he could, do more complex stuff, which is why
(18:01):
he's so successful at what he does. And that's really the messaging that has worked, 'cause you're always gonna have people. I have a customer that has 99% containment in their IVR. So think about it, 99% of the interactions are contained in the IVR, which means it never hits a human being. I still have 135 agents answering those calls, that 1%, because an IVR can't do it.
Galen Low (18:24):
And those are the high value interactions, because the IVR can't do it. Correct? Correct. It's interesting what you're saying too, because, random context for our listeners, one of my first jobs at university was actually eBay Investigations contact center, where we were kind of like looking at the fraud, folks, you know, we were the specialists. It was an email center. I wasn't on the phones. But what was true about the organizational sort of
(18:47):
org chart at the time was that specialists were a small team. It's like, the generalists that are, you know, doing the, like, you know, copy paste template password resets or, you know, low level customer support, that was a big team. And then, like, our team was the smaller team. And I could see how that could be like, oh great, yeah, you're gonna elevate me. But I see that the triangle, you know, upwards,
(19:08):
there's fewer roles. So that's definitely still gonna mean fewer jobs for me, Jim.
So I could see it from that angle. But I also like the sort of the flip side, which is like, maybe even to put it another way, it's like, if you're being told, yeah, don't worry, you're gonna like add more value and be more strategic, but you're not being given any training to actually do any of those things, then that actually might be a signal that maybe there's something more going on there.
(19:31):
Maybe you shouldn't trust that message as much. But if you're an organization, if you're leading an organization that wants to level these people up, then I think it is a matter of like showing them that there is gonna be enough opportunity up there. It's not gonna sort of narrow, and that you are gonna support them by providing training. They don't have to go and figure it out on YouTube, which is, a hundred percent, frankly, what a lot of people are doing. They're, you know, searching "be more strategic in my role"
(19:52):
on YouTube and/or TikTok and hoping for the best. Like, that's not exactly, I think, what would be healthy for the workforce at large.
And I guess like the other side of it too, like you mentioned this, some of it's automation, right? Password reset. Those things are gonna happen, and that just goes away. It gets delegated and you start doing, you know, a different part of your job in there. You also mentioned, like, the sort of like assistant, right?
(20:12):
Like the AI assist of something kind of popping up on your screen going like, Hey, this question you don't get often, like, last time you got trained on it was like two years ago. Here's like the script and the policy and, like, you know, here's what you ought to do, because it happens rarely, but it is high value. And I thought that was really interesting.
(20:34):
If I was to kind of take that lens and look at it from the point of view of, like, you know, a lot of my listeners are project managers, or they're leading teams of project managers, a lot of them are in the digital space. They're trying to figure out the sort of like hybrid way of working internally and sort of externally in terms of like the projects delivering that value. They might even be implementing a project where they're bringing AI into a workflow. But do you find that also gets some, like, pushback
(20:56):
when you're training agents? To be like, okay, you would normally, right, like a veteran contact center agent who came from green screens. You have this new thing that's gonna pop up like Clippy. You're gonna be like, Hey, it looks like you're writing a letter. And do you find a lot of people being like, I don't need that thing. Turn it off.
Jim Iyoob (21:11):
I don't see too many on the complex things, because what happens when they're not prepared? They're going to look at something. Let's be clear, you're not a mastermind who's gonna remember everything. So you're going to your training manual, you're going to your handouts, you're going to your job aids. What if I just dropped a job aid right in your screen for you, so you know exactly what you're doing?
(21:32):
Wouldn't that help you? Wouldn't you be less stressed? When you wanna continue eating your cookie that you're not supposed to have at your desk right now, right? While you're doing this, I mean, that's really what it is. And I think if you have fun with it and explain it to them, I think it works well. To answer your question, there's always 1%. Doesn't matter what you do, they're just gonna be miserable,
(21:52):
and you can't fix those. Those are the people that are gonna be replaced, by the way.
Manu Dwievedi (22:00):
In fact, while Jim was saying this, I just recalled something that just happened on a call before this, where you and I, we were both there. I'll tell you a story, Galen, but I'll tell you the result first. So there is this call happening. A lot of people are sitting there, and you will hear someone saying they have amazing retention. I've not seen retention like this. Agents are not taking easy calls,
(22:20):
so agents are happier. So that's the reason I think they're able to retain. Tech is working out very well. Our customers are happy as well. They are not waiting in queues. And this is the end of the story.
But the way it started was, you have to ensure that the agent is educated enough about AI, that the first experience that they have with AI is honest and authentic.
(22:41):
The moment that happens, now they know that it's solving problems for me. I don't have to go, and as Jim was saying, I don't have to go and look at those five PDFs, right? The customer said, I'm looking at this information, and my screen tells me what to say. I don't have to answer those 50 calls every day that say the same thing. Hey, thank you so much for calling me. I reset your password. So, doing the exact same thing every day.
(23:03):
And this is a story, we just came from a call where this is exactly what the customer was saying about us because of these changes, and we are seeing a very positive shift in the demographic. We now have people who understand it. They actually want to be here, because they know, hey, this job just became, you know, something more than that
(23:24):
standard contact center job where I'm just picking up the phone and answering it all the time.
Galen Low (23:28):
I love that. I actually remember I was recently at a conference, and they put this sort of AI adoption chart on screen for their internal team, and the context was more on the coding side of things, like using Cursor. But basically they're like, we did nothing differently. We just kept plugging along with sort of our messaging and driving adoption, and the line goes flat, spike. Because I think it's that, I think it was like, okay, well,
(23:49):
you know, like, I have my doubts that this is like actually gonna make my life better. I'm being forced to tinker with it. I guess I'll use it. Fine, fine. And you're like, well, wait a minute. What? And then you're like, actually, parts of my job did get better. And I'm gonna tell my friends, or my friends are gonna tell me. And they're like, ah, like, actually, you should get into this. Because like now I don't have to do password resets, like, for, you know, six hours a day, I'm feeling more engaged.
(24:10):
And suddenly, that spike. I don't know. From a change management perspective, like, we talk about this valley of despair, where people kind of get stuck in change that has been sort of pushed upon them, and they kind of need to like get themselves out of it. But this was kind of more like, it just seemed like a sort of cultural shift internally, of like realizing the benefits after being told them. And the benefits didn't change.
(24:31):
But it took them time to sort of realize how it actually impacts them. And I like what you said about, my first interaction with AI should be delightful as well. Right?
And like, that's true of the customer in the customer experience, and also, like, the employee for the employee experience. Which, I mean, I was thinking about it, I was gonna ask you
(24:52):
actually about, like, sort of change management, like your sort of philosophy on change management. ETS Labs, like, it's a services organization. You are helping your clients with this, so you might not always get to see change management from end to end. But in terms of like recommendations that you make to your clients about how they can make or break their project
(25:14):
by doing change management, right or wrong, like, what are some of the things that you're sort of recommending to folks to do, like, before or during or after an implementation of your services and solutions?
Jim Iyoob (25:24):
Manu is the brains behind that. I'll let him take that one.
Manu Dwievedi (25:30):
I'm gonna start with something, again, that, you know, Jim always talks about: if you spring something on people as a surprise and say, Hey, this is AI, it's gonna help you, there will always be doubt, because people fear what they don't know. But people will support what they have helped create, and that's very important when it comes to change management. Whether you buy in top down or bottom up doesn't
(25:53):
matter, if everybody in the organization is bought in. They understand that this change is coming to help me, here is how it's gonna function. They understand AI as well, so that they're not fearful that, Hey, it's gonna make me do something wrong and it's gonna impact me, the first tier.
And in fact, you know, we talked about this at CCW as
(26:14):
well: in any AI implementation, 80% is change management and only 20% is technology. You have to educate people. You have to make sure that they are providing their own, you know, input into it. That as an agent, this is what I want to do. Think about it. There are agents out there in the world right now that have been
(26:36):
penalized for making a spelling mistake or a grammatical mistake on a ticket note, when you never have to summarize a call manually. Think about it, it's still happening.
So when you tell an agent, Hey, you know what? This will help you make sure that you never have to write those notes down anymore. You don't have to worry about spelling, you don't have to worry about grammar.
(26:57):
It's perfect. It sends the notes over, and you can move on to solving the next problem. Tell me one agent who's gonna say, Hey, I don't want it. The problem is you never explain it. You never educated them enough, and now they don't know what's gonna happen. They just know that they're including AI, and there are already these, you know, news stories circulating, right?
(27:17):
I just wrote about it on LinkedIn a few days back. There is this news story saying MIT's report says 95% of AI projects failed. Not one person went into that NANDA project report and actually read it. The report says that 95% of the projects that fail, fail because of change management or people and leadership problems.
(27:40):
Yep. But the sexier story was saying 95% of them fail, right? Yep. So that's what, so the narrative is being, you know, driven either by AI fear or, you know, by companies saying that AI is doing great, and it's actually turning into fear for, you know, the frontline. So people are already afraid of it. So, first point, coming back: educate.
(28:00):
Make sure that you understand what you want to implement and why you want to implement it.
Have a very clear and narrow view of your use case.
I want to implement AI because I want to save those 60 seconds that an agent takes after every call just to write things down.
Now you have a very clear use case.
(28:22):
It's helping your customers as well, because those 60 seconds, where are they going?
They're going into making sure the next customer is waiting 60 seconds less.
So it's helping your customers as well.
Also, it's helping your customers because now agents are not making grammatical mistakes or forgetting to write something in a ticket.
So when the next agent picks up a call, anytime, they would know exactly what the issue was.
(28:43):
And what is all of this doing?
It's saving you 60 seconds every call.
Galen Low (28:47):
And not getting
penalized for having a typo.
Manu Dwievedi (28:49):
Exactly,
and not getting penalized.
So as a company, you have just solved the problem for your people, for your customers, and made money off it.
So when you manage change management this way, you educate people, you find the right use cases, you go very narrow, implement it, iterate, fix whatever the issue is,
(29:10):
and then go, you know, wide and try to boil the ocean.
It always works out.
That's what we are seeing everywhere.
Galen Low (29:16):
You know, it's
funny you mention that headline from the MIT report, because, you know, and I don't have the stats from before, but I'm willing to bet that the same percentage is almost true of any project, AI or not, because of change management.
Yes.
So we're talking about this sort of, like, you know, some folks listening might be like, yeah, change management.
I get it.
Like, it's important.
But what I like about what you said in that change management process with the AI inflection is show
(29:39):
them early what it can do.
Tell them early what the goal of it is, and create that delightful experience around AI.
You know, it's not a slimy sales pitch.
The reality is you do earnestly want to solve problems for your agents, or whatever your staff in whatever industry you're in are doing.
But also it's kind of showing them instead of just
(30:01):
telling them because we can.
And I think that's where a lot of the distrust is.
A lot of folks, you know, some folks in my network haven't even, you know, they're like, I know I'm supposed to, like, crack the lid on AI, but, like, honestly, I haven't yet.
I'm using, like, Gemini for meal planning, and that's as far as I've gotten because, you know, like, I'm scared.
I don't know what it can do.
And it's like, well, let me, like, just show you, because.
Jim Iyoob (30:21):
Let me take a note.
You mean Gemini does meal planning?
That might be something. I've never used AI for meal planning, but I might have to start.
Galen Low (30:28):
I actually
use a different LLM
for different things.
My Gemini is mostly inventing songs about Pokemon or Minecraft that we can then sort of record with my kid.
Jim Iyoob (30:41):
Oh, cool.
Galen Low (30:42):
That's nice.
And then ChatGPT is all my work stuff, and Claude.
That's my sort of division of labor across my LLM staff.
But I think it's really interesting that some people are talking about AI transformation as this big new thing.
It's different and, like, it's, you know, we all have to figure it out, and of course we're gonna fail.
But actually that's, like, not necessarily true.
What is true is that it probably should go
(31:04):
through almost the same process of any sort of large scale transformation.
Yep.
And it's just, we haven't been doing it right before either, so.
Jim Iyoob (31:12):
So it's
funny you say that.
When IVR came out, what'd they say?
Gonna replace all the agents.
Yeah.
Bull crap.
Then chat: oh, they're gonna be able to do two, three chats at a time.
You won't need near as many people.
Bull crap.
Now they're saying the same thing here.
It's just a different story, and it's not true what they're doing with this fear mongering with AI.
(31:33):
I dunno if you saw the latest video with the robot that was beating up the people that were working on it.
Now, that was funny.
I'm gonna use that one in Vegas next year.
That one was a funny one.
The robot went crazy and started beating up the guys that were working on it.
Is that the Corridor Crew one?
I forget which one it was, but I watched this like 15 times 'cause it was a riot, and I'm saying to
(31:53):
myself, whoever programmed that one didn't program it right.
And that goes back to human intelligence.
Galen Low (32:00):
I wanted to
circle back on something
because I think it's important and realistic.
You said sometimes you might need to lose people.
It's not like this utopia where, like, don't worry, there's gonna be more and more jobs.
Like, Etech has been successful at doing that, at creating more jobs, but it's not necessarily true that everyone is gonna be able to, like, retain their job.
You also mentioned that some of the folks who are, like, not high performers are probably gonna lose their job.
(32:23):
But again, the causality goes further back.
It's not just because of AI, it's because you're a low performer and not taking your job seriously and not learning, not growing.
So it's not really AI's fault, it's just kind of the trigger.
And then the other thing you said was we need people to build these things too.
We need them to program them.
And you know, I'm watching these sort of waves of people that I never thought would code in their lives making Lovable apps and, you know, doing the vibe coding thing.
And they're in n8n.
You know, internally, we just did a big sort of Hackathon Quest week where we all sort of took on an AI experiment and did a bit of a show and tell.
I think folks listening might be like, yeah.
So what, like a call center agent's gonna become, like, an AI developer.
(33:04):
Is that, like, a logical step?
Is that a bit of a reach, or, like, is that kind of what we're saying in terms of the workforce, or not necessarily?
Jim Iyoob (33:11):
So, great question.
I say, if you have the drive, like I always say: if you have the will, we can teach you the skill.
Okay.
But you have to have the will first.
Manu, no disrespect to him, was an agent.
He's a coder today.
But it's because he had the will to learn those other skills.
It's like my daughter.
(33:31):
My daughter is, she graduated from college, and now she wants to be a lawyer.
And I told her, I said, ChatGPT is not gonna help you get those LSATs.
So in theory, you might be able to do it, but when you go for the actual test, understand you're gonna have to know what you learned.
(33:51):
That's what some people in our society miss, unfortunately.
I see a lot of people in school, like now they're banning, you know, my 17-year-old, she's a senior now, they banned her phone from school.
So now she has to actually think on her own.
Well, it's good for you, right?
Because what are you using the phone for?
Like, I don't have my ChatGPT, like, so, oh my goodness.
You're gonna learn.
This is just a terrible thing.
Galen Low (34:14):
Actually that's
a really interesting use
case because you think about, like, the calculator, or, like, you know, when I was in whatever 11th grade sort of math, you get the, like, Texas Instruments calculator that.
Jim Iyoob (34:23):
Yeah, I remember the Texas Instruments one.
Galen Low (34:25):
TI-83, and then, you know, at some point someone's like, yeah, because you don't need to know all the math.
Like, in the real world, in the workforce, you're gonna have a calculator, like we didn't know at the time, but, like, on your phone at all times.
Now we have AI assistants in the contact centers, like.
You know, is there a fathomable future where actually your daughter is a lawyer with, like, hologram AI assist, with, like, you know, the law just referenceable there, with
(34:46):
like a Clippy there going, like, oh, did you mean this part of the legislation?
Yes.
Okay.
Oh, we could probably argue that this, you know, do they actually need to know?
Jim Iyoob (34:55):
They do.
Here's why: because if you look, there's tons of articles out there.
If you Google it, on AI giving wrong answers.
So that's why you need that human being, like, with AI.
That's why we say it's not replacing, it's assisting.
I use AI all the time.
I have five different platforms, like you, that I use for different things.
But the problem is, it's all based on who coded this
(35:17):
thing to understand, right?
And yes, can it learn from you eventually?
Yes, it can learn your personality, it can learn your style of talk, your style of writing.
But at the end of the day, you still gotta read it because it does make mistakes.
I could tell, 'cause people use it to send me stuff all the time.
And you see it on LinkedIn.
(35:37):
You can tell 'cause it's got the emojis, it's got the crap in there that means nothing to me.
They copied and pasted it.
I got BPOs, by the way, coming to me on LinkedIn saying, Hey, listen, I looked you up. I just had one today.
I see you're in logistics.
No, I'm not.
Not even close.
And they're trying to sell me their services.
So that tells me the guy didn't even read it.
(35:57):
He just copied and pasted.
Probably used it for everybody, and that's why you always need that human being part of it.
That's the way I look at it.
Galen Low (36:04):
Actually, it's
just a really clean picture
of the future, where the workforce is changing.
Education is important.
We need people to upskill, and we need to support them in upskilling.
There has to be that will to do it.
If you don't have that will to do it and you're a low performer, yeah, you're probably out of a job.
And that's probably your own fault, actually.
But you know, not AI's fault.
Yeah.
And the mission is to get AI to be better and hopefully
(36:25):
not take over the world, like, I don't think that's what we're saying.
But I think there is this sort of harmony that can be achieved.
And even though you're taking on this big sort of AI transformation project that is gonna impact lives, and people are gonna hate it and some people are gonna love it, at the end of the day, it kind of boils back down to any kind of change, any kind of transformation, you know, whether it's IVR or because of the pandemic or because of AI.
(36:48):
It's still, you know, the approach is actually still the same.
It's still a human in the loop at some point, and it's still about the people.
I wonder if I can ask one last question, because I wanna kind of take this back down to, like, a practical level.
I was thinking about my listeners, who might be, sort of, you know, department heads or project leaders.
What advice do you have for a department head or a project leader who is, like, already mid-flight on
(37:09):
a transformation project that is adding, sort of, AI into, like, a core business function?
And they're listening to this and they're going, oh crap.
I should have listened to this before I started.
Now, like, am I screwed?
What can I do mid-flight that can make sure that, from a change management perspective, from a tech perspective, from an operational perspective, it'll do more good than harm?
Jim Iyoob (37:29):
I would tell you
domain expertise means more
than any tech expertise.
Number one, I would say crawl, walk, run.
Everybody's trying to automate everything.
Bull crap.
Pick three things that keep you up at night.
Start there.
Once you deploy AI, and once you do it in a controlled environment, then you could pick up other things.
(37:51):
But I would be going for what we call low-hanging fruit, and I think that's where everybody messes up, because these projects take six months, nine months, 12 months to see your ROI.
You should see an ROI in 30 to 45 days if you do it properly, because you're not going in with the big, huge, let me automate everything.
Let me automate a piece of the business.
If I drop my call volume by 5%, that's real money, real savings.
(38:13):
Then again, scale it up.
I think that's where everybody makes the mistake.
Everybody makes the mistake 'cause they think they have to do it all, and that project never gets off the ground.
I tell our customers all the time, they're like, okay, I wanna do this.
Yeah, good for you.
Let's pick three.
Let's start with these three.
Let's make sure it works first, then scale.
'cause once you get it down, you'll learn. And, by the
(38:34):
way, learn from your failures.
Failures don't mean bad things.
Failures mean the way you tried it, it didn't work.
Let's try something different.
And when you're doing it with 3, 4, 5 little items, it's much easier to be adjustable, nimble, than it is if you tried to do it all.
And the third, the last thing I would say is
don't copy the process.
(38:55):
If the process today doesn't work.
Galen Low (38:56):
You scale the failure and then automate it.
Jim Iyoob (39:00):
Yes.
Manu, what do you got to add?
Manu Dwievedi (39:02):
This is exactly it.
You should actually take this, print it, and put it somewhere if you're starting an AI process, because if you do these three things, everything works out properly.
In fact, a friend of mine, he's playing around with GPT-5 at an old accounting firm.
They're trying to put in troubleshooting using AI internally for help desk.
And he calls me a few weeks back and he's like, you know what?
(39:24):
This GPT-5 is amazing.
It actually went through troubleshooting steps for one of my internal employees, even though I didn't train it at all.
And then he says, but how do I make it do things right every time?
Well, that's your problem.
You decided, I'm gonna take a bot and tell it, Hey, troubleshoot for people, and it's just gonna do it every time.
No, you have to define a very specific use case.
(39:46):
My outlook is not working.
Now, this is the use casethat you work on email or
something like that, sothat's one of the big problem.
People are using AIas a magic wand that's
gonna solve everything.
The moment youorder the word AI.
But that's nothow it's gonna be.
Galen Low (40:01):
I love that.
Like, it's almost poetic but logical that the path to macro transformation is actually through micro.
It's like we need to go through these motions.
We need to know what our goals are and figure out what we're trying to do at a big scale.
Wholesale transformation has always been hard, even from that first wave of digital transformation.
I know organizations that are still journeying to cloud, or trying to get onto that other CRM, or, you know,
(40:24):
trying to digitize this part of their business, like 10 years later, 12 years later.
And the reality is things will change in the world of AI and the world of work during that period of time.
We have to think in shorter cycles.
Manu Dwievedi (40:36):
Agreed.
Jim Iyoob (40:36):
Yes.
Galen Low (40:36):
Awesome.
Jim, thank you so much for spending the time with me today.
This has been a lot of fun.
Just before I let you go, where can folks learn more about you?
Jim Iyoob (40:44):
So easy for me.
LinkedIn, Jim Iyoob.
I put out a remarkable CX newsletter once a month; subscribe to my blog.
Mine's all about content I love.
I say things people think about but don't say.
So I tell people what they need to hear, not what they wanna hear.
So it's all educational stuff.
Have some fun with that, by all means.
(41:05):
That's probably the easiest way to find us.
Try not to sell me on LinkedIn, and if you do, I'm gonna critique you like I did this morning.
Galen Low (41:11):
Yeah,
do your research.
Jim Iyoob (41:13):
Do your research
first before you reach
out to me on LinkedIn.
You can connect with me all day long, but if you're trying to sell me on LinkedIn, you better do your research first, or you're gonna get the message back from me critiquing how you did it.
Galen Low (41:24):
Amazing.
Jim Iyoob (41:24):
Manu.
Manu Dwievedi (41:25):
You can find
me on LinkedIn as well.
I think that will be the easiest way.
So Manu AI, you can go on LinkedIn, linkedin.com/in/manuai/.
Galen Low (41:34):
It's a really
good LinkedIn URL.
Jim Iyoob (41:36):
Yeah, all
of you would have that.
Galen Low (41:39):
Amazing.
I will include all those links in the show notes.
And again, thank you for coming on the show, sharing
your knowledge and insights.
This has been great.
Jim Iyoob (41:46):
Thank you so much.
Manu Dwievedi (41:47):
Thank you.
Thank you for having us.
Galen Low (41:50):
And that's it for
today's episode of The Digital
Project Manager Podcast.
If you enjoyed this conversation, make sure to subscribe wherever you're listening.
And if you want even more tactical insights, case studies, and playbooks, head on over to thedigitalprojectmanager.com.
Until next time, thanks for listening.