Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Andreas Welsch (00:00):
Today, we'll talk about the top skills for working with AI agents, and who better to talk about it than someone who's actively working and thinking about that: John Thompson. Hey John, thank you so much for joining.
John Thompson (00:10):
Hey, Andreas.
It's nice to see you.
How you doing today?
Andreas Welsch (00:13):
Doing very well. Hey, I'm so excited to have you on the show. I've been following your content on LinkedIn for a number of years, so I'm really thankful that you're making time for me and making time for us.
John Thompson (00:24):
Yeah. As we said in the pre-show, I've followed your content for a long time, and when I got the invitation, I was really excited. I'm really looking forward to the conversation.
Andreas Welsch (00:33):
Fantastic. Look, for those in the audience who don't know you yet, maybe you can share a little bit about yourself, who you are and what you do.
John Thompson (00:41):
Sure. John Thompson. I'm the Senior Vice President and Principal at The Hackett Group, and my job there is to set up a Gen AI services practice and help build out the Gen AI products. Previous to that, I was the Global Head of AI at EY. Previous to that, I was the Global Head of AI and Advanced Analytics at the second largest biopharmaceutical company,
(01:02):
CSL Behring. Before that I was at Dell and IBM and a few other companies, and at the same time I've written, as you can see over my shoulder, five books about data, AI, and analytics, and I'm a professor at the University of Michigan as well.
Andreas Welsch (01:17):
Hey, you've been in the industry for quite some time. You've seen it from many different angles. This will be a really good and meaty conversation; I'm super excited. Why don't we jump straight to the questions that we talked about for today's episode. There's so much hype and so much buzz in the market, and it's been there for the last nine months probably, around the topic of AI
(01:37):
agents, agentic AI. We can delegate goals to software now, all of a sudden, and they will work on all these magical things that we have been working on ourselves before. And they will give us predictions and recommendations. They can also implement actions. And while that sounds all very nice and very lofty in terms of goals, I think there's also a very tangible aspect to it.
(01:59):
And that is what happens when you actually introduce that kind of AI into a workplace. What changes, if anything? So I'm curious, what are you seeing in your conversations with businesses and with leaders? What changes?
John Thompson (02:12):
There is a lot of buzz and a lot of hype around Gen AI and large language models and large reasoning models and AI agents, and they're all intertwined. Some people say, oh, I wanna talk about agents only. I don't want to talk about Gen AI. And I say, you really can't do that. They're intertwined; they're the same.
(02:32):
And we've had agents for a long time. We've had agents for maybe 40 years or something like that. Those are symbolic; they're not neural network, they're symbolic, rules-based agents. So agents are not new. We've had them for quite some time. But what changes with AI agents is that AI agents can do
(02:54):
anything a person can do. If you think about an AI, a Gen AI driven agent, you can have them do pretty much anything. They can spend money, they can book airfares, they can book hotels, they can fire people, they can hire people.
We've all seen those missteps.
I think the Canadian airline case is probably the most famous one,
(03:15):
where they were putting out tickets for 79 cents or something like that. And the airline, to their credit, honored all of those.
So the thing is that a lot can change. What has to change? Not much, really, 'cause you can build an agent and you can control that agent in a much more fine-grained way than
(03:35):
you can control an employee.
So if you're building an agent, I always tell people, don't anthropomorphize models. Don't use pronouns like he and she and talk to them as if they're John or Andreas. They're just models. They don't think; they don't have emotions. But if you think about what a model can do, it's helpful to
(03:56):
think of a model as a proxy for a person. So anything a person can do, a model can do. So the breadth of change is anywhere from nothing to everything.
Andreas Welsch (04:06):
I love that, and I must say I'm in the same camp, or of the same opinion, that we shouldn't anthropomorphize systems and, say, call them Ava or Eva or something else, or John or Andreas, because it doesn't mirror what they can do. They're still based on data. They're still working with probabilities and making predictions.
(04:26):
But now that we talk about agents and these systems that can take over more complex tasks that we've been doing so far, I find a lot of times that these analogies to how you would manage a team of people actually work quite well, or: what are the parts of the tasks that you are doing that we could now delegate, or that you would now delegate, to an agent?
(04:49):
What do you think about that? How far can we actually compare humans, and the way we work in an organization, with how AI is now working?
John Thompson (04:59):
Yeah, I think the one thing that's on a parallel path, or very close to what we're talking about here, is all the buzz in the last week: Apple put out their paper, does AI really think? And I've been very vocal about this for the last three years: AI does not think. AI infers; it infers on steroids.
(05:19):
It can infer billions and billions of times a second, but it's not thinking. It's clearly not doing that. So if we look, there's a great deal of conversation about the crisis of AI.
Is it gonna take all the entry-level jobs? You mentioned it in the intro, and Dario Amodei of Anthropic stirred the pot a couple weeks ago by going into a big forum and
(05:42):
saying nobody's gonna hire any kids outta college anymore, which is not true. That's not the case. I think where we're getting confused here is that many people are looking at what AI does now, and they're saying, oh, they're taking over the entry-level jobs. That's not true.
What AI does is it does a good job, as you said moments ago, of
(06:03):
predicting what is the next letter, the next word, the next frame, the next concept, the next paragraph, whatever it is. So if you are a person or an employee that just does rote tasks, A, B, C, that kind of thing, your job, or at least parts of your role, is going to be automated away.
(06:23):
But if you're a higher-level thinker, someone who's doing creative work, synthesizing different parts of information and coming up with original thoughts and adding value to the organization, you have no worries.
AI doesn't do that.
AI is nowhere near that.
So what AI will take over is the very low-level, mechanistic,
(06:46):
repeatable work. That's gone. So if you have a role and half of your role is mechanistic and half of your role is creative, stop doing that part and fill out your whole job with the roles over here.
Andreas Welsch (07:00):
Now this is a pretty big change to some extent, right? On one hand, we've always had this technological change where things have been automated that we've done before. That's the whole premise of software to begin with. But it feels like there's a sense of urgency. On one hand, there's an order of magnitude that seems to be a lot
(07:23):
bigger than what we've seen from previous waves of innovation. And so not all leaders are comfortable with this change, right? First of all, their employees, their team members, are not comfortable, if you read the news and the headlines, whether it's the entry-level roles, or if you say that has some ripple effects and goes up the hierarchy. What's the role of a professional? What's the role of an expert? What's the role of a leader?
(07:44):
You see the CEO memos from Shopify, from Duolingo: we want to be AI first; we're not hiring new people unless and until we can do that with AI. There's a lot of angst and confusion and fear there. So how do you see leaders becoming more comfortable with this situation, and also helping their team members go through
(08:05):
that transformation? What are the skills that leaders need?
John Thompson (08:08):
I see it all the time. We've seen it with Klarna and Shopify and all these different companies: we're AI first. And I've talked to many companies; I think I'm over 500 companies that I've spoken with over the last three years. Managers have said, when I go to hire a new person, I have to justify why I'm gonna hire this person and why I can't do it with AI.
(08:28):
Fair. I get it.
I get it.
But the VP, SVP, EVP, C-level executives need to understand that you can't really just turn these things off. It's more that you can migrate to it. You still need to hire people. You still need to do your jobs. You still need to build the technology. We're not there yet. We have large language models, and the whole debate over the last
(08:53):
week about LLMs and large reasoning models, talking about, hey, these things are gonna take over thinking positions. That's not the case.
Again, I'll say it over and over again: large reasoning models do not think; they do not reason. They infer at a high rate. So you just can't say, hey, we're not gonna hire any more
(09:14):
people.
We're gonna do it all with AI. What you can do is say, we're gonna look at some of the more mechanistic roles. We're gonna automate those. We're gonna hire different people. Maybe you don't hire the same profile you did before. Maybe you hire a different profile, but you're still gonna hire people. You're still gonna bring people into the business, you're still gonna grow them, and you're gonna augment them with AI.
(09:35):
So to answer your question directly, I think that leaders at the top of the organizations, at the top of the pyramid, need to have a more nuanced view of this. It's not an on-or-off question; it's more of a continuum: we wanna take AI this far, and we're still gonna hire people over here. But where we were hiring people that had
(09:57):
no knowledge of our skills or no knowledge of our industry, we're gonna need people with a little bit more nuanced and subtle understanding of what we do. So I don't think it's gonna turn things on or off. I think it's gonna change them materially.
Andreas Welsch (10:11):
That deeply resonates with, honestly, how I'm thinking about this too, right? I'm visualizing this as a sliding window, basically, where further to the left is more basic research, gathering information, gathering data, if we talk about entry-level roles. Now that shifts further to the right. You have software that does the legwork for you. Yeah. But you are still in charge of ensuring: is it complete, is it
(10:34):
accurate? Are there any areas or any aspects that we have not explored that, again, maybe AI can help us explore or we should be exploring? Synthesizing, and then working more on the decision proposal, as opposed to: can you get as much data as you can and put this together?
John Thompson (10:50):
I think a really good example is David Solomon at Goldman at the last World Economic Forum, the last Davos meeting. He got up in front of all the world leaders and said, hey, we generate all these different documents for investment banking. I think it was like a prospectus or something like that. He said 95% of that can be done with AI, and that is true.
(11:11):
A lot of it is gathering information from the SEC, and getting information from the company and the market and competitors and Bloomberg and different places, and putting it all together in a predetermined form that everybody understands very well. And then the last 5% of the process is you bring in experts who review it. Now, that's exactly how all these processes
(11:31):
are gonna work. If it's all about just gathering and collating and stacking up information, AI can do that all day long.
Andreas Welsch (11:39):
Now that brings up another question, and to me that is: we've talked so much about automation for the last 10, 15 years, right? Obviously we've had rule-based automation for a very long time, and we said there are some automations between different systems: log in here, download a vendor invoice, save it locally, extract information, put it into your ERP, and then kick off the process.
(12:02):
So with things like robotic process automation, are we now talking about agents as an RPA 2.0 or 3.0? Or is it a big, fundamental shift in how we work?
John Thompson (12:13):
Andreas, you're the first person that's put it that way, and I love it, because I've been positioning it that way for about a year and a half now. When people ask me about automation, the first thing I say to them is, how are you thinking about automation? Are you thinking about automation in the old-line RPA way, which a lot of people do? And I say, you could do it that
(12:34):
way, but that's A to B, C to D, X to Y; it never changes. It's always the same. So I say, you really should think about this as an agentic workflow, which is then very dynamic: okay, the conditions are such that this really shouldn't go to the governance department; this should go to the legal department.
(12:55):
Or the velocity of change is such that you really need to talk to someone who's managing the plants, because we're seeing the demand signals ramp up significantly, but our production plan is not changing. So automation from an AI perspective can be very dynamic and very interesting and very exciting, sensing what's
(13:18):
going on in the world. As you said earlier, in RPA it's A to B, C to D; it just runs. It doesn't change. Automation from a Gen AI and an AI perspective can be really exciting and dynamic. That automation conversation really needs to be grounded in a context of what you are expecting automation to do for
(13:38):
you.
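The contrast John draws can be sketched in a few lines of code. This is a minimal illustration, not anything discussed on the show: the function and department names are made up, and the keyword checks stand in for the model call that would make the routing judgment in a real agentic workflow.

```python
# Fixed RPA pipeline vs. dynamic, agentic routing, in miniature.
# All names are illustrative; the keyword heuristic in route_item is a
# stand-in for an LLM's judgment about where an item should go.

def rpa_pipeline(invoice: dict) -> list[str]:
    """Old-line RPA: A to B, C to D. The same steps run every time."""
    return ["download", "extract_fields", "post_to_erp", "kick_off_process"]

def route_item(item: dict) -> str:
    """Agentic routing sketch: pick a destination based on the item's content."""
    text = item["summary"].lower()
    if "contract" in text or "liability" in text:
        return "legal"
    if "demand" in text and "production" in text:
        return "plant_manager"
    return "governance"

print(route_item({"summary": "Demand signals ramping up, production plan unchanged"}))
# plant_manager
```

The point of the sketch: `rpa_pipeline` returns the same steps no matter what it is handed, while `route_item` changes its answer based on what the item actually says, which is the sensing-and-deciding behavior John describes.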
Andreas Welsch (13:40):
I like that part. First of all, that we're throwing each other the ball back and forth on this. But the part specifically around where you said making a decision, should this go to route A, route B, or route C, and using AI to make that decision, right? We've used machine learning for a while to make decisions, for example: is this an expense that can be approved easily because it's below a certain threshold, where it's in a neutral
(14:04):
category, or is a person actually needed to review this? Now we're taking this a step further by saying, based on the information we have and all these different input factors, which route is the optimal one to make a decision? So great to hear that from you as well.
Now, when we do that and we're outsourcing a lot of our
(14:24):
thinking to AI, there've been some recent studies about this. What do we actually retain? What's the role of us as team members, as team leaders, if we just say, hey, AI, agentic AI, whatever thing, agentic workflow, yeah, go figure this out for me, I don't want to think about this? Is it taking it too far?
John Thompson (14:45):
No, and I don't think at this point, with the technology as we've talked about it, large reasoning models and the inability of LLMs to think, I don't think you can take it too far. I think it's really mechanistic, the way that the AI that we have today works. It's great, it's fantastic. It does a lot of things, many things that we can't do.
(15:06):
If you brought me, let's say, 25- or 30-page legal documents and you said, compare them all and let us know where the privacy shortcomings are in all these documents, it would take me weeks, and I would be very bad at it. AI could probably do that in, I don't know, 10 minutes, five minutes, six seconds. I don't know; it depends on the model and the compute
(15:26):
environment.
No, I don't think we're taking it too far. I think what we're doing is we're still feeling around in the dark, trying to understand what is the domain of computing, which we've always had to do, and what is the domain of humans. In general we understand it, but it's not a widespread understanding.
(15:47):
Once we get that more clear and say, okay, these kinds of comparative exercises against large corpuses of information are definitely in the domain of computing, and the idea of creativity and keeping things under control and making decisions with the vast context we have in our minds is
(16:07):
definitely human, we just need to bring those together.
Andreas Welsch (16:12):
Do you see, or to what extent do you see, people just more or less blindly, or in good faith, using AI and saying, I let the thing do the groundwork for me, I don't need to think about this? And then I just send this off, and we end up with AI slop.
John Thompson (16:25):
Yeah, that's a great question and a great point. I've been feeling around in the dark myself, trying to figure out what is my next book, my sixth book, and it's actually gonna be on how people need to be developing and refining their critical thinking in the age of AI, which is just what you described. It's one of those things that when you ask people to
(16:47):
critically think, you can almost see them go blank. They're like, I don't even know what you're talking about. I think that's an indictment of our education system. We're not teaching people to critically think, and we need to do a better job on that.
Andreas Welsch (17:00):
It's difficult, right? Even before Gen AI, you could make the argument that critical thinking, or the value of critical thinking, has been diminishing, with social media in the mix, and compartmentalizing, or living in your own social bubble. Now you add AI in when we're already not as critical thinkers anymore as we used to be.
(17:21):
That's true, right? That gets more difficult. One of the other questions that I keep pondering with experts in the industry is: is agentic AI so much different from how we've usually and traditionally used software? Are there even new skills that we need to learn? Or is this just another conversation with a more capable
(17:43):
chatbot, with a more capable assistant, and there's just a lot more, almost like an iceberg, underneath the surface that we don't see as users? I just ask it a question; it does the legwork for me, then comes back, where before I had to do that. Where do you see this? Is it just another evolution on a continuum? Or is that a revolutionary thing that we now need to train
(18:06):
everybody on, and change how they think about it, how they articulate their goals?
John Thompson (18:11):
Another great question. I think, Andreas, we could be here for about five hours doing this, but we're gonna run outta time. I always get painted with the brush of being a pessimist, and that's not true. I'm an AI optimist. I think we have, over the last three years, seen a step function. It's not a smooth evolution. It is definitely a step function in the capabilities of AI. But
(18:34):
the way we're going to use it and spread it more widely across the world, professionally, personally, educationally, academics, universities, research, is that we're gonna understand that AI in this respect, what I'm talking about now, is no different than any other system, at least any other AI system we have built. It's all about the data.
(18:56):
It's all about managing it correctly. It's all about integrating it. It's all about bringing an ensemble of data together in an intelligent way that allows AI to understand, analyze, and project from it. What we've done over the last couple years, three years, is we've gone from using 10% of the world's information to a hundred
(19:16):
plus percent of the world's information, and people have not got their heads wrapped around that. But once everybody, at least in information management, understands that, in systems and development and AI, we're gonna have just a golden age of AI, in my opinion. So is it different? Yes. Is it really different? No.
(19:37):
I'm on both sides of that equation.
Andreas Welsch (19:39):
I like that. That's wonderful. So maybe then, to bring it home and make it tangible for business leaders, again, maybe those that are not quite sure: is this real? How big of a thing is this actually? Where should I even start? Are my people using this, and do I even want them to use this? What's the one thing that they can implement, or encourage the team to do, today?
John Thompson (20:00):
You know what? I think this is a great idea. I did it at EY, and it worked out exceptionally well. Every organization, every company, no matter what size you are: you should build a safe, secure, private Gen AI environment, and you should let everybody use it, for unfettered reasons.
(20:21):
Let them write their weekend holiday agendas. Let 'em write sonnets to their puppies. Let 'em use it for anything, because what happens is you raise the level of capability across your whole organization. Now, they're gonna use it for work reasons too. Of course, they're not gonna waste a whole lot of your time and money, but you really should have every part of your
(20:42):
organization using Gen AI in a safe, secure way. You really should. Don't wait. This is not a hallucination. This is a real transformation.
Andreas Welsch (20:51):
Great. And again, very actionable. And I've seen many large enterprises do that, in fact, make different models from different vendors available to their teams. And then it's up to the transformation team, the transformation office, to get people enabled and on board: rather use our safe and secure environment than some public version of whatever assistant, where we don't know what's
(21:11):
happening with our corporate data and so on. Great point. Now, we're getting close to the end of the show, John, and I was wondering if you can summarize the three key takeaways for our audience today of how you can build the top skills for working with AI and agents.
John Thompson (21:25):
Right. First and foremost, you should have some part of your technical team spending their time learning Anthropic's Model Context Protocol (MCP) and Google's A2A protocol, because that's how agents are gonna talk to each other. MCP is going to be at the low level; A2A is gonna be across multi-vendor, multi-agent.
(21:46):
So you should have your teams learning that now. They're gonna need it. It's like a 101 skill; they have to have it. So get 'em learning that now.
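For readers who want a concrete starting point: MCP is built on JSON-RPC 2.0, so the messages agents exchange are plain JSON envelopes. The sketch below hand-builds two such requests for illustration only; the `book_hotel` tool and its arguments are hypothetical, and in practice you would use the official SDKs rather than constructing messages by hand.

```python
import json

def make_request(method: str, params: dict, req_id: int) -> str:
    """Build a JSON-RPC 2.0 request envelope of the kind MCP uses."""
    return json.dumps(
        {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    )

# Ask a server which tools it exposes.
discover = make_request("tools/list", {}, 1)

# Invoke one of them; the tool name and arguments here are made up.
invoke = make_request(
    "tools/call",
    {"name": "book_hotel", "arguments": {"city": "Berlin", "nights": 2}},
    2,
)
print(discover)
```

Seeing that the wire format is this plain goes a long way toward demystifying the protocol for a technical team starting out.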
Secondly, if you're concerned about AI, and you're reticent about it, and you're worried about moving forward, you gotta get past that. You need to talk to Andreas or me, or whoever on your team, or
(22:09):
whatever consultant you wanna bring in, to make you feel comfortable. But now's the time. Every day that you wait, the further behind you fall.
And then lastly, I implied this in everything I said, but I'll be explicit: you really need to build environments that are safe and secure. And it's not hard to do. It's quite easy to do.
(22:30):
The one thing that non-technical people need to understand is that Gen AI and models and all these different kinds of agents and chatbots and all this kind of stuff are easily knitted together. You and I, Andreas, have been doing this for decades. You remember when it was really difficult to get two systems to talk to each other? It was near impossible.
(22:51):
Now, you put a model repository together, you drop in Claude one day, you drop in GPT another day, you drop in Cohere's model; anybody can point at any model and use it. You know, that's just wild; it's great. The tech has really enabled us to do some great things.
(23:11):
Build stuff that your people can use, and let 'em go in an unfettered way, but make sure it's safe.
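The "point at any model" idea John closes with can be sketched as a thin registry behind one interface. Everything below is illustrative: the backends are echo stubs standing in for real vendor SDK calls, and none of the names come from an actual library.

```python
from typing import Callable

# One interface, swappable backends: the "model repository" idea in miniature.
ModelFn = Callable[[str], str]
_registry: dict[str, ModelFn] = {}

def register(name: str, fn: ModelFn) -> None:
    """Drop a model into the repository under a name."""
    _registry[name] = fn

def complete(model: str, prompt: str) -> str:
    """Point at any registered model and use it."""
    return _registry[model](prompt)

# Echo stubs stand in for real vendor calls (Claude one day, GPT the next).
register("claude", lambda p: f"[claude] {p}")
register("gpt", lambda p: f"[gpt] {p}")

print(complete("claude", "Summarize the demand signals"))
# [claude] Summarize the demand signals
```

Swapping in a new vendor is one `register` call; nothing that calls `complete` has to change, which is the ease of knitting things together that John describes.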
Andreas Welsch (23:17):
Wonderful. I love that. That really brings us home. So John, thank you so much for joining us and for sharing your experience with us. I really appreciated the conversation.
John Thompson (23:25):
Thanks, I've really enjoyed it. Hopefully we'll do it again soon.
Andreas Welsch (23:27):
Yeah, that'd be great. So folks, for those of you in the audience, thank you for joining us as well, and see you next time for another episode of What's the BUZZ? Bye-bye.
John Thompson (23:35):
Bye.