Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Andreas Welsch (00:00):
Hey everyone, and welcome to What's the BUZZ?, where AI leaders share how they have turned hype into outcome. There's no shortage of news around AI agents and what they mean and what they bring for businesses, for the economy, and for professionals alike. Today, we'll take a look at the state of AI agents, and I'm not doing that alone but together with Jon Reed, one of
(00:21):
the esteemed analysts in the community. Jon, I'm so excited to have you on. Thank you for making time for us.
Jon Reed (00:28):
Yes, I enjoy our dialogue. Thank you for commenting smartly on my SAP AI deep dive, and it's good to be here. And I hope this feels more like a discussion to our viewers, 'cause I'd really like to hear your take on my views as well.
Andreas Welsch (00:41):
Fantastic. So why don't we jump right in for this one, and we'll go a little longer than usual to have enough time. Now, I know you've spent a good amount of time with different vendors at different conferences, which wraps up the first half of conference season this year, and I'm really curious: what are you seeing when it comes to agents? It sounds like everybody's talking about it.
(01:01):
Many sound alike.
What do you make of it?
What's real?
What's the state of it all?
Jon Reed (01:07):
That's a really good question, and, look, we may not get to the bottom of what's totally real even by the time we finish today, but I have just come off an extensive series of user conferences and events, both pressing vendors on these issues and also talking with customers. What we are seeing on the event circuit this spring is the
(01:28):
beginnings of the first documented agentic use cases, where we can actually talk to a customer who is live with various agents and can speak to how they got there, what the challenges are, and, in some cases, what some of the results are. But I would say that we are still in a pretty early stage in terms of comfort
(01:48):
level with customers and agents, and primarily aggressive early adopters are the ones taking the stage at the moment. And this is what I was hoping to get into with you a bit: I think we're actually at a really crucial crossroads in enterprise AI right now. And I would say there are a couple of forks in the road right now
(02:10):
that are really important for companies. One of them is to decide what kind of AI company they want to be, and the second is to start taking some positions on autonomy and what that means in an agentic capacity. And that's been a big topic this spring, and I'm happy to get into it
(02:30):
further.
Andreas Welsch (02:31):
Awesome. Like I said, I'm really looking forward to it and to hearing what you're seeing. And especially seeing the first customers and companies that are not just playing around with this, but who are actually thinking about deploying it and who are deploying it. I was fortunate to attend a couple of vendor sessions and events over the last couple of weeks as well, and I saw as well
(02:53):
what you shared just now: companies and IT leaders who share, here's how we are approaching this. Yes, we're looking at this through an automation lens, an agent automation lens, and we need to make sure it's secure, it's safe, and there's governance around this. So it's great to see this come to the forefront very quickly and very early on.
(03:14):
I see a lot of discussion about protocols like the Model Context Protocol and agent-to-agent protocols, discussions around how we make sure that agents, when they communicate with each other, communicate safely and securely. I don't know how many more times I need to mention security and safety. But it's good to see that all of that is top of mind for many
(03:37):
leaders. One thing that I heard the other day that surprised me to some extent is that leaders have said: hey, we are beyond this point of defining an AI strategy. And to me it seems that's mainly the large organizations that have been doing this for a while now, that woke up two years ago when ChatGPT came around and said, we need to figure out our AI
(03:59):
strategy. Now they have it. They might not even have a dedicated AI budget anymore; it's part of the IT budget. What are you seeing there? Is that the new normal, or is it really just front-runners at the moment that are thinking that way?
Jon Reed (04:12):
Yeah, I would take a little bit of issue with companies that say that they have that. I'm not gonna say that no company has a coherent AI strategy, but for the most part I feel like I could poke holes in most of them in terms of what their employees are really feeling and responding to. For example, what is your stance on headcount reduction and
(04:33):
AI? How are you using this? Is it just a work enabler for you, or are you actually reducing, for example, new hires? There is some evidence of dramatic moves. Moderna made some headlines recently by fusing HR and IT together. There are some interesting questions around that, but at the same time Moderna itself, I think, is still feeling its way
(04:56):
through how some of this is gonna work going forward. So I think we're actually at a pretty early stage, and I don't think many companies have really defined this well. In fact, I think the AI-first mantra is really problematic in a lot of ways, which we can get into. The thing I wanted to start with is, I wanna get into
(05:18):
all of this with you, but I wanted to take a step back for a minute, because I think it's important for people to realize how we got here and what I'm trying to do. I have an interesting role in all of this because I don't really have a dog in this fight. I do have some passionate views on AI, but I don't really have a dog in the fight in terms of what technology companies adopt.
(05:39):
I don't sell technology. If AI works, that's frigging great. If blockchain gets revamped next week and works better, and of course that's not gonna happen, then that's fine with me. I'm pretty tech agnostic. I'm looking for successful outcomes. And one interesting thing for me is that there are so many polarizing debates right now. So, you know, the Apple reasoning white paper, for example: I
(06:00):
wrote about that and how it polarized in such a way that, I think, it was hard in the middle of that for enterprises and decision makers to get useful takeaways, of which there were plenty in that report that they could actually use. Which is way more important than an esoteric discussion about reasoning. But one of the things I think is really important is that I don't just study enterprise projects.
(06:21):
I study culture, I study scientific breakthroughs in AI. I designed my own training program that I spent thousands of hours on. That doesn't make me the ultimate expert in the world, but I think it brings an interesting perspective that combines different approaches, not just an enterprise discipline. And here's why that matters, because really a lot of
(06:41):
enterprises like to think that, and they make a big mistake. And I have some fresh mistakes and takeaways for you that I prepared just for your show, by the way. But they make a mistake in falling for the vendor rhetoric that in a few months these problems are gonna get solved. And what I'm trying to do is help enterprises see the pros and cons, and every technology has
(07:05):
them. And in fact, what we're dealing with now is a subset of a deep learning discipline that really goes back decades. And it was at least two decades ago that I started seeing white papers criticizing some deep learning flaws around things like causality versus correlation, the inability to come to proper,
(07:28):
accurate decisions outside of the distribution of the training set, and the lack of a world model or a basis in actual factual grounding. These are things that have never been solved, and so when vendors say this is gonna get solved in three months, just put this in, a real BS detector goes off for me. And so that's why the history of this is important: to realize
(07:49):
that people have been trying to solve these problems for a long time. And what happened over time is there were some really momentous occasions where deep learning started taking the forefront over other forms of AI. And then there was a huge, really important paper, in 2017, "Attention Is All You Need," that launched the transformer technology on which
(08:10):
generative AI is built. And then you had a really important commercial breakthrough with ChatGPT that popularized the technology, which is of course so important. But it's really important to keep in mind that the people who developed this technology originally were taking a moonshot for artificial general intelligence. That's what they were going for.
(08:31):
They weren't trying to build enterprise productivity systems. This is really important to understand. So what happened was, we had this popularized ChatGPT technology, and then it started to become, what can we do with that in an enterprise context? And of course what happened was a lot of disappointing results. And so you read a lot about project failures and
(08:54):
underperforming projects. There was a big study, primarily Danish, that came out not long ago that looked at, I think, 25,000 users at 7,000 companies using generative AI solutions with no productivity increases or gains. The data from this is now a couple of years old, but the point is, there have been some discouraging results.
(09:17):
But in the meantime, what's happened is that a lot of vendors, and you could pick whichever you want, SAP or Salesforce or ServiceNow or Workday or what have you, Oracle to some extent, the big hyperscalers, they sat down and said: how can we make this technology more accurate and more useful in an enterprise context?
(09:37):
And that's the kind of stuff that you and I look at all the time. So things like RAG, for example, were developed as part of that. RAG, retrieval-augmented generation, is a way of bringing context and new data into the LLM context. And of course now, in the agent scenario, RAG is just one of many tools that an agent can call to perhaps
(09:58):
be smarter and be more real-time aware. Maybe do math problems, since LLMs suck at math. All these different things are being developed as part of this enterprise architecture.
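To make the "RAG as one tool among many" idea concrete, here is a minimal sketch of an agent that routes between a toy retrieval step and a calculator tool. The document store, the routing rule, and all function names are hypothetical illustrations under those assumptions, not any particular vendor's implementation.

```python
# Minimal sketch: an "agent" that can call two tools -- a toy retriever (RAG)
# and a calculator -- instead of relying on the language model alone.
# Tool names, the routing rule, and the document store are hypothetical examples.

from dataclasses import dataclass

# A tiny in-memory "knowledge base" standing in for enterprise documents.
DOCUMENTS = {
    "vacation_policy": "Employees accrue 1.5 vacation days per month.",
    "expense_policy": "Expenses over 500 dollars require manager approval.",
}

def retrieve(query: str) -> str:
    """RAG step: return the document whose words overlap the query most."""
    def overlap(doc_text: str) -> int:
        return len(set(query.lower().split()) & set(doc_text.lower().split()))
    best_key = max(DOCUMENTS, key=lambda k: overlap(DOCUMENTS[k]))
    return DOCUMENTS[best_key]

def calculator(expression: str) -> str:
    """Math tool: do arithmetic deterministically instead of asking the LLM."""
    return str(eval(expression, {"__builtins__": {}}))  # toy only; never eval untrusted input

@dataclass
class ToolCall:
    tool: str
    argument: str

def route(user_request: str) -> ToolCall:
    """Stand-in for the LLM's tool-selection step (here: a crude keyword check)."""
    if any(ch.isdigit() for ch in user_request):
        return ToolCall("calculator", user_request)
    return ToolCall("retrieve", user_request)

def answer(user_request: str) -> str:
    call = route(user_request)
    return calculator(call.argument) if call.tool == "calculator" else retrieve(call.argument)

if __name__ == "__main__":
    print(answer("12 * 1.5"))                          # calculator tool
    print(answer("how many vacation days per month"))  # retrieval (RAG) tool
```

In a real system the routing decision would come from the language model's own tool-calling step rather than a keyword check; the point is only that retrieval, math, and other tools sit alongside the model rather than inside it.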
But it's important to keep in mind that the folks who were chasing AGI, they consider this kind of stuff band-aids; a lot of them have moved on to other pursuits,
(10:19):
and it's just important to keep that context, because what was happening in recent years was the idea that, let's just keep scaling and eventually we're gonna get there. But now we've really hit the wall with scale. And as a result of that, in a way, that's been a healthy thing. There's no more data to train on, really; there's specialized
(10:39):
data, like industry data, but there's no more general data to train on. And so that's what's prompted this kind of revisiting of things like reasoning models and what we call test-time scaling, scenarios that bring in more data at the point of inference to try to make these machines smarter. And the reason I give you all this background is to realize
(11:00):
that we're making the most of a technology that wasn't quite built for this purpose. And so that's why, when you run into things like agent-to-agent protocols, you say, yeah, but these agents don't have hundred-percent accuracy rates. They land somewhere in the 60 to 98% range, or what have you, based on how sophisticated they are, maybe how well they're
(11:22):
trained, all that. And then you set them loose in a bunch of agent-to-agent discussions, and you have compound error risk and stuff like that.
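As a rough, back-of-the-envelope illustration of that compound error point, the sketch below simply multiplies per-step accuracies; the figures and the independence assumption are hypothetical simplifications, not measurements.

```python
# Back-of-the-envelope illustration of compound error risk in multi-agent chains.
# Assumption (hypothetical numbers): each step succeeds independently with
# probability p, so a chain of n steps succeeds with probability p ** n.

def chain_success_rate(per_step_accuracy: float, steps: int) -> float:
    return per_step_accuracy ** steps

for p in (0.98, 0.90, 0.60):
    for n in (1, 3, 5):
        print(f"per-step accuracy {p:.0%}, {n} chained steps "
              f"-> end-to-end {chain_success_rate(p, n):.0%}")
```

Ninety percent per step over five chained steps is only about 59% end to end, which is the risk being described when agents hand work to each other without checks.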
And I think when enterprises take a step back and understand this whole drama that's played out, that's led us to this point, it helps them to understand that they can still use these technologies for many valid use cases. But it's not like in three months or six months all of this is gonna
(11:44):
get solved. And it's just such a crucially important point to me, because again and again I hear vendors saying, in three months or six months this is all gonna be fine, there won't be any hallucinations. I'm sorry, folks. This has been a problem for 20 years. They're trying to fix this. It's a trillion-dollar market at stake for anyone who can create a more generalized intelligent system that doesn't require five
(12:07):
tools and multiple agent calls back to the language model and verification steps. There's a big prize waiting for a simpler architecture. But in the meantime, what do we have? We have more complex architectures. However, those complex architectures can still make money and get good results for companies.
Andreas Welsch (12:24):
Yeah. Awesome. I think that sets the stage perfectly. And we don't even need to go back as far as the 1950s, when the term AI was coined. With this whole agent-to-agent communication, the way I think about this is in human terms. I usually don't like to compare AI and humans, for the reason that AI
(12:46):
is not human and doesn't have human capabilities. But let's just play along in terms of communication. We have this AI, this large language model based capability or technology, that's based on language, right? We communicate and converse in language, and how many misunderstandings do we have, right? If you're married, your significant other can tell
(13:08):
you how many times, right? You didn't really get the message. So now you have these agents that communicate based on language. And language can be ambiguous in many cases, right? To your point, compound error rates and so on. All because it's based on language and the communication, the articulation, of what that goal is that the next agent should be
(13:29):
looking at, and the interpretation of that. In a sense, whisper down the lane. Nonetheless, I feel there's also a big opportunity to improve on how industries, how consortia, define these protocols. If you think back to the early protocols of the internet, things like DNS or HTTP and other protocols,
(13:51):
they're super, super simple. Security wasn't as big of a thought in designing those. Now we've moved on, so we can make it better. We can make it different in that sense. But I do agree with your point that the LLMs still need managing and guardrails, right?
Jon Reed (14:09):
So you make a really good point, and it's really important that we have these protocol discussions, but I do want to emphasize that I think those discussions are generally getting a little bit ahead of where a typical customer is right now, though they do need to be happening in parallel. So for example, one thing I'm pretty encouraged by is that Google just this week turned over the A2A
(14:31):
protocol to the Linux Foundation, and not only has Google signed off on that, but also Microsoft and AWS. How often do you see those three players involved in the same thing? Instead of launching competing protocols, for example, that's a really encouraging sign that these vendors understand the urgency of, like you say, establishing
(14:54):
something of a common translation layer. And some big enterprise vendors and software vendors are involved also. There's a long list, but in general right now, if I were advising companies, I would say save the agent-to-agent workflow connectivity questions for a little bit and start identifying some more
(15:15):
out-of-the-box use cases that can deliver some business results, get your users more comfortable with this technology, and start to build on that, with the understanding that eventually, yes, you're gonna wanna connect to processes outside of company walls and such. But at the moment, I know you get excited about watching agents communicate, but
(15:37):
for the average customer, I think just getting one basic agentic workflow in place and successful, and then building on that, is often a good first step. And a lot of customers aren't there yet.
Andreas Welsch (15:50):
I would agree. In many of the conversations that I have, especially with upper-mid-market customers, they say: where do I even start? All I hear is AI, and I need to do something, or I'm falling behind. Can you help me with my strategy? Okay, we have a strategy. Can you help me prioritize? What are the things that I should be looking at? Or, maybe I've already rolled out something.
(16:10):
Can you help me and our teams get people on board so that they know how to use it, and how to use it well? And how do we communicate that this is there to help you, not to replace you? That's the other big discussion, and the other big fear. Technology is all nice and fair, and I agree with you: protocols need to be there at some point. Hopefully many companies will leverage them in whatever they
(16:34):
build. But I think there's an additional angle at the moment that's been lingering there, that's a lot more urgent. And that's the cultural piece, right? In the fall, Slack came out with their workforce study, and they found, among I think four or five thousand participants, that 46% said: I don't even tell my manager that I use AI at work, because they
(16:55):
might think I'm lazy or incompetent, or I might just end up with more work now that I'm 28% more productive. Now, as is the case many times, you might think that's a vendor-focused study. But then Duke University came out in May, and they actually confirmed those findings. They had about 4,500 participants in their study and
(17:18):
came to the same conclusion. People are using AI quietly, so there's shadow AI usage, right? Those of you who have been around in the industry for a while have seen shadow IT. Now we have shadow AI. And I think that's a big problem for organizations to address, right? It doesn't matter what technology you use or what
(17:39):
vendor you build on, but how do you encourage your people to use it, to share with others how they're using it, and actually make this something that's attainable, something that we want to share? As opposed to being punished, or being given the feeling, the perception, that you are being punished because you're using some advanced technology. No, I think it should actually be quite the other way around.
Jon Reed (18:00):
Totally agree, and a lot of foresight there, because I made a list for you: my top three to four mistakes customers are making and my top three to four success factors. And I won't reveal 'em all at once, but you hit on two of them just in that one segment, and I'll get to it. And one thing I should have mentioned is that I was a little bit critical about some
(18:23):
of the limitations of the technology, but the other thing I fight against is that employees can come into the workplace with a very negative perception of these tools, based on bad experiences they've had using these tools where they cough up gibberish or things they don't want. And then also, like we discussed, some failed initiatives using more generic versions of the technology that aren't trained
(18:45):
on quality enterprise data. And so a big part of what companies need to do in order to get more successful is to build momentum with more accurate AI architectures that are built on quality data sets within their company. But as they do that, the other mistake that they can make, and this is right on my list, is failure to create a safe
(19:07):
AI sandbox with the requisite permissions. Otherwise, you are gonna have the shadow AI experiences, which carry a lot of potential IP risk as well, by the way. So companies need to cultivate safe environments that are secure and provide the proper role-based access to data,
(19:28):
so that you can sandbox your own experiments and play around with this technology without any pressure. And in fact, what I would like to see companies do, and this is one of my other mistakes, is to pull back from this mandatory AI-first intimidation stuff of use-these-tools-or-else, and instead set up sandboxes for people to play
(19:48):
around in, and set up reward and recognition systems for employees that use those tools to propose cool new workflows, cool new apps, cool new ideas, instead of making this a punitive measure. Make it an exciting measure, and then I think you're gonna see a lot less use of the shadow AI tools, because the sandbox tools are gonna have all the requisite corporate data to build specific
(20:12):
value-add use cases for your corporate setting. And so I think that's one of the biggest mistakes companies are making.
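A minimal sketch of what the "requisite permissions" for such a sandbox could look like, assuming a simple mapping from roles to the data classifications they may feed into the sandboxed model; the roles, classifications, and policy table are illustrative assumptions, not a description of any specific product.

```python
# Minimal sketch of role-based access control for an internal AI sandbox.
# Roles, data classifications, and the policy table are hypothetical examples.

ROLE_PERMISSIONS = {
    "hr_analyst": {"public", "internal", "hr_confidential"},
    "sales_rep":  {"public", "internal", "crm"},
    "intern":     {"public"},
}

def can_use_in_sandbox(role: str, data_classification: str) -> bool:
    """True if this role may feed data of this classification to the sandboxed model."""
    return data_classification in ROLE_PERMISSIONS.get(role, set())

def load_context(role: str, documents: list[tuple[str, str]]) -> list[str]:
    """Filter (text, classification) documents down to what the role is allowed to use."""
    return [text for text, cls in documents if can_use_in_sandbox(role, cls)]

if __name__ == "__main__":
    docs = [("Quarterly pipeline report", "crm"),
            ("Salary bands 2025", "hr_confidential"),
            ("Public product FAQ", "public")]
    print(load_context("sales_rep", docs))  # pipeline report + FAQ, but no salary data
```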
Andreas Welsch (20:17):
So I'm excited to see how especially large companies are evolving. I remember, as far back as two years ago, companies said: hey, look, we are building a large language model playground. Folks in our organization, please don't use public versions of ChatGPT and Claude and whatever, because of IP leakage risks and data privacy risks. We use the APIs, we build a nice UI around it, and then we enable
(20:40):
our workforce through a transformation program: here's how you prompt, here's how you use these tools, and we get safer usage that way. The point around now connecting that to data that you have in-house, or expanding that playground into something that's more agentic, more of a workflow, where you can actually build it out and see, does this work for my use case, without
(21:03):
having to go to IT, without having to put in a huge request and a business case and what have you, to try this one thing out. I think it's going to be exciting how large organizations, who I think will be at the forefront of this, are going to think about this.
Jon Reed (21:17):
You made a really important point, and one of the few things I should have mentioned and didn't in my little recap is that the other big lesson from the early generative AI projects, where the stats aren't so great on project success, is incorporating that customer-specific data, but also this move to more agentic systems, which is really this thing
(21:38):
around: instead of just having discussion-based interactions through a bot, the bot could potentially start executing actions, either in concert with you as an enabler or, in some cases, more autonomous steps as well. And it's clear to see that there is more potential value to be had there, right?
(21:58):
Because if you're having an interaction with a bot around, what is my leave time for the rest of the year, and then the bot can not only tell you but say, would you like me to block this in your team's calendars, or blah, blah, blah, it's clear to see how the value of engaging with these tools goes up in an agentic capacity.
(22:19):
And we could spend 10 minutes debating the definition of agents. I don't think how they're strictly defined really matters all that much, except to understand there's a variety of agents. But in general, I think when you think about actions, that's a really important variable there. And, like you said, a lot of companies are still trying to figure out how to do that.
(22:40):
And it's gonna really vary a lot by industry. For example, one of my most recent shows was Salesforce Connections, and some of the agentic use cases in there were a little more advanced for customers. But one of the reasons for that is a LinkedIn comment you made earlier about, quote unquote,
(23:00):
"good enough." Generally speaking, in most industries, experimenting around sales, marketing, and service, even in a live setting, doesn't necessarily have dire consequences. Whereas if you're talking about industrial AI, where you're having agents interact with your manufacturing and your shop floor, the stakes are obviously much, much higher
(23:23):
before you set those agents loose. And in that context of talking with marketing and salespeople, and to an extent service as well (service is a little higher stakes, I would argue, than marketing), the point is, there's still this thing around, okay, we can roll out some of these things and experiment with having agents take on more and more steps. And I think those are really healthy discussions to have, as
(23:45):
long as workers feel like they're still a part of the picture and valuable, and aren't gonna be replaced for no reason. And one of the companies I did a use case on, I really liked how they had communicated with their employees around how their roles might change and how they hoped to free them up for more value-add activities. And there wasn't an intent to lay people off.
(24:09):
And they were very clear about those discussions. And I think when you can be clear about those, you can really then move into a culture of experimentation and enthusiasm, rather than one where people feel like, I'm being surveilled and potentially automated out of existence.
Andreas Welsch (24:24):
So, look, I'm working on two new courses with LinkedIn Learning, specifically around the topic of how I, as a leader, can encourage and empower my team to use AI and use agents, but do it in a way that's responsible, right? A lot of times we see somebody send you a draft and say, hey, can you take a look at this? And you start reading it and you feel like, hey, come on. You just pulled that straight out of ChatGPT.
(24:45):
You didn't read it, you didn't edit it, and now you expect me to take a look at this. Putting myself into the shoes of a leader in an organization, I think we need to set expectations that regardless of the tools that you use, whether it's yourself, whether it's you and a team, or it's you and an agent,
(25:06):
the quality needs to be top notch. That's what we expect. It doesn't really matter if it was Jane and John, or Jane and five other people, that created this; credit where credit is due. How do you manage that, now that these tools are available, when your employees are afraid to tell you because they think you might think that they're less capable, which again, in my view, isn't the case?
(25:27):
So that's one dimension. And then on the other side, as an individual contributor, how do you use the tools responsibly and communicate that to your peers and to your manager?
Jon Reed (25:35):
I'd love to take your course. That sounds very appropriate, and I totally agree with you. One thing in terms of myth busting that I'd like to put out there is this: look, shareholders love talk of autonomy, because whenever you can take humans off the line items, the balance sheet looks a
(25:56):
lot better. Consequences to culture and society be damned, but it looks better, right? But in fact, there are degrees of agentic autonomy, and it's actually wrong that you can't develop ROI with a hybrid-type approach. In some use cases where there are some elements of crucial human
(26:18):
supervision and handoff, but also some agentic autonomy within those scenarios, it's actually a matter of use case design. It's a myth that once you pull humans in, you can't get ROI. But there are certainly times where, and this is where it takes such careful evaluation on the part of companies, there are times where, for example, having agents push
(26:41):
supposedly production-ready code into production ends up costing more work on the back end fixing it than it saved by doing it autonomously. And that's why you have to look really carefully at each situation. There isn't some bulletproof formula for your culture, industry, and use case, but it is encouraging to realize
(27:03):
that you can have a lot of success with these hybrid designs where humans are still involved at crucial points.
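A minimal sketch of one way such a hybrid design can be wired up, assuming each proposed agent action carries a risk score and anything above a threshold pauses for human approval; the threshold, action names, and review mechanism are hypothetical illustrations, not a specific product's design.

```python
# Minimal sketch of a hybrid (human-in-the-loop) agent design: the agent proposes
# actions, but anything above a risk threshold waits for explicit human approval
# instead of executing autonomously. All names and numbers here are hypothetical.

from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str
    risk_score: float  # 0.0 (harmless) .. 1.0 (high stakes), however you estimate it

AUTO_APPROVE_THRESHOLD = 0.3

def human_review(action: ProposedAction) -> bool:
    """Stand-in for a real review step (ticket, chat approval, UI prompt, ...)."""
    answer = input(f"Approve '{action.description}'? [y/N] ")
    return answer.strip().lower() == "y"

def execute(action: ProposedAction) -> None:
    print(f"Executing: {action.description}")

def handle(action: ProposedAction) -> None:
    if action.risk_score <= AUTO_APPROVE_THRESHOLD:
        execute(action)                      # low risk: agent acts autonomously
    elif human_review(action):
        execute(action)                      # high risk: human approved the handoff
    else:
        print(f"Held for rework: {action.description}")

if __name__ == "__main__":
    handle(ProposedAction("Draft a follow-up email to a prospect", 0.1))
    handle(ProposedAction("Push a code change to production", 0.9))
```

The design choice is that autonomy becomes a per-action dial rather than an all-or-nothing switch, which is what makes ROI with humans in the loop possible.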
Andreas Welsch (27:09):
I think so too. And coming back to your earlier point about the detrimental effects of going for an AI-first culture, or at least putting that out there in a memo and saying, we want to be AI first: I've been thinking about this rather in terms of how you first of all become AI ready. We've been talking about this for a little bit now. How do you create this culture of experimentation and
(27:31):
encouragement so that people feel more confident, they feel safe about the use, about their job and their wellbeing and their existence to begin with? And then, yes, then we become AI first, because we are enabled to think: where else can I use this? Do I really need this step to begin with? Is there a way that I can use technology to do that?
(27:53):
What technology can I use?
Maybe AI, maybe not AI.
That's perfectly fine as well.
It doesn't have to be AI only.
So I think, there again, culturally there's a lot that needs to be done.
Jon Reed (28:04):
I really like your AI readiness theme, and we could probably talk the rest of the show about what an AI-ready company looks like. But I really do like that, and I don't like AI first at really any point. One of the reasons I don't like it is this: the technology that we're talking about. First of all, there's a lot of different kinds of AI,
(28:25):
yeah, not just generative AI. But secondly, generative and agentic AI is generally a cost-intensive solution with a very specific set of pros and cons that also has environmental consequences. This technology is not a commodity yet. I just got through criticizing Mary Meeker and BOND's report for implying that it is,
(28:45):
and it's not, as far as affordability is concerned, yet. And it's good for some things and not for others. So along with my mistakes, I have my underrated keys to project success, which I will run by you. And what I like to see companies start with first is not anything about AI, but something around: what's the most compelling
(29:07):
future for you and your role in your industry going forward? How are you gonna compete? No vendors, no technology, but how are you gonna be successful going forward? What do you want to be known for? Do you want to be known for the best customer service? Do you want to be known for the most differentiated, personalized product configuration? What are the things that you really wanna stand out for? Then, with trusted advisors, start having the technology
(29:31):
conversation around how that's gonna be enabled, and what's ready to put to work now and what's not. Quantum computing, for example, probably isn't ready yet, but some forms of agentic AI will be. And in some cases, and you and I had messaged about this before the show, some RPA scenarios will actually be part of the mix and are still useful. It really depends. So you need to figure out the different roles different
(29:54):
technology can play. The only thing that I concede, and I had a big argument about this with a CIO on video a while back, where the CIO wanted to say everything should be framed in terms of business problems, not technology, is this: I do think that once you plot that future for your industry, you do need to say, what is our AI strategy specifically?
(30:15):
And the reason you need to do that is because your board expects you to have one. So you don't have the luxury, in this case, of saying it's just another technology, we'll use whatever tool is appropriate in the tool belt. Ultimately, I think that's the right approach, but the board wants to hear something more. So you do need to have an AI strategy, but I would argue it goes back to what you just said.
(30:36):
It needs to be an AI readiness strategy, which is really a data platform strategy that involves not just AI, but analytics, better decision making, democratizing tools. All of that should be part of your AI readiness strategy.
Andreas Welsch (30:50):
The part that I find so encouraging is that we've seen early movers two, three years ago, even during the machine learning cycle. And I've gone through this myself when I was at SAP. Coming out of machine learning into generative AI, I saw a lot of good feedback from the analyst community back then saying: if you've done machine learning and you've done your learnings, now you know how you can apply that to gen AI and
(31:13):
what mistakes not to repeat. Now we're coming out of gen AI and going into agentic AI. I think there are additional learnings, but the thing that encourages me is that organizations are reaching out to me that are a hundred, 200, 500, a thousand people strong. Not the 10,000- or 100,000-person organizations, right? So now they're looking at this. So there is this majority moving in, and smaller
(31:35):
organizations are trying to figure out, okay, what do we really do? I have all these different vendors and consultants and system integrators that are whispering in my ear: I can help you. I have the board that puts pressure on me, that I need to figure this out. To your point, an AI strategy, right? What do we do? Where do we start? So to me, that is,
Jon Reed (31:54):
Yeah, that's why you're paid the big bucks, because those are challenging questions in terms of who to work with and all of that. If I were able to give one piece of advice to the smaller companies you refer to, it would be: unless you are very sophisticated in your approach to AI and data science, which most smaller companies are not, I would strongly advocate picking a vendor or two to really work with, to build out
(32:18):
what you're doing. Because when you look at the architectures that work, that deliver the good results, and you can go on YouTube and look at this anytime you want, look up things like RAG architectures and agentic evaluation and stuff like that, you'll notice these architectures are very complex. And the different models that provide superior performance
(32:39):
are constantly shifting and being retrained, and there are different models for different purposes and all of that stuff. Let the vendors with deep pockets sort a lot of that out for you, and you wanna have conversations with folks like yourself that can help you start thinking about: what does our AI readiness look like? Are there areas where we have high-quality data and a real,
(32:59):
obvious use case or need where we can get started? Because one of the cool things about some of these products and some of these vendors you can work with is that you don't need multi-year implementations to start getting some wins here. And so it's so important to be able to not only say, what is your AI strategy, but: hey, we actually can already notch a couple of wins.
(33:21):
we already did some other kindof HR related documentation that
new employees needed.
Things like that they, getpeople started.
Andreas Welsch (33:29):
Look, I've been saying this for a number of years now, and I think it applied with machine learning, it applied with gen AI, and it does apply with agentic AI as well, since we're talking about this topic. And that is: you don't have to build everything from scratch. There are companies, whether it's startups, and I see a lot of activity in the sales and go-to-market space, doing outreach, doing go-to-market,
(33:50):
doing contact and lead augmentation and data-enrichment-type things. If that's an area where you have a need, you don't need to build this; there are companies that do this. If you're focusing on marketing and communication, there are lots of companies out there that specialize in that domain as well. And the same with any other business function. So the approach that I usually take is to say: first of all,
(34:13):
what is your business strategy? Similar to what you shared: where do you want to go? How can technology help you accomplish that? And then maybe take one or two departments or teams where you want to prove this out. What do they actually do? Where are they spending most of their time? What are the costly tasks and processes that they run?
(34:34):
And within that process, are there certain steps that you can now augment or automate with the help of technology like AI? If people feel more confident, then you can increase the amount of technology in that process. But I think there's this human element that's stayed true throughout many technology generations and
(34:55):
evolutions. And that is: we need time to adapt. We need time to warm up to this and feel comfortable so we can use it. Now, granted, the time to use it and get it into production has decreased significantly. So we don't have infinite time to play around with this. Everybody else is probably doing something similar, but I think
(35:16):
there's a tension or balancing act between doing something and getting it in production quickly, but also making sure people can follow and can adopt the technology as well.
Jon Reed (35:29):
Love it. And I think it's fascinating that you and I come at this from such different backgrounds, in a way, but are meeting in a core agreement on how to move forward with customers. That really pleases me a lot. Let me throw a couple of other things in there, and you can riff on them, that are tied into the rest of my success tips, which I think fit in here.
(35:50):
One of them is: no AI proofs of concept. Don't do that. Do pilots instead that are live. And you might choose internal pilots if you don't wanna do something that's customer-facing yet. But do something live. You could pick a use case that isn't high on the risk gradient in terms of the EU AI Act, for example.
(36:12):
Pick something modest, but do it live, because you need to see it with your own data. It's so important, and you need to see it in an iterative way where you can improve upon it, perhaps in a co-innovation capacity with whatever vendor you're working with. But don't do POCs. Get going on the pilots, and that to me is a real key. But treat it like a data project more than a software project.
(36:36):
That's a really big key for me.
Andreas Welsch (36:38):
I remember, a couple of years ago, we had a term for this kind of thing: it was called the throwaway proof of concept.
Jon Reed (36:44):
Yes.
Andreas Welsch (36:45):
Is it still a
thing?
Are people still doing this?
Jon Reed (36:49):
I still think there's a little too much dabbling, and we still hear about POCs. I heard about a customer doing one and I cringed, but I didn't say anything. But I just wanted to say: listen to what you're hearing on the keynote stage from early adopters and how they're using these tools; they're figuring out what they're good for. And you know what the really cool thing is, and I'm sure
(37:10):
you've experienced this, so you can probably speak to it. The way the light bulbs go off for users once they start seeing what the tools can do in a particular context, then they start developing their own things. 'Cause they come to their supervisor, they come to their team and say, hey, why can't it do this? Why can't it pull this information? Why isn't it doing that?
(37:30):
And you can say, actually, it can do that. Let's do that. And that's exciting.
Andreas Welsch (37:35):
Now, a couple of months ago Johnson & Johnson was in the news, and I think it was their CIO who said: hey, look, we're changing the approach to our AI program. We've stopped 90% of our pilots or proofs of concept or whatever you wanna call them, things that are not in production, that are in evaluation, because we see it's actually a very small number of
(37:57):
AI scenarios or use cases that deliver the most significant impact. So we're focusing, I think it was, on supply chain and maybe one or two other areas. And I remember the reactions online on social media were mixed: oh, did they throw three years' worth of money and time and resources away, or, super, super great, now they're focusing. And I must say, I'm in camp
(38:19):
"great." Those were learnings, and there was an investment that a company, a large organization of that size, had to go through to come to that point. But I'm hopeful that along the way they were able to bring people along too. There's actually another LinkedIn Learning course that I created that just came out, on the topic of mitigating AI project risks.
(38:40):
Because a lot of times it's not just the technology. Yes, there is data. Yes, there are business requirements. But there are also people, right? It's the people that are being affected by this change. It's the people leading the change, when status reports turn from red to bright green the higher they go up in the hierarchy. And then it gets difficult when things take longer, when the
(39:02):
results are not what you initially expected them to be, when you cannot prove the hypothesis. So when things drag out and they're not successful, at some point you need to look each other in the eye: do we stop this, or do we throw even more good money after it? And so, how do you avoid this? But still, I think if you do that on a smaller scale, maybe if you don't spend three years, especially in a smaller
(39:24):
organization, there are important learnings to be made that help you and prepare you for the next step.
Jon Reed (39:32):
So one really interesting point that I didn't get to, that I think really fits in well here, is that we have to be careful not to take our jaded experiences with more trivial generative AI experiments and apply them to the real impactful stuff. So for example, I'm a Google Workspace customer, and I
(39:52):
got a note the other day, and this got a lot of social media hits I saw, about these price increases pertaining to the AI. Google was bragging about all the AI value it's been delivering, so therefore it's raising the prices. And I asked my colleagues: have any of you gotten any value from any of this technology across the productivity suites we use? And the answer was no.
(40:14):
Not any value whatsoever. And I think that's a pretty damning indictment, but that's not to say that Google will never deliver real value with AI. I would argue that Google delivered tremendous value with an algorithmic form of AI when it built Google Search a long time ago.
(40:34):
So it's not like Google doesn't know how to deliver value; it's just that a lot of these productivity generative AI things are really not where the impact is. And if Johnson & Johnson was playing around with some of that stuff, I can't really blame them for saying, yeah, we're not making that much money helping people write stuff that has to be rewritten anyway. And one of the really interesting things is taking a step back from this
(40:56):
urge to replace what humans do, and look at what machines are really good at that humans are not. Because a lot of the most exciting AI developments, such as things like AlphaFold, are about setting machines loose, in this case in a healthcare setting, to do protein-folding-type things that human brains and human processing can't do. And one thing I love: you say, oh, I'm really scared to bring
(41:19):
generative AI into a supply chain, or agentic AI into a manufacturing setting. Start with quality assurance, because you can set agents loose on analyzing your processes and spotting anomalies and problems. A lot of vendors have this technology, you probably have experience with it, and it surfaces those anomalies to your managers and shop floor leaders, and
(41:41):
you'll start to see, I think, much more powerful impact than, hey, do you want me to write this LinkedIn post for you in generic language?
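As a small illustration of the anomaly-spotting idea, here is a minimal sketch using a robust modified z-score over process measurements; the readings, the threshold, and the alerting step are made-up examples, and real industrial systems would use far richer methods.

```python
# Minimal sketch: flag anomalous quality measurements on a production line
# using the modified z-score (median absolute deviation), a common robust
# outlier test, then surface them for human review. Data and threshold are
# hypothetical examples.

from statistics import median

def find_anomalies(readings: list[float], threshold: float = 3.5) -> list[int]:
    """Return indices whose modified z-score exceeds `threshold`."""
    med = median(readings)
    mad = median(abs(x - med) for x in readings)
    if mad == 0:
        return []
    return [i for i, x in enumerate(readings)
            if abs(0.6745 * (x - med) / mad) > threshold]

if __name__ == "__main__":
    # e.g. weld temperatures in degrees C from one shift (made-up numbers)
    temps = [350.1, 349.8, 350.4, 350.0, 349.9, 350.2, 391.7, 350.3, 350.1, 349.7]
    for i in find_anomalies(temps):
        # In practice this would open a ticket or notify the shop-floor lead.
        print(f"Reading {i} looks anomalous: {temps[i]} C")
```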
Andreas Welsch (41:49):
Right. Now, we've covered a lot of ground over the last 40 minutes, from where did we start and how did we even get here, to how are people using this, how should they be using it, how do they want to use it, and what do organizations and leaders need to do to drive more
(42:09):
genuine adoption and bring the people along on the journey. But before we wrap up, Jon, I'm wondering: any famous last words when it comes to the state of AI agents in the enterprise as we head into the summer? And by the way, I'm hoping that we can do a few more of these over the course of the summer.
Jon Reed (42:29):
Yeah. I think you and I have some potential to continue this dialogue and feed in different angles, so that'll be cool. I think the main thing that I would leave you with, and you may have a couple of observations on this, is that it's good to agree with everyone. Okay, you have these sandbox experiments and you're cultivating this culture of experimentation, which is
(42:51):
great. But when it comes to these pilots and launching those, it's good to agree with everyone on how we're gonna measure success and what that looks like, and also how we're gonna evaluate the performance of agentic tools. I've spent a lot of time on diginomica this spring writing about evaluating RAG and evaluating agents and how to do that, because there are a lot of cool vendors, one of which I
(43:12):
wrote about, Galileo, but there are other ones, and there are open source tools as well that allow you to begin to monitor the performance of these agents in real time and start to figure out where the glitches are and make those corrections. So think of it like a continual-improvement type of technology.
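As a rough sketch of "agreeing on how to measure success," here is a minimal, hypothetical evaluation harness that runs an agreed-upon test set against an agent and reports a pass rate; the test cases, the 90% threshold, and the toy agent are illustrative assumptions, not a specific vendor's evaluation tool.

```python
# Minimal sketch of an agent evaluation harness: run an agreed-upon test set,
# score the agent's answers, and track the pass rate over time.
# The agent stub, test cases, and pass threshold are hypothetical examples.

from typing import Callable

TEST_CASES = [
    {"question": "What is our refund window?",       "must_contain": "30 days"},
    {"question": "Who approves expenses over $500?", "must_contain": "manager"},
    {"question": "What is 12 * 1.5?",                "must_contain": "18"},
]

def evaluate(agent: Callable[[str], str], pass_threshold: float = 0.9) -> bool:
    passed = 0
    for case in TEST_CASES:
        answer = agent(case["question"])
        ok = case["must_contain"].lower() in answer.lower()
        passed += ok
        print(f"{'PASS' if ok else 'FAIL'}: {case['question']!r} -> {answer!r}")
    rate = passed / len(TEST_CASES)
    print(f"Pass rate: {rate:.0%} (threshold {pass_threshold:.0%})")
    return rate >= pass_threshold

if __name__ == "__main__":
    # Stand-in agent; in practice this would call your deployed agent or pipeline.
    def toy_agent(question: str) -> str:
        canned = {"refund": "Refunds are accepted within 30 days.",
                  "expenses": "Your manager approves anything over $500.",
                  "12": "12 * 1.5 = 18.0"}
        return next((v for k, v in canned.items() if k in question.lower()), "I don't know.")
    evaluate(toy_agent)
```

Running something like this on every change, rather than once at go-live, is what makes the continual-improvement framing concrete.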
It's radically different than that classic ERP go-live of old,
(43:33):
where you turn it on, you deal with the bugs, and then it's basically working for you. Now, I could poke some holes in that mentality too, but the point is, that's what software used to look like. This is not that. This is a continual state of measurement, evaluation, and performance, and then bringing this back into your user base and saying: are we doing our jobs better?
(43:53):
Are we happier? I don't know why more people don't talk about that. Are we happier? Are we having more impact with our customers? All of that stuff fits in.
Andreas Welsch (44:02):
I love it. And especially the last part: not just the happiness factor, but how much more can we actually do with the help of technology. So, coming back one more time to AI first, which seems to take a very cost-centric approach, how can we take out cost, take out resources from our P
(44:22):
and L? I think the other opportunity is really: what else can we achieve if we now have an army of agents, or teams of agents, at everybody's disposal that they can use?
Jon Reed (44:34):
And by the way, I do realize that companies are obsessed with operational efficiency right now because of the way the economy currently is, the macroeconomic challenges, and all of that. There are not that many companies, aside from maybe Nvidia, that have the luxury of just growth all the time. And even Nvidia has had stumbling blocks of late. So the point is, if I sound idealistic when I talk about
(44:57):
things like happiness and experimentation, it might be surprising to hear, but those things actually get back to operational efficiency. Because what happens is that as employees become more versed with these tools, they do punch above their weight in certain areas. And guess what? You may not have to hire as many people for that team next year,
(45:20):
about this than prematurelyreducing headcount, which by the
way, we see a lot.
And then having to hire peopleback on contract and, then your
customers are complaining andall of that.
There's much better ways to goabout this that all have to do
with employees embracing thesetools, overperforming, and then
your operational efficiency is ahappy byproduct of doing things
(45:40):
the right way.
Andreas Welsch (45:42):
So it also seems that paying more attention in your employee surveys, if you do run them, to employee engagement indexes, to work-life balance, stress factors, happiness at work, all these factors that large companies ask their employees to submit feedback on: I think those are becoming a lot more important, and we should
(46:04):
pay a lot more attention to those.
Jon Reed (46:06):
Yeah.
Instead of AI first, I would like to know: what is your talent-automation balance, and have you figured that out? And if you haven't, what are you trying to do to increase both?
Andreas Welsch (46:15):
Alright, Jon, it was a pleasure having you on. Like I said, we've covered a lot of ground. I'm already looking forward to our next episode together. Cool. And we'll see what has changed and developed until then. So, Jon, again, thank you so much for your time and for all your insights.
Jon Reed (46:28):
Thanks for the great
discussion.
I learned a lot from you.
Thanks.