Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Andreas Welsch (00:05):
Today we'll talk about what's next with AI agents, and I'm so excited to have Jon Reed, esteemed analyst and co-founder of diginomica, with me again. Hey, Jon, thank you so much for joining.
Jon Reed (00:17):
Yeah, so glad to be back. It's really been great to have this dialogue going this summer, and it's been great to get the feedback on it as well. I'd like to think we're getting somewhere, but things are also moving really fast, so
Andreas Welsch (00:30):
we'll see
Jon Reed (00:30):
how we do.
Andreas Welsch (00:31):
Yeah, exactly. Look, the first two episodes that we've run were a huge success. We had lots of great feedback, like I said, lots of interest, and people engaging in a good dialogue. So if you have any questions in the audience, please feel free to put them in the chat. We're super excited to hear from you, and we'll answer them as we go along here. But Jon, without further ado, obviously it's been an amazing summer.
(00:55):
We've come from the spring event series. We are now seeing agents mature a little more. We're talking more about security, it feels, and trust and authentication and these kinds of things, but to me those are only some of the topics of what's next. What are you seeing? What's next for you?
Jon Reed (01:19):
Every day is a different day, and it's actually been, I think, a pretty significant week. It's gonna be interesting heading into the fall event season. A lot of big shows are on deck. I know you're going to a number of them also, and we're really gonna get a gut check pretty soon on customer adoption and where everything stands.
(01:40):
But one thing I thought was really fascinating, just in the last week: you had the underwhelming release of GPT-5, with even a lot of AI fanboys disappointed in the release. And I think the big takeaway, more than anything, is that
(02:00):
traditional approaches of just throwing scale, data scale, at AI problems aren't working very well. And we're seeing a real divide now, I think, between the big AI companies and this pursuit of artificial general intelligence, which we could debate, versus a really different approach to enterprise use cases that I think is far more promising, one that recognizes more
(02:23):
technical limitations and constraints, but then applies them very cleverly to enterprise and industry scenarios. And we could talk a little more about that and why I think that's a really important distinction, because you can run the risk of thinking that if these models are underwhelming, then they're not gonna work for my project.
(02:45):
Along those lines, this week I think there was another instructive thing, because we saw a ton of headlines for this MIT study claiming that 95% of generative AI projects fail. Now, what's interesting about that is that as soon as it came out, a bunch of people published little articles on it, none of which I think were particularly insightful.
(03:07):
And then it turns out to be a group inside of MIT that is working, I think, on some specific stuff around agentic AI. They immediately pulled the study offline. I found it. But now we're all talking about a study that no one can actually really find. I think that's a really good summary of the potential for
(03:28):
confusion. And actually, when you dig into the study, and we can maybe talk a little bit about it today, it's actually not this damning, we-can't-do-anything-with-AI thing. In fact, alongside the finding that 95% of the pilots they looked at in their study, which is a pretty small sample size,
(03:50):
by the way, a few hundred companies, didn't get past the pilot phase, I think the tantalizing part is that the 5% that did had really big success, right? Yes. So that's a really interesting thing too: not just small success, but big success. And the other interesting thing is that when you dug further into the study, and I don't know totally how to reconcile this
(04:12):
yet, there were different numbers that indicated that 30% of projects were successful on their own, but more like closer to 65% were successful when working with an external vendor. And this is something we talked about in past episodes: what I think a lot of customers are gonna do right now is avoid trying to recreate their own cutting-edge AI architectures,
(04:34):
and turn instead to trusted vendors to help manage the complexities of these environments. So anyway, I just think it's really interesting, because just in a week's time we had so many interesting juxtapositions between different ways of thinking about the exact same thing.
Andreas Welsch (04:55):
Yeah. It's hard to comment without coming across as being a little cocky. But I think if you've been in the trenches, if you've seen this play out over the last few hype cycles, machine learning being one of them, you know that it's not that easy. It's not that simple. It's not just about the low-hanging fruit and let's boil the ocean and see what we can do with this magical
(05:17):
technology. It's about: where do we have a problem? Can we measure the impact? And can we solve it with the data we have and with the technology that's available? In very few cases is technology the limiting factor, right? It's either that we haven't really understood or articulated the problem well enough, or we're not bringing our people along. Yes, there's technology involved as well, but like we've
(05:40):
also seen in previous releases, whether it's GPT-5 or four or three, some of them were more groundbreaking than others, but technology itself isn't necessarily the limiting factor. I like that you mentioned the MIT study. I was going to bring that up too. I need to update my slides.
(06:01):
I need to tell you that for years I've been saying it's 85%, even here on the back cover of the AI Leadership Handbook. It says 85% of AI projects fail, and that goes back to 2018, when Gartner said they don't deliver value, not just that they fail. Now we're at 95. Okay, fine. But I think it goes back again to this fundamental part of: have
(06:23):
we even understood what the problem is that we're trying to solve? The second part for me is also something that I've been sharing for years: don't build everything yourself just because you can. There are vendors, there are companies out there that do that for a living, that do that in the areas where you use software anyways. So look for opportunities: is there AI in these products we
(06:46):
already use? How do we get it? Is it an additional subscription? Is it a higher tier? Does it even add value? And start there, instead of first building your AI platform and everything else around it for this one little use case that might or might not play out the way you hope.
Jon Reed (07:02):
Absolutely. And I wanna talk a little more about this AI readiness concept that we touched on in past episodes, 'cause we did get some feedback on that. But one thing I wanted to also share, that I think is a really interesting juxtaposition with
(07:23):
this broader failure-rates thing and the underwhelming GPT-5 release: you have these other vendors that are also, I think, whether it's Grok or Anthropic, releasing more incremental improvements in a lot of ways rather than drastic jumps
(07:46):
like we had between GPT-2, 3, and 4, for example. But as a contrast to that, I was watching a video on YouTube. And I'm not gonna call out the video, 'cause I'm actually somewhat critical of part of it and I don't wanna get into this whole back and forth with this person, but it had to do with turning LLMs into domain experts.
(08:09):
And I think this is the heart of the enterprise play. So what this individual said is that when it comes to vertical AI applications, the systems that you build for incorporating your domain insights are far more important than the sophistication of your models and your pipelines. So they were saying the limitation these days is not how
(08:31):
powerful your model is, but more: does it understand the context in your industry for your customer? Does it perform the way you need it to for that industry? And so I think that's really interesting, because how do you apply these large language models to specialized industries?
(08:52):
There's a last-mile problem. And the last-mile problem has to do with giving that model the specific information, context, and understanding for that particular industry setting. And that becomes really important. And when you take that on,
(09:13):
there's really promising ground to be gained. But the key thing to understand there is that LLMs out of the box aren't gonna understand a lot of these specialized contexts, because even though they've been trained on the vast internet, they haven't necessarily been trained on these very specific domains and this customer-specific data, for a variety of reasons, including the privacy part. And so that's the really interesting contrast, I think.
(09:35):
And so you have someone like this talking about achieving upwards of 90, 95, 98% accuracy rates in those domains, focused on specific constrained problems, and that's really exciting. Now, one of the really interesting questions is:
(09:58):
is that enterprise promise that I think people are starting to see enough to maintain the whole AI economy? And the question is uncertain, because the big AI companies are counting on valuations that I think extend pretty far beyond that. This doesn't necessarily save the whole AI economy, but actually I think what it does is it creates opportunities for
(10:18):
people like you and me, and opportunities for vendors that focus on the enterprise, to bear down on these industry problems and provide LLMs with the context they might not have.
Andreas Welsch (10:28):
I think so too. And to me there's so much potential that is still untapped. And I see that largely, like we've talked about in our previous episodes, in the small and medium-sized business sector: people and organizations who have a need to drive more automation, to increase the speed that their business is able to function at,
(10:49):
but who might not necessarily have the deep pockets or the resources that larger organizations or global organizations do to get consultants to fix this, or build an army of engineers to do that for them. And I think there is a huge opportunity in at least two areas. One, how do we use the tools that are available to us?
(11:11):
Again, that could be things like Copilot or ChatGPT or some other tools. Some basic AI literacy. How do I use these tools safely, responsibly? How do I make sure that whatever I pass on to my colleague or to my customer has been checked by myself or by somebody on the team, so I don't just generate AI slop and send it on? That on one hand,
(11:32):
but on the other hand, how can we, in very concrete ways, optimize and automate some of our standard processes? Maybe that could be an RFP process: we send an RFP out to a number of suppliers, then we get responses, and now we need to figure out what did they respond to, which parts can they actually complete? All of these things where you have a lot of people doing
(11:54):
this work still today, as one example, right? Or again, content creation, newsletters, simple things where organizations have people doing these tasks for a good amount of their time a week or a month. And we're not tapping into those yet. So to me, that's a big area. I also wanna say I was just in Europe for a couple weeks, and
(12:16):
the conversation there was really interesting when this topic of data and agents came up. Yes, models are great, they're trained on vast bodies of information and language, but they don't know your specific business data and processes and product formulas and what have you. The next topic that came up was: maybe it's small language models
(12:38):
that are either specialized or trained on industry knowledge, or that you fine-tune on your business data. That could be one approach. The next topic that came up was: are we seeing a pendulum that keeps swinging from on-premise to cloud, and now, with cost and privacy concerns, and to me that seems to be more of a European topic,
(13:00):
are we seeing the pendulum swing back, with organizations, larger organizations, putting Nvidia hardware back in their data center to run these models on their premises so they have better control? Just two of the big questions that came up, and to me they're super fascinating. I'm not seeing that here in the US yet. I dunno if you are. I think here it's still API-based, we
(13:21):
call a model from somewhere. Certainly there are some organizations, some industries, that are more concerned about what's happening with my data, what do vendors potentially do with the data and with the prompts they have. But it was more noticeable in these conversations that I had in Europe.
Jon Reed (13:38):
Yeah, and this is part of the experimentation, in a good way, that people are gonna do in terms of what the right-size models are for their use cases. And in some cases, I've talked with vendors who start with larger models and then are able to scale that down, which is a little bit like what DeepSeek did to some extent, but
(13:59):
basically you can distill or boil models down to smaller models once you have established a use case. But there are characteristics of different-size models that have to be analyzed too. For example, larger models are often better at the language understanding part.
(14:20):
So if you have customer-facing stuff, you might want the larger model that does a better job of understanding what the customer is saying, 'cause they might use different words that mean the same thing, and the larger models are gonna be better at that kind of understanding. Whereas if you have, for example, a team of super users using an internal model,
(14:41):
they can be taught the right ways to interact with that model. And so they might do just fine on a smaller model, where they realize, okay, I need to prompt it with these kinds of words, but it won't understand these other words. So it's just really interesting, and I did a podcast this week with
(15:02):
Aaron Harris, CTO of Sage, and they essentially found they had to train an LLM because out-of-the-box LLMs just didn't understand finance terms well enough to actually use them. And so we got into that in the podcast. And that's really interesting, because sometimes you'll need to do that. Other times you won't need to do that, and other vendors are doing just fine using RAG and tool-call-type context and
(15:26):
a more out-of-the-box model. So again, it comes down to use case. But in the case of the finance one, for example, in the podcast we talked about how if you ask a question like, what stock item will I need to reorder soon, ChatGPT wasn't recognizing that as an accounting question. It didn't recognize the accounting terms. And the Sage model that they built understands that stock
(15:51):
refers to inventory, and that's the world the model is operating in. It just depends on your use case. And that's why you have to dive into this with expert partners and expert advisors to help you head down the right path. But if you do, I think you start to build on these use cases and
(16:12):
start to notch some real wins here, even though in the background you have this study that says 90% of projects fail. But the thing is, the kind of projects that are failing, in my opinion, are not the kind that you and I are talking about right now.
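To make the grounding idea above a bit more concrete, here is a minimal Python sketch of the RAG and tool-call-style context Jon mentions: retrieve a couple of domain glossary entries and prepend them to the prompt so an out-of-the-box model reads "stock" as inventory. All names here (DOMAIN_GLOSSARY, call_llm, answer_with_domain_context) are assumptions for illustration, not Sage's actual approach or any particular vendor's API.

```python
# A minimal, illustrative sketch (not Sage's implementation or any vendor API):
# ground a general-purpose LLM with domain context so that "stock" is read as
# inventory rather than equities. DOMAIN_GLOSSARY, call_llm, and
# answer_with_domain_context are assumed names for this example only.

DOMAIN_GLOSSARY = {
    "stock": "In this accounting context, 'stock' means inventory on hand, not equities.",
    "reorder": "Reordering means replenishing inventory that has fallen below its reorder point.",
}

def retrieve_context(question, glossary):
    """Toy retrieval: return glossary entries whose key terms appear in the question."""
    q = question.lower()
    return [definition for term, definition in glossary.items() if term in q]

def call_llm(prompt):
    """Placeholder for whatever model endpoint you actually use."""
    return f"[model response to a prompt of {len(prompt)} characters]"

def answer_with_domain_context(question):
    """Prepend retrieved domain definitions so an out-of-the-box model stays in the right world."""
    context = retrieve_context(question, DOMAIN_GLOSSARY)
    prompt = (
        "You are an assistant for a finance and inventory system.\n"
        "Domain context:\n- " + "\n- ".join(context) + "\n\n"
        "Question: " + question
    )
    return call_llm(prompt)

if __name__ == "__main__":
    print(answer_with_domain_context("What stock item will I need to reorder soon?"))
```

In practice the glossary lookup would likely be replaced by embedding-based retrieval over your own documents, but the shape of the pattern is the same: supply the domain context at question time rather than retraining the model.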
Andreas Welsch (16:29):
That brings up a good point and a memory. I did a webinar a couple weeks ago with a company in the procurement space, and we had a panel of experts and also got lots of questions from the audience. And some of the audience questions were like, hey, with AI, and especially agentic AI, what can I do in procurement?
(16:49):
And there were some really great examples where they said, hey, look, for example, at something like long-tail spend: you have individual users in your business that buy from individual vendors, but it's not at a volume where you can negotiate large discounts or larger contracts, and it's hard to manage, and it's hard to analyze the spend and categorize it
(17:12):
in a consolidated way, so people are not doing it. But now with agentic AI, they said, hey, we can actually do that. So we're not replacing people doing this job, because nobody's doing it anyways, and we're giving businesses better insights so they can consolidate spend, optimize their category management, and drive real value in business results from that. So to me that was a great example, because also what I
(17:35):
heard over the summer was the whole debate about entry-level jobs going away. And everybody panicked. And we looked at labor statistics, and STEM graduates seem to be in a higher unemployment category this year than they have been previously, things like that. And then luckily the debate also switched to: if we don't have
(17:56):
entry-level roles, how are they becoming senior? Who are the next senior and expert-level roles? So please don't just eliminate the need for early-in-career roles. You need to figure that out too. So anyways, two stories. One is the question of where do we
(18:16):
deploy agentic AI and how can it add value, things like long-tail spend in procurement where we're not looking at this anyways, and on the other hand, we're not taking jobs away. And this mindset, I think, is really important, because I also see too many leaders think about: where can I cut cost? Where can I cut spending? Where can I reduce headcount? Where to me, the bigger question is: what can I enable if I have
(18:40):
this booster, not just a productivity booster, but different insights, new insights, scaling our knowledge, scaling our information acquisition and retrieval and reasoning? And I'm not seeing a lot of leaders think about that yet, but I would absolutely encourage them to approach that mindset rather than: where can I save?
Jon Reed (19:01):
Yeah, and this is something you and I hit on in our last discussion around use cases: the importance of companies really taking a step back and deciding what their innovation is heading towards in their industry. What do they want to be known for? And I'm hoping that a lot of companies wanna be known for the
(19:25):
way in which they excel in how they serve their customers and the opportunities they provide for employees. Now, that doesn't mean that they're not gonna wanna also be operationally efficient. Of course they do. But it's so important to have those goals firmly in mind, because as we discussed in our service scenario last time, it affects your decisions on whether you're gonna deploy AI to serve
(19:48):
customers better, for example, around the clock after your service employees go home. Whether you're gonna use that to reduce your service team, or ideally, based on the use case I wrote about, enhance the service you can provide to VIP customers by taking admin off the plate of your team. So again, it's the same kind of thing. And the same is true on the talent side, right?
(20:09):
Do you want to use AI to cultivate talent, as you were talking about, or are you gonna use it to just get rid of your junior staff? If you can get away with that, and I would argue that a lot of junior staff are more talented than bots still, but let's just say that you're gonna try that, that becomes a very self-defeating model going forward.
(20:30):
And so you need to ultimately make technology, I don't care if it's blockchain or AI or whatever, subservient to your bigger goal, and don't allow the technology to take over the goal and say, I can't help it, AI claimed those jobs. No, it claimed those jobs because you allowed that to
(20:51):
happen. You can redeploy those people. You can treat them differently. And when we talk about AI readiness, which will be, I think, the next topic in our discussion here, a big part of that is the cultural part of how you cultivate a culture of employees that is excited about using these tools and not having them imposed upon them, or feeling
(21:14):
like, I gotta work harder 'cause I'm gonna lose my job, but more like, I'm being provided with secure ways of experimenting with these tools so that I can ideate use cases for my company to excel. That's the kind of environment you want to create.
Andreas Welsch (21:30):
So that brings me back to the MIT study that we talked about at the beginning, and one of the nuggets in there was: we need to empower middle managers to empower their teams and their team members. It's not just done if we say you need to use AI, right? Or we're AI first, you need to use AI. Because
(21:51):
your employees are struggling. They don't know what this means. Where do I start? How do I even use it? How do I know that something I create is good? Maybe they don't even know that you can create something that is not good or factually inaccurate. I heard two great examples last week in a session that I did. One of the participants said, yes, our leadership team
(22:14):
is AI first: you need to use AI in our company. And people look around and say, what does that mean? So I help them figure out, hey, here's how you can use, for example, ChatGPT or Copilot or AI in other parts. Another participant said, we're going about this a different way. Our leadership also encourages the use of AI. We have smaller groups or communities of multipliers or
(22:37):
champions, if you will, and we come together and we say: here's how I use it in my function. Here's how I use it in marketing. Here's how I use it in sales. This is what's worked really well. Here's how I got to that point. Here's what I had to change. Here's where it fails. And so we create this culture of shared learning and community-based learning. We accept AI as a new technology, as a new way of
(22:59):
working, but still with a mandate of: we want to use more AI to be more efficient and effective, but also give people that opportunity to learn together and share visibility, share praise, for how people are using it in our organization. To me, those were two excellent examples. The same goal, different execution, and very likely
(23:19):
different results as well.
Jon Reed (23:22):
Yeah, and if you can get your hands on the report, and hopefully it will be issued at some point, they get into this 5% and what made those 5% successful. And it's really quite interesting to dig into that a little bit. The more successful areas are no surprise: they're high-value use cases that are integrated deeply into workflows and
(23:46):
scale through continuous learning rather than broad feature sets. Less generic productivity stuff, and more looking at very specific functions that can be automated. Interestingly, they talk a lot about back-office admin being successful, as opposed to throwing all of the money into
(24:08):
various marketing and sales agendas, though they did note that lead gen can be a successful area, as well as service. I think what this report didn't get into is that a lot of these industrial use cases are gaining momentum too. I talked with a vendor just last week about some of their success
(24:31):
starting to embed shop-floor-type functionality. You can start with things like quality assurance and equipment monitoring, things like that. You're not giving agents control over your shop floor yet, necessarily. But there's a lot of really interesting stuff, and again, it comes down to customers that are ready, and
(24:52):
what I'm seeing is that the reason the AI readiness topic is powerful is because AI remains an accelerant to me, which is that if you have really good processes and good data platforms in place,
(25:12):
AI is just gonna help you outperform that much better. But if you are struggling in those areas, I don't believe that today's AI can save you from yourself by applying it onto problematic data, siloed data, problematic process flows, lack of leadership buy-in, stifling workplace cultures, people
(25:34):
clinging to their jobs like life rafts. These things are not conducive to AI deployments. And so that's why the AI readiness conversation is so important, because when you apply AI, you enhance what you're good at. But AI can't make you good at things you're not good at already.
Andreas Welsch (25:53):
Beautifully said. I don't even know what I should add to that, other than a confirming nod in your direction. There's an additional component that I've been thinking about a lot more lately. I've hosted Steve Wilson, one of the co-leads of the OWASP Top 10 for LLMs report, on the show several times, and again a couple weeks ago
(26:15):
as well. And we got into this topic of how do you secure your agentic AI. And I saw some interesting examples of vendors out there thinking about security and identity and authentication, and got thinking too, right? Today we largely have humans sitting in front of a screen, clicking on the screen manually.
(26:36):
Maybe you have some service accounts or some API keys if there's something happening in the background. But now, already with agents, we have a non-human entity that acts on your behalf, or that acts on your company's behalf. And a lot of times when we talk about AI, we talk about agents, we talk about my company's view: how can I use this?
(26:59):
How can I optimize my process? How can I operate? How can I improve my operations? But again, there are two, maybe three things that I think we're not talking about enough yet. One is: how do you build and scale and configure your operations for agents on the other side that will all of a sudden send you requests
(27:21):
more and more, much more quickly, at a higher pace, a higher velocity, higher volume, and what have you? Are your systems designed for that, or how do they need to change? How do we need to think about agents at some point doing business with other agents? Is that other agent even trusted and trustworthy? Are they really representing who they say they are?
(27:41):
Is it really my business partner that's behind this, and their agent interacting with my agent? So trust, security, authentication, even in this scenario between companies, become much, much more important. And what we're just talking about, what we're just scratching the surface of, is: is that agent in my business even authenticated and trusted?
Jon Reed (28:02):
Exactly. And the thing that I have seen again and again this summer is that we cannot underestimate the security issues involved here. There are new threat vectors, and the risk management around this is really important. And one way to think about this too is that it's a little bit
(28:23):
different from a human scenario. If you welcome someone into your house because you trust them, if you think about it, an agent would do the same thing. But if you say, you have free run of the house, do whatever you want,
(28:45):
an agent might just let you do whatever you want. Once it's authorized you, it's not gonna ask questions. Whereas a human, if you started rifling through my laptops and stuff, even though I gave you free run of the house, I might be like, I don't need you running through my laptops right now. What do you need right now? Agents aren't really well equipped to ask those kinds of
(29:07):
questions once you've been granted access, and the access you're giving these systems is quite powerful. So it's not a deal breaker, but it's something where you have to have this at the forefront of your mind at all times. And it's important.
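To put the "free run of the house" point into code, here is a minimal, assumed sketch of least-privilege tool access for an agent: every tool call is checked against an explicit scope instead of a one-time blanket authorization. The scope names, tools, and registry below are hypothetical, not any vendor's actual security model.

```python
# Minimal sketch of least-privilege agent access: instead of authorizing an
# agent once and letting it do anything, every tool call is checked against an
# explicit scope. ALLOWED_SCOPES, TOOL_REGISTRY, and the tools are illustrative
# assumptions only.

ALLOWED_SCOPES = {"read_invoices", "draft_email"}  # what this agent may do

def read_invoices(customer_id):
    return [f"invoice-001 for {customer_id}"]

def delete_records(customer_id):
    return f"deleted records for {customer_id}"  # should never be reachable here

TOOL_REGISTRY = {
    "read_invoices": ("read_invoices", read_invoices),
    "delete_records": ("admin_write", delete_records),  # requires a scope we did not grant
}

def execute_tool_call(tool_name, **kwargs):
    required_scope, tool = TOOL_REGISTRY[tool_name]
    if required_scope not in ALLOWED_SCOPES:
        # Deny and surface the attempt instead of silently letting the agent
        # "rifle through the laptops."
        raise PermissionError(f"Agent lacks scope '{required_scope}' for tool '{tool_name}'")
    return tool(**kwargs)

if __name__ == "__main__":
    print(execute_tool_call("read_invoices", customer_id="C-1001"))
    try:
        execute_tool_call("delete_records", customer_id="C-1001")
    except PermissionError as err:
        print(err)
```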
Andreas Welsch (29:22):
Yeah. See, even on a smaller scale, say you're a sole proprietor or small business owner. Yes, you have access to tools like nanos or things like OpenAI Operator, or now the automation workflows, basically. But what happens when you give it access and it
(29:43):
sends confidential information to another customer, to another prospect? It wasn't intended for them. They shouldn't see what you quote your actual intended recipient, but now they do. Big miss. So to what extent do you give these agents, these technologies, access to your systems, to your data, and how do
(30:05):
you mitigate those risks? I think that's a real question that we need to ask as well, and that vendors also need to help figure out if they want to see more adoption. It's fine that it can click on a screen and do things independently, even if it takes a little longer than how we would do it as humans. But I think this part of risk management, risk mitigation, and
(30:26):
just understanding what could go wrong, and what could that mean, is so important as well in this discussion. And especially as we're talking so much about hype and the future and everything that it's going to enable: it works well until it doesn't. And then the question is, who's at fault? What's the damage? So it's better to think about that upfront, as always.
Jon Reed (30:49):
Indeed. Since we last talked on the AI readiness topic, I spent a lot of time looking at this, and I think we've covered a lot of different components of that: things like leadership and culture and process discipline and data. But the interesting thing is that the data part
(31:10):
is still something that a lot of vendors are arguing over. It does seem very clear that AI services thrive on different kinds of data presentations than we typically have had in the past. So what's the most optimal way to do it? Some are saying you need a real-time, live data stream, because
(31:33):
AI thrives on the latest live data, right? Yeah. And that's true to a point. But then there are times where real time can be prohibitively expensive in certain domains. And I talked with some vendors about this who were saying, yeah, that's true, but some systems that I might need for AI aren't being updated in real time, and that's okay,
(31:54):
I just need the most recent data. So you're back to the right-time thing. And then some people talk more about cloud-based access and how important that is. Other people insist that it doesn't need to be in the cloud. There's a lot of discussion of edge-based devices, but I think the edge-based stuff's a little tricky, because the compute
(32:15):
on edge devices is often not enough for what you need, and then you have various cloud processing things from edge devices. So no one's figured out all of the answers to this question. I think one prevailing trend is not to move data around as much, but to think more in terms of zero-copy scenarios where data
(32:36):
can reside where it resides, and you have some kind of intermediary system that can essentially help you with this. It's a new way of thinking about middleware, but in an AI type of context, I guess you could say. No one's figured all of this out yet, and there's no way to avoid some of the pain around it.
(32:57):
The way I look at it is that a core component of AI readiness is gonna be your data platform vision for your company. And that's probably gonna be a multi-year scenario in terms of how you make your data most accessible to these systems. But the good news is that you're gonna find that there are places
(33:18):
where you can start, where you do have a high caliber of data. And people say these systems can handle unstructured data. That's true to a point. But even with unstructured data, these systems thrive better when that unstructured data has certain kinds of recognizable labels and formats that they can make sense of.
(33:39):
So you're gonna learn as you go a little bit what the best data is for those systems, and that's why these trusted advisors are so important. But I guess I just wanted to say that there's no consensus on all of this yet. The one good thing that we know is that starting with areas where you have quality data is a really
(34:00):
good idea, because you need to be doing actual live work with quality data to see what the results are and iterate on that, rather than testing these systems with data that is not your own, because that will not give you the input that you need on whether to go forward or not.
Andreas Welsch (34:19):
See, I spoke in Munich at an event a couple of weeks ago, and afterwards some attendees came up to me and said: you talk about AI, you talk about the need for good data, clean data, fresh data, accurate data, all of that. We know that in our business, our data is nothing like that. And we're not the only ones, right?
(34:40):
So how can we do this? Because we've been trying for years, getting our data in order, getting our leaders to buy into this to get money, to get funds, and do things. We don't really have time to do a multi-year roadmap and something. What can we do?
Jon Reed (34:55):
And
Andreas Welsch (34:56):
So one of my recommendations was: think about how you can do some parts of it as a phase zero of your AI project. Because if your leadership, if your board, gives you money to invest in AI, yes, you need to get your data house, lakehouse, what have you, in order to do AI. So think about where you can weave this into existing projects or
(35:18):
new projects that you're starting, with a manageable effort, to make progress. Because similar to what you said, it's not like data is a new topic. It's been there for years. It's been there for decades, and it hasn't been solved yet, because usually it's not the sexy thing, it's not the shiny thing that people want to invest in or get visibility for or
(35:41):
credibility for. But we need to do this work now more than ever if we expect to have AI, if we expect to have agentic AI on top of that, using the data to reason, to make decisions, to generate proposals, to research information and all of that. So the time is now. But if you don't have a dedicated data budget, a data project,
(36:05):
see if you can weave that in as a phase zero of an AI project before you get started with that.
Jon Reed (36:11):
And a lot of the excitement that I feel about enterprise AI, I dunno about you, but it's the combination of structured and unstructured data, because behind transactions in systems are back-channel conversations about particular customers and relevant information around
(36:33):
the history of interaction with that customer. And the idea of combining all of that into one way of engaging with the system, to find out: tell me more about this customer, not just their sales history, but the feedback that we've gotten from them and all of that, and having all of that as part of one workflow, is so appealing. And the good news is that a lot of companies have that
(36:55):
somewhere. I don't think it necessarily has to be cloud, but the reason why cloud products have been appealing is that when you move to a SaaS product, a lot of times a different level of data discipline is imposed upon you, because you need to become a little more vanilla, if you will, in order to fit into a
(37:16):
classic SaaS environment. So what you're looking for from a structured data perspective, in terms of AI readiness, is systems that are not incredibly heavily customized internally, but are a little more out of the box, you could say, where you have had to impose some discipline around structure and you have had to work through some meta issues
(37:37):
around customer names or regions or whatever it is you're looking for, stuff like that. And like you said, everyone has been burned by these initiatives in the past. But the good news is that with these iterative projects, you're gonna have the ability to come back to your team and your leadership team and say: okay, we started by working with this,
(37:57):
the data was already pretty clean here, we got a win. We were able to increase customer satisfaction. We were able to reduce turnover. We were able to land new clients or optimize lead gen or whatever it is. And that gets leaders excited, and then
(38:20):
you can say, and now we need to go a little further here. And so that's the difference from the past, where the problem was we were doing these massive cleansing efforts, but there was never really any external result. It was: when we finally get it done, then we can look for the use cases. But the idea here is that you're
(38:42):
rolling out the use cases as you're cleaning the data. That's the distinction here. And if you can do that, I think you will be able to develop momentum to move into the thorny areas. And it will happen, because what you're gonna run into is your users or your customers asking questions
(39:03):
of your AI system that it can't answer because it doesn't have access to certain data. And then that's gonna be your next project: how do we add that data into this mix?
Andreas Welsch (39:06):
And it sounds like that's a much better problem to have than the agent answering questions where it does not have the data, right, and making things up.
Jon Reed (39:15):
Oh, much better.
Yeah.
Andreas Welsch (39:17):
So, definitely lots to do and lots to work on, for sure. We said that today we'll talk about what's next. We covered security, trust, and authentication. We talked about AI readiness: on the one hand, getting your people and your leaders ready, and on the other hand, data being a big topic where we need
(39:39):
to increase that data readiness to do AI on top. And I'm just super excited to see how things are moving in the industry, and how quickly they are moving, from what, two years ago? A year and a half ago, roughly, the first agent frameworks came out, then vendors latched onto them and said, okay, here is a real opportunity,
(39:59):
maybe to some also a threat, we need to do something about this. Now they've shifted. Now there's the adoption, starting slowly but surely. I'm excited to see more examples. By the way, I also feel not enough companies are talking about their agentic deployments yet. I feel sometimes there are some little bits and nuggets that you pick up in the tech media
(40:21):
and coverage, but I have seen very few companies out there say, hey, here's how we've deployed it, here's how we looked at tasks and how we broke this down into what the agent does and what the subject matter expert does. Very few, maybe for good reasons. Maybe it's the competitive advantage for those that are already ahead.
(40:41):
But I'm sure there's more that we will see. And then to me, that also unlocks the next wave of what is possible, what we should be thinking about, right? Oh, here's company X, Y, Z talking about this a couple weeks ago. We talked about Moderna combining IT and HR to look at roles. Whether or not that's the right approach, I think, is to be
(41:02):
seen. We'll see that probably in the next three to six months or a little later, but the first companies are taking this initiative and this approach and looking at: how can we divide work? Where do we need people, because of their unique capabilities and knowledge, and where do we need agents, and how does that look? It's just moving super fast, and it's exciting to see how that progresses.
Jon Reed (41:25):
Right. And as we move to the end of this third discussion, on that what's-next thing: when I go to the fall shows, the things I'm gonna be looking for, one of the big ones, are what you just described, which is I'm gonna try to document more customer success stories
(41:45):
of what's been achieved so far and what the hard-won lessons are there. I have got a few from last spring, but I expect more in the fall. So that will be really interesting. Vendors don't always do a good job of promoting those. And a lot of times, what's interesting is that some of the really good ones are not
(42:06):
necessarily heavily generative AI right now. Some of them are still very classic machine learning type examples that still have really powerful impact, in terms of things like triaging an emergency room into different priority levels or something like that. It could be more of a machine learning approach or even a computer vision approach. So that's one thing:
(42:29):
we wanna document all of that. But then the other thing is just looking at what's next for agents. And I think for customers right now, I would say start looking at MCP and tool calling as the extension of looking at things like RAG and context. It's all about how you can make LLMs smarter by incorporating
(42:51):
both your own data, but also external validation and supplemental tools that help with decision-making and process flows. So it could be things like checking with a credit validation service externally, and start experimenting with that. And MCP is obviously one protocol that really helps with that. I would say that's the next phase: start to look at how to
(43:14):
make them smarter for the things that you want to do by taking advantage of data both internally and through tool calling, which does include that RAG context, but also much more. Where I would say keep a wary eye for now, but a curious eye, is around the agent-to-agent protocols and A2A
(43:35):
and stuff like that. And what you will hear a lot about from vendors this fall is orchestrating multi-agent workflows across domains. You're gonna hear a whole lot about that. And the reason for that is that every vendor wants to be the agentic system of record for their customers. So they want to basically say, you can count on us to do that. And then other vendors, like the Boomis and the UiPaths of the
(43:57):
world, are trying to be more like the Switzerlands of these scenarios by saying, we don't have a dog in this fight, but we're gonna orchestrate this for you as well. Now, all of that kind of talk is very useful, but I think right now, proceed with caution when you think about the potential to hand off agent workflows across vendors, because that's where it's gonna
(44:21):
be a little more heavy lifting right now in terms of getting the results you want. It's not that you shouldn't look at it, but I think right now, focus on getting some internal workflows going within the context of maybe one vendor, or maybe two that work very closely together, that have a really good partnership. Nail that down first, and then we can look ahead to the prospect of more
(44:45):
agent-to-agent communication going forward. But the stats on multi-agent handoffs right now just aren't that great, and the more complex the workflows are, the more problems you could run into. So I would say keep an eye on that, because it's really promising, but start with stuff where you can get some wins.
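Since the conversation turns to MCP and tool calling here, a rough sketch of the underlying pattern may help: the model decides it needs an external check, the orchestrating code executes the tool, and the result is fed back for a grounded answer. The tool name validate_credit, the stubbed model_step, and the customer ID are made up for illustration; this shows the general tool-calling loop, not a specific MCP SDK or vendor service.

```python
# A rough, assumed sketch of the tool-calling loop behind protocols like MCP:
# the model requests an external check, the host code runs it, and the result
# goes back to the model. validate_credit, model_step, and run_agent are
# illustrative names only, not a real SDK or service.
import json

def validate_credit(customer_id):
    """Stand-in for an external credit validation service."""
    return {"customer_id": customer_id, "credit_ok": True, "limit": 50000}

TOOLS = {"validate_credit": validate_credit}

def model_step(question):
    """Placeholder for the LLM deciding which tool to call and with what arguments."""
    return {"tool": "validate_credit", "arguments": {"customer_id": "C-1001"}}

def run_agent(question):
    decision = model_step(question)
    tool = TOOLS[decision["tool"]]
    result = tool(**decision["arguments"])
    # In a full loop, the tool result would be handed back to the model for a
    # final, grounded answer; here we simply surface it.
    return f"Tool result for '{question}': {json.dumps(result)}"

if __name__ == "__main__":
    print(run_agent("Can we extend net-60 terms to customer C-1001?"))
```

The appeal of a shared protocol is that a registry like TOOLS could be populated by external servers rather than hard-coded functions, which is roughly the kind of plumbing MCP is meant to standardize.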
Andreas Welsch (45:02):
That's very practical advice. Look at what's there. How do you use these protocols and technologies? Experiment with them so you become more knowledgeable and, overall, more experienced as these protocols and solutions mature as well. And also the observation that every vendor wants or needs to say, we're doing something with AI and you can do it with
(45:24):
us, I think makes perfect sense to me. It comes down to: where do you have an investment? Where do you have a beachhead already? If it's this solution or that solution, it might be easier to go here or go there and maximize your investments.
Jon Reed (45:39):
And by the way, just real quick, one of the reasons why I think both MCP and A2A are worth tracking is because so many vendors that you might think of as competitors have signed on to these particular protocols. And that's really important, because in the history of enterprise software, the standards that have done the best are the ones where the competitors sign on to the same
(46:01):
frameworks. And so when you see folks like Microsoft, Salesforce, SAP, and Oracle signing onto the same standards, that tells you that there's some promise there. But again, let some of the deep pockets venture into the hardest aspects of that first and let them take some of the bruising lessons. But definitely be watching.
(46:22):
I think my final thing for you would be: since you have been to a bunch of things lately, and I was doing more deep-dive research from home, what would you say was the most memorable interaction you had with a customer, or someone that asked you questions around what
(46:45):
they're doing? Was there anything that really jumped out, that surprised you?
Andreas Welsch (46:50):
I usually give a lot of talks and keynotes here in North America, so being in Europe for a couple weeks was super refreshing, because I got a whole different perspective, whether it was more of an IT leadership audience or a mixed group from financial services and venture capital and so on. But at the one event where I spoke in Switzerland, I got so many
(47:12):
deep and thoughtful questions that I haven't gotten in a long time over here, to be honest. And one of those questions was around: how do we make sure that these models are fair and safe and unbiased, especially if we have certain vendors building this basic technology? And then again, within certain vendors, there are certain
(47:32):
tendencies, right? More conservative-leaning, for example, or more progressive, more guardrails, fewer guardrails. And then what if we have these tools here and LLMs with more of a Western lens in the corpus of data, compared to more in the East or the Far East? So how does it work?
(47:53):
How do you leverage it or mitigate for it? Do you need different models? Just a good, very broad, and, from my point of view, deep discussion. And I would say at the moment...
Jon Reed (48:04):
Did you have a good answer for dealing with bias in these models?
Andreas Welsch (48:08):
So whether it was good is for the audience to judge. I think I did have a good answer, and my answer was: think about where you deploy these models and what you deploy them for. Yes, there is some bias in there, so we need to figure out how we check them and do evaluation. But if you know that you deploy models on a global scale for
(48:30):
different audiences, maybe indeed consider using a Qwen or DeepSeek more for the Far East, and Claude from Anthropic or OpenAI models for more Westernized scenarios. But that's also, by the way, where the small language model discussion came up: what is it that you're trying to
(48:51):
optimize for, and what do you need this model to be really good at? So maybe a smaller model that has more specific domain knowledge about the industry, about your business, could be the solution as well. But to me, the big thing is that these models are never free of any values, right? We use language to train them, predominantly English,
(49:13):
where there are certain worldviews and biases encoded in that language anyways. Yep. The words that we use, that we speak, that we write. So naturally these values are encapsulated in these models, and we need to be aware of that. That's a much, much longer topic and discussion to get into. But maybe it's also a good challenge for all of you to think about:
(49:35):
what if you have something that's more Western-leaning and progressive, or Western and more conservative, US-based, more conservative if you will, with Grok, or you have something like DeepSeek and Qwen, with different worldviews and ideologies? How do you deal with that?
Jon Reed (49:49):
Yeah, and I'll just add really quickly to that, 'cause you're right, yes, we don't have time for a deep discussion on bias today, but it might be interesting to revisit that topic. I really like when companies take a proactive approach to that, but of course not all of them will. But I will point out that the EU AI Act has a really good risk framework that is worth understanding and applying to
(50:11):
all of your AI initiatives, and potentially embedding into your development and design process, regardless of whether you do business in the EU or not. And the reason for that is that those risk levels roughly correspond with your likelihood of getting sued by people based on the misuse of those scenarios. And I can think of lawsuits going on right now in the US
(50:33):
that pertain to the high-risk areas in the EU AI Act. It's got nothing to do with that legislation, but it's got to do with the fact that when you're playing in risky areas, you're gonna attract legal attention, and you're gonna need to have a plan for that. And absolutely, these discussions are not just about, oh, my company has great values, so I want to confront bias.
(50:57):
You also have legal exposure if you don't.
Andreas Welsch (51:00):
Yeah. And I think especially for software vendors, there are certain cases in the judicial system at the moment about bias in resume screening and candidate matching, and things like that. Things you can look up independently as well. For me, the big thing of what's next is really this part of: we need to enable the workforce.
(51:21):
You need to enable your workforce to use AI and do it responsibly. So you're not just off the hook by using ChatGPT or Copilot, and you're done. But how do we empower line managers, middle managers, to empower their teams and their team members to use AI, and give them guidelines to say, hey, here's how I want you to use it?
(51:42):
In many ways they'll be similar to how we expect people to work with each other within our own team or across teams. So if you are thinking about how do I do that: first of all, think about how you get people within your team or across teams to work together. What are the standards of quality that you expect people to follow? And how can you communicate that now in simpler ways?
(52:06):
Over our last two episodes, I've been working with LinkedIn Learning on two courses that I'm super excited to be recording shortly. So keep an eye out for those. But I think that's the next practical frontier, at the very foundational and base level. That's what's keeping me busy and excited at the moment.
Jon Reed (52:25):
Excellent. Great stuff, man. That seems like a good place to stop.
Andreas Welsch (52:29):
Yeah, absolutely. So Jon, cool. Thank you so much for joining. I'm always super amazed by how quickly we come up with a topic and then just run with it, totally unscripted. So it's a lot of fun to see at the end where we've taken it over the past fifty-some minutes. So thank you so much for joining, for sharing your experience and expertise with us.
Jon Reed (52:50):
Great talk! Till next time.
Andreas Welsch (52:52):
Till next time.
Bye-bye.