
September 5, 2025 50 mins

Chasing hype is easy. Delivering results with AI in the enterprise? That’s where leadership is tested.

In this week’s episode of "What’s the BUZZ?," I sat down with Jon Reed, industry analyst and co-founder of diginomica, to unpack some of the biggest myths that hold organizations back from real Agentic AI success.

Here are four myths that stood out:

1) Myth: It has to be Generative (or now, Agentic) AI
Predictive models, machine learning, and other “less flashy” approaches often deliver the most immediate ROI. Success starts with the problem you’re solving, not the trendiest tool.

2) Myth: Perfect data guarantees perfect results
Even with high-quality data, AI is probabilistic and not deterministic. Outliers and unusual errors happen. That’s why audit trails, risk management, and cultural readiness matter just as much as data quality.

3) Myth: AI replaces expertise and creativity
AI amplifies expertise but cannot substitute for it. Domain experts are critical for spotting flaws and guiding outcomes. And while AI can generate content, true creativity and ingenuity still rest with people.

4) Myth: Leaders don’t need to understand the tech
Courage and vision are vital, but without data and AI literacy, leaders risk reimagining the future on the wrong foundation. Both human leadership skills and technical fluency are essential.

If you’re serious about moving past AI buzzwords and building sustainable success in your organization, this conversation is for you.

Questions or suggestions? Send me a Text Message.

Support the show

***********
Disclaimer: Views are the participants’ own and do not represent those of any participant’s past, present, or future employers. Participation in this event is independent of any potential business relationship (past, present, or future) between the participants or between their employers.


Level up your AI Leadership game with the AI Leadership Handbook:
https://www.aileadershiphandbook.com

More details:
https://www.intelligence-briefing.com
All episodes:
https://www.intelligence-briefing.com/podcast
Get a weekly thought-provoking post in your inbox:
https://www.intelligence-briefing.com/newsletter


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Andreas Welsch (00:00):
Today we'll be busting some myths around enterprise AI and the do's and don'ts that typically inhibit success. And with me, I have Jon Reed, esteemed industry analyst and co-founder of diginomica. Jon, it's so great to have you on again.

Jon Reed (00:16):
Yeah, it was cool. I was really hoping we could continue our discussion and hijack your show in some ways, in a fun way, because I just said let's have a discussion, 'cause I want to hear what you think of what I think, and I think that makes it super interesting for viewers.

Andreas Welsch (00:33):
I hope so too, right? Two weeks ago, we were talking about what we have seen from different vendors' conferences in the first half of the year. So I'm really excited to talk about what are some of the myths that we each see out in the market. What are people believing? What are they doing? Where do they typically fail? And then how can you actually do it better?

(00:54):
Without further ado, I would say let's jump right in.

Jon Reed (00:58):
Yeah, for sure.
And one thing I do want to say, and I think this is true for both of us, is that when we talk about myths and puncturing hype balloons and stuff, this isn't to boost our LinkedIn profiles or to get viral commentary. This is really about how do you get underneath that, into a place where you can really have success on projects.

(01:19):
Because one thing that has convinced me in the last year of events, especially this last phase, is that there are plenty of success stories out there. The thing that might be a little disappointing to some of the AI true believers is that some of these success stories are a little bit modest in scope, but I find them very encouraging

(01:40):
because they show thoughtful design and ways of building momentum, and I think sometimes people get frustrated 'cause they want to believe that this is instantly revolutionary or transformative, when in fact it's really about building wins on top of wins in a more modest way. And if that's not sexy enough for you, then I don't know what to tell you. I think it's pretty sexy.

Andreas Welsch (02:01):
If you're watching this on LinkedIn or YouTube, you could probably see my face light up as Jon was talking, because it reminds me of what we've seen with machine learning already and what we've seen to some extent with generative AI as well. And I remember in some of the early machine learning projects, business leaders said we need to have a 98% automation rate,

(02:23):
otherwise this is not going to fly. And in the real world, what does your process look like today? Oh, we can automate 30% of things. Even incremental improvements, or even incremental improvements in accuracy, can have a very significant impact in absolute terms. So to me, as we were talking, that was one of the big myths that we saw early on: that it has to be fully automated, has

(02:46):
to be perfect from the beginning on. I think a lot of times it's also iterative in nature. We evolve, the data evolves, our goal and scope evolve in some respects. And to me, that's an important thing that has stayed true from the days of predictive machine learning to generative AI, and that I see now with agentic AI as well.

(03:06):
So that's why I'm especially excited that we are also seeing the first customer examples and stories out there: why companies are using this and how they're using it.

Jon Reed (03:15):
Yeah, for sure.
And look, I don't wanna rehash all of our last show, because our last show I really recommend for folks to get a sense of the context of how we got to this point in the market, which we got into a lot, and also some limitations on how people perceive AI first. But we also talked a little bit about how some of the documented struggles around generative AI have to do with, I

(03:37):
think, misconceptions about how the tech is best applied. And I think generic productivity copilot things, like "it wrote an email for me a little bit faster" or whatever, are not at the heart of really compelling use cases. And so when we take a step back, we talk about really rethinking industries, and I think so much of this is an industry

(03:58):
conversation: how are you gonna compete in an industry, how do you want to change your company's business model and services? And then all the tools at your disposal are there, including maturing agentic AI technologies. And as you point out, in some cases you might find that a simple machine learning algorithm applied to some inventory and supply chain patterns actually frees up

(04:22):
a significant amount of savings, even though it seems almost surprisingly basic from the vantage point of today's technology. And if that works, that's beautiful, because that really shows you these outsized gains that you can sometimes have just by smartly applying the technology to an industry setting instead of a generic productivity conversation.

Andreas Welsch (04:43):
One of the things that really surprised and almost shocked me at the beginning of the gen AI hype was leaders in large organizations saying: we need to look for generative AI use cases, that's AI; predictive machine learning, that's not AI, we don't care about this. It has to be generative. And we quickly, obviously, realized no, it doesn't have to

(05:04):
be just generative. Everything has a place, and you need to find out what that problem is that you're trying to solve in the first place. Is it more predictive? Is it more around natural language summarization, code generation, what have you? And where does each of those play out their strengths? Now add agentic AI to the mix as well. That's an additional dimension, but just blindly saying it has

(05:25):
to be this, now it has to be agentic: to me that's a big myth, because you're missing a lot of the opportunity, and you might not even be using the right tool or the right method for the problem that you're trying to solve.

Jon Reed (05:36):
Totally.
And we could definitely call that the sort of first myth of the show, right? That agentic becomes this all-consuming technology that blinds you to other use cases and approaches. And in fact, agentic technology has a very specific set of pros and cons at this point, and we can get into that further. Right now I recommend agentic technology specifically for more

(05:57):
focused workflows, as opposed to stringing a bunch of agents together, where you start to see a lot more breakdowns in the current technology. It is still a tool. It's just not the cheapest tool, though. And that's really important too. But it has some uniquely beneficial characteristics, and I will get into some use cases around that as we go, to illustrate that point.

(06:18):
One myth that I'm gonna start with here, and this one is a little explosive, I think, is this notion: the better your data, the better your result. Now, there's a lot of truth to that, and I don't wanna completely discourage people from thinking that way, because so often when you look at where these projects didn't go right

(06:39):
in the past, it did have to do with not enough customer-specific data. A lot of these generalized large language models really haven't been trained on enterprise-specific and industry-specific data. So there is a whole thing around data readiness and AI readiness that you and I discussed before, which is a really juicy topic. So I don't want to discourage people from thinking about data

(07:01):
quality, but I wanna point out a couple things. There's still a whole lot to be done around culture, business model, design, all of that. I just wrote a piece today about the limits of autonomous agents and how important it is for customers to dictate their own pace when it comes to autonomy. That's not just a data problem; that's a culture thing, that's a compliance thing based on your

(07:22):
industry, what your comfort level is with all of that. And also, I do wanna point out that even if your data is perfect, these systems are not. So even the vendor that I focused on, Auditoria, in my last piece, they're doing a really good job with a finance-specific model they developed, and they also have a very specific

(07:43):
architecture. Accuracy rates are in the 90% range, sometimes the high nineties, but we're not talking a hundred percent. This is not deterministic. So just because you're piping in all the quality data, it doesn't mean everything's gonna go perfectly. And even when you look at things like RAG context, which gives up-to-date information to the system, there are issues around RAG in terms of which data gets pulled,

(08:04):
whether the LLM properly uses that data. And it's really important to think about that, because if you step back and realize it's not just about the data, how does that help you? It helps you because you do things like what Auditoria did, where they set up audit trails around the processes, so you can look at what went wrong when it did go wrong, and you look at this from the comfort level of your users and figure out: is

(08:25):
this good enough for them? And you're able to step back and say, look, this technology needs to be designed very carefully, and it's not just about the data.
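To make the point about RAG and audit trails concrete, here is a minimal, hypothetical sketch in Python. It is not Auditoria's architecture; the toy retriever, the stand-in model call, and the log format are all assumptions for illustration. The idea is simply: retrieve context, generate an answer, and append a record of which documents were used and what came back, so that when an answer is wrong you can reconstruct why.

```python
import json
import time
import uuid

# Toy document store standing in for enterprise data (illustrative only).
DOCS = {
    "inv-001": "Invoice 4711 was paid on 2025-08-01 by ACME Corp.",
    "inv-002": "Invoice 4712 is overdue and the customer disputes the amount.",
}

def retrieve(question, k=2):
    """Toy retriever: rank documents by word overlap with the question."""
    words = set(question.lower().split())
    ranked = sorted(DOCS.items(),
                    key=lambda item: -len(words & set(item[1].lower().split())))
    return ranked[:k]

def call_llm(prompt):
    """Stand-in for a real model call; swap in your provider's client here."""
    return "Invoice 4712 is overdue and disputed."  # placeholder answer

def answer_with_audit(question, log_path="audit.jsonl"):
    """Retrieve context, ask the model, and record what was used and returned."""
    passages = retrieve(question)
    prompt = ("Answer using only the context below.\n"
              + "\n".join(text for _, text in passages)
              + "\nQuestion: " + question)
    answer = call_llm(prompt)
    with open(log_path, "a") as log:  # append-only audit trail
        log.write(json.dumps({
            "id": str(uuid.uuid4()),
            "timestamp": time.time(),
            "question": question,
            "retrieved_ids": [doc_id for doc_id, _ in passages],
            "answer": answer,
        }) + "\n")
    return answer

print(answer_with_audit("Which invoices are overdue?"))
```

The model call stays probabilistic; the audit record is what lets a reviewer see afterwards which documents were pulled and whether the answer should have been possible from them.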

Andreas Welsch (08:35):
Yeah.
You know, absolutely, right there with what you think. Especially the part around the deterministic versus probabilistic nature. To me, it is an important one that we cannot emphasize enough, because I feel that a lot of times through the history of information technology and software, we've typically

(08:55):
replaced a process that was manual or semi-manual with an automation, and the automation worked like this: if this happens, then do that. It could be a workflow, it could be an application that you've developed. Now, all of a sudden, we say: look at the data or look at certain input factors, and then make a decision based on language and

(09:16):
some statistics and probabilities of what the right outcome or the most plausible outcome would be. So yes, naturally you are introducing additional variance, additional risk into that process. And I think being aware that it's not infallible, that it's not a hundred percent, not a hundred percent of the time, is a really key aspect.

(09:37):
We've seen this before with, again, generative AI. We're now seeing it even at a faster pace and a greater order of magnitude with agentic AI. And to me, that is something that's really important to highlight: that it's not infallible. It's not a hundred percent correct a hundred percent of the time. So we need to understand when is it likely going to fail and

(09:57):
what is the impact if it does fail. Basic risk management and mitigation strategies apply here again as well.
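A small, hypothetical sketch can make this contrast visible; the functions, labels, and thresholds below are illustrative assumptions, not from any specific product. Classic automation is a fixed if-this-then-that rule that always behaves the same way, while a model-backed step returns a judgment plus a confidence, so basic risk mitigation means routing low-confidence cases to a human.

```python
from dataclasses import dataclass

# Classic automation: a deterministic rule; the same input always gives the same output.
def rule_based_routing(invoice_amount):
    return "auto-approve" if invoice_amount < 1000 else "manager-approval"

@dataclass
class ModelDecision:
    label: str
    confidence: float

def model_classify(description):
    """Stand-in for an LLM or ML classifier; real outputs vary from run to run."""
    return ModelDecision(label="auto-approve", confidence=0.87)  # placeholder

# Probabilistic step with a guardrail: accept the variance, but cap its impact.
def probabilistic_routing(description, threshold=0.95):
    decision = model_classify(description)
    if decision.confidence >= threshold:
        return decision.label
    return "human-review"

print(rule_based_routing(250.0))                       # always the same answer
print(probabilistic_routing("Office chairs, net 30"))  # low confidence -> human review
```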

Jon Reed (10:05):
And some folks might be wondering: if what you say is true, then why would I want to even mess with a probabilistic technology? And I think we should get into that a little bit further in some of the use case discussions around what the strengths of some of these systems are. Obviously the no-brainer strength, the really obvious strength, is its ability, when you attach some kind of a

(10:25):
language chat interface onto these systems, its ability to engage users in whole new ways is clearly one of the top strengths. And it's something that has frustrated enterprise software vendors for a long time. One of the oldest tropes in our industry, Andreas, is the difficulty of navigating these systems, and the super user

(10:47):
protecting their domain and protecting their information so no one else has access to it. And this type of interface really demolishes all of that in a lot of different ways and makes it so much more accessible to interact with these systems. And so that's just one example. I don't wanna list them all right now, 'cause I want to get into some other things, but that's one example of

(11:08):
why you would want to use a probabilistic system: because it can do things that rule-based systems simply can't.

Andreas Welsch (11:15):
Yeah.
And maybe adding two points to that. One is, also as humans, we're not a hundred percent correct a hundred percent of the time. So we need to realize and accept that as well. So the question becomes: how much better or how much worse would an agentic system perform in this case? And is that acceptable? And I think the other part is where interoperability

(11:37):
between different language-driven modalities becomes important, right? If I am in my Microsoft suite all day, being able to trigger agents or tasks that span different systems that I have in my landscape gets a lot more important. Or again, if I'm in my ERP and I use a product there, that I can

(11:59):
do this across different systems, so that the agent experience in that sense transcends different systems. It's an assistant, right? It doesn't really matter if it's in this silo or in that silo, but being able to reach into different silos to get me the answers that I need.

Jon Reed (12:18):
And you make a really good point, because with discipline, when you start getting into the 90% range on accuracy, which by the way is not always easy, but when you can get there on a focused use case, it does get to the point where you can start looking at: my use case in my industry, is this good enough for me? For a lot of things, and especially, I think, one of

(12:40):
the things I'm fascinated by, and I'll get into a little bit, is the customer-facing service use cases. One of the reasons that fascinates me is because I like to look at what machines can do that humans can't. Here's one thing: 24/7 service, right? In the past, if the service center with the humans was shut down, the bot would simply point

(13:02):
you to these pages of documentation, and good luck sorting through all of that, right? And now you have the potential, if you can get your service bot good enough, to at least resolve a lot of relatively important queries without having to make the customer wait until the

(13:24):
next day. Now granted, there will be some complex issues that still can't be resolved that way, but I think it's really interesting, because you say this is something we couldn't do before and now we can do it. The one thing, though, that I do like to point out to people is that you can't just look at accuracy rates. You also have to keep in mind that some of the outliers may be

(13:46):
unorthodox and different than what a human mistake would be. Yeah. In the finance domain, for example, you could have a seven-figure mistake. Now, you can account for that with certain rule-based audits of those systems. But just to give you a couple of examples, one from the news headlines, where a Tesla vehicle ran into a plane because it had

(14:09):
never encountered planes in its training material before. Okay, that's an outlier, but that's also an outlier that a human would not make. And so when you plan for this, you have to think about what those outliers might look like. Another example, just from my own life, is more recently: I had a transcript, and I use machine-

(14:29):
generated transcripts in my work, and it inserted the word Holocaust into the transcript. Now, a human transcriptionist would not have done that. They would've understood the meaning of that word and said there's no way that the word Holocaust occurred in Jon's interview about an AI project. It just didn't occur, right?

(14:50):
But the machine doesn't understand that term. That's not like a career-ending mistake, but I didn't want that in my copy. And so you have to keep in mind that sometimes the outliers can be a little unusual, but with thoughtful use case design you can accommodate that.

(15:10):
And it's just important to remember that, 'cause some people are like: humans and machines all make mistakes. Yeah. But some of the machines make different kinds of mistakes.
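The "rule-based audits" Jon mentions can be as plain as deterministic checks that run over a model's proposal before anyone acts on it. Below is a minimal sketch of that idea; the field names, thresholds, and the seven-figure cutoff are illustrative assumptions, not a description of any vendor's system.

```python
# Deterministic guardrail applied to a model-proposed payment before execution.
def audit_payment(proposal, history_max):
    """Return a list of rule violations; an empty list means the proposal passes."""
    issues = []
    if proposal["amount"] >= 1_000_000:  # the seven-figure outlier case
        issues.append("amount at or above $1M requires human sign-off")
    if proposal["amount"] > 10 * history_max:  # wildly out of line with history
        issues.append("amount is more than 10x the largest prior payment to this vendor")
    if proposal["currency"] not in {"USD", "EUR"}:
        issues.append("unsupported currency: " + proposal["currency"])
    return issues

proposal = {"vendor": "ACME Corp", "amount": 2_400_000.0, "currency": "USD"}
violations = audit_payment(proposal, history_max=180_000.0)
if violations:
    print("Blocked for review: " + "; ".join(violations))
else:
    print("Proposal passes the rule-based audit")
```

The model can stay probabilistic; the hard rules are deterministic, so an unusual outlier gets stopped even when the average accuracy rate looks fine.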

Andreas Welsch (15:20):
So, to give you an example: a while ago, I was looking to book a rental car abroad in Europe, and I wanted to find out whether I can legally drive the rental car outside of the country where I was going to rent it. And I couldn't find reliable information on the website. The only rental car company in the town where we're staying doesn't have

(15:43):
a chatbot on their website. They point you to a PDF of terms and conditions, and you can read it. I wasn't any wiser afterwards, to be honest. Yeah. So I could have called during business hours, during European business hours. I decided to drive to the next town over once we're there and rent with a different company, because, again, the customer experience really wasn't good. So in this day and age, even a basic chatbot, a chatbot based

(16:07):
on gen AI that can understand what I'm saying, even if it's not exactly trained on it, would already work wonders, I'm sure. Let alone something that's agentic and that can help you with getting more information and so on.

Jon Reed (16:18):
Yeah.
I've done a couple of these use cases now with customer-facing service bots. And granted, you have to be comfortable with this in your industry, based on whatever regulations and oversight you might have. But one of my favorites that I did was at Salesforce Connections,

(16:38):
which was with a company called Engine. And the reason was, it's a travel company, and I'm pretty wary of that, because I actually don't trust travel as an agentic use case, except for the most boilerplate travel arrangements, 'cause I'm very fussy, and I think most people are, about the nuances of travel. But what was really interesting is that Engine had a really nice

(17:01):
use case, 'cause they looked at this and they said: let's start with reservation cancellation, 'cause they handle over half a million inquiries a year, and a good chunk of those are cancellation requests. And they put the agent up to those cancellation requests. But the whole idea was not, and this is a really important

(17:23):
point, I think, for leadership to think about in your AI leadership theme: the idea wasn't how can we reduce head count in customer service. The idea, the guiding question, was how can we make our customer experience more seamless and better and increase self-service? I think that's a really important distinction. And they also really liked the idea that this agentic AI

(17:46):
bot would be much more human-centric and conversational, because so many of the service bots we've encountered online are just not that; they're much more robotic and very stiff. Can we have a more engaging form of that? And so they went ahead and did that. And their customer satisfaction scores, which they were

(18:08):
monitoring, went up as a result. They're actually hiring more salespeople, even though it's helping them with some of the prospecting now. And on the call center side, did they release those call center

(18:28):
folks when they were freed up? No. And what they said is they're never gonna, at least in the foreseeable future, use agents for overbookings, because that's a really stressful, high-touch experience for a human. But they're gonna apply it to other stuff, and they said it allowed us to remove a lot of the less complex stuff through self-service

(18:51):
interaction, leaving more room for the humans to resolve things. And what they're looking at doing with that human time is not getting rid of people, though I think they're happy about making them more efficient. They want to apply those humans to more high-touch and VIP clients to provide superior levels of service. I really love that, because I think it's all about using the

(19:12):
technology in the right way, in a framework that's driven not by how can we reduce head count or be more efficient; how can we serve our customers better becomes the guiding question. As grouchy as I am about travel bots and stuff, I just couldn't poke holes in that one. I thought it was great.

Andreas Welsch (19:29):
I think so too. I read your write-up about it when it came out a couple weeks ago, and to me that's such an encouraging story, and I hope that we're going to see many more of those, because I think leaders embracing that mindset realize that we actually have a lot of great expertise and human capital in our company already. So how can we use that experience and that expertise

(19:50):
of how we work, how our company and our customers work, how we work in the industry, what makes us unique, and really apply that to human interactions and scenarios. And yes, fine, give the menial, the repetitive things to a bot that maybe still can emulate some empathy and, again, increase customer satisfaction. But then, like in this example here, use these

(20:13):
resources and team members and shift them over to activities that are high-touch, that are high-value, at the most significant clients and VIPs, where we generate even more revenue and so on. So I hope that we're seeing a lot more of that thinking, rather than: let's just automate and cut headcount. Because I think, at

(20:34):
some point, you also run out of headcount to cut, and you've lost so much capital, human capital and experience, in your company that it'll be hard to keep that service level up.

Jon Reed (20:47):
Yeah, and I think what you are gonna find is, if you do this right, you're gonna improve what you might call the cognitive load of certain employees. So you will have to hire fewer people. So there will be some operational benefits to come out of that. But I'm like you, I'm a little wary of this, because I think a lot of companies are gonna use this in more of a brute force way, ironically, in concert with a

(21:09):
lot of mantras around AI first. But if you combine AI first with a headcount reduction mentality, I think you're gonna create fear-based employees. And what you and I talked about last time was how the alternative is to energize employees, not just with a use case like we described, but to combine that with a sandbox environment that creates a culture of innovation, where employees can propose new uses

(21:32):
of the technology that are validated and secure within your enterprise structure. And if you can do that, I really think you're gonna make inroads in whatever industry you're in, versus, I think, the more slash-and-burn approaches. And look, I understand the economic pressure companies are under, but I really steadfastly believe that if you use this

(21:53):
technology right, you can grow the top line and be more efficient. I'm not gonna let go of that.

Andreas Welsch (21:59):
Same.
Same here.
And we've seen a lot of that in the tech industry coming out of the pandemic. You'd have thought headcount reductions in the order of magnitude of 10-15,000 at some of the large tech players were an exception, but they've occurred and reoccurred on an annual basis, or, maybe not quite at the same scale, on a quarterly basis.

(22:21):
A couple weeks ago, we saw news from Microsoft coming out, cuts after cuts, because of investments in AI, to fuel those new areas and to also free up capital to invest in building data centers and building infrastructure and so on. I think a lot of times you also need to see what it does to

(22:43):
your company culture, to staff morale. You'll see it in your employee experience or employee surveys as one leading indicator. And I think, overall, tech might just be a leading industry in that sense, where it's, on one hand, a lot closer to that technology to see how we can apply it. Let's assume we apply it with good intent, to reduce cost, fine.

(23:07):
So there could be some ripple effects where one, two, three years down the line, we see this become more prevalent in more traditional industries as well, to see how we can replace labor with AI. Again, I would caution and, let's say, first of all, think about how you can augment human labor with AI to do more, or to do what you're doing better, and improve your customer

(23:28):
satisfaction, as one example. But I'm just curious to see, there again, what are the ripple effects, and then also, with so much talent now on the job market, what are the new companies that we haven't seen yet that are going to emerge from this? Because there is so much excellent talent out in the market that knows how to build products, how to solve tangible

(23:49):
needs in different industries and verticals.

Jon Reed (23:51):
Absolutely.
And I do wanna acknowledge, I think there's a fairly profound problem around how more junior-level folks are gonna progress in these AI environments. And I think there may be an opportunity to have AI mentors bring junior folks up to speed, to an extent. I'm a little bit wary about exactly how all of that plays out, but I wanna get to a couple more leadership myths, since

(24:14):
you've been focusing on that; these have been bothering me a lot lately. One of the myths is that we don't need experts. Yes, some of these systems do have expert repositories, and they do; for example, ChatGPT probably knows more about certain kinds of medical practices than I do, 'cause I'm not a doctor.

(24:35):
But this notion that we don't need domain experts, I think, is absolutely ludicrous, and I'm gonna tell you about a use case that illustrates that. Also, AI does not replace human creativity. This is very bothersome for me, because I've seen it again and again on these lists of skills that you don't really need anymore. And I heard it on this thing today on AI leadership on YouTube, with about a thousand attendees, where they were like:

(24:58):
AI does the creativity, and humans are editors and curators. That's wrong. Every company needs creative ingenuity to succeed. And the most compelling stuff that gets created, either product design or content for marketing, is gonna come from humans. Now, AI will do a fair amount of content generation of more

(25:19):
routine content, like FAQs and product descriptions and all of that. And I would acknowledge that humans perform the editing function on that content, but I don't consider that true creation. I think that's a very generous definition of the word creation. That, to me, is content generation. And there's a difference between content generation and true

(25:40):
human ingenuity. And again, these studies have shown that these systems aren't creating truly ingenious ideas. They really help with brainstorming, which I'm gonna get into in a minute. But they help with brainstorming by going through all that training data and throwing things at you that you might not have thought of before. They're not coming up with new and novel approaches to business

(26:01):
yet. Maybe someday they will, but they're not right now.

Andreas Welsch (26:04):
Look, to me, the analogy that comes to mind is a hammer and a chisel. First of all, hammer and chisel don't build the statue; it's the artist. Even if you can hold a hammer and a chisel, and even if you can do the work, it doesn't mean that you are the creative, that you can mold it and decide it the way only you can, in that sense.

(26:27):
Plus, I think the other question is: do we really just want to become reviewers and editors? A couple days ago, I saw one of the leading AI influencers post about how AI can generate a thousand versions of a website. Now I need an AI to sift out the 999 that I don't like. It's not just a problem of generating; I think it becomes a

(26:49):
much harder problem of making decisions. And again, we're back to either human feedback or human preferences. And if you've ever been in a room with a few creatives and looked at different marketing copy, different images, colors, logos, what have you: a lot of times people have different opinions, right?

(27:09):
So figuring out what it is that we actually want to do still takes a considerable amount of time. It doesn't matter if you generate five versions of it or a thousand; it just gives you more crap, probably, to sift through and say: not that; okay, here are the three that are okay. But do you really need to do that out of a thousand? So the creation, I think, in that example is not so much the

(27:30):
problem as filtering out what is the noise and what's the actual signal.

Jon Reed (27:36):
Yeah, for sure.
And I think you make a really good point there, that we get back to: what is your vision for your company? What kind of AI do you want to have? What do you want to cultivate amongst your employees as well? And I think, when you step back, though, what you're gonna find is that if you

(27:58):
take an honest look at this technology, you're gonna see that it's not in a position to do everything that you wanted it to do in terms of creative stuff. But like I said, there is a role for content generation and all of that, and I think that's a good role for generative AI to play.

(28:20):
And

Andreas Welsch (28:20):
I have a question for you, if you don't mind me asking. You've been an analyst for a longer period of time. You've been looking at the market, its different trends, different vendors, and so on. When has technology ever fully delivered on the promise that organizations, that vendors, believed it could unlock, right? So to me, there is a good amount of the hype cycle that we see in

(28:44):
any technology and maturity and adoption curve. So yes, approaching it with a sense of realism, that yes, all of this is possible, will be possible at some point, is good to see the big picture. But the question a lot of times also is: what is the very concrete, precise thing that we can do right now

(29:05):
to explore this, to put this into production, to learn, and to do that with limited risk but a lot of learning, so we can accelerate and scale as we go along?

Jon Reed (29:16):
And look, I'm a creative person by trade. Way before I was an enterprise person, I started writing creative stuff when I was just a kid. I wrote for heavy metal fanzines in high school. If I thought AI could do that stuff, I would be honest with you and tell you that I thought it could; I would say I'm scared shitless because it can do what I can do.

(29:38):
But the thing is, you just have to take a step back. Now, I want to tell you about a really interesting use case. This one is called When AI Gets a Board Seat. And it came out, let me just double-check on this, it came out on hbr.org, Harvard Business Review, and Esteban Kolsky first brought it to my attention, because he's doing a lot of really interesting research on strategic

(29:59):
intelligence for the boardroom. And this use case is fascinating, and it highlights a couple of my critiques on myths, but also shows a really productive use case. So in this case, what they did is they spent a year involving AI, so this would be a generative AI scenario, participating in boardroom-level decisions.

(30:20):
And one of the reasons I like this scenario is I think it plays to a lot of LLMs' greatest strengths. It's less of a deterministic use case. It's more about things like creative distillation of ideas, brainstorming support, summarization, and interaction via the chat interface, which we discussed. And the LLM becomes an advisor in this context.

(30:42):
And there were a couple really interesting things about that. So what they did is, they found that if the board just worked with the tool and asked questions, it didn't really work. They needed someone with some critical thinking skills to distill their questions and put them to

(31:03):
the LLM and get feedback back. So they needed a little bit of an intermediary to sort some of the feedback, but when they did that, they had some pretty provocative, useful results. And there was a really interesting aspect to it, which is that LLMs don't care about our feelings. And so the LLM would propose things in challenging ways that

(31:25):
didn't really pay attention to, if you're on a board, you're thinking about, oh, maybe I shouldn't say this because of the politics. No, it just put it out there. And they were making some pretty critically important decisions during that time, in terms of things like: where should we relocate a plant? Should we reconfigure our supply chain? They said the biggest advantage of ChatGPT, which was what was

(31:47):
used here, was disrupting the natural flow of the meetings. They thought it might be clumsy and awkward, but the executives appreciated how it made them stop and think. And the team was aware that they had worked together for decades and they needed that type of challenge. So this is an important cultural point, right? That they needed it. You have to be open to that.

(32:07):
They had this great thing over whether they should close manufacturing facilities and what their external stakeholders would think about all of that. And AI helped them to have a more complete and fact-based discussion about all of that, which I thought was really interesting. Now, I did have a couple of concerns about the use case,

(32:30):
because I thought that, while ChatGPT was able to provide a lot of working estimations that were accurate enough to move forward, I think they might've gone a little far, a little bit, in terms of which plant should be closed. I think some of that has to be due diligence and stuff like that. But in general, I thought it was a very strong use case.

(32:56):
The only thing that I had a little bit of an issue with is they asserted that you need a critical thinker, but not necessarily an industry expert, to operate the tool. Now, I do agree that the critical thinker was important, because the critical thinker could flag things that were obviously inappropriate or not on the mark. But the thing about these tools, and the reason why I say domain

(33:18):
experts are still important, is that their output is getting sophisticated enough that only a domain expert is gonna spot certain kinds of discrepancies. I might be trained in critical thinking, but if you show me medical charts generated by these tools, I'm not gonna be able to tell you, oh, this one is obviously wrong because of this, it missed this.

(33:39):
Whereas someone who is trained in that area, a domain expert, is really the only one that's gonna spot that. And so I think it's really important to understand that domain experts aren't being replaced just because these tools get these headlines for being able to, oh, it passed the bar exam, it passed this exam, it passed that exam. Sure, it passed some static benchmarks, but that's very

(34:00):
different than a seasoned domain expert with real-world expertise in your industry.

Andreas Welsch (34:05):
So, definitely a good myth to bust, right? Don't replace expertise, human expertise, just with AI. A couple months ago, I worked with a client in manufacturing, and they said: here, we're exploring a new business area that we might want to go into. We've done a lot of research, we've accumulated a lot of data. We have a board meeting coming up in a couple of weeks.

(34:26):
Can you help us build an agent that can, first of all, disseminate all the information that we have already gathered, but that can then also, in the workshop, as the board members are having their conversation and deliberation, put that information into the agent, combine it with what it already knows, and give us recommendations for why this would be more advantageous than something else, or what some

(34:49):
considerations could be. And I thought that was a really great idea and a really great use case: to use AI on the data to find correlations, to find causalities that humans might not see immediately or might miss completely. But again, the point was: we do need a person to

(35:10):
operate this, ideally somebody either in our company or somebody who knows our industry, so they can ask the right questions, or they know whether whatever ChatGPT or whatever agent you use spits out is actually accurate and makes sense. Yes, this is doable, right? Yes, in our industry we can source rubber and material and other

(35:31):
things from these different vendors, or it will take that much time, it'll cost that much. Yes, you can apply some critical thinking, to your point, but unless you have the domain expertise, it's really tough. We also see this in trainings, right? I do a lot of corporate trainings on how do you bring AI into your business and how do you use it properly. And if you do that with professionals, they have a

(35:52):
frame of reference: what does good look like? What does accurate look like? When I do the same with my undergrads at university, I say: use it as a thought partner to prepare for an interview. Ask it to ask you some questions and give you feedback on your responses. What do you think? How did it work?

(36:12):
The answer from the last two semesters has been: good. I said, okay, why do you think it's good? Oh, it did what I wanted it to do. Okay. Was the answer helpful? Was it complete? Was it, again: good. Yeah, it was helpful. So unless you give a frame of reference or, in the academic domain, a rubric:

(36:34):
what are the different dimensions, what are the different intervals on a scale, basically, from bad to great? Unless you do that, or you have the domain expertise, it's really tough to judge: is that useful? Is that something that we should actually be doing? Yes, critical thinking comes up every time.

(36:55):
And I'm grateful that they're talking about this as critical thinking, because I think it's been on a decline anyway, even before AI. So it's more important than ever, but then also the domain expertise to say: does this actually make sense? Is this realistic?

Jon Reed (37:11):
And I do wanna re-emphasize one point from that really interesting use case on the boardroom, which is that, even though I do think it's incredibly important not to diminish human expertise, and the reason I'm hammering this is because there are some AI evangelists that are really diminishing this point, I do think, on the contrary, that you do want your domain experts

(37:34):
to be open to being challenged by the systems. I want my experts to be open to all kinds of new ideas, not just from machines, but from their team. Yeah. But they should also be very open to the machine surfacing a new idea or a new point of view that they had not considered, because these machines have been trained on vast amounts of information beyond the scope of what you might know.

(37:56):
So I wanna see that openness as well. And if you get that, I think in a way you have the best of both worlds. So I have one more to throw at you on the leadership theme. Yeah. Do it. And this one I think was really interesting. And it dates back, I think I might have mentioned this briefly last time, but it dates back to

(38:18):
a debate I had with an HR leader who was leading some AI initiatives, who talked about what skills they need. And at first, he said, I don't need technical skills because I don't need to know how my iPhone works. And I challenged him and I said, this is totally different than an iPhone.

(38:38):
Especially in HR: maybe you can start with posting job descriptions, but once you get into things like performance reviews and assessing people's careers and performing succession planning and stuff like that, you better understand how it came to those decisions on who it screened in and who it screened out and all of that. You need to understand the technical architecture that

(38:58):
got you to this point. And he conceded the point and agreed with me. And this led me to a video online that I watched, which is a good video. It was from MIT Sloan Management, called the 10 Essential Leadership Traits from the AI Era. It's not a long video, but they had something really interesting

(39:18):
in the comments, which led me to my paradox-of-AI-leadership view. In the comments, they said: what struck us most while editing the video was how unexpected the answers were. We thought we'd hear about data literacy and tech skills. Instead, we got playfulness, courage, and present futurists, and it made us realize the human side of AI transformation might be more complex than the technical side.

(39:39):
I thought those were really good points. But I responded to the video, and I said, playfulness and courage are important, but if you don't have the technical and data literacy chops, you're gonna make huge mistakes. There was a quote there about how even more important than the tech is imagining how this change will work. Without deep knowledge of the tech, you can't accurately rethink how it's going to impact work.

(40:01):
You'll reimagine incorrectly, beyond what the tech can do. That's why AI leadership is a nearly paradoxical combination of bottom-line results, experimentation, excellent human leadership skills, and deep technical skills. Downplay the latter at your peril. So my exhortation to business executives is: by all means, become a more well-rounded business,

(40:23):
bottom-line, soft-skills person. But do your homework on the tech side too, because you need a certain level of literacy there.

Andreas Welsch (40:31):
In a way, what comes to mind for me is how you build models with Lego blocks, right? As a child, one of the most important things you can learn is physics, right? Where one object cannot be in the same place as a second one. You learn about gravity, you learn about things that can tip and fall. And I think in many ways, in the same

(40:52):
vein, you have to learn about AI as well, to your point.

Jon Reed (40:55):
Yeah,

Andreas Welsch (40:56):
That would be data literacy: if I don't know what some of the opportunities and some of the challenges with AI technology are, I might build something that looks really great, but once I set it up, it falls flat, right? So, not so great. By the way, I did an interview a couple weeks ago, and one of the

(41:15):
last questions I was asked was: imagine it's 2040 and a new generation of teenagers or new leaders has come in. AI is everywhere. What do you think, first of all, they will look at and say: what did you think in 2025 about AI? Are you crazy?

(41:35):
That's what you thought? And what do you think it will enable? And my thought, looking out further 15, 20 years, is people are probably thinking: what do you mean, you used it to write your emails? Or to summarize your meeting transcriptions? That's what you used it for? Are you sure?

(41:55):
Are you serious? I think we'll see a lot more, a lot bigger, and hopefully more impactful things, to your earlier point, than summarizing text and so on. Maybe 15, 20 years from now, people will say: what do you mean, you had meetings between people? You didn't send your avatar?

Jon Reed (42:16):
It's a fascinating future. And I think 15 years from now is enough time. I think there could be a mixture of very wonderful and scary things, and we could talk a little bit about some of that, maybe if we do this again. But I think you're totally right: blowing the lid off today's limitations sometimes is totally good. And that's a really good exercise.

(42:38):
And that goes along with: understanding the present is understanding where you're headed. You have to think about both. Like, I think about this in the context of my town right now, which is going through a downtown redesign that I disagree with, because I don't think they've reckoned with the future of transportation enough in their design. And so that's where it does get a little tricky, where I would

(42:59):
acknowledge that sometimes, in addition to today's limitations, you do need to think about what this will look like five or 10 years from now, so you don't build a totally immature structure.

Andreas Welsch (43:09):
Let me throw in another analogy, as I thought about 2040 and my answer came to mind. I would also hope that by 2040 we've internalized enough what AI can do, what are the do's and the don'ts, much like we know now what to do and what not to do with electricity, right?

(43:29):
It can do many things; it powers the computers that we're having this conversation on, and many more things that people didn't imagine when electricity was first conceived and deployed. But we also know, and we teach our children from a very early age, what you don't do, right? You don't stick a screwdriver in the power outlet. You do that once, but probably not twice.

(43:50):
So don't do it even the first time. So in a similar capacity, I think we need to educate and enable a lot more. And I think some of the recent activities in US government funding around bringing AI to K-12 education might go in that direction, to help build and increase that literacy.

(44:11):
But I think it has to start at a much earlier age than when you're at working age or when you're in a company already. But anyway, think about it as electricity: what are the do's and don'ts that, ideally, by 2040 we've figured out and taught our children?

Jon Reed (44:25):
So I know we gotta wrap shortly, but I want to throw one thing at you before we wrap. I threw a bunch of different stuff at you, use cases and stuff like that; digest that for the listeners. What are your sort of general takeaways from all of our discussions so far today? Sure.

Andreas Welsch (44:40):
So, there is a good amount of hype in the market, and hype is great. We need hype to get excited, to think big, to dream big, and to envision what could be. But just dreaming and envisioning and fully buying into that vision isn't everything. It's not all, right? We need to start somewhere and figure out what is the one

(45:02):
practical thing that we can do today to learn, to grow, to figure out: how does this work? How does this behave? What does all of this mean? What are the ripple effects if we bring this piece of technology to our business? And I think in that process, there are a lot of myths that you will be able to bust. And that's the exciting piece as well that lies ahead of us, right? There's nobody that's

(45:23):
figured it all out yet. Not everything, not all of it. Maybe individual bits and pieces. So it's a matter of us bringing these bits and pieces together as we see them, as we hear them, as we learn about them, and then deciding: what are the things that we want to adopt in our company? What do I want to do in my team? How do I want to guide them? I think that's the challenge ahead of us as leaders, as

(45:44):
people, whether you work independently or whether you work in a business. And that's the exciting part to me.

Jon Reed (45:51):
I love it, and I think if we do this again, we may want to talk a little more about the AI readiness themes you brought up, in terms of how you get to that point. The final sort of practical thing I'll issue is: when you do that, and I think your advice is perfect, but if you do that now, unless you're a team with really deep data science and LLM

(46:14):
sophistication and understanding of things like RAG architectures and agent tool calling, unless you really understand all that stuff, bring in trusted vendors to help you with these things, because that's one of the biggest places you're gonna get into trouble. If you go on YouTube and watch videos on things like RAG and agentic system design, you're gonna be blown away by how

(46:37):
complex some of these architectures are and what moving targets they are as well. Like, one day it'll be one thing, and then today I just saw a new video from one of my favorite people I follow on YouTube, on hierarchical agentic reasoning, so not just reasoning, but hierarchical reasoning. So these things are moving so fast.

(46:57):
I would just say, unless you are on top of the world with these capabilities internally, please look at involving vendors, both new and old, that you trust, and pull them into these conversations.

Andreas Welsch (47:10):
That's a big opportunity, right? So you can scale very quickly. You can bring in expertise where you need it and augment what you have, and also upskill your team along the process, too.

Jon Reed (47:19):
And don't rule out independent advisors, too. I've always been an advocate of independent advisors on projects, and I would include that for AI as well, because it is really helpful to have someone who has less direct financial incentive in an ongoing way, like a major vendor does in your project, and who can come in for a much more modest amount of money

(47:43):
and provide a different view of things. And yes, there's more politics to manage around that, which we could discuss at some point, but it's highly beneficial to have that also. And if you have those pieces in place, I like your chances of following the advice you just gave.

Andreas Welsch (47:57):
Wonderful. Jon, it's been great having you on again. I really appreciate the, yeah, free-flowing discussion, and we've covered so much ground over the last 50 minutes, roughly. I can't wait to do this again. Yeah, I really enjoy these conversations and hearing what you're seeing, and how you think about this, and where things are moving. So I hope, for those of you in the audience, you appreciate it

(48:19):
in the same way.
Jon, thank you so much.

Jon Reed (48:21):
Many thanks.
Look forward to the next.