
August 8, 2025 31 mins

What if you could navigate the complex landscape of AI projects with a proven methodology?

In this episode, host Andreas Welsch welcomes Kathleen Walch, Director of AI Engagement at the Project Management Institute, as they explore CPMAI, an essential framework for leading successful AI initiatives. 

From identifying real business problems to understanding crucial data requirements, Kathleen shares her expertise on the importance of a data-centric approach: 

  • What are the most common challenges new AI leaders run into?
  • What are the key steps to successful AI projects?
  • What’s the #1 surprising thing about AI projects and leadership?
  • What’s next with Agentic AI and how will it change the paradigm of what AI leaders need to do?

With actionable insights and compelling examples, this episode provides an invaluable resource for any business leader seeking to leverage the power of AI.

Don't miss out on learning how to transform AI hype into meaningful outcomes—tune in now!



***********
Disclaimer: Views are the participants’ own and do not represent those of any participant’s past, present, or future employers. Participation in this event is independent of any potential business relationship (past, present, or future) between the participants or between their employers.


Level up your AI Leadership game with the AI Leadership Handbook:
https://www.aileadershiphandbook.com

More details:
https://www.intelligence-briefing.com
All episodes:
https://www.intelligence-briefing.com/podcast
Get a weekly thought-provoking post in your inbox:
https://www.intelligence-briefing.com/newsletter


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Andreas Welsch (00:00):
Today, we'll talk about the six steps to leading successful AI projects and programs, and who better to talk about it than someone who's built an entire methodology around that.
Kathleen Walch.
Hey, Kathleen.
Thank you so much for joining.

Kathleen Walch (00:11):
Hi.
Thanks so much for having me.

Andreas Welsch (00:14):
Wonderful.
Why don't you tell our audience a little bit about yourself, who you are and what you do.

Kathleen Walch (00:18):
Sure.
So as you said, I'm Kathleen Walch. I am the Director of AI Engagement and Community at Project Management Institute, PMI. I joined there about 10 months ago or so at this point. I came from the Cognalytica acquisition; I had co-founded Cognalytica back in 2017.

(00:40):
We started out as an AI-focused research, advisory, and education firm covering the AI markets, and then quickly realized that our clients wanted help with running and managing AI projects. And there was no formal methodology for doing so. Because data is the heart of AI, AI projects are really data projects. So you needed a data-centric methodology.

(01:01):
So we came up with CPMAI, the Cognitive Project Management for AI methodology. We have trained thousands of people at this point. We also have a podcast called the AI Today podcast, and we are coming up on season nine, which I can't believe, I know. It's an official PMI podcast now. So season nine will be launching on July 16.

(01:23):
And then also we had a lot of content that we brought over as well, so a lot of articles and some research that we had done.

Andreas Welsch (01:31):
So I'm super excited to have you on. Obviously, going back to 2017, that was before AI was cool, or when it was cool the last time, however you wanna look at it, right?

Kathleen Walch (01:41):
I always say that I've been in the AI space since before Gen AI made it cool.

Andreas Welsch (01:46):
Yes. Like I said, excited to have you on. I'm going through the CPMAI certification myself at the moment, and obviously we're also collaborating with PMI a little bit. Now, Kathleen, should we play a little game to kick things off?

Kathleen Walch (02:02):
Sure.

Andreas Welsch (02:03):
All right, so let's do this. So this one is called, obviously, What's the BUZZ? That's the whole idea of the buzzer and the show. When I hit the buzzer, you'll see the wheel spinning and a word appear, and you have 60 seconds for your answer. Are you ready for What's the BUZZ?

Kathleen Walch (02:21):
I'm ready.

Andreas Welsch (02:22):
Let's do this.
If AI were a book, what would it be? 60 seconds on the clock.

Kathleen Walch (02:31):
That is a really great one. I would say Jack of All Trades and Master of None, especially when it comes to Gen AI.

Andreas Welsch (02:41):
Why that?

Kathleen Walch (02:42):
Because it's really great at giving you basic information, but only about one inch deep. To go very deep into things, you need to provide better prompts and really understand how to work with AI systems. And if you're at that surface level, then I think it provides you a very kind of one-inch-deep response.

Andreas Welsch (03:04):
I love that. Indeed, I'm sure many of you have found this as well in your journey of using AI over the last two and a half years, or maybe even longer, since before it was cool. It can do a lot of different things, but what's the one thing you really want it to do, and where does it deliver value, and where can you go deep? Awesome answer.

(03:24):
Now, you know, that brings us to our questions. We've obviously seen, and we're seeing, more leaders take on AI projects and programs, which I think is fantastic. More businesses are thinking about it and more business leaders are thinking about it. A lot of times, I see they look to data leaders, AI leaders, technology leaders; maybe there is even an AI department or center of excellence that

(03:47):
already exists. But then it's the, hey, you've always worked on this technology sort of thing: can you now figure out what we do with AI, what our strategy is, and how to make this real? But I also hear that for many leaders in this space, it's the first time in their career that they're leading initiatives like that. It's not just technology, obviously, as we know. But I'm curious, what are some of the common challenges that

(04:10):
you see leaders run into, and why do they take CPMAI?

Kathleen Walch (04:14):
Yeah, that's such a great question, and you're right. A lot of times, despite AI being around since it was officially coined in 1956, I feel that for many leaders and many teams and individuals, this is maybe their first time running and managing an AI project, which I always like to say is different than using AI tools. And so a few years ago, actually, we identified common reasons why

(04:37):
AI projects fail and what you can do to avoid that. Because I always think you can learn so much from other people, right? And from some of the mistakes that they've made, so that you don't make them yourself. So one of the common reasons that AI projects fail is that you treat them like a software application development project. You are going to try and use software development

(04:59):
methodologies. A lot of people say that they're using Agile, which you can do, in a way. But as I mentioned earlier, data is the heart of AI. You need data-centric methodologies because these are fundamentally data projects. With that, then, if data is so important, we get issues around data quality and data quantity. So is our data good, and is it good enough, and do we have the

(05:22):
amount of data we need? We always say you don't need Google-sized amounts of data, especially depending on what it is that you're trying to do, but you do need data, and you do need to make sure that it's representative of whatever it is that you're trying to do, and that you also have access to that data. Far too often teams move forward and they're like, yeah, our

(05:42):
organization has this data. Let's just move forward. And then they get to needing that data, and it takes some months to access it. That's gonna slow down your project, right?

Andreas Welsch (05:52):
For sure.
Yes.

Kathleen Walch (05:53):
And then we've also seen issues around return on investment. The ROI, you have to make sure that's justified, and you have to make sure that you're measuring things effectively, because your projects are not going to be free. It's money, time, and resources. I always like to say to make sure you measure. When you're not doing that, even if the AI system is working as expected,

(06:14):
if it's costing more than it's saving, is that really successful? Probably not. So we need to make sure we're measuring that. We also have issues around proof of concept versus pilot. A lot of times people will say, I'm gonna run a little proof of concept and see if it works. But we say your proof of concept is really proving nothing, because you're doing it in a super controlled environment and

(06:36):
you're using the best quality data that you have, and usually the person who has built the solution is the one testing it. When you put it out in a pilot, it's the real world. Now things start to get interesting, right? People aren't using it as expected, data can be messier, maybe data sources are different than you expected. So we say always move to a pilot and see how it's actually being used, and then continue to iterate.

(06:58):
And the most common reason that we've seen AI projects fail: I talked about how AI was officially coined in 1956, and you can go, that's almost 70 years ago, right? At this point you're like, what's going on? Why haven't I heard about it more? We've gone through two previous AI winters, and so that's a period of decline in investment, popularity, funding.

(07:20):
People moved towards other methods to get to their solution. And the big overarching reason for this is overpromising and under-delivering on what AI can do, and I'm still seeing that today. So you have to understand what AI is good at, what it's not good at, how you can use it as a tool, and make sure that you're actually setting realistic expectations.

Andreas Welsch (07:42):
I like that part, especially the last one about overpromising and under-delivering. And I think just recently we've seen that even the biggest tech players in the industry and in the world are not immune to that. Take Apple as an example, with the announcements of Apple Intelligence last year: I think we're very close to making progress, to shipping

(08:04):
something. And then they found it's actually really hard to do this reliably, without hallucinations, without some of the unwanted side effects, biases, and what have you. So there's this tension as well: you want to ship something, you need to ship something to be competitive. But at the same time, if millions and millions of people use your product, you better make sure that it's right and

(08:26):
accurate. Otherwise you get the customer feedback on the other side. So not an easy situation to be in, whether it's with machine learning, predictive analytics, or now with more generative AI or maybe even agentic AI things. The other point that deeply resonates with me is data, right? I remember working with a large Fortune 500 customer, and we

(08:48):
said, hey, here's an idea for a prototype, or for a pilot even. And we agreed on the scope: yes, we want to do this. For example, in finance, we want to estimate our liquidity forecast. And then we needed data. Guess what? All of a sudden legal gets involved, IT needs to extract

(09:09):
the data, and it comes to a grinding halt until you finally have it three months later, six months later. If you think you can have a quick win, six months to get the data is not so quick. So to your point, those are real challenges that you still run into, especially if you're doing this for the first time.

(09:33):
You mentioned earlier in your introduction that you've trained hundreds and thousands of leaders over the years to learn those fundamentals of running successful projects. What would you say are the key steps to success if somebody's new in that role or wants to move into that AI leadership role?

Kathleen Walch (09:48):
Yeah, so that's also a great question, because what we've seen far too often, right, sometimes it's that FOMO, that fear of missing out. You feel like your competitors are doing it, and so people rush to get things out. And when you rush anything, especially when it comes to AI, you're gonna quickly realize that the move fast and break things mentality does not

(10:09):
work with AI projects. So that's why we came up with the CPMAI methodology, because it's a six-phase iterative approach. We always start with business understanding, which sounds simple enough: you're supposed to be answering, what problem are we trying to solve? And again, like I said, it sounds simple enough, but a lot

(10:31):
of times people just move forward with it. And we say, make sure you're solving an actual business problem. And then once you know the problem you're trying to solve, is AI the right solution? And if you don't know if AI is the right solution, we say, okay, let's break it down one level deeper. Because a few years ago, back in 2019, a lot of people were really caught up in the term and the hype.

(10:53):
And they're like, is this an AI project? Is this not an AI project? And it would paralyze them. And we said, dig down one level deeper and say, what are we trying to do? And that's where we came up with the seven patterns of AI. We present it as a wheel, because there's no one pattern that's more important than the other. There's hyper-personalization, treating each individual as an

(11:13):
individual. Then the recognition pattern: this is making sense of unstructured data. We think about facial recognition, right? You can't really program your way to understanding individual faces, so this helps make sense of that unstructured data. Then we have our conversational pattern. This is where humans and machines talk to each other in the language of humans. We think about AI-enabled chatbots, but LLMs also fall in

(11:36):
that pattern. Then we have our patterns and anomalies pattern. Machines are really good at looking at large amounts of data and being able to make sense of that data quickly. Then we have our predictive analytics pattern. This is where you take past or current data to help humans make better decisions, so we're not removing the human from the loop. Then we have our autonomous pattern.

(11:57):
The goal of this pattern is to really remove the human from the loop. Whenever you're trying to remove the human from the loop, it's going to be one of the hardest patterns of AI, right? I think about autonomous vehicles, which is still my dream, even though we don't have commercially available, fully autonomous vehicles.

(12:19):
But then we can also think about autonomous business processes. How can you have an AI tool navigate autonomously through your business? And then last, we have goal-driven systems. This is really around reinforcement learning and finding the most optimal solution to a problem, so we think about game playing and scenario simulation here. And a really fun example that I like to bring up is that the

(12:41):
city of Pittsburgh used reinforcement learning and the goal-driven systems pattern to help optimize their traffic light timing. They wanted to reduce idle time and make sure traffic was flowing, to help with emissions as well. So they used it, and I thought that was a really fun example.

Andreas Welsch (12:58):
Wonderful.
That's indeed a great example, right?
And

Kathleen Walch (13:01):
yeah.

Andreas Welsch (13:01):
Something where everyone benefits from in the end, too.

Kathleen Walch (13:05):
Exactly.
So that was just phase one, right? Now we've got five more phases. We go through our patterns and say, okay, does it fall into one or more of those patterns? Then yes, let's move forward with our AI project. Phase two is data understanding. Now we need to understand what data we need, and you brought up an example: three, six months later, maybe you're

(13:25):
finally getting your data. It's really not uncommon, especially for large organizations. Maybe you have data in different systems; some may be internal, some external. So because this is iterative and we can do this over many sprint cycles, we say start with the smallest amount of data that you need to get the result for this first iteration.

(13:47):
So control the scope, right? Maybe start with just one small feature. If you're building a chatbot, for example, you don't need to start off with the chatbot being able to answer 10,000 questions. It just needs to answer the most frequently asked question. Because when I talk about ROI, if you're measuring that, if it's reducing call center volume, if it's increasing customer satisfaction, just start with

(14:07):
that one. And then you get it out there, you see how it's working, and then you can add additional questions and features as you go on. Then we have our data preparation. Now we have the data; now we need to prepare it: clean it, de-dupe it, normalize the data, anonymize the data, whatever it is that we need. Then we get to the fun stuff.

(14:28):
We're building our models. So we have model development, especially if we're building from scratch. A lot of people wanna start at phase four, but it's really important that we go through phases one through three first. Then we need to evaluate the model: is it performing as expected? Even if we're using AI tools, right, we need to be making sure that they're performing as expected. And then finally we have model operationalization.

(14:49):
This is using the model in the real world, wherever you've decided that the model's gonna be: on cloud, on premise, on an edge device like a phone or in your car, wherever it is. And so those are the six phases of CPMAI. And they're iterative, so you start in phase one, business understanding, then you go to data understanding. You think you have the data you need; now you need to clean it. And then you're like, oh, I'm in the cleaning phase,

(15:09):
And I realize I don't have enough. Go back a phase. That's okay. And like I said, we always say think big, start small, and iterate often, because you don't need to do everything in one iteration.

Andreas Welsch (15:21):
I think that's such an important point, especially the fact that AI projects are iterative a lot of times, right? We're so conditioned to running a project from start to finish. It's got a start date, an end date, and what are the deliverables in between? Let's look for quick wins, and how can we build this and ship this very quickly, and then you throw it over the fence.

(15:43):
And then somebody else needs to maintain it, and they haven't been involved, and they say, what am I supposed to do here? How does it even work? Or it's missing some critical things. So the iterative nature I think is absolutely critical. I have a new course coming out on LinkedIn Learning, probably in the next couple of days, on what I hear about the risks

(16:04):
in AI projects. And that was one of the things that I picked up on as well. It's the fact that managing that expectation with your leadership is critically important. It's not a straight shot; it's not start to finish, but many iterations and many loops. And like I said, going back one or two steps and fixing things and improving them is absolutely normal, but shaping that

(16:27):
awareness, I think, needs to happen as well. And not everybody has that awareness yet.

Kathleen Walch (16:32):
Exactly.
And I think that when you realize they're data projects and you follow data-centric methodologies, then it helps a lot.

Andreas Welsch (16:38):
Yeah.
You said you've been working on this since 2017, since before AI was cool, or when it was cool the last time. What's the one thing that still surprises you about these AI projects, and when leadership is involved or gets involved, after all these years?

Kathleen Walch (16:57):
I think going back to those common reasons why AI projects fail: despite them being known and out there, we still continue to fall into these regular reasons why AI projects fail. And I think one of the most important things is to always

(17:18):
start with business understanding, what problem are we trying to solve, and never to downplay that. Each phase of CPMAI is important, but phase one is arguably the most important, because if you're not solving a real business problem, you shouldn't move forward with the idea.

(17:39):
It doesn't matter how much you like it or how good you think it's gonna be: if it's not solving a real problem, you shouldn't move forward. And far too often, organizations and leaders feel pressure to just get something out. They want to have an AI tool in the market or have an AI solution, and so they just go ahead and start creating

(18:01):
something, and then they get 6, 12, 18 months into the project and have nothing to show for it except a large pile of debt.

Andreas Welsch (18:11):
So how do you see leaders addressing that then? Certainly one part is through the methodology, but it seems like a lot of these issues are soft issues, if you will, or interpersonal things. There's culture, there's politics. There may be individual aspirations, career aspirations. There's the challenge of losing face, or not wanting to lose face,

(18:37):
greenwashing, right? Dashboards going from red to yellow to bright green the higher you go in the hierarchy. What do you see there? How can leaders shape that awareness that it's not as easy as it seems in a vendor demo, for example?

Kathleen Walch (18:55):
Yeah, I think managing expectations, and separating the hype versus the reality, and then also really thinking big but starting small. So say, what is it that we wanna do? There's a really great example that I always like to use, because then you can still show wins. They might not be as grand as you want, but they're successful.

(19:16):
You're gonna get buy-in, you're gonna have these small wins, and you can continue to work on these projects. So the US Postal Service said, what's the number one question that we get asked? Track a package, right? And so they were trying to reduce call center volume, especially around the holidays and seasonality. They couldn't just hire up a ton of people, but they needed to

(19:39):
control the call center volume. So they said, let's just answer that one question really well. What's our goal? We wanna reduce call center volume; then we can measure that, right? It's really easy to measure. That's the return on investment, right? And they showed positive return on investment. So they were able to then report back and say, hey, look at this win. Look at what we did.

(20:00):
And then they can say, okay, now let's go to the number two or number three question, or get the 10 most frequently asked questions. And then in the next iteration they can add an additional 10, and then add an additional 10. Rather than saying, we need a chatbot that's gonna be able to answer 10,000 questions in five different languages, and let's see what happens, right? We have to control the scope, especially at Project Management Institute, right?

(20:20):
All the project managers out there who are working on AI projects know you have to set expectations, especially with stakeholders, and you have to control the scope, because scope creep is real. And so it helps when you have an understanding of how to run and manage AI projects, and what should and shouldn't be an AI project. Because also AI, as much as I love it, and I'm a big proponent

(20:42):
of AI, it's not the right solution for every single problem. Sometimes you can have straight automation. Sometimes you can code your way to get the results that you need. You don't always have to use AI.

Andreas Welsch (20:52):
Love it.
And by the way, in a lot of the conversations that I'm having, I'm hearing the same thing. The, hey, we need a chatbot to do something. It's like, why? What is the problem that we're actually trying to solve? First of all, let's understand that. Is, to your point, AI the right solution? And then is something like conversational AI, or the conversational pattern, or even a chatbot the right way to do

(21:15):
this? Are people really going to chat with their PDF, for example, or are there other ways in which we can get similar or even better results? A great point. We've obviously accelerated things in the industry from predictive analytics to machine learning to generative AI. This year it's all about agentic AI.

(21:36):
2025 is the year of agentic AI, if you want to believe some. We were talking about this just before we went live. We're already halfway into the year. What is up with agentic AI? How will that change the paradigm of what AI leaders need to do? What do you see, and is that just smoke and mirrors at this point?

Kathleen Walch (21:59):
Agentic AI. We were talking earlier about how it's kind of the year of agentic AI: 2025 is the year of agentic AI, and what does that mean? So we're halfway through the year. Are people really using agents, and is it part of their workflow? And how has it really been changing the game? I have not seen it as widely adopted as the hype, right?

(22:19):
This is also where you have to avoid the hype and be grounded in reality and expectations, right? Apply critical thinking skills. I have not seen it as widely adopted as it was first portrayed. But then there's also opportunity around this as well. I know at PMI we have a tool called Infinity, which is free

(22:40):
for all of our members. And I encourage anybody who's watching this who's a PMI member to go check out Infinity. Later this year we will be introducing agents into Infinity. But sometimes agents are in the background, so they're running and you're not even aware of what's going on. For the user, it's just a normal experience. And so what's nice about different AI tools, and when it

(23:01):
comes to agentic AI as well, is that you don't really need to be super technical in order to get benefits and get value from this. So at PMI, and other organizations have this as well, for folks that are starting to move forward with agentic AI, we have the opportunity to help define what that means.

(23:22):
Part of the reason we came up with the seven patterns of AI originally is because there's no commonly accepted definition of AI. When there's no commonly accepted definition of AI, it's: am I doing AI? Am I not doing AI? I don't know, there's no definition. So we said, let's break it down one level deeper, and that's when the seven patterns came about. I presented to the OECD, the Organization for Economic

(23:43):
Cooperation and Development, back in February of 2020, literally before the whole world shut down.

Andreas Welsch (23:48):
Yeah.

Kathleen Walch (23:49):
And they've adopted the seven patterns as their definition of AI, which is now in the EU AI Act. So at PMI, we have that opportunity again to help define what AI is, how project professionals particularly should be looking at this, and how it can be used to help enhance what you're doing today.

Andreas Welsch (24:09):
Wonderful.
I wondered there as well: we've seen, at the end of last year, Slack come out with a report called the Workforce Index, where survey participants shared, hey, I'm not telling my manager that I use AI, because they might think I'm lazy or I'm incompetent. And now, while many of these vendor-sponsored studies are,

(24:31):
nice and fair, you might also easily dismiss them.
Now in May, Duke University came out with a study among 4,500 professionals, and they came to the same conclusion. Individuals, professionals, are reluctant to share that, hey, I use AI. So I wonder if there are some reasons below the surface

(24:54):
why we're not seeing, why we're not hearing so much about AI and agents: first of all, because individuals don't want to share that they're using it, being afraid that they might be seen as lazy or incompetent. Where I would actually say it's quite the opposite: if you're going to use AI and if you find ways to incorporate it in your work, you are at the cutting edge of things, and you

(25:14):
should be there, right? You should think about, how can I improve what I do and what I deliver? Now, the other part that I wonder about is whether these technologies are also still relatively nascent, or relatively basic, not in the sense of what they do, but how the tools work, right? They're at a level where you maybe need to be a little more of a developer, or even for low-code, no-code things,

(25:35):
putting together your agentic AI workflow, you need to have some technology affinity or awareness. So I'm assuming, as that gets easier and baked into other solutions, that the adoption will flourish as well.

Kathleen Walch (25:49):
Yeah.
It's interesting too, because I talk about this idea, especially when I deliver keynotes, of this leaper mindset. And what is a leaper? We have identified four different types of people, broadly. One are observers, and those are people that sit on the sidelines and watch other people use AI, but don't use it themselves. Then the second is taskers, and this is where a large majority

(26:11):
of people fall today, where they'll use AI to help with certain parts of their workflow, but not their overall workflow. So maybe I'll use it to help me brainstorm ideas for an article, or I'll use it to help me generate images for a slide deck. But I'm still mostly writing the article, or I'm still putting together the slides; I'm just using it to maybe help me summarize all the

(26:32):
bullet points that I have so that there's less text, or create images for me. Then we have early adopters, and they're gonna be embracing any new technology; AI is just another new technology to them. They love quantum and blockchain and AR and VR and all the different technologies that are out there.

(26:54):
Then we have this idea of a leaper. And what is a leaper? It's someone who takes AI and uses it in their entire workflow. So now I can have maybe the large language model of my choice go in and help me brainstorm slides and a presentation, then outline what the slide deck should look like. The human can and should always be in the loop.

(27:16):
Go back and forth. Say, yes, I like this. Let's tweak some of this. Okay, now I'm gonna upload a template for a PowerPoint presentation that I want. Now create the slides for me. And then you can go and edit it, and then it also can create images for you, and really put together that entire presentation and save you hours of work. After you've reviewed it, you can say, okay, now send it out for review to different teams that need it,

(27:38):
or you can send it out to whoever it is that's gonna upload it for you for your presentation. So you have now really used AI through this entire process. That's what a leaper is. And what are the characteristics of a leaper? They need to have courage. They need to have courage to try, and it's really about their mindset, right?

(27:58):
Don't be afraid of this technology. I always say that with prompting, there's a really low risk of failure, because you don't need to involve those other teams. Sometimes in the past, maybe you've needed to involve data teams to get the data, or IT departments to give you access to different tools or help you with programming different

(28:19):
things. But you don't have to do that with AI, and that's what makes this leaper mindset so important. And it really is about overcoming your own fears and your own hesitations and your own doubts, rather than being super technical. And I can see that happening with agentic AI as well. Also, I liked how you brought up this idea around

(28:40):
people not wanting to share that they're using AI. One, because they don't wanna be perceived as lazy; but two, it also comes down to, if they've used AI and it now saves them three hours a day, they don't want to either be perceived as not working, or also sometimes have more work added to their plate,

(29:03):
right?

Andreas Welsch (29:03):
Yes.
20% more efficiency, productivity. Here you go, right? We know how this works.

Kathleen Walch (29:09):
Yeah.

Andreas Welsch (29:09):
So what I take from that response is almost: don't be a sleeper, become a leaper.

Kathleen Walch (29:17):
Oh, wow.
Okay.
I like the rhyme.

Andreas Welsch (29:20):
Yeah.
Definitely lots of potential there. Now, Kathleen, we're coming up to the end of the show, and I was wondering if you can summarize the three key takeaways for our audience today.

Kathleen Walch (29:32):
I always say think big, start small, and iterate often. Understand that AI projects are data projects, and make sure that you follow proven data-centric methodologies for AI success.

Andreas Welsch (29:43):
Wonderful.
I haven't had anybody on the show recently who's been able to articulate it that clearly and concisely, so thank you so much for summarizing it as you just did. Kathleen, thank you so much for joining us today. Really appreciate you sharing your experience and expertise with us. It was a super insightful conversation.

Kathleen Walch (30:02):
Thank you so much for having me.
I really enjoyed it.

Andreas Welsch (30:05):
And folks, for those of you in the audience,
see you next time for another episode of What's the BUZZ?
Bye-bye.