
March 4, 2025 58 mins

In celebration of International Women’s Day, this episode of Analytics Power Hour features an all-female crew discussing the challenges and opportunities in AI projects. Moe Kiss, Julie Hoyer, and Val Kroll dive into this AI topic with guest expert Kathleen Walch, who co-developed the CPMAI methodology and the seven patterns of AI (super helpful for your AI use cases!). Kathleen has helpful frameworks and colorful examples to illustrate the importance of setting expectations upfront with all stakeholders and clearly defining what problem you are trying to solve. Her stories are born from the painful experiences of AI projects being run like application development projects instead of the data projects that they are! Tune in to hear her advice for getting your organization to adopt a data-centric methodology for running your AI projects—you’ll be happier than a camera spotting wolves in the snow! 🐺❄️🎥

For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:05):
Welcome to the Analytics Power Hour. Analytics topics covered conversationally
and sometimes with explicit language. Hi, everyone. Welcome to the Analytics
Power Hour. This is episode 266. I'm Moe Kiss from Canva.
And I'm sitting here in the host chairs today because we're continuing our

(00:27):
great tradition of recognizing International Women's Day and all of the
amazing women in our industry. So it's coming up this Saturday,
March 8th, and we're going entirely gents free today. So of course that
means I'm joined by the wonderful Julie Hoyer from Further. Hey,
everyone. And Val Kroll from Facts and Feelings as my co-hosts.

(00:48):
Hey, Val. Hello. Hello. Are you ladies excited to know that Tim won't
be slipping into some of his quintessential soapboxing?
Save some for the rest of us. I don't think he'd be able
to help himself on this one. I know, I know. He's pretty gutted
to miss it. So, as we're planning the show today, I fired up
ChatGPT, which, to be fair, I'm a power user and I asked it

(01:11):
to compare our topics from the last 50 shows to the topics data
folks are most talking about these days and basically identify the gaps
in our content. So, unsurprisingly, the response it came back with was that
we should definitely talk more about AI, and it was in caps,
so maybe there's some bias in that model. Who knows? Weird.

(01:31):
But it's got a good point. And we've definitely talked about AI on
multiple episodes on the show, but we probably haven't talked about it nearly
as much as we could or as much as it's getting talked about
in the industry right now. So it seems like everyone is just so
excited about the possibilities. But lots of organizations are also struggling
to figure out how to actually identify, scope, and roll out AI projects

(01:55):
in a clear and deliberate manner. I think it's really about that shift
from the tactical day to day things to the real transformation that everyone's
seeking. And that's why for today's episode, we're joined by Kathleen Walch.
Kathleen is a director of AI Engagement and Learning at the Project Management
Institute, where she's been instrumental in developing the CPMAI methodology

(02:19):
for AI project management. She is the co host of the AI Today
podcast, which I highly recommend checking out, and she's also a regular
contributor to both Forbes and Techtarget. She's a highly regarded expert
in AI, specializing in helping organizations effectively adopt and implement
AI technologies. And today she's our guest. Welcome to the show,

(02:41):
Kathleen. We're so pumped to have you here. Hi and welcome.
I'm so excited to be here today. I obviously love podcasts,
so I love being guests on them as well. It's a different seat
for me today. It is definitely a different seat when you're a guest.
Hopefully a little lighter on the load. So just to kick us off,
I think one of the things that's really interesting about your professional

(03:02):
history is that you don't seem to be one of those people that
just stumbled into AI in the last year or so and have gone
full fledged on it. It really seems to be an area that you've
been working in deeply for an incredibly long period of time.
Maybe you could talk a little bit about your own experience and
the journey you've taken to get here. Yeah, I like that you bring

(03:24):
that up. I always say that I've been
in the AI space since before gen AI made it popular.
I feel like the past two years or so, everybody feels like they're
an AI expert and everybody is so excited about the possibilities.
But it's important to understand that we always say AI feels like the
oldest, newest technology because the term was officially coined in 1956,

(03:47):
so it's 70 plus years old. But we just feel like we're now
getting to understand AI. And there's a lot of reasons for this,
which we talk about quite often. But one big reason is that there's
been two previous AI winters, which are periods of decline in investment,
decline in popularity. People choose other technologies, other ways of doing

(04:08):
things, and a big overarching reason for that is over promising and under
delivering on what the technology can do. So it's really important to understand
that AI is a tool, and that there's use cases for it,
and it's not a one-size-fits-all tool,
especially when it comes to generative AI. So my background and what got
me here is actually I started off in marketing and then moved...

(04:29):
Yeah, I know. And then back when I was first coming out of
college, my husband's a software developer. I feel like the technology world
and marketing or creative world or anything else, they really were very
separate. And over the years they've merged closer together to the point
now that I think technology is infused in many different roles and not

(04:50):
as disparate as it used to be. Then I moved more into a
data analytics role. Learned all about the pains of big data,
how data is messy and not clean and all of that. And then
I moved into more of a technology events role where my husband and
I had a startup. It failed, but met a lot of great people

(05:13):
from that community. Ended up working with my business partner from Cognilytica
on a company called TechBreakfast, where we did morning demo events throughout
the United States. And we were in about 12 different cities.
So from Boston, Massachusetts to the Baltimore DC Region, North Carolina,
Austin, Texas, really all over, a little bit in Silicon Valley.

(05:36):
But that's a unique space. And around 2016 we started to see a
lot of demos around AI and in particular voice assistants and how we
could be incorporating that. That was when all of the big players in
voice assistants started to come out. So we had Amazon Alexa and Google
Home and Microsoft Cortana, when that was still a thing. So from that

(05:58):
we said, there's something here. And we started an analyst firm actually
focused on AI. It was a boutique analyst firm, and very quickly realized
that organizations did not know how to successfully run and manage AI projects.
So they said, we want to use this technology, this is great.
How do we get started? And we said, okay, well, let's see if

(06:21):
there's methodologies out there and let's see if there's a way,
a step by step approach to do things. And what we quickly realized
is that there wasn't. And that's how CPMAI was developed,
which is a step by step approach to running and managing AI projects.
And it was important because people would try and run these
application development projects. And then you very quickly realize that

(06:42):
they're data projects and they need data centric methodologies, not software
development methodologies. And so these projects would be failing. Or they'd
say, we want to do AI and we go, well, what exactly do
you want to do? And they go, well, we have all this data,
let's just start with the data and then let's just build this, pick
the algorithm and then move forward because there's FOMO, fear of missing

(07:03):
out. And we say, okay. But in CPMAI we always start with phase
one, business understanding, what problem are you trying to solve? And even
still today, many organizations rush forward with wanting to have an AI
application or just saying, oh, look at this large language model,
let's put it on our website as a chatbot. And far too often many

(07:25):
things can go wrong. We always say, AI is not set it and
forget it. So far too often we see that these chatbots are providing
wrong answers and that maybe we shouldn't have started so big in our
scope and we should have really controlled it and said, drill down into
what we're actually trying to solve. So we always say, figure out what
problem you're trying to solve first and really, really make sure that it's

(07:48):
a problem that AI is best suited for. Oh, my God,
this is music to my ears. I am seriously. Yeah, because there is...
I feel like I'm coming up so often against people that are just
like, let's use AI. And you're like, what's the problem?
Have you noticed, though, over the last few years, and I feel like,
especially in the last 12 months, do you feel like the industry is

(08:09):
maturing here or is it Groundhog Day where you just feel like you're
having the same conversation again and we're not at that stage yet where
people are maturing enough to be like, is AI the right solution here?
What are you seeing in the industry? So generative AI has made AI
available to the hands of many. So maybe we were using AI before

(08:31):
when we were pulling up directions with whatever you choose to
use, Waze, Google Maps, whatever it is that you're using,
it'll help route you. Or if you have predictive text with emails or
spam filters, that's using AI. But it didn't feel like we were using
AI because, yeah, it helped a little, but it didn't really

(08:52):
make my life more efficient. But now with tools like ChatGPT,
or at PMI we have Infinity, or Claude. I mean,
you literally pick the tool of choice and it can help you do
your job better. So it can help you... Or even Canva,
right? I love Canva. Now, I'm not a graphic designer by trade,

(09:12):
but now with the help of Canva, which is drag and drop
but then adds AI capabilities onto it, I can do things that
I couldn't do before, like automatically remove background from an image
and just have one, like my head now and I remove the background
from it, which is absolutely incredible and does not require me to have
to learn how to be a graphic designer. Or I can write better

(09:35):
copy for marketing campaigns, or I can create images for PowerPoint slides
that I no longer have to worry if I have rights to,
but because I know I do, because I just created it.
So it is helping in that way. But then we also see, you
really need to drill down and say, okay, generative AI is just one
application of AI. And so a number of years ago, actually back in

(09:57):
2019, because people said, well, I want to do AI. And we said,
well, what exactly are you trying to do? And there was a lot
of confusion about, is this AI, is this not AI? And we said,
why don't we just drill down one level further and say,
what are we trying to do? And that's where we came up with
the seven patterns of AI. So we looked at hundreds, if not thousands
of different use cases and they all fall into one or more of

(10:19):
these seven patterns. And so we said, why don't we just talk at
that level? Because then it really, it helps you with so much.
So the patterns at a very high level. And we made it a
wheel because there's no particular order and one isn't higher than another,
but it's hyper personalization. So treating each individual as an individual.
We think about this as a marketer's dream. You're able to hit the
right person at the right time with the right message, but also hyper

(10:41):
personalized education, hyper personalized finance, hyper personalized health
care. How can we really start treating each person now as an individual?
And we can do that with the power of AI. Then we have
recognition patterns. So this is making sense of unstructured data. 80 plus
percent of the data that we have at an organization is unstructured.
Well, how do we make sense of that? So we think about image

(11:03):
recognition in this pattern. But you can have gesture recognition, handwriting
recognition, there's a lot of different things. Then we have our conversational
pattern. So this is humans and machines talking to each other in the
language of humans. This is obviously where large language models fall into
play. We think about AI enabled chatbots here. Then we have our predictive

(11:24):
analytics and decision support pattern. So this is taking past or current data and helping humans
make better predictions. So we're not removing the human from the loop,
but using it as a tool to help make better predictions.
Then we have our patterns and anomalies pattern. So this is where
we are able to look at large amounts of data and spot patterns
in that data or outliers in that data. We have our goal driven

(11:46):
systems pattern. So this is really around reinforcement learning and
optimization. So we think about how can you optimize certain things.
We've seen this actually with traffic lights. Some cities are adopting this
to help with the traffic flow and it can be adaptive over time.
And then also the autonomous pattern. So this is where the goal of

(12:07):
the autonomous pattern is to remove the human from the loop.
So this is the hardest pattern to implement. We think about this
with autonomous vehicles, but we can also have autonomous business processes.
So how do we have something autonomously navigate through our systems
internally at our organizations? And so when we say, okay, well,
what are we trying to do now? This helps us figure out what

(12:28):
data requirements we need. This helps us figure out if we're going to
be building this from scratch, what algorithm do we select? If we're going
to be buying a solution, what's going to be best suited for this?
Large language models aren't great for everything, and generative AI isn't
great for everything. So if we need a recognition system, then maybe we
shouldn't be looking at a large language model for that. If we want

(12:51):
a conversational system, then yeah, then that's great. And this really helps
us to drill down that one level further and say, what problem are
we trying to solve? What's the right solution to this problem?
Is AI the right solution? Okay, if it is, which pattern or patterns
of AI are we going to be implementing? And then from there we
can say, okay, we know what problem we're solving, AI is the right

(13:12):
solution for this, and now we can move forward. And if it's not
the right solution, that's okay. But you have to be honest with yourself
and with the organization. Because sometimes, I always say, don't try and
fit that square peg in a round hole. You don't want to shoehorn
your way just because you want to use AI, so you create the
problem that I can solve rather than actually having it solve a real

(13:35):
problem.
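
The seven patterns lend themselves to a simple checklist you can put in front of any proposed use case before the vendor or model conversation starts. Here's a minimal sketch in Python; the pattern names follow the wheel as Kathleen describes it, while the descriptions are paraphrased and the triage helper is an illustrative assumption, not part of CPMAI itself:

```python
# The seven patterns of AI as a triage checklist. Names follow the
# episode; descriptions are paraphrased and the helper is illustrative.

SEVEN_PATTERNS = {
    "hyper_personalization": "treat each individual as an individual",
    "recognition": "make sense of unstructured data (images, handwriting, gestures)",
    "conversational": "humans and machines talking in the language of humans",
    "predictive_analytics": "help humans make better predictions from past and current data",
    "patterns_and_anomalies": "spot patterns and outliers in large amounts of data",
    "goal_driven_systems": "reinforcement learning and optimization",
    "autonomous": "remove the human from the loop (the hardest pattern)",
}

def triage(candidate_patterns):
    """Force the 'what are we trying to do?' question before picking tools."""
    unknown = [p for p in candidate_patterns if p not in SEVEN_PATTERNS]
    if unknown:
        # e.g. plain automation (RPA): useful, but not a pattern of AI
        raise ValueError(f"Not a pattern of AI: {unknown}")
    return {p: SEVEN_PATTERNS[p] for p in candidate_patterns}

# The Walmart shelf-scanning bot discussed later combined two patterns:
print(triage(["autonomous", "recognition"]))
```

That was actually going to be my question. When you talk to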
clients, do you end up showing them the seven patterns to start,
or is that like showing them the answers and then they want to
pick which one sounds coolest or that they had their mind set on
and then they shoehorn and create the problem. Do you have to try

(13:57):
to keep that blind from them to get the problem first?
Or how do you go about using that? So when we go through
the methodology, because that's what we really teach and follow this step
by step approach. So first you have to say, what problem are we
trying to solve? And within phase one, the business understanding, we have
a series of different steps that you're supposed to be going through.
So one of them is the AI go/no go. So this talks about

(14:20):
business feasibility, data feasibility and implementation feasibility. So
what is your ROI, the return on investment?
You can measure this a number of different ways. I always say that
ROI is money, time and resources. AI projects are not going to be
free. And you really have to understand that. Sometimes people just go,
well, we're just going to do this. And I'm like, yeah,

(14:41):
but it's not, it costs a lot of money. And you measure that
however you want. Time is money. Resources is money. You only have a
finite amount of people that you can put on these projects.
Some organizations can have more than others, but still you have to be
mindful of that and so make sure that you understand the ROI that
you want.
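
Kathleen's framing of ROI as money, time, and resources can be made concrete with back-of-the-envelope math during that go/no-go step. A hypothetical sketch, with every number, threshold, and field name invented for illustration:

```python
from dataclasses import dataclass

# Hypothetical go/no-go arithmetic: convert time and people into money,
# then compare against the expected return. All values are invented.

@dataclass
class ProjectEstimate:
    expected_annual_return: float  # e.g. reduced call-center cost, in dollars
    cash_cost: float               # licenses, cloud, vendors
    person_months: float           # you only have a finite number of people
    months_to_pilot: float         # time until it's out in the real world

def go_no_go(est: ProjectEstimate, cost_per_person_month: float = 15_000.0) -> bool:
    total_cost = est.cash_cost + est.person_months * cost_per_person_month
    if est.months_to_pilot > 6:
        # 'Think big, start small, iterate often': long horizons are a smell.
        return False
    return est.expected_annual_return > total_cost

print(go_no_go(ProjectEstimate(expected_annual_return=250_000, cash_cost=80_000,
                               person_months=6, months_to_pilot=4)))  # True
```

We go through a lot of reasons why AI projects fail,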

(15:05):
and not having sufficient ROI is a failure. So the project may be
doing what it's supposed to, but an example that we give is Walmart
decided to have an autonomous bot that roamed the store floors and would
check to see if there were items that were out of stock.
Well, I just said that the autonomous pattern is a really hard pattern.

(15:28):
It's the hardest pattern. So it's able to autonomously navigate, and then
it had the recognition pattern because it's scanning the shelves to see
if inventory is out of stock or misstocked. Well, what they could
have done is we always say, think big, start small, and iterate often.
So don't try and do everything all at once. Figure out what is

(15:48):
that problem you're trying to solve. Okay, you're trying to solve a problem
with inventory not being on the shelves. Well, maybe start with the aisle
that has the most need, not the entire store. And you already have
humans that are walking the floor. So maybe put a camera on the
shopping cart and say, okay, now, how is this going to solve that
actual return on investment? And was this really a problem that we needed

(16:11):
AI for? Could we have done it cheaper or quicker or better with
humans? Because we still need a human to go and actually restock the
shelves. We didn't have autonomous systems that were able to go and autonomously
restock the shelves. So they ended up scrapping that in favor of humans
because the return wasn't worth it. So did whatever they built work?

(16:33):
Yes. But was it still a failure because the investment was higher than
the return? Yes. I'm sorry, I've got to interject. That example is so
incredibly interesting because it also sounds like they had this learning
after building it. Whereas if someone had done their due diligence of like,
what does it cost for a person to walk the store for 20

(16:53):
minutes and check versus like the tech and the infrastructure and the data
and all the things we need to build this, you probably could have
answered that ROI question before you started the project, but do you feel
like most companies have to almost do it to learn it and then
they make the mistake and move on? Or is it... Tales of caution?
Yeah, like, are people good enough at figuring this out before they

(17:15):
build it or is it only after? So a lot of people aren't
following that step by step approach. And when they're not, you can tell.
So Walmart is incredibly innovative. And they really push boundaries with
technology, but it's not always the right path forward. And so if you
go, okay, well, I don't have the resources of a Walmart. I don't

(17:35):
have the money that I can invest in some of these R&D projects
or putting out a pilot project. Another thing that we see,
another common reason for these failures is that we get into this proof
of concept trap and so we say, never do a proof of concept
because it actually proves nothing. You build it in a little sandbox environment.
It's usually the people that are most closely aligned with the project.

(17:58):
So they're going to be using it in the way that the tool
was intended to be used, not the way that humans actually are going
to use it out in the real world. And then data is messy.
Usually in a proof of concept, you have really nice clean data that
you're working with. And then you go out in the real world and
you're like, why didn't this work this way? Why are these users doing

(18:20):
things that I wasn't planning for? Why are you using it this way?
That's not how it was supposed to be used. And I was like,
yeah, but that's how your users are using it. So we say,
get it out in a pilot and have it be in the real
world and see how it's being used. So if they had put this
out in a store or two and said, okay, this isn't working as
expected, this isn't providing the returns that we wanted, maybe we didn't

(18:42):
invest a ton of money, we invested some money and we're trying it
out, but it didn't work out as we planned and so it's not
worth scaling. So the verbiage of use case is like really common, a
lot of the clients that we work with, they have like their AI
use case that they tote around with them. And I feel like that
is not. I heard you say use case, but I feel like you're

(19:03):
using it differently. It almost feels like a use case is we want
an autonomous vehicle to go find the open spaces on the shelf,
not the problem framing that you're talking about. So how often is there
too much momentum down this path and this inertia of we have this
use case in mind, our OKRs are aligned to completion of this project

(19:25):
and so it's like really hard to turn the Titanic? Or you can
just talk about righting the ship. And if you think that that use
case language is at odds with the problem solution framing. Yeah,
and that's a tough question, because you sometimes have an application, an
AI application. You have something that you want to do and maybe a

(19:49):
senior manager or someone in leadership is saying that that's what they
want and you've already invested a lot of money, time and resources into
it. And so it's their little pet project. And to pull back from
it can be incredibly difficult. People also have those ideas in their mind

(20:10):
about what they want and they try and shoehorn it. And so you
go, well, I want an autonomous vehicle. So let's figure out how we
can get an autonomous vehicle on the store shelves. And when people talk
about use cases, case studies, I feel like those words get thrown around
a lot. And it's like, what exactly do you want with a case

(20:30):
study? How is that defined versus your use case versus what it is
that you want? So we always say figure out what problems you have. And
this requires brainstorming, this requires actually saying what problems
are we trying to solve? And write it down, and bring different groups
together and say, what are we trying to solve? And then from there,

(20:53):
when we talk about the patterns too, you can look at it from
one of two ways. You can either look at it as what's the
ROI that you want, and then figure out which pattern is best for
that. Or you say here's the pattern and then you figure out the
ROI. So when you say I want this pattern and then you figure
out the ROI, sometimes that's shoehorning because you're like, oh, well

(21:14):
that's an okay ROI, sure. But if you go, I want my organization
to have 24/7 customer support. Well then you go, okay,
well then, what's going to drive that? And that would probably be
a chatbot, for example. So you go, okay, well then that's what
we should be doing. And if Walmart had said, what exactly are we

(21:35):
trying to do? And we're trying to stock shelves better and it's like,
well, what's the actual return? Drill down even further. Well, what is the
real return from that? Because you want more satisfied customers or because
you want better inventory management or something like that, rather than just
saying, well, let's have something roaming the store shelves to say when

(21:56):
we're out of an item, maybe we should be fixing something with the supply
chain earlier on. Is that the biggest failure point you find?
Is the identify the problem part that we've been talking about?
Or is it oh, we can help 80% of clients that come to
us get past that point and then the biggest failure point of the

(22:18):
AI project is actually later on? There's 10 common reasons that we've identified
for project failure. Oh yeah. So one of them is running your AI
projects like a software application project. It's not, it's a data project.
You need data centric methodologies. You need to have a data first mindset.

(22:40):
Yeah. Then obviously, if data is the heart of AI, we're going to
have data quality and data quantity issues. How much data do you need?
I know a lot of times, especially with analytics, we talk about how
you can train on noise. More data isn't better. So you have to
say, what data do I need? And then, do we have access to

(23:03):
that data? Is it internal, is it external? Are we going to be
adding more data and then just feeding it more noise? I mean,
we have so many failure reasons. There was a, I think it was
a forest, maybe US Forestry, it was one of the government agencies,
and they were trying to count the number of wolves that were migrating

(23:23):
in a national park, which is a great use case. You put a
camera out and you can do the recognition pattern so that you're not
having humans who are there, which isn't really great and conducive to being
there for however long you're trying to track these wolves. So,
okay, that's a good use case. Well, what they realized was that it
ended up being a snow detector, not a wolf detector, because what it

(23:45):
was being trained on, because especially some of these deep learning models,
for example, are a black box. So we don't know actually what it's
using to learn. And so they realized, they said, okay, well that's not
performing as expected. So then that's another common reason.
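
The wolf detector that was really a snow detector is a textbook shortcut-learning failure, and a simple probe can surface it before you trust the model. A sketch with a stubbed-in classifier: both evaluation sets below contain no wolves, so if the model fires far more often on the snowy set, it has learned the background rather than the animal:

```python
# Hypothetical probe for a 'snow detector'. predict() stands in for
# whatever classifier was trained; the toy data makes the bias obvious.

def positive_rate(predict, images):
    return sum(1 for img in images if predict(img) == "wolf") / len(images)

def shortcut_check(predict, snowy_no_wolf, bare_no_wolf, tolerance=0.05):
    """Neither set contains wolves, so both rates should be near zero."""
    gap = positive_rate(predict, snowy_no_wolf) - positive_rate(predict, bare_no_wolf)
    return abs(gap) <= tolerance

# Toy stand-in model that (wrongly) keys on snow in the background:
predict = lambda img: "wolf" if img["snow"] else "no_wolf"
snowy = [{"snow": True}] * 20
bare = [{"snow": False}] * 20
print(shortcut_check(predict, snowy, bare))  # False: it's a snow detector
```

Like I said,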
proof of concept versus pilot. You're not putting it out in the real

(24:07):
world until you're investing all of this. I love that distinction. So good.
Yeah. And I cringe when people always talk about proof of concepts because
I'm like, I don't think you mean that. And I'm like,
you really mean a pilot. And if you don't, you should be meaning
a pilot. And then also a reason I talked about earlier, the number
one reason is over promising and under delivering. That's what brought us

(24:30):
to two previous AI winters, and it will bring us into another if
we continue to act like AI can do more than it actually can.
So the ROI part of this seems like it's very much tied to
this expectation setting. I'm really curious about this especially. I just
don't know how you even get a full team on board with this
type of thinking. Even if, let's say Walmart started with an MVP of putting

(24:53):
the camera on the shopping cart, would they have been able to understand
the actual investments it would take to run with the full product versus
just the MVP? Or how does that play into the ROI conversation? Because
it seems like that's so tied into the expectations.
Yeah. And we don't do implementation. So I'm not there helping these organizations.

(25:16):
So I don't get to always hear through the entire conversation. But these
should be short, iterative sprints. And so we say, if you really need
to be mindful of what it is you're trying to solve,
make sure that you're not... You want to solve something big. So think
big, but then start small and then make sure that it's actually solving

(25:37):
a real problem. Another example that I like to use that I think
provides a really good example of a positive return on investment is the US
Postal Service. It was around the holidays and they were getting
a lot of calls to their call center, more than usual because it's
the holiday season. And so you think about, well, what's the number one

(25:57):
question that they get asked? Track my package. So they said,
we are not going to have a chatbot that can answer 10,000 questions.
We are going to have a chatbot that can answer one question,
track my package. So we can say, what is that return going to
be? Well, the return on investment is we want to reduce call center
volume because our call center agents can't handle the volume that they're
getting. They said, okay, we're going to have it answer that one question.

(26:20):
We can compare it to data that we've previously had. They said,
yes, this is a positive return. It is decreasing call center volume and
improving customer satisfaction because people can figure out where their
package is a lot quicker. From that they said this was a positive
use case. Now we can go to maybe the second most asked question
and then the third most asked question rather than saying, let me start

(26:41):
and answer 10,000 questions all at once, which a lot of people are
getting into trouble now because they just throw a chatbot on their website.
They're not testing it, they're not iterating on it, they're not making
sure that it's answering those questions correctly. And they're not thinking
big, but starting small. They're thinking big and then starting big.
So they're saying, I'm going to put a chatbot on my website that

(27:02):
can answer a bazillion different questions. And then it starts giving wrong
answers and then they get into a lot of trouble. We've seen this
with Air Canada, we've seen this with the city of New York.
I mean, we've seen this with Chevrolet dealerships that have chatbots on
their site. So like, I don't even need to make stories up.
It's like every day there's a new story about some failure.
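
Scoped the USPS way, a chatbot is essentially an intent router with a scope of one: answer the single most-asked question well and hand everything else to a human rather than improvising, which is exactly what the wide-open chatbots just mentioned fail to do. A minimal sketch; the regex, the tracking lookup, and the message wording are all invented for illustration:

```python
import re

# Deliberately narrow chatbot: one intent, everything else escalates.

TRACKING = re.compile(r"\b(track|where is|status of)\b.*\b(package|order|parcel)\b",
                      re.IGNORECASE)

def lookup_status(tracking_number):
    return f"Package {tracking_number} is in transit."  # stand-in for a real lookup

def answer(message, tracking_number=None):
    if TRACKING.search(message):
        if tracking_number:
            return lookup_status(tracking_number)
        return "Happy to help! What's your tracking number?"
    # Out of scope: don't improvise an answer (see Air Canada et al.).
    return "Let me connect you with an agent for that."

print(answer("Where is my package?", "9400ABC123"))
print(answer("Can you change my delivery address?"))
```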
But is that also, coming back to your point about, I was trying

(27:24):
to conceptualize the over promising point and it seems like that's intertwined
with this huge scope creep that then happens with many projects that it's
like, the scope becomes so wide and there's also this assumption that AI
can handle a big scope, but actually by doing that, you almost
burn the house down before you've even started building it. Yeah.

(27:47):
So over promising can be scope. And it also just, we over promise
what the technology is capable of doing. So we say it can do
all of these things and we're like, but it can't really. Or we're
trying to apply it in ways that it shouldn't be used.
So then it's not providing the answers that we want or that return

(28:10):
that we want. And then people go, well, now I'm frustrated,
it's not delivering on what we said it would. So we're not going
to use it anymore. And we go, yes, because if it doesn't fall
into one or more of the seven patterns. So another example is what
I did not say was a pattern of AI, is automation.
Automation is not intelligence. It's incredibly useful, but you're just

(28:32):
automating a repetitive task. And so we think about RPA technology and that's
incredibly useful, but it's not AI. And so sometimes people want to make
things more than they are. Or if we don't, if the technology isn't
there. So an example, back in the first wave of AI,
back in the 1950s and the 1960s, we wanted to have voice recognition,

(28:54):
and we wanted to have cockpits that were voice enabled so that pilots
didn't have to have all these switches and levers and they could just
talk. But we didn't... That technology wasn't where it is today and so
it wasn't ready. Right. So we had, we over promised on what we
could do and then under delivered because we didn't have what we needed.
And so we're even starting to hit some of that today which

(29:16):
we don't have machine reasoning. So we can't ask these systems to do
more than they really can. And if we don't understand those constraints,
this is where we run into issues. I am dying to dig into
something that you've alluded to twice, that a lot of AI is actually
a data problem. The reason I want to dig into this specifically is

(29:39):
I think there is a perception often in the industry that it's a technology
problem that's solved with product managers and software engineers and that
sort of thing. How have you navigated that? 'cause like, we're three data
folks who probably appreciate the difference here and technologists in general

(30:00):
are amazingly smart, curious people. But there are still nuances to data
that are not fully appreciated. In the same way that I don't fully
appreciate the complexity of backend systems or front end code or things
like that. How do you navigate that in a business? Yeah,
we always say it's people, process and technology. This three legged stool.
And the easiest thing to do is to fix the technology.

(30:24):
Fix, I air quote that. So you just add a new technology or
you add a new vendor because it's the easiest, because you can buy
it. And it's something that people feel is within their control,
but it doesn't actually fix the problem. And then process, that's harder
to fix. And so we need to say okay, maybe the way that
we're doing it, we can be agile, but we shouldn't follow agile from

(30:51):
that software development angle. We need to follow data centric methodologies.
And that's also people. And so it's really important to understand that
these are data projects, and the issue with data, which, I don't know, maybe
I'm saying something controversial here, is that data isn't sexy. And so people
don't want to talk about it. And people that are in data fields

(31:15):
love data, but other people don't necessarily, and they think it's a solved
problem. And I'm like, it's not a solved problem and it will never
be a solved problem. Yes. Exactly. Because the more data we create,
the more issues we're going to have. And so people just want to
throw technology at it. Oh, Tim's going to be so sad he was
not on this. He's going to listen to this later and literally be
fist pumping in the air and be like, yes, yes. I keep being

(31:37):
like, Tim's smiling somewhere in the world right now at multiple points
and he doesn't know why. He's just like, oh.
This warmth has come over me. Okay, so something that I've been thinking
about ever since you talked a little bit about the example,
Kathleen, is the postal service example about the chatbot answering that
most popular question. So if the ROI proves itself for that single question,

(32:03):
are any other subsequent use cases solving problems just gravy on top?
Because if you were to try, just because it worked for that first
one doesn't mean it's going to be appropriate for the second.
Or maybe not for the third. Or perhaps it would have to pull
in another pattern which expands the scope. So is it
a freeing place to be after you've come up ROI positive on one

(32:23):
first use case? Because then you have a different proof point for a
second use case. Because if it doesn't work out, you're like,
nope, we're still good. Track my package. We can explore use case number
three, but we're going to go ahead and happily depart from investing further
in use case two as an example. Is that mental model the way
of building on that accurate? I'm curious your thoughts. Yeah, I mean, every

(32:46):
use case, every example, every organization is going to be different.
And so you have to say, what really is that ROI?
Because if the ROI is to reduce call center volume, then maybe it
shouldn't just be the second most asked question. It should be the second most
asked question that the call center gets. And is AI the right solution

(33:07):
for it? I don't know. Depends on what it is. Because maybe if
it's... I need locations of different post offices, you can just have it
direct to a point on the website. It depends on what exactly those
questions are. But yeah, but to just really drill down. And then when
you get to a point that you're like, this is good,

(33:28):
we always say, AI isn't set it and forget it. So you have
to make sure that it continues to perform as expected. And so think
about what that means for the resources at the end of that iteration. But
you don't always need to continue and continue and continue and try and
make it more efficient and try and make it better and try and
have it answer all these different things. Because that's where people do

(33:48):
get into trouble, and they start doing things that maybe have a negative
ROI where it used to have a positive ROI. Or they could have
done a different use case or a different example, a different project. You
want to have those quick wins. So we always say, think about what
is the smallest thing that you can do that's going to show a

(34:10):
positive win. Because obviously you're not going to get investment for further
projects if you're showing negative wins, negative returns. So what could
continue to be those positive wins? And then at some point you're like,
okay, we've done a lot with this, let's move on to our next
project. Or how can we add a different pattern into this?

(34:31):
Or how can we do something different? But you do want to always
be thinking about that and saying, and that's why we always say,
come back to this methodology where it's six steps and it is iterative.
So if you're not ready. So we start with business understanding what problem
are we trying to solve. Then we move to data understanding.
We need to understand our data. We need to understand if it's,
do we have access to this data? Is it internal, is it external,

(34:54):
what type of data is it? And then from there we go to
data cleaning. So because again, we know that data is not going to
be nice and clean, and we need to do things like dedupe it
or normalize the data or whatever it is in that next phase.
Then from there then we can actually build the model, then we test

(35:15):
the model and then we put the model out into the real world,
which we call operationalization. So that would be that one question is
one phase of the chatbot. So then we come back and we say,
okay, now let's figure out the next problem that we're trying to solve
and do we have the data for that?
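
Those six phases read naturally as an explicit, repeatable loop, one narrowly scoped problem per pass. A small sketch; the phase names follow Kathleen's walkthrough, while the print-based driver is just a placeholder for real project work:

```python
# The six CPMAI phases as an iterative loop. Phase names follow the
# episode; the driver function is a placeholder for real project work.

CPMAI_PHASES = [
    "I.   Business understanding: what problem are we trying to solve?",
    "II.  Data understanding: what data is it, and do we have access, internal or external?",
    "III. Data preparation: dedupe, normalize, clean",
    "IV.  Model development: build the model",
    "V.   Model evaluation: test the model",
    "VI.  Operationalization: put it out in the real world and keep monitoring it",
]

def run_iteration(problem):
    print(f"Iteration scoped to: {problem}")
    for phase in CPMAI_PHASES:
        print("  ", phase)

# Think big, start small, iterate often: one question per pass.
run_iteration("answer 'track my package'")
run_iteration("answer the call center's second most asked question")
```

I really like the fact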
that you asked that, Val, because it's giving me a light bulb moment

(35:35):
of I have a coworker, Nick, who always says we're not here looking
for local maxima. And I feel like that's exactly what you're saying,
Kathleen, is you prove ROI on that use case. But then you have
to pick your head up and say, now what is our highest priority
problem? Was that ROI enough to maybe make the problem of that huge

(35:55):
volume coming in asking to track packages? Not our top business problem
where we need to take these people's resources, time, brain power for AI
solutions to keep pointing it in the same direction. Maybe this is where
we pivot to get the most ROI. Instead of saying, we started AI
here on the chatbot, we must continue on the chatbot. I'm telling you,

(36:15):
there's a company that has this exact work stream where there's the chatbot
AI roadmap. And they are going to run that down versus the reorientation,
like exactly what you're talking about, Julie and Kathleen, about the next
biggest problem which might have nothing to do with the chatbot or track
my package. Yeah, I like that a lot too. Oh, I love that. Not

(36:38):
looking for local maxima or something. Like, I just, I love the phrase.
Oh, see, I just always talk about diminishing returns. I feel like that's
equivalent. Yeah. But sorry, people, we are running out of time and I
have so many questions for Kathleen. I am dying to talk about skill
set. In your experience, people that are project managing with AI,

(37:01):
is it a different skill set? Is this the same skill set as
anyone doing project management or even the team that are involved, what
are the things that make the team possibly more successful? That's a great
question. So when we talk about AI and project management, we talk about
it from two angles. A lot of people are talking about what are
the tools I can use to help me do my job better?

(37:22):
And that's where a lot of like 95% of conversations are.
And there's so many tools. And people always ask me, well,
what's the best tool? And I go, I don't know. What are you
trying to do? There's so many different tools. I can't say; there's no
one tool that's best. But then how do we run and manage AI
projects? And that's where CPMAI comes into play. So what we found is

(37:43):
that when we're looking at running and managing AI projects, we get those
traditional project professionals. They're a project manager, maybe a product or
program manager, but then we also get project adjacent. So they're a data
scientist or they're a data engineer and they've been tasked with running
this project. So the skill sets really are unique and varied when it

(38:06):
comes to running and managing AI projects; it's not always
that traditional project manager skill set. And they're usually a
little bit farther along in their career as well. So we found that this
complements very nicely with the PMP, for example. A lot of people that
get CPMAI certified are also project management professionals with PMP certification.

(38:27):
They're a little bit farther along in their career. Doesn't mean that you
can't run and manage AI projects early on in your career,
but it does... We do find that they tend to be a little
bit more mid to senior in their career. That's interesting. I wonder if
that's also because so many of the things that I've heard you talk
about, both on your own podcast and today,

(38:48):
it actually requires really deep understanding of the business and the strategy
and asking the right questions. And I feel like typically those are the
skill sets that people get better at with time. I mean,
I have some amazing junior people in my team that are naturally just
very good at that. But I do find it tends to be,
you need to have a bit of experience under your belt.

(39:08):
So I wonder if that's part of the allure or if it's just
that people are more willing to take some risks.
I think it's because they know the industry, they know the real problems,
the real pain points. And then they're now solving for that.
And so AI is going to become a part of more and more

(39:29):
projects as well. So we may see a shift over time and everybody
needs to be an AI project manager because they're going to be involved
in more projects. But what we've seen so far is that it tends
to be on the, a little bit later in their career, not super
early in their career. Because you need to have some of that industry
knowledge. I mean, even thinking about ROI. What's the return that you're

(39:51):
looking for at that organization? If you're new to the industry,
you may not know some of those real pain points.
And I know at PMI you've talked previously about power skills.
Can you tell us a bit more about that? Yeah, sure.
So at PMI we call soft skills power skills. And I think that
this conversation is incredibly important. So even on AI Today podcast we've

(40:13):
talked about this and I've written articles in Forbes about this.
When we think about how we've taught in previous years, and what we
focus on with school and academics in K 12, it's been a lot
of STEM, so science and technology and engineering and math. Some of those
types of skills. And they're great skills to have. But we also need

(40:35):
to be thinking about creative thinking and critical thinking and collaboration
and communication. And so now that generative AI has put AI into the
hands of everybody, we need to really think hard about what it is
that those outputs are, and how we use them. So I always like
to think about this as two sides. So how do I use my

(40:58):
power skills to be better with large language models and generative AI?
How do I become a better prompter because of that? And how do
I take the results? How do I use generative AI to help me
with my power skills? So how do I use it to help me
be a better communicator? Maybe it can write emails in different tones that
I struggle with, or maybe it can help me with translation in ways

(41:23):
that I couldn't before. Or how does it help me brainstorm?
How does it help me bring teams together and have those collaborative sessions?
But then at the same time, how do I take my critical thinking
skills and say, was this a correct output? Maybe I shouldn't trust it. Let
me, what is it, trust but verify? Always think about what it is

(41:44):
that's coming out. Because we know that they can hallucinate. We know that
means it can give results where it's confidently wrong.
Well, okay, let me do a little bit of critical thinking here and
saying, okay, maybe drill down one level deeper. Or how can I have
better communication skills with it and do a follow up prompt or write
it a little bit differently or have it help me rewrite and tailor

(42:07):
even more finely the results that it's given? And so I think it's
really important to use those power skills and not take them for granted.
I also am really interested to see the shift now in learning. Sometimes
people have a very negative reaction to AI and they go,

(42:28):
oh, it's going to, students are going to be cheating with this or
whatever. And so they just have this do not use policy.
But of course people are going to use it. And even organizations,
if they don't really know how to manage this, they'll go,
well, you're not allowed to use it internally. Well, guess what?
They're all using it on their personal devices and it's probably way worse
because there's data leakage and there's security issues that are going

(42:49):
on and the organization can't control that. So we say, don't fight the
technology, but really lean into it and let's all use it in that
trustworthy, ethical, responsible way and not fight it, because it is going
to be here. So how do we now teach children these power skills
and help use the AI technology to help them be better at communication

(43:13):
or collaboration or critical thinking or creativity or whatever
that power skill is that you all like and want to think about.
I always think about critical thinking. I think that that's such an important
and usually underrated, under discussed skill. We are all just

(43:34):
clicking our fingers in agreement. Do you think critical thinking can be...
It's a very controversial question that I have been wrestling with for my
whole career. Do you think critical thinking can be taught or do you
think some people naturally are better at critical thinking than others?
So I think anything can be taught, but I think that some things

(43:54):
come more naturally to people. So you may not be a great
communicator, for example. You may struggle to find words, but if you use
a large language model, it can help you become a better communicator.
Same thing with critical thinking, but it's something that is like a reflex.
And so you need to really embrace that. And I think that leaders

(44:16):
on teams, colleagues can really help. And that's something that everybody
needs to be thinking about and really feel safe and empowered to have
that critical thinking and say, I understand that's what you said,
but what did you mean? Or I understand that's what you said,
but let's drill down one level deeper. And that's how you really get
that critical thinking. And I've been trying hard to teach it to my

(44:38):
children. I have two young kids. And then I also think about how
do I apply this? And this is so incredibly important because now in
the age of AI, there's a lot of misinformation, disinformation. We say you
can no longer believe what you see, hear or read. So how do
you say, did this come from a source that I can trust

(45:00):
or should I be questioning this? And okay, so an example out there
is there's a stat that Elon Musk is the richest man in the
world, and he has like 440 or 480 billion dollars, and there's 8
billion people in the world. So if he gave each person a billion
dollars, he'd still have $440 billion. And I'm like, that math ain't mathing.
But people are circulating it like it's the truth. And even one of

(45:21):
my friends sent it to me, and then I told him,
I was like, wait a second, this isn't right. And I said to
my husband, I go, what is this? And he's like, this is ridiculous.
But people aren't questioning it because we're in such a go, go, go world. And
you need to understand where this is coming from. People just hear something
from the internet, believe it, even though we say, don't believe it,

(45:42):
and then they regurgitate it like it's an actual stat. And I'm like,
please stop. That's critical thinking, just because you hear something doesn't
mean that it's the truth. So maybe do math and say,
okay, that math isn't mathing, or figure out where it came from.
And it gets harder because AI is prevalent. And so that's why critical

(46:03):
thinking is really now critical. Okay, I'm going to ask one last question,
just because that's what I like to do.
I was looking at some research the other day, and I feel like
we are so in the thick of AI from the technology perspective, we're
all living and breathing it. But it does seem that there are these

(46:25):
huge sections of society that have such a different experience.
And a lot of it is that the wider public can be quite
apprehensive about AI and that if you're trying to market a new feature
or product or whatever, potentially you don't even want to mention that
it's AI. And I was a bit surprised by that. And I was

(46:45):
going through San Francisco a couple of months back, and I was blown
away because every single ad was talking about AI. And I was like,
I don't get this. Why do all the ads reference AI?
And of course, I started chatting to people about it, and they're like,
because it's San Francisco. It's because people want to use it to attract
talent. And, like, look how shiny we are. We're doing the cool thing,
but that's not necessarily the same as what the customers want.

(47:09):
Is that a tension that you've noticed? Like, I don't know,
companies have to package it up and maybe not fully show
that this is AI solving your problem.
Yeah. So, I like how you brought that up, because
San Francisco is Silicon Valley. So they're very tech forward and tech leaning.

(47:31):
And a lot of this is coming from there. So of course they're
going to be pushing that. And that landscape does look different than
other parts of the country or the globe.
You also have to think about what industry you're in and some industries
are embracing AI a lot more than others. That's like a heavy technology.

(47:52):
And probably most of those ads were heavy in tech. And you think
about all of the tech companies that are from there. But then there's
other industries that are not as forward leaning with AI even if they're
using it. And that's for a number of different reasons. Like healthcare,
there's a lot of applications that could be used but aren't always used

(48:13):
or are used, what we call augmented intelligence, where it's not replacing
the human but helping them do their job better for a variety of
different reasons. You can't have AI systems diagnose patients. So they
can provide a diagnosis, but then the doctor needs to actually provide that,
at least in the States; only in very limited use cases can you actually

(48:34):
have an AI system diagnose a patient. Construction also is an industry
that is not a heavy adopter of AI. Yes, of course there's applications
for it, especially when you think about job sites, the recognition
pattern is being used to make sure that people are either not on
the site when they're not supposed to be. So keeping that watchful eye

(48:57):
over it, or for safety reasons, making sure that they have on protective
gear, hard hats, and it can be monitored in real time and then
you can fix it in real time so that you can prevent injury.
And so I think that it depends on the industry. And also there's
a lot of fears and concerns when it comes to AI that we
don't feel with other technologies. I don't think people fear mobile technology,

(49:19):
for example, as much as AI. And this comes from a variety of
different reasons. Science fiction, Hollywood. We conjure up all these different
ideas of what good and bad AI can do. We think about HAL
or the Terminator or Rosie from the Jetsons, and we don't have this
when it comes to other technologies. So people have real fears which are

(49:42):
emotional and concerns which are more rational, and we need to be addressing
that. So messaging plays a part in all of that. And I think
that it depends on the industry, it depends on the user, the use case.
And so we shouldn't hide necessarily that we're using AI, but we don't
always need to be so forward leaning if the industry isn't quite ready

(50:02):
to embrace it. Thank you so much, Kathleen. That was such an incredible
place to end. Yeah, I think we're all blown away. We're going to have
to do a part two at some point if we can drag you
back. But we do like to end the show with something called Last
Calls where we go around and share something interesting we've read or come
across or an event that's coming up. You're a guest. Is there something
you'd like to share with the audience today? Sure. I mean,

(50:23):
obviously AI Today podcast. I think it's wonderful. It's been going on now
eight seasons and we're in the middle of a use case series.
So if people want to see how AI is being applied in a
number of different industries, then definitely check that out. And also
one event, I've been an Interactive Awards judge for South by Southwest
for a whole decade now. I can't believe it.

(50:43):
I know. And I'm going back, so really excited for that.
And PMI is going to have a presence there, and so I'll be
on a panel discussion. So I think that that's pretty exciting. Yeah. I
can talk AI all day, every day. So I'll be
a judge at the Interactive Awards live. So it's March 8th when the

(51:04):
judging happens and then my panel will be a day or two later.
Nice. Thank you. Very cool. Julie, what about you? So I'm pretty proud,
this week I finally tried out making a gem in Gemini and I
don't know if any of you guys have tried it, but I was
really proud. It was just one of those things on my to do
list. I'm like, I want to play with it, I want to do
it. I kept putting it off. I didn't find time.

(51:26):
Was at work and found a great use case for it.
And so I finally took the time to do my pre prompting.
And actually part of what I wanted to call out here was that
I finally understood what it was doing. I would hear everyone at work
say, I set up a gem, I'm recreating myself. It's the coolest thing
ever. It can do so many things for me. And I'm like,
whoa, okay, I'm intimidated, but it sounds awesome. So when I sat down

(51:49):
to do it with some of my colleagues, they were explaining to me
that what it's doing is you're pre prompting Gemini, and so you get
to save all this information. So, for example, I said the role I'm
playing is a consultant in analytics and experimentation. This is my title.
Here's my LinkedIn, here's what I focus on. Please use with every answer

(52:11):
the context of these couple documents that I gave it. And in those
documents I was able to give it a lot of
slideware and other documents I've created in the past of saying,
like, this is the topic I want you to reference when I'm asking
you these types of questions. And so once I really understood that it
wasn't magic, you weren't giving it just a subset of data,

(52:33):
you were pre prompting the model, it was like it finally really clicked.
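
What Julie is describing is a saved system prompt plus reference context that gets prepended to every new conversation. A library-free sketch of that pre-prompting idea; the persona text, document placeholders, and message format are stand-ins, and a product like Gemini's gems handles this storage for you:

```python
# Library-free sketch of 'pre-prompting': a saved persona and reference
# docs are prepended to every request. All strings are stand-ins.

GEM_PREAMBLE = {
    "role": "system",
    "content": ("You are assisting a consultant in analytics and experimentation. "
                "Ground every answer in the attached reference documents."),
}

REFERENCE_DOCS = ["<contents of past slideware>", "<contents of a charter doc>"]

def build_messages(user_prompt):
    """Prepend the saved pre-prompt and context to a fresh question."""
    context = [{"role": "system", "content": doc} for doc in REFERENCE_DOCS]
    return [GEM_PREAMBLE, *context, {"role": "user", "content": user_prompt}]

# The model sees the same persona and context every time:
for m in build_messages("Draft a charter for a thought leadership group."):
    print(m["role"], ":", m["content"][:60])
```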
And I tried it out today. I said, I'm trying to spin up
this specific thought leadership group. I gave it a few sentences of things
I had brainstormed. I gave it to my gem, who I named Juniper.
And I'm embarrassed to say, I literally went to ChatGPT and was like,

(52:53):
what are fun names for gems? Because I was not feeling creative that
day. Stop it. No you didn't. That's. Yeah. Anyway, so Juniper,
I asked Juniper for this. It gave me like a two page outline
for the whole charter of the group. And
it was a little broad, like, I'll take it and change it.
But yeah, I was very impressed by this gem. So something fun to go

(53:16):
try. It was less intimidating than I thought. Very nice. I like that.
Over to you, Val. So mine is a twofer, but they're actually related,
and it's actually more related to this conversation than I was originally
even anticipating, which I love. So first of two is a medium article
called Thinking in Maximums: Escaping the Tyranny of Incrementalism and Product

(53:38):
Building. And it's all about the local versus global maximum.
It goes through all these use cases of like, why MVP thinking is
actually problematic in some cases. And all these stories of companies that
actually swung big and why that's so much better than taking it down
to the smallest bolts of the product and getting feedback and

(53:59):
not really being tied to the full vision. Which, I'll just call out,
I'm not sure I agree with all of this; I just find it interesting.
And then I was also listening to a podcast from the Product School.
They were interviewing the CPO of Instacart. And one of the call out
quotes from that was, you won't hear me say or use the word
MVP because I find it to be very reductive. And I think that

(54:21):
product is so much bigger than that. So anyways, I'm just,
I've been doing some research around, is this a theme in the product
world and product space and how they're thinking about this? Because obviously
as someone who has an experimentation background, I'm very much a fan of
de risking and think big, start small, Kathleen, which I love that.
So. But two interesting reads from very different POVs than where I stand

(54:44):
and thinking about how to break down and think about the work and
de risking choices as you're moving along the process. So two good ones
there. Those are good. I can't wait to read those.
Yeah. And how about you, Moe? Mine have nothing to do with the
show. Mine are just two fun ones. Well, one is Canva Create is coming
up next month, April 10th in Hollywood Park, Los Angeles, which I am

(55:09):
super excited about. It's just, yeah, really fun atmosphere and we always
have some incredible speakers. So super pumped about that one. The fun one,
so I had a session yesterday with my mentee and she started talking
about Gretchen Rubin, and she's like the four tendencies and blah,
blah, blah. And I was like, this sounds really familiar. And then I
realized I'd listened to a podcast on it, but the podcast was applying

(55:32):
the four tendencies to children, and how you raise your children.
And then I'd never actually gone back and read the total work of
Gretchen. And so we had a really interesting conversation about it.
It basically talks about, whether you're an upholder, an obliger, a rebel,
or a questioner. And it's basically to do with where your motivation comes
from, if it's an internal motivation, external, both, etcetera. And the

(55:56):
thing that blew me away is that I had listened to it and
been like, this is the one that I am. And then as I
was talking about it more and more, I was like, oh,
I'm a different one. And then I did the quiz and I was
like, I'm actually a completely different one to what I thought.
So that was like a really big eye opener because, yeah,
I've been thinking a lot about my own motivations and how I can

(56:18):
get the best out of myself. And life and balance and all of
these things. So it was actually also just like a really nice way
to break up my day. So I'm going to have, my poor team
don't know it yet, but I'm going to ask them all to do
the quiz because I'm so interested to see what everyone is.
So, yeah, those are my two last calls. Just to wrap up,
I want to say a massive thank you, Kathleen. This was just phenomenal.

(56:38):
We have not even touched the sides of all of the possible directions
that we could have discussed with you. But a very big thank you
for coming on the show today. Yeah, thank you for having me.
This was such a wonderful discussion. And we can't end without saying a
big thanks also to our producer, Josh Crowhurst and all of our wonderful
listeners out there. If you have a moment, we'd love if you could

(56:59):
drop us a review on your favorite podcast platform.
And I know I speak for Val, Julie and myself, no matter how
many problems you're solving with AI this year, keep analyzing.
Thanks for listening. Let's keep the conversation going with your comments,
suggestions and questions on Twitter @analyticshour, on the web at analyticshour.io,

(57:25):
our LinkedIn group and the measured chat Slack group. Music for the podcast
by Josh Crowhurst. So smart guys wanted to fit in. So they made
up a term called analytics. Analytics don't work.
Do the analytics say go for it no matter who's going for it?
So if you and I run the field, the analytics say go for
it, it's the stupidest, laziest, lamest thing I've ever heard for reasoning

(57:51):
in competition. Quick before you drop, Kathleen, when you were talking about
the communication skills helping with the way you communicate, informing
the prompt engineering, and even what you were talking about, Julie?
ChatGPT did me dirty. So you know how it shows all... It's essentially

(58:11):
showing your search history in that left rail unless you hide it.
I went back to end of 2023, and half of what...
My responses were, give me three more. Give me three more.
And that was, I was giving it no more direction or information.
I was like, give me five more. Make it funny. Give me five

(58:34):
more. It was like all I said to it. To be fair,
sometimes I say, do better. I was like, no additional information.
Just try harder. Rock Flag and AI is a data problem.
Ooh, that's a good one.