Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
When it comes to kick-starting your enterprise AI strategy, leaders are often flooded with ideas and a deep sense of urgency, but without a clear framework, even well-resourced teams can end up spinning their wheels, because the reality is only 1% of organizations consider themselves AI mature.
The rest are just stuck in what today's guests call shadow AI,
(00:20):
a mess of disconnected proof-of-concepts, siloed data and untracked value.
In this episode of the AI Proving Ground podcast, we're talking with Kathleen Nowicki and Yoni Malki, two well-versed AI consultants who've helped Fortune 100s escape that so-called POC purgatory, and they'll walk us through a deceptively simple three-step process for doing AI right, not
(00:42):
flashy but foundational, so you're building flywheels and not just going through fire drills.
If you're tired of AI conversations that start with "we need a chatbot" and end with "what did that even accomplish?", well, stick around, because this episode may change the way you approach enterprise AI from the ground up.
This is the AI Proving Ground podcast from Worldwide Technology, everything AI, all in one place.
(01:04):
Let's jump in.
Kathleen, Yoni, thanks for joining us on today's episode of the AI Proving Ground podcast.
Speaker 2 (01:20):
Thanks, Brian.
Speaker 1 (01:22):
Well, yeah, no, absolutely. Jinx.
Let's start with this: I read a recent report from McKinsey that talks about how only 1% of organizations consider themselves AI mature, and I'm wondering, from the perspective of, you know, developing, identifying and prioritizing use cases, what is that 1% doing that the 99% are simply just getting wrong?
Speaker 2 (01:44):
So I'll take this, Kathleen, just to get started.
So I think what you hit on first is that there's some upfront work that has to get done to get started on the right foot, and I think that's going to be the majority of what we talk about today in this podcast: what needs to be done upstream of actually building and deploying AI models and use
(02:07):
cases. But there's a whole host of other things as well that we're not gonna talk about today, things like having the right platform in place, being able to reuse components from use case to use case, having your data set up in a way that you can continuously build and grow your AI capabilities.
(02:28):
All these things are what set off what we call the flywheel effect, where we can pump out use cases, generate value in organizations, and that's really what I think the 1% of the organizations are doing well. It's a combination of getting started up front correctly, on the right foot, but then also setting themselves up for success so they have this flywheel effect.
Anything else you wanted to add, Kathleen?
Speaker 3 (02:50):
Yeah, I love that flywheel. It's like you build up the momentum, you know, prove the value, generate the value and then keep it going. And I think, to what you're sharing, we really want to cover today how do you get the flywheel going and prove the value of it organizationally to jumpstart the AI journey.
Imagine yourself: I think, for companies, large
(03:13):
organizations that are starting their AI journey, it can feel like you're standing at the base of a giant mountain and you're trying to figure out, where do I put my foot down first? You could hike in some random direction and, even worse, if everyone starts hiking in different directions, you have,
(03:34):
you know, sheer chaos, duplicative efforts. It's very expensive, right? I think we call it shadow AI, right, Yoni?
Speaker 2 (03:45):
Yeah, yeah, shadow AI is the term. It's kind of a play on the words from shadow IT.
Speaker 3 (03:48):
Now we're moving over to the AI world. Yeah, but the value is not tracked well, and I think, you know, in large organizations, if you can't track the value, right, you might as well not have even existed, kind of, if...
Speaker 1 (04:01):
If a tree falls in a forest type of situation, yeah. Yoni, I like that you mentioned what we're not going to focus on today, because I do want to ask: it's probably very easy for organizations to jump into those components from the onset, given the need and the rush from executives or boards to move
(04:21):
fast with AI. You're probably thinking to yourself, you have to build a system or you have to get all of your data estate in order. How quickly, or how easy, is it to fall into that trap of wanting to do all that first, before you really start applying some, just, you know, hypothetical thinking to all this?
Speaker 2 (04:39):
Yeah, it's very easy, and, you know, that's actually where we're going to go. Here is the reason why; I mean, it's kind of ironic, right? You know, before the explosion of ChatGPT onto the scene at the end of 2022 and beginning of 2023, we were working in a world of just doing machine learning models, and machine learning models take a lot of effort to get up and running.
(05:01):
You've got to get all the data in the right place and you've got to be able to train these models based on your company's information, and so it was a lot of work to get going. Now you fast forward to today and we have these LLMs, and they're pre-trained on the entire internet. So running a very complex AI model like these LLMs is
(05:26):
actually, ironically, very easy to do. So it can be extremely tempting to just get started and start building, because you will see some value. You will see these models respond back to you in human-like terms and answer questions that could be valuable to your organization. And the hard part, actually bringing your organization's data to those LLMs, that is pretty challenging to do right. But to get it just kind of good enough that you can see answers, again, about your organization, isn't that hard.
So while these models are extremely complex, the ability to spin them up is pretty easy, and that actually leads to this POC chaos.
And if you look back, I just look back at the time from 2023,
(06:12):
2024, it really was, most of the organizations we talked to just had POCs all over the place. And in fact, Worldwide, we were no different. We had shadow AI everywhere. It was hard to kind of track the value, and people, you know, were kind of doing things in their own silos, and
(06:33):
you know, you run into a lot of issues here, where the use cases that are being built really only matter for a very few people in the organization. Or even if it could matter for, like, a lot of different people, it's siloed, so it never actually expands out to all the other areas. Or the use case just becomes extremely, just kind of,
(06:58):
half-baked, because there's no greater authority or the organization pushing you to build it to the end and make it really fantastic, and it kind of just dwindles and dies on the vine.
So there's just a lot of things that happen if you kind of go forward and just start building and you kind of take that temptation.
(07:18):
And so I think what we want to talk about today is how do you avoid that? And so I wanted to just pass it back to Kathleen here, because I know, Kathleen, we've talked about this before. You know, how do we stop that from happening, get ourselves aligned on the right foot, and do it in a way that it's not just overbearing and onerous?
(07:38):
You know, we have to slow down to speed up, but we don't want to take the entire year to do something like that. What's your thoughts on that?
Speaker 3 (07:46):
Right. And that's where we talked about leaning on our kind of consulting toolkit to really get alignment, to, you know, achieve this kind of crucial paradox of slowing down to speed up, not for the sake of bureaucracy, right, but to really build a strong foundation, get alignment and, you know,
(08:08):
bringing it back to our mountain metaphor, plan your path to the top in a way that's going to help, you know, Brian, more organizations get to that high level of maturity that you mentioned.
And so this is where we want to tee up this, like, three-step process.
(08:29):
There's lots of consulting frameworks in here, forgive me, Yoni and I are, you know, consulting geeks, but step one, you want to exhaustively and comprehensively identify potential use cases using what we call a driver tree framework, and we'll go into details here. But once you have, you know, an exhaustive set of use cases
(08:52):
identified that are tied to organizational value, then you want to organize and prioritize those, leveraging kind of two frameworks or approaches that come from the consulting world: a value complexity matrix, and that is fueled by
(09:12):
hypothesis-based thinking, using a data-driven approach to kind of validate the placement on the value complexity matrix. Again, we'll go into some details here. And then the third step is really to select the best use cases, leveraging what we call the 80-20 rule.
So that's how you get the flywheel started, right. Those three steps help you achieve the flywheel, and in a
(09:37):
world where AI is only an API call away, it can be very tempting to skip this step and just get to action. But, you know, we really encourage you to kind of take this three-step approach to get set up for success, so you don't have the failures, the common pitfalls that Yoni was just chatting through.
Speaker 2 (09:56):
Yeah, and I don't want people to think, also, Kathleen, while what we're talking about today is that part upstream of the actual build, you know, everything that you need to do before you get there, I think we also want to send the message that, while you do it before you actually start building, even after you do start building, you probably want to continue some of these frameworks throughout and continue to iterate, right?
Speaker 3 (10:18):
That's right, that's
right.
Speaker 1 (10:27):
Yeah, well, let's dive into the identification here. Who should be at the table? You know, the executives probably have the vision, but there's probably a certain subset of employees that are continually kind of testing out new AI tools or understanding the art of what's possible. So who are you bringing to that table to have that discussion, or at least start to identify and bring in the right people?
Speaker 3 (10:47):
That's a great question. You really want to have representation across the various business units. The name of the game here is not creating these siloed use cases, and you really want to start at the use case level. We are not talking about tools, right, AI tools, right now, off-the-shelf things to buy, right? We are talking about business challenges, business
(11:09):
opportunities. And so, to make sure that we don't fall into the common trap of, you know, a solution looking for a problem, we like to start with this driver tree concept. And again, you're bringing, you know, lots of business units to the table, executives, and getting a really good workshop going is typically
(11:29):
how we suggest doing this. But your starting points of this driver tree, the common drivers that you want to bring to the table when you're first starting, are going to be, you know, revenue growth, cost reduction, risk reduction, maybe employee productivity, customer satisfaction, and there's
(11:52):
potentially secondary ones related to innovation enablement, or maybe compliance-related value, or operational speed and agility. But I do think oftentimes, when we're talking about organizations in corporate America, we've got revenue growth and cost reduction, or potentially risk reduction, as top drivers. And so from there you want to go and break out,
(12:16):
you know, and this is where it's so critical to have lots of business unit representation at the table, brainstorm: okay, what would be a revenue growth, you know, concept in your business unit, you know, chief marketing officer? And their use case is probably going to be pretty different from a use case
(12:37):
that's generated in HR or, you know, in a manufacturing business unit, right? So that's why it's so important to have a lot of representation at this use case generation, because the ideas are going to be very different. But if you start with this driver tree that's rooted in the business outcomes, kind of starting with the top of the summit, the top of the mountain, in your mind, so that we're all
(12:59):
charting towards the same final destination, that's really key.
Speaker 1 (13:03):
Yeah, yeah. Is this a situation where you're thinking no idea is a bad idea, or do you want to put some focus on this driver tree?
Speaker 3 (13:13):
No idea is a bad idea, but you do want it to fall into the framework, right? You need to be able to tie the ideas to those original business outcomes, and if you see this on paper, it kind of starts with these big buckets and they branch out into, you know, we say MECE, mutually exclusive, collectively exhaustive, buckets, and then it may branch out again and again into their
(13:36):
component parts. And it's just an upfront brainstorming session to get all the ideas on the table. We'll talk in the next, you know, step here about how to actually organize and prioritize this, but at this point there's no bad idea on the table.
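To make the driver tree concrete, here is a minimal sketch in Python. The drivers, sub-drivers and use case names are illustrative placeholders, not WWT's actual tree; the point is just the shape: top-level business drivers branch into MECE sub-drivers, which branch into candidate use cases.

```python
# Illustrative driver tree: top-level business drivers branch into
# mutually exclusive, collectively exhaustive (MECE) sub-drivers,
# which branch into candidate AI use cases. All names are placeholders.
driver_tree = {
    "Revenue growth": {
        "Sales effectiveness": ["Account research assistant", "Proposal vignette finder"],
        "Marketing reach": ["Campaign copy drafting"],
    },
    "Cost reduction": {
        "Proposal operations": ["RFP first-draft assistant"],
        "Internal support": ["Knowledge-management chatbot"],
    },
}

def flatten(tree, path=()):
    """Walk the tree and yield (driver path, use case) pairs."""
    for node, children in tree.items():
        if isinstance(children, dict):
            yield from flatten(children, path + (node,))
        else:  # leaf: a list of candidate use cases
            for use_case in children:
                yield path + (node,), use_case

for drivers, use_case in flatten(driver_tree):
    print(" > ".join(drivers), "->", use_case)
```

Every idea generated in the workshop ends up attached to a path back to a top-level driver, which is what keeps the brainstorm exhaustive rather than a pile of solutions looking for problems.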
Speaker 2 (13:50):
Yeah, I mean, maybe it would help if I brought it to life with an example here, right? Yeah.
Speaker 3 (13:54):
Because Yoni led the internal ideation on use cases, right?
Speaker 2 (13:58):
Yeah, so at Worldwide, we went through this ourselves back in early 2023. Again, we had shadow AI, you know, everywhere, and we needed to kind of take a step back and get this organized. And the first step is exactly what Kathleen was talking about: we had to bring the right people to the table. So we had the heads of each business unit as key
(14:22):
stakeholders here, and they brought their director-level people to the table as well. We had people from technology, as well as, all the way up to our CEO, who was very deeply involved in this. And that actually set the stage for us creating this driver tree around revenue growth and profitability, or how do we reduce costs, right? Those
(14:44):
were our main buckets, and we wanted to kind of take use cases that were focused more on one side or the other. And so at Worldwide, our first two use cases that we went after, based on the driver tree, were Atom, which is Atom AI, it's our
(15:04):
internal knowledge management chatbot that kind of knows everything about Worldwide, and the RFP assistant. And it was interesting, because Atom, being a general knowledge system and a question-and-answer chatbot, you can go in a thousand directions, and each one of those can have, and they have, use cases.
(15:24):
There's multiple use cases for Atom.
But going down the revenue path, what we found, you know, we were looking at things like marketing, things like sales, um, how we do our services capabilities. Where we landed was we wanted to focus on sales efficiency and sales effectiveness, and so our first couple use cases with
(15:46):
Atom were focused on making salespeople more effective at their job by being able to pull together, uh, vignettes or, you know, "what have we done in this industry" type questions. Or, can you tell me an example of when we did this type of project in the healthcare industry, you know, focused on increasing patient, you know, throughput or something along
(16:09):
those lines? Right, and so that allows them to prepare for meetings really quick. Then, on the flip side, we at Worldwide respond to a lot of RFPs. That's a big part of our job. We have a whole proposal team, and it takes them a long time to get to the first draft of the response to the RFP, to actually understand what the RFP is actually asking, and then to
(16:31):
ultimately just respond and get to our final response. And so that, to us, felt like a cost reduction type of driver, where we could reduce the time and effort it takes for this team to respond to RFPs, and thus, you know, allowing us to save on costs there, in time.
Uh, on the Atom side, you know, with the sales organization, it
(16:54):
was fantastic. I think, you know, we were seeing numbers like 25 to 30%, uh, gains in how quickly they can, uh, come up with a sales pitch, a way for them to be more effective in how they're talking to their clients on a day-to-day basis, and we have a lot of metrics around that.
(17:16):
On the RFP side, it was pretty interesting. Like I said, we went in thinking it was gonna be a cost-cutting activity, but actually, the fact that we can now respond to RFPs in a more efficient manner, we realized there were so many RFPs that we just said, you know what, we don't have time, we can't get to this one, we don't have enough people right now. And now we can actually respond to RFPs way quicker, way easier. There's some that, you know, could be a moonshot for us, but we're still going to respond with our best effort, and it could only take us a day or two rather than two weeks, and
(17:38):
that's led to some top-line growth for us as well. We're responding and winning more RFPs than we ever have. Our time to first draft of an RFP is 80% faster.
Speaker 3 (18:02):
That just translates into cost savings, but also revenue growth, so that was a nice surprise for us from a driver tree perspective. The next step of our toolkit here to get started, to build the flywheel momentum in an aligned, smart way, is to organize
(18:35):
and prioritize your use cases, right. So you've just, in step one, generated a laundry list of them, great, from different business units. But now which ones do you want to get started with? You need to organize and prioritize. So we leverage this framework called the value complexity matrix.
And that sounds complicated, but it's not. Just imagine, you know, a simple graph where your y-axis, the one going up
(18:57):
and down, is your value, and your x-axis, going across, is complexity. Okay, and you take your use cases, and of course it's hard to precisely quantify the value and complexity of each use case, you know, when you've just created this laundry list.
(19:19):
But you make some assumptions, leveraging hypothesis-based thinking. Okay, there we go, more consulting terms. Hypothesis-based thinking, to place each of your use cases on your two-by-two here. And you say, okay, Yoni, your RFP assistant, we think we can
(19:42):
cut down on first draft generation by 80%, right? How do you quantify that into the value, i.e. cost savings? Right, because the value, in this case, if we're focused on revenue and cost, I mean, value is a dollar-driven number. So you make assumptions, you
(20:05):
develop a little back-of-the-envelope model, and you have value. On the other axis is complexity. So do we have the data readily available to ingest, to train, you know, to fine-tune the models? Do we have the computational power we need? Yoni, what are the other
What are the other?
Speaker 2 (20:22):
pieces.
Yeah, those are the bigcomplexity ones from a technical
perspective.
But there's also regulatorycomplexity, compliance
complexity.
There's sometimes even withinorganizations, political
complexity.
Some use case may seem amazingfrom a value perspective, but
there's maybe three groups thatare fighting for the ability to
do that use case and it's justnot worth it to do it right now.
(20:44):
It's just going to cause toomuch stir.
So there's a lot of differentthings you have to consider when
you're thinking aboutcomplexity.
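As a rough illustration of the back-of-the-envelope scoring being described, here is a small Python sketch. The dollar figures, complexity scores and cutoffs are invented placeholders, and the quadrant labels are just one way to read the two-by-two; the real placements would come from workshop hypotheses and be refined with data.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    value_usd: float   # hypothesized annual value (back-of-the-envelope, refined with data later)
    complexity: float  # 1 (easy) to 5 (hard): data readiness, compute, regulatory, political factors

# Back-of-the-envelope value for a hypothetical RFP assistant:
# hours saved per first draft * RFPs per year * loaded hourly rate. Placeholder numbers.
rfp_value = (40 * 0.8) * 150 * 90  # 80% of a 40-hour draft, 150 RFPs/yr, $90/hr

candidates = [
    UseCase("RFP first-draft assistant", rfp_value, 2.5),
    UseCase("Knowledge-management chatbot", 1_200_000, 3.0),
    UseCase("Predictive maintenance pilot", 2_000_000, 4.5),
]

VALUE_CUTOFF = 400_000      # thresholds are judgment calls, revisited as hypotheses are refined
COMPLEXITY_CUTOFF = 3.5

def quadrant(uc: UseCase) -> str:
    high_value = uc.value_usd >= VALUE_CUTOFF
    high_complexity = uc.complexity >= COMPLEXITY_CUTOFF
    if high_value and not high_complexity:
        return "low-hanging fruit"
    if high_value and high_complexity:
        return "moonshot"
    return "deprioritize" if high_complexity else "quick win, modest value"

for uc in sorted(candidates, key=lambda u: (-u.value_usd, u.complexity)):
    print(f"{uc.name}: ~${uc.value_usd:,.0f}/yr, complexity {uc.complexity} -> {quadrant(uc)}")
```

The point is not the math, which is deliberately crude, but that writing the hypothesis down gives subject matter experts something concrete to react to and correct.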
Speaker 3 (20:50):
Yeah, and then, so you come up with your first, you know, hypothesis-based placement of your use cases on this value complexity curve. Right, and it's not perfect, and you know that. But it's a starting point to help you select your best use cases. Now, a critical part of hypothesis-based thinking, and I
(21:21):
say this as a recovering scientist, is, you know, before you develop final selections, maybe you want to go gather some additional data from the proposal team to, you know, gather information on how long it takes to do X, Y, Z. Like, maybe you want to actually go and gather data to refine your hypothesis on the use case placement on the value versus complexity matrix.
Speaker 2 (21:47):
And the one thing I'll just add to this is that, just like it was tempting to just get started on building AI, it's very tempting, especially as I'm a recovering scientist as well, and it's hard for some people to make a guess, make a hypothesis, in the absence of information. And so a knee-jerk
(22:08):
reaction may be, I need to just go gather information first before I make a hypothesis on what the value is or what the complexity is. But that could lead to analysis paralysis, and it also makes it very difficult for people to kind of react. And so generating the hypothesis first, in the, you know, in the absence of, or with a lack of, information, is a good thing.
(22:29):
You could be completely wrong, but now you have it up on a page, you show it to the subject matter expert, the person who's in charge of running this use case, and now they know exactly, kind of, what they need to talk to and how they need to refine it. So it's important to be able to just kind of put your foot out there and just make a hypothesis to get started, and then you
(22:51):
can get to the data to refine it. It will also direct you to the right data that you need to refine it in the right direction.
Speaker 3 (22:59):
This episode is supported by Vast Data. Vast delivers a unified data platform purpose-built for AI and advanced analytics, eliminating silos, accelerating insights and scaling to meet the demands of modern enterprise workloads.
Speaker 1 (23:14):
Well, Yoni, bring us back to the internal example here. You talked about the two use cases that we did move forward with, Atom AI and RFP Assistant, which are going phenomenally for us. But I'm interested in that two-by-two chart. How did it compare to other use cases? Because we weren't just dealing with those two; as I understand it, we had dozens upon dozens of use cases, if not more, that we were also considering.
(23:34):
So how did we make those weighted decisions?
Speaker 2 (23:37):
Yeah, I mean, it's exactly how Kathleen laid it out. You know, we had 82 use cases after all was said and done. 82? 82. Oh my gosh. Weeks, four weeks or so, in war rooms with different lines of business, different subject matter experts, technology, trying to make sure that we're all aligned on what
(24:01):
this value is.
data, we were shifting things.
I think at one point the RFPassistant was probably ranked
number 64 on the list.
But after we continued torefine it we kind of realized a
bunch of things about some ofthe use cases that were above it
, about the RFP assistant itself.
Adam was always top on the list.
(24:22):
It was kind of the thing thatyou need to do to get started
and we knew it was going tobranch off and be able to be
used for a number of differentuse cases in its own right.
But RFP assistant startedpretty low and kind of bubbled
up after all this refining.
So that's how we went about it.
We are just to fast forward totoday.
We still have this valuecomplexity matrix.
(24:43):
It's evolved immensely overtime.
We're constantly in taking usecases, removing some that we
don't think are good anymore,some we thought we need to build
custom, but now they have offthe shelf products for that, so
no reason to reinvent the wheel.
So there's a lot of things thathappen with this, but getting
started off with that on theright foot, it is now a place
(25:06):
for everyone to react to, foreveryone to you know, um, align
against and it and it just kindof like is our.
It's our place we go to whenwe're trying to think about what
are we doing next and where arewe getting our money yeah, it's
a, it's a great backlog.
Speaker 3 (25:20):
Yeah, it's a great backlog now, right? And this starts to tease towards, you know, if Yoni and I are invited back, a part two podcast around how do you sustain the momentum and keep the flywheel going. But, like, you know, how do you continue cranking through that backlog of use cases?
Speaker 1 (25:36):
Yeah, that's another conversation. I will definitely, or we will definitely, invite you back for a part two here. But I'm curious, you know, how do you balance the need to go for some more moonshot projects that might be harder to accomplish, versus the low-hanging fruit that might build momentum towards going for those moonshot projects? So, you know, are you looking for just the quick, easy wins,
(25:57):
or are you looking for, you know, something that you can sink your teeth into?
Speaker 3 (26:01):
You just teed up step three of our process perfectly here, so thank you for that. We, you know, we recommend taking the quote-unquote 80-20 approach for actually selecting your use cases, and this is not about blindly selecting everything that's in that, you
(26:22):
know, bright green quadrant. If you break up your matrix into, you know, four squares, essentially, then your high value, low complexity is going to be your low-hanging fruit, obviously, right, and you clearly want to get started with some of those. That's going to be where you're driving value with not as much
(26:44):
effort. But maybe your high value, high complexity will include some of your, you know, moonshot, as you say, use cases, and you probably want to go for some of those too. And so this is where you go from a math algorithm to using a conversation, leveraging the 80-20 approach and just
(27:08):
strategically picking the right use cases to go after, once you've organized them in a way that a leadership team can wrap their head around and make a decision around what to move forward with and how to get started.
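Below is a minimal sketch of what that 80-20 selection might boil down to, reusing the quadrant labels from the earlier sketch. The backlog entries and the "mostly low-hanging fruit plus one moonshot" policy are illustrative assumptions, not a prescribed algorithm; in practice the final call is a leadership conversation, not code.

```python
# Illustrative 80-20 selection over a prioritized backlog (placeholder entries).
backlog = [
    ("Knowledge-management chatbot", "low-hanging fruit"),
    ("RFP first-draft assistant", "low-hanging fruit"),
    ("Predictive maintenance pilot", "moonshot"),
    ("Contract clause summarizer", "quick win, modest value"),
    ("Autonomous supply-chain planner", "moonshot"),
    # ... the rest of the backlog would sit here
]

def select_portfolio(backlog, max_items=3, moonshots=1):
    """Spend ~80% of effort on the top ~20% of use cases: low-hanging fruit
    first, plus a moonshot if leadership has the appetite and patience for one."""
    fruit = [name for name, quad in backlog if quad == "low-hanging fruit"]
    shots = [name for name, quad in backlog if quad == "moonshot"]
    picks = fruit[: max_items - moonshots] + shots[:moonshots]
    return picks[:max_items]

print(select_portfolio(backlog))
# e.g. ['Knowledge-management chatbot', 'RFP first-draft assistant', 'Predictive maintenance pilot']
```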
Speaker 2 (27:20):
Yeah, and I'll just use yet another consulting phrase here: it depends, right? It depends on the appetite of your company and the culture of your company. There are some companies that are very risk-averse, and there are some that are, you know, risk-tolerant, and so it depends on the leadership, how much momentum you actually need
(27:45):
to build to create that flywheel. There are some organizations that just need a little bit of momentum and they can get going, and so you can sustain kind of taking on a really complex use case, because the leaders and the board and whoever have some patience. And there are others where it's like, no, you need to keep on just getting those quick wins and proving yourself.
(28:05):
But this, again, having this 80-20 kind of mindset, where you're spending 80% of your time on 20% of the use cases and only focusing on those that matter to your organization, and now you have this with some categorizations of low-hanging fruit or moonshots, however you want to call them, on the four quadrants.
(28:26):
It gives you direction, and it gives you an ability to make meaningful presentations, and everyone knows what you're talking about, because, you know, bringing it back to the original analogy of the mountain, we've circled everyone back to the same spot on the mountain and we're all marching up together.
Speaker 3 (28:42):
Yeah, and, importantly, you're marching up because you've rooted all of your use cases in one of our common, you know, driver tree starting points of revenue generation or cost savings or, you know, productivity. And I do think that, as we have launched our, you know, they're not POCs anymore, they're production,
(29:03):
you know, capabilities, with the RFP assistant, Atom, and more, as Yoni was discussing, you know, we take a lot of effort to track the value and continue reporting on the value created from these capabilities, because that's, you know, important to demonstrating that AI is transforming how we're working, and continuing, you know,
(29:23):
earning the right to pursue additional use cases.
Speaker 2 (29:27):
Yeah, and you know, you can't change what you don't measure. And so sometimes the use cases feel good, they feel like they're going right, but then you kind of measure things and, actually, it's okay. Or sometimes things feel rocky and they're not really moving along, but when you kind of take a step back and look at how things have changed, you're actually making a pretty meaningful, impactful difference.
(29:48):
I think the other thing to mention about a use case-driven approach is that we often find with all of our clients that use cases beget use cases. There's sometimes use cases you didn't even think of when you were getting started, but because you did one use case that was related, you kind of uncovered something else
(30:11):
that you would never have thought of before, and that's happened, I would say, 99% of the time on client engagements. It's happening within Worldwide as well.
Speaker 1 (30:20):
Yeah. Well, I mean, as the two of you state this out, it feels and seems rather simplistic, and it is pretty straightforward. But I'm curious, as we head out into the real world, where things are never as easy as they seem, where are we finding stumbling blocks with the clients? Is it with bringing the right people to the table? Is it aligning, because that feels like several landmines in
(30:41):
and of itself, aligning everybody behind one or two use cases? Or is it just openness to revisiting things? Yoni, your example of the RFP assistant, where we thought it was going to save money, it ended up saving money but also driving growth. So where do we see stumbling blocks? And, you know, better put, what should listeners be avoiding, or
(31:02):
what should they try to avoid here, so that they can make it as simple as you stated?
Speaker 2 (31:07):
Yeah, I think I'll just speak freely here about some things. I think, you know, everything we talked about today, just getting started on the right foot, taking a little time to slow down to speed up, that's a big stumbling block. That's why we're talking here today. And so having a structured approach to defining and prioritizing and then ultimately selecting your use cases, like
(31:27):
we're recommending here, is a big one, and that's one that we've seen over and over and over again. But I do think just some other things that come to mind are getting the right leadership to the table. AI is a big buzzword. It's a lot of hype right now, and it's easy to say you're an AI-first company or something along those lines.
(31:50):
But if you are a leader and you're not actually driving those meetings or asking the tough questions, then I think you're doing a disservice to your organization. I mean, we still, to this day, have weekly meetings all the way up to our CEO, where he is driving us hard on our internal
(32:11):
use cases and how it's translating into value. And I'm not saying every company needs to have their CEO driving that. It would be great. But leadership does need to actually take action and be engaged. That's a big stumbling block that we're seeing as well.
Speaker 3 (32:26):
Yeah, and I also think that, you know, if you look at a traditional kind of starting-point engagement that we take with clients, Brian, like, we start with the workshop to bring the stakeholders to the table, to do the use case generation, ideation from the driver tree, right. We do the prioritization matrix. Like, not everybody is comfortable taking
(32:48):
that hypothesis-based approach and doing a value versus complexity scoring. That can be challenging, right, and so as consultants we can oftentimes help, you know, overcome that and get a placement, get alignment on the right use cases, and then actually get the POCs started. Like, that's a typical way, you
(33:09):
know, starting point, that we partner with clients on, because just that whole process can be a little bit overwhelming when you've got the, you know, shadow AI that's shotgunning in the background. So oftentimes this will come from, like, a, you know, top-down approach to get alignment, which I think is a really smart thing to do before the shadow AI takes off.
Speaker 1 (33:39):
Well, Yoni, I loved what you said earlier, that use cases tend to beget more use cases. What else can the value complexity matrix tell us? Because can it tell a company where to invest for talent, to help support some of these things? Can it help identify business models? What can we learn, other than just understanding what to go after first?
Speaker 2 (34:01):
Well, I think you hit on the talent thing for sure, because what you're going after, that's going to necessitate the right people, right? And so I think it's going to allow you to kind of set your hiring pipeline in the right direction, where you're going to invest in your people. I think, if you get the right technology people in the room
(34:23):
with those use cases, and understand what it will take from a technical perspective to get it going, and the data, you know, the chief data officer and his or her organization, what data needs to go in place. I mean, we're actually working with a client right now where we are doing this use case identification and prioritization,
(34:44):
but we've been in there for some time helping with their data platform and their data governance capabilities, and the data platform, we're migrating them from an on-premise platform into a cloud-based platform. What data we migrate first, which, you know, data domain we are doing data governance on first and second.
(35:05):
That also can now be use case driven, right? Because we have this use case identification matrix, it's got all these different use cases on there, that's going to now guide how we govern our data and where we start that process first, because it's a big process to get everyone aligned on how we're going to do our data
(35:26):
governance program. It's a large, large effort to figure out how you're going to prioritize your waves of migration of data into the cloud. But now you've got this guiding light of use cases, driven by, going all the way back to the beginning, the driver tree, and then you have, you know, the value and the complexity of each of them.
(35:47):
So now you can have direction on how you migrate data, direction on how you govern data, direction on how you hire, and then, ultimately, what you put into production and get value out of. Yeah, anything I missed there, Kathleen?
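As a rough sketch of what use-case-driven data prioritization could look like, the snippet below sums the hypothesized value of the use cases that depend on each data domain and orders migration or governance waves accordingly. The domains, dollar values and mappings are placeholder assumptions for illustration, not the client's actual plan.

```python
# Illustrative: let the use-case backlog drive data-migration / governance priorities.
use_cases = [
    {"name": "RFP first-draft assistant", "value_usd": 432_000, "data_domains": ["proposals", "sales"]},
    {"name": "Knowledge-management chatbot", "value_usd": 1_200_000, "data_domains": ["sales", "delivery"]},
    {"name": "Predictive maintenance pilot", "value_usd": 2_000_000, "data_domains": ["equipment telemetry"]},
]

# Sum the hypothesized value of the use cases that touch each data domain,
# then migrate and govern the highest-value domains first.
domain_value = {}
for uc in use_cases:
    for domain in uc["data_domains"]:
        domain_value[domain] = domain_value.get(domain, 0.0) + uc["value_usd"]

migration_waves = sorted(domain_value.items(), key=lambda kv: kv[1], reverse=True)
for rank, (domain, value) in enumerate(migration_waves, start=1):
    print(f"Wave {rank}: {domain} (backs ~${value:,.0f}/yr of use-case value)")
```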
Speaker 3 (35:58):
Well, yeah, no, that's right. And I was also just thinking a little bit about Brian's question, and I want to actually take it back to the first step, which is the driver tree ideation. Right, and it's a practical tool, not only because it links your use cases to business value, but it forces you to think about things in a really comprehensive way.
(36:20):
And so I do feel like we often get unlocks in those discussions, where business unit leaders may not be thinking about, you know, a use case, until we force the framework on them and they say, okay, gosh. Well, maybe, I'm thinking about a mining client that we've been working with,
(36:40):
right? Like, the revenue growth side is harder in mining, because the price of the materials is very much controlled by the market. Right, but costs, okay, let's break out: what are the five major cost components of our organization?
Right, and then, okay, let's follow the line on that.
(37:00):
Okay, if maintenance of our equipment is one of our biggest cost drivers, what could we be doing to decrease the cost of maintenance of our equipment? And when you ask the questions like that, it leads to ideas that you maybe wouldn't pull out of a hat if you ask somebody, hey, what ideas do you have for Gen AI for your business unit?
(37:21):
Right? Like, it forces the ideation in a way that, you know, covers the entire comprehensive bucket linked to the top value drivers. So I think that that driver tree approach really helps, lends itself to, you know, I think, the hidden,
(37:59):
like, value and gem that you get out of doing this.
Speaker 2 (38:03):
It's a very human process, and you're bringing people together to kind of ideate on things that they probably wouldn't do in their day-to-day, and just going through the motions of prioritizing things, going through the motions of thinking about what truly you're trying to get at from a value perspective, that brings people
(38:25):
together and often unlocks ideas that they would have never even thought of, or even, you know, gets them excited and feeling ownership around getting these use cases off the ground. Whereas if you're just kind of told to go make this an AI use case and make it happen, it doesn't have that ownership feeling. It doesn't really have people kind of, like, rallying
(38:49):
around it. So there's this intangible thing that happens when you actually do this work that, you know, again, it's very human. So, you know, that's my pitch, to also not turn this into, you know, consultant GPT and make it automated, like, you definitely need people rallying people. Yeah, you're sending me chills, Yoni, I know. Sorry.
Speaker 1 (39:14):
Fantastic. Well, we're running short on time. So, you know, thank you to the two of you for taking time out of your busy schedules and your busy days, you know, analyzing, identifying and plotting those use cases, not only for us here at WWT but on behalf of all of our clients. So thanks again for joining this episode, and we'll definitely invite you back for a part two.
Speaker 3 (39:32):
Thanks, Thanks for
having us.
Speaker 1 (39:33):
Thanks.
Okay, as we wrap up, a few clear takeaways emerge from today's conversation that apply whether you're just beginning your AI journey or looking to scale with more structure. First, AI success starts well before any model is deployed. The most effective organizations invest in upstream alignment, identifying use cases tied directly to business
(39:53):
value, and not just experimentation. Second, prioritization is critical. A structured framework like the value complexity matrix helps leaders make trade-offs based on ROI and feasibility, not just enthusiasm and executive pressure. And third, broad engagement matters. Cross-functional input ensures you're not just solving in silos
(40:14):
and that your initiatives have both traction and longevity.
The bottom line is AI becomes most effective when it's treated as a business initiative and not just a technical one. And what sets high-performing organizations apart isn't just their tools, it's their process.
If you like this episode of the AI Proving Ground podcast,
(40:36):
please consider sharing it with friends and giving us a rating, and don't forget to subscribe on your favorite podcast platform or watch us on WWT.com. This episode of the AI Proving Ground podcast was co-produced by Naz Baker, Cara Kuhn, Mallory Schaffran and Stephanie Hammond. Our audio and video engineer is John Knobloch. My name is Brian Felt. We'll see you next time.