
October 20, 2022 · 28 mins

In this episode, Brian Pearce (Senior AI CoE Leader) and Andreas Welsch discuss leading an AI CoE to success. Brian shares his journey building and leading a CoE, and provides valuable advice for listeners looking to do the same.

Key topics:
- Understand the mandate of an AI CoE
- Learn when an AI CoE's mission is done
- Hear why a product mindset is key for AI

Listen to the full episode to hear how you can:
- Focus on delivering business value
- Look beyond just building models
- Engage with stakeholders as users of your product

Watch this episode on YouTube: https://youtu.be/_V-MWGawqks

Questions or suggestions? Send me a Text Message.

Support the show

***********
Disclaimer: Views are the participants’ own and do not represent those of any participant’s past, present, or future employers. Participation in this event is independent of any potential business relationship (past, present, or future) between the participants or between their employers.


Level up your AI Leadership game with the AI Leadership Handbook:
https://www.aileadershiphandbook.com

More details:
https://www.intelligence-briefing.com
All episodes:
https://www.intelligence-briefing.com/podcast
Get a weekly thought-provoking post in your inbox:
https://www.intelligence-briefing.com/newsletter

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Andreas Welsch (00:00):
Today we'll talk about leading your AI CoE to success. And who better to talk about it than someone who's done just that: Brian Pearce. Hey Brian, how are you?

Brian Pearce (00:12):
Hello, Andreas.
How are you?

Andreas Welsch (00:14):
I'm all right. Thank you so much for joining. I'm so excited that we have the opportunity to have you on. We've talked a couple of weeks and actually also a couple of months ago, and I was so inspired by the story you've shared about your own career and how you have built and led a CoE at one

(00:34):
of the largest U.S. financial firms, what has worked for you, and what the audience can take away as well. Before I talk too much, why don't you tell us a little bit about yourself?

Brian Pearce (00:47):
Sure. And I think, just for context today: I was a member of a leadership team that built an AI CoE at a large US financial institution. We were responsible for standing up the CoE and ultimately delivered $200 million worth of models in a two-year timeframe,

(01:07):
once we actually got going. Specifically, I was responsible for product, go-to-market, and customer success functions in that CoE. It's a team sport and there are lots of other players, but that was my role within that leadership team.

Andreas Welsch (01:25):
Thanks for sharing. I'm confident that the learnings you have gone through and that you'll share, the ones that resonated with me, are probably the same ones that others in the audience are either experiencing right now or might run into as they get onto that path of leading a CoE. Pay close attention.

(01:46):
So for those of you in the audience, if you're just joining the stream, drop a comment in the chat: what do you think the role of a leader in the AI space in business should be? I know we've talked about this already a little bit, and many people in my network that lead AI projects come from very different backgrounds professionally. So I'm always interested to learn about the path that people

(02:07):
have taken in their career.

And so I'm curious (02:08):
what's been your journey to AI? Can you share a bit about that with us?

Brian Pearce (02:15):
Yeah, for sure. I think my journey's a little bit different than a lot of folks' in this space. I really came at AI as a full-stack product manager, and I fully embrace that description. My career has really been focused on taking emerging technologies and applying them to business problems.

(02:35):
It started out, frankly, with client-server, and then internet, and then mobile, and then I came to AI. AI is just another in a series of technologies that I've used to solve business problems in my career. And so I'm really coming at it from almost more of a software perspective.

(02:55):
And that's a little different. I think if you look at most AI job postings, people want candidates with a lot of AI experience or particular degrees, and that's really not where I come at it from. I think that does help me in some ways. I bring a very different perspective, the perspective of the software industry: much more of that software engineering

(03:16):
background, where the types of things we're doing on that side of the house can really be applied to AI. I think that's helped me be successful, because it is a little bit of a different perspective.

Andreas Welsch (03:29):
That's awesome. I think that's also very encouraging, right? Because to your point, a lot of times there's this expectation that you need to have a PhD in statistics or have been a researcher before you can move into this kind of a role. But also from my experience, I feel that having a different

(03:51):
background also gives you a different perspective on the topic of AI and on making it relevant in business, and also as you talk to your stakeholders in business about these kinds of things.

Brian Pearce (04:04):
Yeah, absolutely. I think I was joking with you: I feel like for 99% of AI job postings today, I would be automatically rejected, right? If you look at the way the postings work. And it's, no, I've been there, I've had success, I have a lot of perspective I can bring. But because there's something of a narrow focus, right, that ends up being a barrier for a lot of folks, and I don't

(04:26):
think it needs to be.

Andreas Welsch (04:29):
I think it's also on all of us, as we move into leadership roles, to carry this forward, right? And to look for diverse skill sets that in the end help us become better and build better teams. So maybe let's switch gears a little bit. We wanna talk about leading a CoE for AI.

(04:49):
Maybe as the first step, right, we should probably talk about what a CoE even does. It's the abbreviation for a center of excellence, but what actually stands behind it? What do we mean by it? And what needs to be in place to build one? When is it a good idea to have one in the first place?

Brian Pearce (05:08):
Yeah, it's so funny that it's such a basic question, but it's actually really important to understand. Talking with other CoE leaders, and looking at my own journey and our journey when we built our CoE, I think it can mean different things for different companies, really based on culture and business needs.

(05:28):
But at its core, at its very core, an AI CoE should be accelerating the realization of business value when you're using AI. So it's really about how do you go faster, do better, and get more results when you're using AI, and move it from really being about experiments or the lab or

(05:49):
R&D into really delivering value for your stakeholders: real value in the business, as opposed to just learnings or experiments.

Andreas Welsch (06:00):
Fantastic. So let's maybe turn it over to folks in the audience: if you have a question for Brian, put it in the chat. I think the other part that I feel is so important is not only having the right skill set but understanding why you need this

(06:22):
CoE. I think we all go through a certain journey, right? Whether we're aspiring to move into a role like that, are in a role like that, or have been in a role like that, there are lots of learnings and lots of stories to tell as well. So I'm curious, what was one of the things that you learned on your journey when you started the CoE?

Brian Pearce (06:43):
Yeah, there's a couple of things. I think going in, getting really clear on the mandate of the CoE is important. Why does the CoE exist? What's it gonna do? How's it gonna help your company? And what's its role? And certainly, I think I shared with you the story of trying to

(07:03):
be the AI police. When we first started, we were really excited about AI, and we were gonna catalog and control and decide on every vendor in the company that was using AI. And it was just a complete failure, because everybody has got AI embedded in their products. And the idea that our team would somehow have control over this

(07:26):
was just not realistic. At this point, even hardware routers have AI models in them. Are you really gonna have your team be responsible for the models in a hardware router? It's not possible, right? So really getting smart about that, and getting your data in place

(07:49):
and data governance, is really critical. Certainly as a financial institution, data governance was absolutely critical for us, but I think it is for any organization. And governance can actually really help you. When you begin to really understand your data and have good, tight controls over it, it becomes much cleaner. There's much better tracking of data,

(08:10):
a much cleaner understanding of the chain of custody of data. It's really critical, and you can go from there into that space. And then frankly, understanding who the stakeholders are is also super critical, right? From a stakeholder perspective, you have to really understand:

(08:30):
who is it that is gonna really benefit? Who is it that's really gonna be interested in AI? For us, we had our governance, we had model governance, we had data governance, we had our business lines. Those were all stakeholders. Even legal and compliance were important stakeholders, where we did education and then continued to keep them in the loop as we moved

(08:52):
forward. So what was really critical for us was understanding the full ecosystem of stakeholders and then having a plan to actually engage those stakeholders. And frankly, we did a lot of education. The legal team was not happy when I showed up for the first time and told them: guess what? AI, it doesn't follow rules, right?

(09:13):
You don't tell AI things; AI learns. Those were not some happy people, right? It was something like, wow, this is gonna be a problem, Brian. But we worked through it, and we began to understand. It's really that getting to a common understanding that was really critical for us.

Andreas Welsch (09:29):
Yeah, I remember having similar conversations with some of the customers that I've worked with, specifically around risks and controls. As soon as you get into that area, and like you said, legal as well, it starts getting a little finicky. So let me pick up on that AI police topic real quick.

(09:51):
I see we're getting a few questions in the chat already. What did that lead to, as you and the team started trying to catalog as much as possible? What was the resonance, the feedback from your peers?

Brian Pearce (10:06):
Yeah. So when we started out that process, we had every good intention, which was we wanted to compile a list of everywhere in the organization we were using AI. And we began to realize that every vendor was claiming to have AI in their products. We wanted to have a list, and we got to the point where we tried to set up these gatekeeping

(10:30):
controls where we were gonna make approvals, and people frankly just worked around us. And we lost some credibility. It was an overreach. We thought we were doing the right thing, but we really just got out ahead of ourselves. When you operate in a large organization like that, people very quickly realize you don't really have the scope to

(10:53):
pull this off. You really can't do this. And you lose some credibility; you lose some trust. So we really had to go back and reset on that point, to realize that we weren't gonna be successful doing that. We wanted to get focused back on that delivering-value mandate, get back into the good graces of our stakeholders, and let this other stuff go by the wayside.

Andreas Welsch (11:15):
Thanks. Perfect. I think that's a great way to reset and go back to the point of providing value to the stakeholders. Thanks for sharing, and thanks for being open to sharing that. I see there's a question from Pedro in the chat. He's asking: where should an AI CoE, in your view, be best located? And what criteria would you use to determine if it should be

(11:38):
centralized or federated? So now we're talking about organizational models as well.

Brian Pearce (11:42):
I think this is gonna depend on your company culture, right? And how your company works. For us, what we ended up doing is we had a virtual team. So we had a business team that I sat on, which was in our innovation group, and that's where the head of the CoE sat. We had a data science team in our data group, and we had a

(12:03):
technology team in our technology group. And we worked together as a leadership team, 'cause you really need a blend of all those skills. That was the way that worked for us. I think what you risk, if you have the CoE sit, say, just in your IT organization: you have to make sure that it's not just a solution looking for

(12:26):
problems to solve, right? You have to make sure that the business is engaged and you're really thinking about the right business problems. You don't want people to just see it as yet another tool in the toolbox, right? Oh, you've got Java and you've got AI and you've got NoSQL, and hey, they're all together in an IT toolbox.

(12:48):
So I think you need to make sure of that if your organization is gonna be sitting in IT. And then when it sits in a data organization, I think you have to really make sure that data organization is connected to the business, right? Do you have the business knowledge in that team? And the centralized versus federated question, I think, is actually really interesting.

(13:09):
A couple of things come into play, and again, some of it is organizational culture and, frankly, size. If you have an organization where you've got two or three data scientists spread across teams throughout the whole organization, that becomes really challenging, right? Because as a data scientist, you're not working in a team.

(13:29):
You don't have a career path. I think it becomes very hard to leverage learnings and best practices, and that, to me, is a case for centralization: to get some scale. But if your businesses are large enough that you can really have scale, then I think federated can make sense. For us, we started out as a centralized model and moved to

(13:50):
what I would call a semi-federated model. We continued to have some centralized resources developing models, but then the very large businesses who could hire their own teams of data scientists did that as well. And the central team became about best practices, platforms, and tools, and really tried to pull

(14:12):
a community of practice together. So it's gonna vary by company. I know that's not like a magic answer, right? I think you have to really factor in these different aspects of that decision.

Andreas Welsch (14:26):
Awesome, thanks. Yeah, I think the cultural aspect is definitely an important one. I see there's another question from Octavo in the chat. He's asking: do you see the role of IT automation infrastructure platforms within the scope of the CoE?

Brian Pearce (14:42):
Yeah, we didn't have that. We worked with the RPA team and the RDA team as part of it. I have to say, I think the most interesting model I saw: I ran into a CoE leader that had a team that was AI, RPA, and

(15:04):
process engineering, which to me is perfect, right? Because, you know what? You can show up, there's a business problem, you bring your process engineering team in there, and you figure out: what's going on? What's the best way to solve these problems? What's the best process? And then you can apply both RPA and AI together.

(15:25):
You know, I often caution people, right? As you begin to think about applying AI: AI can make a bad process really fast, right? You can speed up your bad process; you can make bad decisions over and over again. So you need to be really thoughtful about the process that you're looking at and where you're gonna drop AI or RPA into that process. And that team that had all three together was

(15:48):
pretty compelling. For us, the scale was just too large, and so we split those up and had different teams. But I do think there's some synergy between those teams working together, for sure.

Andreas Welsch (15:59):
Awesome, thanks. You mentioned a little bit about being the AI police, going back to the drawing board, and repositioning yourself. How did things work after you had done that and you started getting into your first AI projects? What were some of the learnings early on that have carried forward over the journey?

Brian Pearce (16:20):
It took us a couple of attempts to get it right. I think at first we got really excited about building models. We didn't wanna disappoint our stakeholders, so we took in every project, and we began to then lose projects deep in the

(16:41):
pipeline. After working on them for weeks, projects would wander away. We'd run into some problem and it would stop. So we had to do a much better job upfront of filtering which projects we were gonna take on, and of making sure that what you're doing is in fact a good

(17:04):
AI problem to solve. It turned out there were a couple of things we needed to get really focused on. One is really about data: do you have the data to solve this problem? If you don't have the data, or the data isn't easy to get to... In some cases the data was really challenging to get to.

(17:25):
We had it, but it was locked away. It was gonna take months to unlock it and get it in shape to use. That's not a good project to take on, right? You gotta go solve that problem first. And then there's an actionability question, right? Building a model doesn't actually solve any business problems. Somebody, or some system, has to consume the output of

(17:48):
that model and make a business decision. And we had some models that we were all excited to build. We had the data, we trained them up, and this is so great, and there was nobody to use the output. Or the system that needed to use the output had a two-year backlog before they were gonna consume it.

(18:09):
So there was this huge mismatch: the model was ready, but the system that needed to consume the model output wasn't ready. So we got a lot better, and we had to say no more often upfront to stakeholders and just do a better job filtering what things we took on. But once we got that going, we found that we got a much better success rate in the things we began to work on and

(18:31):
began to get more traction in terms of actually delivering things.

Andreas Welsch (18:37):
I remember from my own role that this comes with a set of criteria that you want to prioritize these cases by. You mentioned the models that the CoE built have led to $200 million in savings. How did you quantify that? Or what were some of the KPIs that you put in place to measure it?

Brian Pearce (18:57):
Yeah. Part of the process we adopted as we learned was that, as part of evaluating whether this is a good idea or not and whether we should take it on, we began to build a business case for each model, right? And again, business cases are gonna vary by company.

(19:18):
I think every company culturally has a different perspective on business cases. We got to the point where we built a business case for everything we wanted to do, and that became a factor in our prioritization. In fact, we got to the point where we actually had the leadership of the business team sign off on the business cases. And again, early learnings, right?

(19:40):
All excited, we're gonna build stuff. We get the project team together, they put together just a crazy business case, huge numbers: it's gonna be a billion dollars, this is so great. And we go through the process, and then later we go talk to the leadership team: hey, we did a billion-dollar model for you. And they're like, you know what? No, we don't pass.

(20:00):
So we got much smarter upfront: put together a business case, make sure it's relatively realistic, and get sign-off from that senior leadership team on that model. Then that became an input into our prioritization, along with the questions around: is the data available? Is it actually actionable? Are the systems or people in place to actually consume the

(20:23):
output of that model? And is it really a good use of AI? In some cases, even though you maybe had some value and you had the data: is it really something where you need AI, or could a regular analytics report simply do the same job for you, probably cheaper and faster, so we can have our AI resources focused on something else?

Andreas Welsch (20:44):
Awesome, thanks. So I'm taking a look at the chat. One of the questions was from Michael: how do you keep an AI CoE relevant to enterprise groups? Do you hold regular calls, install members within other teams, produce internal reports? What's the one tactic that you do not recommend following?

Brian Pearce (21:04):
Yeah, a couple of things that we did. When we first started the CoE up, we put together what we called a roadshow, and we just barnstormed, right? We had a great presentation that talked about different aspects of AI, how to think about AI, what were good AI projects,

(21:24):
and what were not good AI projects. And we went all over the company, whether it was a one-on-one with a senior leader, an all-hands call, a team meeting, or presenting to hundreds of people at offsites. So we really just tried to generate interest in and awareness of the AI program, and tried to use that as an opportunity to drum

(21:45):
up business. And then what we did as well is we had basically a team of folks, some businesses would call it a go-to-market team or a consulting team, that were basically aligned with the businesses. We had one person on our team whose job was to work with the deposits business: go to the deposits business, sit down,

(22:06):
get to understand that business and the leaders, and evaluate with them what their top 10 priorities for the year were. Of those 10, which ones might be AI opportunities? And then of those six that are AI opportunities, what three are actually actionable? By building those relationships and really getting into that business, that helped us stay top of mind.

(22:28):
Because these businesses, they've got lots of things going on. AI is like 1% of their mindshare. If you're not there with them, they're not thinking of you. They don't realize this is a new tool in the toolbox that they can be using, that there are some opportunities here. And then when it comes to budget time, they don't put aside any budget for AI projects, and then you're stuck, right?

(22:50):
You have to wait a whole budget cycle to get back in. So we were really trying to stay top of mind by using that kind of relationship model. I think the thing that was most challenging, just given the culture, was any sort of mandate, right? The culture of the company was such that people didn't really like the centralized function. People were suspicious of it.

(23:12):
Any type of mandate that came out of that centralized function, you have to do it this way, you have to work with us, that was challenging, right? And so we had to really not use that. You can't play bad cop all the time. You really had to go to people and say: I'm here to help you, right? Help me help you be successful.

(23:33):
I have tools, I have resources, I have new ways to solve the problems you have. I'm here to listen, and I'm here to be your partner. And that was a much, much more successful approach.

Andreas Welsch (23:46):
Fantastic. I see there was one question from Jesse, asking how you see the role of a business analyst in this CoE for AI. There was another question from Janet: what's the business case for change, and how will you drive change beyond the process?

(24:07):
So what's the role of a business analyst, and what's the change you're driving? Is there maybe a connection between the two?

Brian Pearce (24:15):
Yeah. One of the things that became really critical for us to help people understand is that AI by itself is not gonna solve any problem. You have to have some context around it. And when we talk about a business analyst,

(24:37):
it's really about helping to put AI in that context, whether it's other systems, right? Because when you're actually in production and you're actually scoring your model, you've got data in and you've got data out. So where is that data going? What is the business process that it's in the context of, and who's doing something with it?

(24:58):
It's easy to get focused on: oh yeah, I'm building this model, and I've got my training data, and oh, my model's so great, and look at my area under the curve. But okay, that's great for training; to actually score, you need data in and you need data out. And you gotta think about who's consuming that data out and who's sending you the data in. That's really where the business analyst comes into play, in understanding the context of that project.

(25:20):
And the value question really comes into: okay, great, what is the value of that overall effort? The model by itself, no value. If you're not doing something with the output of that model, then you're just doing experiments. You're just doing R&D, right? Which can be useful in some contexts, but you're not delivering value to the business. So that's why it became important to really understand

(25:41):
what's that full pipeline? Once we're actually in production, what difference are we gonna make? What's really gonna change in that business process? And how are we gonna affect the bottom line?

Andreas Welsch (25:55):
Perfect. It seems like there's never enough time to talk about these things. But maybe can you summarize it in one point? What's the one main takeaway for our audience today before we wrap up?

Brian Pearce (26:09):
I think for me, the real key is: AI is not magic. What's really critical is you have to get focused on delivering business value. AI is a fantastic tool that you can use to do that. You need to focus on what is the actual problem you're solving and how it's helping.

(26:31):
You can't just be focused on: no, it's great, we're building models, this is really cool, I wanna do deep learning, and let's do a neural net. That just can't be the focus. The focus has to be on solving business problems, using this powerful new tool called AI that's in our toolbox.

Andreas Welsch (26:47):
Awesome. That was very concise and straight to the point. And again, it resonates deeply with me and what I've seen, and I'm sure with what many of you in the audience are seeing as well. So with that, we're coming to the end of the show. Thanks, Brian, for joining me today.

Brian Pearce (27:01):
Thanks, Andreas. Really enjoyed it. It was a fun conversation.

Andreas Welsch (27:04):
Awesome.
Thanks.