
December 2, 2025 · 43 mins

"In AI, speed matters—but trust compounds."

Naomi Lariviere, Chief Product Owner and VP of Product Management at ADP, leads product strategy for a company that processes payroll for millions of people. When even the smallest error can mean someone doesn't get paid on time, there's no room for "move fast and break things."

In this episode of Hard Calls, Naomi and host Trisha Price dig into the decision that defined ADP's AI strategy: choosing to slow down full automation on ADP Assist to protect client confidence. It's a masterclass in what responsible innovation actually looks like when the stakes are real.

"We could have shipped faster. But in payroll, trust isn't something you rebuild easily." - Naomi Lariviere

Here's what you'll discover:

Why ADP paused automation to preserve accuracy. Naomi walks through the hard call to prioritize explainability and reliability over speed-to-market. In high-stakes environments like payroll, trust compounds—and so does the cost of getting it wrong.

The 3-question rubric to prioritize what to ship. Naomi shares the simple framework her team uses to evaluate every feature: Does it solve a real problem? Can we explain how it works? Does it protect user trust?

How to embed ethics from day one. ADP doesn't treat privacy, compliance, and bias as checkboxes at the end. Naomi reveals how "shift-left ethics" means involving legal and privacy teams at the earliest stages of product development.

Why diverse teams build safer AI. Homogeneous teams miss blind spots. Naomi explains how diversity across backgrounds, perspectives, and experiences leads to more resilient products—especially in regulated industries.

Building psychological safety in high-pressure environments. Innovation requires teams that feel safe to challenge assumptions, raise concerns, and kill their darlings. Naomi shares how she creates that culture while still delivering outcomes.

Whether you're building AI in a regulated industry, leading teams through complex trade-offs, or trying to balance innovation with responsibility, this episode shows you how to make the hard calls that protect what matters most.

Episode Chapters

  • 00:00 Introduction and Naomi’s Path to Product Leadership 
  • 04:32 The Hard Call: Trust Before Speed in AI 
  • 07:16 Balancing Innovation and Reliability 
  • 09:41 A Simple Product Rubric for What to Ship 
  • 11:51 Killing Your Darlings with Outcome-Based OKRs 
  • 15:15 Shift-Left Ethics: Privacy and Compliance from Day One 
  • 20:10 The Payroll Anomalies Breakthrough 
  • 28:02 Upskilling Teams for AI Innovation 
  • 36:51 Building Psychological Safety 
  • 38:54 Why Diverse Teams Ship Better, Safer AI 
  • 42:27 Closing Takeaways

Love the episode?
Drop us a ⭐⭐⭐⭐⭐ review and share it with a teammate exploring AI in regulated industries. Every subscription helps more product leaders find Hard Calls.

Presented by Pendo. Discover more insights at pendo.io or connect with Trisha Price on LinkedIn.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Trisha Price (00:00):
Hi everyone, I have an exclusive discount for Hard Calls
listeners to Pendomonium, Pendo's product festival happening
March 24th through 26th, 2026 in Raleigh, North Carolina.
Listen in at the halfway point today to get this special discount to the
product festival, bringing today's top leaders in product, AI, and software.

Naomi Lariviere (00:23):
I oversee our client-facing AI program at ADP.
It's called ADP Assist.
And we've been working on agentic AI, you know, in the space of payroll.
The hard call for us is, as we started to talk to our clients and we were
showing them early concepts of what we wanted to do, what we realized is
that our innovation was colliding with the trust that our clients had in us.

(00:48):
We service over 1.1 million clients.
We pay one in six people in the United States.
That's a lot of people.
When you say you can't get things wrong, imagine getting a paycheck
that's not right.
So you know, what we learned is yes, speed matters, but trust compounds.

Trisha Price (01:06):
If you build software or lead people who do, then you're in the right place.
This is Hard Calls: real decisions, real leaders, real outcomes.
Hi everyone, I am Trisha Price, and welcome back to Hard Calls, the podcast
where we bring on the best product leaders from across the globe to talk
about those moments, the decisions that mattered, the hard calls.

(01:29):
Today on Hard Calls, we have Naomi Lariviere, Chief Product Owner and
VP of Product Management at ADP, the company many of us rely on to get
paid, including me.
Naomi and I first connected when she was a keynote speaker at Pendo's
user conference a few years back.
We had the joy of backstage jitters together, and that camaraderie was

(01:51):
maintained as we continued to get together to share strategies,
challenges, and just network, CPO to CPO.
What I love about Naomi is how she thinks about innovation, especially
in an industry where there's little to no room for error.
In today's episode, we're digging into how Naomi balances innovation with
trust, builds psychological safety on her teams, and makes the hard calls

(02:14):
that define great product leaders.
Welcome to Hard Calls, Naomi.

Naomi Lariviere (02:19):
Thank you my friend, for having me.
That was such a nice introduction.
Thank you.

Trisha Price (02:23):
Well, it's easy to do a great introduction when you mean it and when you
know someone and admire someone.
So I'm really looking forward to our conversation today.

Naomi Lariviere (02:33):
It's very mutual.
Thanks for having me.

Trisha Price (02:36):
So before we jump into hard calls, could you share a little bit with our
listeners about your journey into product leadership and what your role
at ADP looks like today?

Naomi Lariviere (02:47):
Yeah, that's a great question.
I'm somebody who kind of fell into product management.
I did not go to school for computer science or kind of any of the other
paths that tend to end up here.
I have a business and management background.
My first job outta college, I was in an early career program, which I think a lot

(03:10):
of people coming outta university are. I got into business analysis, and
over time that role, with the companies that I worked for, kind of merged
into product management, and really understanding both the business needs,
the client needs, as well as being able to talk to the development team
about, like, what we were actually trying to accomplish.

(03:30):
And maybe 15 years ago, I got put into a leadership role, and it's just
kind of snowballed ever since, with additional responsibilities and
bigger portfolios.
And I think when you love what you do, your career just kind of keeps
feeding you.
And that's what I've gotta say about ADP.
I joined here, I had a very small team and ended up overseeing our large

(03:53):
enterprise portfolio, and now I oversee one of the largest portfolios at
ADP, which really services all of our business units in a shared product
fashion.
Which really for us means we build products that we can build once and
deploy multiple times across our product ecosystem, which helps us be

(04:14):
scalable and helps us grow as we need to.
So, it's a fun job.
Love working at ADP, and love to bring what I know about product back to my team.

Trisha Price (04:24):
Great. Well, I look forward to digging into that a bit more as we go
through the podcast, but this show is Hard Calls, and we like to start
every episode with a hard call that our guest has had to make.
So tell us, looking back over your career, recent or a while back, tell
us about a hard call you've had to make.

(04:45):
What made it challenging?
You know, what considerations or process or data led you to make the
decision, and what'd you learn from it?

Naomi Lariviere (04:53):
Yeah, no, I think in our job, in our profession, we make hard calls kind
of routinely.
It's part of the job, and you're prioritizing things all the time in
terms of what you wanna do.
But maybe I'll talk about something that's a little bit more recent.
So I oversee our client-facing AI program at ADP. It's called ADP Assist.

(05:14):
And again, this is another initiative where it's build once, deploy it
across our entire ecosystem.
We've been working on agentic AI, you know, in the space of payroll.
And with this technology, we wanna go as fast as humanly possible.

(05:35):
And there's so many wonderful things you can do with it, from just task
automation to, like, actually doing the entire process for you.
And actually, this initiative had that goal.
We are just gonna do everything for you.
You don't need to worry.
And the hard call for us is, as we started to talk to our clients and we were showing

(05:58):
them early concepts of what we wanted to do, and you know, we were
iterating on it, what we realized is that our innovation was colliding
with the trust that our clients had in us.
We service over 1.1 million clients.
We pay one in six people in the United States.
That's a lot of people.

(06:18):
When you say you can't get things wrong, imagine getting a paycheck
that's not right, right?
Like, that is not a good scenario for us.
It's not a good scenario for our clients.
So you know, when you are doing things that may erode the trust of your
user, you might wanna reconsider what you're doing.

(06:38):
So we actually, as we talked to the clients, we're like, okay, well, if
we wanted to get to that, they're like, well, that's a really great
thing further in the future.
Like, how can we get you on that trust-building journey?
So what we learned is yes, speed matters, but trust compounds.
So, to a certain extent, we've had to delay kind of what our overall goal is

(07:02):
to really ensure that we have long-term credibility with our users, and
that we're bringing them along on this innovation journey as we continue
to progress where we're going with agentic AI.

Trisha Price (07:13):
I love that you shared that story.
I feel like so many of us building AI and agentic experiences are going
through this. I mean, yours is magnified even more so when you talk about
something so critical as people's paychecks.
But it's true for all experiences.
It's like, our customers want AI, they want the ease of use of it.

(07:40):
They want the value it provides, the automation it provides.
And if you're not doing those things, you're falling behind in their
expectations.
But at the same time, very few are ready to jump from here to here.
And maybe if it's something really not critical to their business and
it's like a playful tool, that's fine.

(08:01):
But I think in anything that's critical to people's business, this
crawl, walk, run strategy of building trust, and sort of being in more
co-pilot mode than full agentic automation mode, is something we have to
get people comfortable with on the journey, even though the real value
unlock comes when we can get to automation.

Naomi Lariviere (08:22):
Yeah.
And you know, we view what we're doing as, like, our job is to design
for people.
This is the world of work.
We are changing how people experience their day-to-day lives.
And, you know, just as much as you or I would walk into our C-suite
with, you know, "Hey, we're gonna do this," they want the data behind it.

(08:44):
They want us to be able to explain it.
They want it to be very transparent.
And that's what we are definitely weaving throughout what we're doing as
we reimagine how work is actually done.
So explainability, transparency, like, those are, especially with AI,
quintessential elements that all of us have to be paying attention to.

Trisha Price (09:05):
For sure.
Well, I know ever since I met you, one thing you and I have always had
in common and believe in is everything that we do has to deliver client
value, and great product management starts with client value.
And you know, your example here is clear that that matters to you.

(09:26):
How do you help your teams stay focused on delivering client value, and
delivering outcomes, not just for your clients but for ADP?

Naomi Lariviere (09:39):
Yeah, it's funny 'cause I was just talking with some folks earlier this
morning about, like, what's the prioritization rubric?
What are we looking for when we sit down and go, is this an idea that
has merit, that we should progress?
So there's really three kinds of things that we're looking at.
One, is it good for our users?

(10:01):
So is it good for the client?
Like, is it going to help them have a better day, or experience our
product in a way that makes them happy or delighted?
Right?
So that's question number one, yes or no.
Very simple.
Second one is, does it help ADP?
So will it help us grow our revenue?
Does it help us acquire new logos?

(10:22):
You know, will it help us deflect service calls?
That's kind of, like, does it help ADP?
And then third is, how does it help us with either our market position
or competitive position?
So we're not necessarily the organization that's trying to do everything
the same as what our competitors are doing.

(10:44):
We are really trying to service the needs of our clients and really
think about how work does evolve, so we don't have to be the same as
everybody else. But there are things that, if you have a sales prospect
come to you, like, there are certain table stakes.
And so in those instances, it's either a yes or a no on

(11:04):
whether it helps us with that.
Now, this is where I love Pendo.
We leverage Pendo a lot in every single aspect of that decision-making
process.
You know, in terms of how our users are using our system, where they
might be running into problems, kind of what the journey they're taking
through our systems is.

(11:24):
And you know, that data is so critical for us in terms of how we answer
those questions, and how we then make the ultimate decision about what
we're gonna do.

Trisha Price (11:36):
I love to hear that.
You know, that warms my heart, to hear that Pendo is driving decision
making and is an important part of how you measure value for ADP and for
your users.
Naomi, do you guys have scorecards, goals, KPIs, like, that you think about

(11:58):
for the product team to make sure that these outcomes are achieved?

Naomi Lariviere (12:04):
Yeah, we use OKRs.
So everything is outcome-driven.
So we have our vision, our mission, the outcome that we're trying to drive.
And then as we decompose the idea into a roadmap, then we actually are
going, okay, in Q1 we're gonna achieve this part of the outcome.
And we keep tracking towards it.

(12:26):
Outcomes for us are always metric-bound, so it could be that we're
reducing service calls.
It could be that we're helping those new logo sales.
Whatever that element is, and then we're just tracking that as we go along.
I mean, what I love about ADP, I mean, automatic data processing: if you
think about it, data is our middle name. And in everything we do, we

(12:49):
are probably one of the most metric'd organizations that you have, and I
love that about my job: we can pretty much tell you anything about what
it is that we're doing and how we got to an outcome that we were trying
to drive.
So, yeah, just lots of, you know, really monitoring it.
Because just because you made a decision to actually invest in

(13:12):
something doesn't mean that you actually have to continue to invest in
something.
We've had things where, as we were building it, and maybe we put it in
pilot, we just weren't getting the adoption, or we weren't getting pilot
clients to sign up for the idea.
And given some time, and some more analysis as to what might be happening,

(13:35):
we kill ideas all the time.
We kill projects.
There's probably a lot of stuff that goes on here that never sees the
light of day.
And that's okay.
And that's how data can really help influence your decision.
And not just at the inception of an idea, but as you are continuing to
go along in the SDLC.

Trisha Price (13:54):
I love that. That is, as you said when we started off, you and I, this
role, we make hard calls every day, and killing a product or pausing
something is probably one of the hardest calls that we have to make.
Because sometimes it's easy to say, like, the outcome's right around the
corner, we're just not there yet, because we fall in love with our ideas and

(14:14):
we're trying to innovate and we're trying to do things different.
And that, I think, is just one of the hardest calls that we have to make.
'Cause you just want it to be, you knew it was a good idea.
And it's like, oh, but we just have to do this one more feature and the
outcome will come. But sometimes it doesn't.

Naomi Lariviere (14:32):
Exactly.
I think I really believe in the phrase, and you've probably heard it
before as well: you're not the user.
I'm not the user, and so I never actually get too caught up in whether
my idea is actually gonna make it into production or not.
For me, it's all about that person at the end of the computer screen or the mobile

(14:54):
device that is actually experiencing it.
I believe what they tell me, and that is how you make your decisions.
Because if they don't see value in it, then why are we doing it?
Right?

Trisha Price (15:06):
Yeah.
Then they're not gonna pay for it.
They're not gonna appreciate you.

Naomi Lariviere (15:10):
Exactly.

Trisha Price (15:10):
We have to listen.
We have to listen.

Naomi Lariviere (15:12):
Exactly.

Trisha Price (15:12):
Well, as you mentioned, and we all know, ADP is a highly regulated space.
And your role is fascinating to me, around bringing AI to your users,
bringing AI in a scalable way to ADP.
And you have to do this in a place where precision matters, right?
Mm-hmm.

(15:32):
Even a small mistake has major consequences.
As you said, none of us want our paycheck to be wrong, unless it's in the
positive direction, but then there's probably still somebody there who's
not happy.
So tell us, like, how do you approach bringing AI in?
How do you balance innovation with the need for almost perfect reliability?

Naomi Lariviere (15:57):
Yeah.
Very carefully.
So I think, and I'm gonna apply what I'm about to say as, like, before
AI, so BC, before AI happened, generally most organizations, they would
build their products, test it, and then hand it over to your security,
your legal, your compliance

(16:21):
team, and they would look at it and do a checklist of yes, yes, yes.
And then it would actually go and become generally available, or be
released for clients to use.
That, we have completely shifted left.
So as we come up with our ideas for AI, what we realize, because we do
have a lot of data, we have the largest HCM data set in the industry.

(16:46):
We service organizations in 150 or 140 different countries, so there's
lots of laws, regulations, especially, like, in Europe, where there's
been new legislation around that, even here in the US, in California.
So what we did realize is we need to shift that entire process

(17:06):
left.
And now any idea that comes in for AI, it goes through, we call it the
CDO process.
It's governed by our Chief Data Officer and that team.
And basically it's looking at the security elements of the idea.
It's looking at the data, how we wanna use it: are we using it in a way
that complies with privacy laws?

(17:27):
We look at it in terms of how does it watch or observe compliance laws
around, you know, the different statutes across the world.
And then, last but not least, legal.
So are we thinking about bias?
Are we thinking about the ethical use of it?

(17:48):
All of that is kind of our shift-left philosophy.
Now, it doesn't just happen at the first time that you come up with the
idea. As we are going from a POC to a pilot to generally
available, that analysis, or that work that our CDO office has deployed,

(18:09):
gets progressively more difficult.
So there's harder questions as you go through.
So by the time that it actually is in our products, it's been thoroughly
vetted based off of our understanding of the way the world is right at
this second.
I mean, laws are changing every single day.
So our process does adapt as we go through it.

(18:30):
But that is generally what we do now, overall, our principles and how we
have been thinking about AI.
We started an AI and Ethics Council in 2019.
This is made up of subject matter experts in the field of artificial
intelligence and ethics from some of the major universities out there.
And they work alongside us to help us lay out our plan in terms of the things that

(18:54):
we should be watching for in this space, because it's not just about
your payroll, you know, your paycheck being correct.
It's also about how you're recruited into an organization.
It's about your performance review.
It's about hiring and firing decisions.
All of that, as we want to apply AI to it,

(19:14):
we just have to be super thoughtful about what it is.
Now, you can pass all of these checks that we're doing internally, but
again, it goes back to: does the user need this solution?
And is it good for ADP, and does it help us with the competitive position?
And so all of this process goes in tandem with how we actually are
making decisions about what we bring.

Trisha Price (19:37):
So fascinating, when you think about all of the aspects where you're
bringing AI, all the way from first touch of candidates, to hiring, to
onboarding, to performance management.
I mean, that's just critical, critical to how so many companies run
their business.

(19:58):
I mean, our number one asset is our people.

Naomi Lariviere (20:01):
Yep.

Trisha Price (20:01):
And so it is fascinating to think about the legal implications of
everything you're doing across that life cycle.
Naomi, can you give us a concrete example of a new AI feature or product
that you've launched into your products, and the impact it's had?

Naomi Lariviere (20:21):
Yeah.
We've actually done quite a lot. And, mainly important to note, we don't
talk about, like, ideas that we have that we're just kind of thinking
about today.
We only talk about things once it's actually in our product, it's being
used by either pilot clients, or it's generally available.

(20:42):
But we actually have quite a lot that we've delivered across the six
major platforms that we've got.
And I'd say probably the one I'm most excited about, it's been in pilot
now for several months.
And I say pilot; it's like we're rolling pieces of it out to generally
available as we go.
So it's not fully GA right now, but, like, clients do have pieces that
they're using.

(21:07):
And it's called payroll anomalies.
So the bread and butter of what ADP does is, while we are an HCM
provider, what people mostly use us for is the payroll process.
Payroll is a very complex process.
On average, a payroll practitioner, or the HR department, they do about a

(21:29):
hundred different activities to make sure that you get a correct paycheck.
That process is done over the course of generally two days.
Most organizations, usually Monday and Tuesday are kind of big days for
the HR department.
They're running all of their checks.
So this is like: all the new hires that came in, are they accounted for?
People who left,

(21:53):
are they accounted for? Anybody going on leave of absence?
Do we have our benefits data, our 401k information?
All of that information's coming into the system.
And then payroll basically checks all of that data to make sure that
it's correct.
And we call those anomalies.
So what we're looking for is anything that is out of the norm.

(22:15):
So maybe you're an hourly worker, but all of a sudden you have, like, 80
hours on your weekly pay stub, and that's kind of odd.
So did we overpay you?
So it's flagging those kinds of decisions back to the user to go, hey,
you wanna look at this?
And what used to happen is, with our payroll practitioners, we would

(22:38):
flag all this information, put it in a PDF, they would have to print it
off and then go through it line by line.
Some of these reports can be, like, 200 pages long, right?
And we're like, there's gotta be a better way.
On average it takes 'em about 90 minutes to do this process.
90 minutes is a lot of time.
And you know, I always talk about it like, we want you to get in, get out, and

(23:00):
get on with the rest of your day, because you shouldn't live in our systems.
Our systems are used to facilitate work, and we really felt that problem
was important, because out of all the payrolls that we run, we can see
that at least 70% of payrolls have at least one anomaly that will show up.
So it is a critical step in the process that people need to look at.

(23:22):
It's high impact.
But it also had high feasibility in terms of, can we apply agentic AI to it?
So what we've created is the ability to detect, make the user aware, and
then actually resolve the issue for them.
Now, this is where kind of the trust factor came in.
So when we first started, we were like, yeah, we just wanna, you know,

(23:45):
everything's all solved.
Like, the world is beautiful.
That 90 minutes, it's maybe a five-minute process where you just kind of
check it.
That's where clients were like, "No, no, no, no, no.
Show me the math.
How did you get here?
Show me why you did it." And really, we went back to the tinkering board
to go and actually look at, okay, how do we make it more explainable?

(24:09):
How do we make it transparent?
How do we actually show them our homework?
Right?
And also, how do we let them make the final choice?
What we understand about our users is, because this is one of the most
audited processes in an organization, we wanted to make sure that they
felt comfortable

(24:32):
and they could check it off.
So while we have automatic detection, awareness, and resolution
capabilities, they are the final human in the loop to actually go, yes,
I accept this work.
Yes, this is the right thing to do.
And then it moves on.
But the other part that we then wove into our process, from an audit
tracking

(24:53):
perspective, while we have audit logs for every single process in our
ecosystem, we actually brought in an agent control center.
So this tells them all of the things that the agent is doing versus what
the human did.
So that way, if they ever were audited, or, you know, God forbid they were

(25:15):
sued or something like that, they have that information at their
fingertips.
They can produce it and they are good to go.
So, our clients, and now let's talk about impact, because that is
something that we track, right?
You know, what we can actually see, and this is where Pendo helped us,
is to be able to tell, when they see the anomaly, how

(25:38):
they click on it, and then actually how many go and do the action to
say, yes, I'm okay with how you solved this.
And we can see, like, they prioritize some of the anomalies that they
look at, and it basically varies by user.
And then, last but not least, generally, in that process that took 90
minutes

(25:59):
if they're going with the PDF, it now is reducing up to an hour's worth
of time from that process.
So, like, it's significantly, significantly improved kind of their
happiness with that part of the process.
And you know, clients are just, you know.

(26:20):
I think we have a quote on our website, like, the client was just like,
this means so much to me, because it's easy, it's smart, it's doing the
things that help me with my job. And hopefully, we'd say, maybe that
makes the situation a little bit more human for them in that process.

Trisha Price (26:41):
Registrations for Pendomonium 2026 are now open.
We are bringing together the most inspiring minds in product and
leadership, who will challenge your thinking on everything from
product-led growth, to the future of product, to gaining value from your
AI investments.
It is likely you'll even run into some of our guests from Hard Calls.

(27:04):
The product festival is designed to spark curiosity, create
conversation, and build community, while spotlighting the newest tech
for software experience leaders.
I would like to invite you to join me in Raleigh, North Carolina from
March 24th to 26th, with an exclusive 30% discount when you use the code
HardCalls30. That's Hard Calls, all lowercase, and the numbers three zero.

(27:29):
Get your discounted ticket at pendo.io/pendomonium.
See you there.
I mean, that is real value.
And you know, we hear so much around AI, and everybody's building AI
features, but in a lot of cases, for a lot of people, AI has yet to give
an ROI, and this is a real example

(27:53):
of your customers getting actual time back from your AI investment.
And that's incredibly impressive.
All of us have had to pivot and learn new skills in this era of AI,
whether it's the engineers or product managers, designers, in

(28:15):
terms of how to build trustworthy interfaces and interactions with
agentic interfaces for our customers.
How did you do that as a leader with your team?
Did you have to go out and hire people that had experience?
I mean, it's kind of new for everyone, so how do you find that
experience, or how did you upskill your team so that they were able to
have the

(28:37):
kind of success you've had so far?

Naomi Lariviere (28:39):
Yeah.
I'd probably say a little bit of guerrilla tactics.
So I think you just said something that's really important, that
everybody should understand.
This is new technology; we're all learning together.
Right?
You know, we weren't sitting as PhD students at Stanford, like, learning
this as part of our coursework.

(29:00):
So we're all learning it together, including our users.
They're learning it together.
For us, we have a really great leader, Maria Black, she's our president
and CEO. She basically said, "Listen, I think this could really be the
wave of the future, especially in our industry, and we can really think
about how we design the

(29:23):
work for people and reimagine work." And she was just like, "I need
everybody to jump on board."
So, especially within the product and technology organization, we have
had coursework that we've all gone through.
We do a lot of webinars where they're more like a lunch and learn.

(29:45):
So here's a team that was doing early experimentation, what they've
learned, what they understand.
And then, when I say guerrilla tactics, we've also leveraged content
from the big LLM companies; they all have free learning available.

(30:07):
Coursera has a ton of learning as well.
The universities are making education available for free if you want to
do some of that.
Like, Duke University is a good one, in your home state.
And they all have coursework.
But I think what I love about where we've been on this journey over the
last two and a half years is

(30:28):
the collaboration that the organization has.
I don't think you can just go, I wanna be innovative, one day.
It really takes a lot of bold thinking, and you have to drive kind of
that disruption.
You can't just be satisfied with, well, this is how we've always done

(30:49):
it, and this is the way our clients always wanted it. In order to make
work reimagined, you know, you have to think differently.
And this technology gives you that opportunity.
So I talked about how we have outcome-based teams, but we also have
fleets of teams that are working on AI.
And they work on problems across what I would've said

(31:11):
might be a traditional silo.
They are working across it to go like,"Oh, the payroll team did this, well,
maybe we can use that same conceptover in benefits or retirement or
recruiting." And so they're learningoff of each other and I think most
of the ways that maybe you or I kindof grew up in the corporate world is.

(31:33):
you learn on the job.
And so it's been a great opportunity to see those teams really push the boundaries. We're like, "Be bold, you gotta be bold." And from having sandboxes, where they have the freedom to experiment with all the different tools to try and determine which one's the best one for the problem that they're

(31:53):
trying to solve, to just kind of saying,
"We can think differently.
We can do things differently.
It doesn't have to be the same."
And they have that permission and autonomy to do that.
And I think, you know, given our predictable approach around how we bring AI to market, that allows us to really celebrate the

(32:16):
learnings that we're getting as we're going along, and just not push features that maybe clients don't want.
So it's been a great time.

Trisha Price (32:24):
I love that.
I mean, I don't think it's common, easy, or typical for companies of your size and scale to be able to pivot the way you have to this experimentation, continuous-learning mindset around AI.

(32:47):
And that's clearly showing up in your ability to deliver it to your clients and in the experiences.
I think it's sometimes easier in small companies that are just getting started to have this experimentation and learning mindset, but it's often harder for companies who have probably gotten into a pretty predictable delivery methodology,

(33:12):
SDLC. You and I have been doing this for a long time, and most of the time we can probably understand, with reasonable confidence, when a new product or feature is gonna come to market and what the risks are.
But this is a whole different ball game, and you're working through it in a really interesting way that seems to be working.

Naomi Lariviere (33:32):
Yeah, and I mean, it starts from the very first use case that we actually brought to production. It went from the idea, and the data part of why it was a good idea, to actually executing on it, from day one to in production with clients, in 13 weeks.

(33:54):
So I don't think anything at ADP has gone that fast.
Yeah.
But I would say that the process we've now applied, when I talk about our shift-left on compliance and our responsible AI program, has really enabled us to move faster than we probably would have traditionally

(34:18):
on a regular capability or feature.
And I'd say it's refreshing to see the pace of what we've been able to deliver. It was like that from that one use case.
Within six months we had 10, and then 20, and we just keep going.

(34:42):
It's become kind of this, not necessarily a conveyor belt, but it's faster and more predictable.
We're learning; as the technology is changing, we're having to go, "Oh wait, that idea wasn't necessarily so great to do that way.
Now we have a new toolkit in our bag, let's go and use that."
And so we're trying to be very nimble and not locked into any kind of form

(35:08):
or fashion in terms of how we do this.

Trisha Price (35:10):
It's interesting.
We had the same experience at Pendo, Naomi.
We were building agents, and we built one agent into our Listen product, the one that looks at support tickets and call transcripts, at portals, and at any kind of survey data, and helps product managers know what

(35:35):
our customers' users are asking for.
And we built an agent on top of that, so you could ask questions like, "What are the top 10 enhancements that my enterprise customers are looking for?"
And then we went and built an agent for our guides, right?
So you can say, "Hey, I wanna put a new onboarding guide into my product that

(35:55):
does X, Y, Z," and you can ask it, and it starts to create the guide for you.
And then we built one for our analytics.
So you can ask questions like, "How is this particular feature performing?
Tell me what's working well and who's using it."
And we learned from each. Different teams built each one of those because we were trying to go fast and we were trying to experiment and learn.
And then we realized what we really needed to build was an MCP server.

(36:19):
And we wanted to do that because you might be building your own agent for your product management team, and you might wanna be able to ask it questions about any of those things, about your Pendo data, without coming to your Pendo agent.
And so it's like, "Okay, well now I gotta scrap all of these and move to this new architecture."
And I think that's okay, right?

(36:39):
It's not wasted time, because we actually learned lessons from each one of those experiences.
And I think that's just part of the world we're living in, because the technology's moving so fast.

Naomi Lariviere (36:50):
Yeah.
And I talked about collaboration, and what I think is good, and it sounds like at Pendo you guys are practicing this, is that you need to give teams psychological safety in terms of how they're showing up and how they're actually advocating for what we're doing.

(37:13):
I think it's really important that we give them the space, and the power, to speak the truth.
I don't want people to just feed me a line and go, "Oh yeah, we're gonna make that date," when really it's failing miserably, right?
I really want them to tell me what risks we might have early, what challenges we might run into.

(37:34):
Because this technology is new, it's unpredictable in certain elements.
And you really have to, you know, maybe slow down to go fast.
So we put a lot of accountability on our teams in terms of what they're delivering, how they're delivering it, and how they speak up, because I think a lot of organizations can fall into the trap of, "Well, my exec wants it

(37:55):
by this date," and then they don't speak up, even though they know it's not doing the thing that you wanted it to.
Yeah.
I talked a little bit about experimentation.
We spend a lot of time experimenting in lower environments that are not in production, really looking at all 360 degrees of the issue, because we really

(38:17):
wanna make sure that we're doing the right thing for our clients and for their data.
And so experimentation happens in lower environments, never in our production environment.
And with that experimentation principle, you can fail fast in those lower environments, but then you're succeeding very deliberately in our production environment.

(38:39):
So we're really trying to balance creativity, keep that alive, but also the quality of what we bring to our clients.
That's uncompromised.

Trisha Price (38:51):
I'm continually impressed with your leadership style, with your ability to drive outcomes, with your ability to innovate in a complex environment.
But I also respect your leadership from a different angle, which is something I know both you and I have been passionate about for a long,

(39:11):
long, long portion of our careers, which is our belief that diverse teams are better teams. And we both have put significant energy into lifting other women up, especially in technology, where that's been a challenge for many parts of both of our careers.
Can you talk a little bit about that, and why that's a passion of

(39:33):
yours, and why you think that's led to your team's success, too?

Naomi Lariviere (39:37):
Yeah.
Well, when you think about it, at the end of the day, you and I are in the business of building products that people buy, right?
And not everybody looks like me.
You know, I really do believe that diversity drives better products, that different perspectives are gonna catch different edge cases that might

(40:01):
happen, that I might not think about.
And then, as it relates to the teams that I cultivate: when I got into this business, not at ADP but into the tech business, I would look around, and I was oftentimes the only woman who was the business analyst

(40:25):
or the only woman on the team.
And it's kind of like, well, sometimes you get afraid to speak up, and you don't do that.
And I think over the course of my career, and maybe with the ambitions that I've had for myself, I really want to bring other women or underrepresented groups

(40:46):
along that journey with me, because I do believe that everybody has a seat at the table, that we all have a voice, and that we all can contribute to the growth and development of our own organizations.
And I've benefited from both male and female mentors who have helped me grow my career.

(41:07):
And I really do believe that it's our responsibility to give that back to those who are coming up behind us.
Because having an example of a strong female leader who's getting things done inspires other women to do that.
In our profession, 35% of women in tech are leaving the career by the midpoint.

(41:30):
And we can save those women if we give them examples within the profession that they can aspire to, because someone just has to tell them it is possible.

Trisha Price (41:41):
I love that.
And it is possible.
And while I think you and I have both seen change in a positive direction, and we see more diversity and more women in leadership positions, there's still a long way to go.
And I'm with you.
I'm passionate about this not just because I think it's the right thing to do, which I do, but because I actually believe, and

(42:05):
have seen, that it produces better business results when you have different perspectives at the table, willing to challenge each other, that come from different backgrounds or look different.
And you're right, our buyers and our users don't all look like each other, or like us.
So it's helpful to keep that in mind when we're driving business results, for sure.

(42:25):
Yep.
Yeah.
Well, Naomi, thank you so much for sharing your story, your passion for people, your approach to leadership, but most importantly, for sharing with our Hard Calls audience how you have successfully

(42:45):
delivered AI features that are driving value for ADP, driving value for your clients, in a complex, make-almost-no-mistakes space that you're in.
So I know that our listeners are gonna learn a lot from today's conversation and enjoy hearing from you.

(43:06):
So thank you so much for joining Hard Calls.
Thank you for having me.
Thank you for listening to Hard Calls, the product podcast, where we share best practices and all the things you need to succeed.
If you enjoyed the show today, share it with your friends and come back for more.