
March 25, 2025 23 mins

Navigating AI disruption is like steering a ship—startups move like speedboats, while enterprises must turn the equivalent of a cruise liner. In this episode, host Hannah Clark speaks with Randhir Vieira, Chief Product Officer at Omada Health, about how established companies can adapt to AI-driven change despite their size and complexity.

With over a decade in operation, Omada Health faces different challenges than AI-native startups. Randhir shares strategies for integrating AI into both product development and company culture, offering insights on how enterprise leaders can stay agile and competitive in this fast-changing landscape.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Hannah Clark (00:01):
Imagine you are the captain of a boat, and as you're sailing along, you see some conditions ahead that tell you it's time to change directions. Okay, so now let's say you decided the right thing to do is make a 90-degree turn northeast. How fast can you do that? If you thought, "That's not really a fair question, Hannah," you're absolutely right, because we're missing some critical information, such as: how big is this boat?

(00:23):
This is the analogy I think of when comparing established enterprises to startups in the face of major disruptions like AI. So while the business equivalent of a cruise ship may have a lot of resources on board, it's gonna take a lot more time and nautical experience to change course than, say, the speedboat carrying four people and a case of energy drinks. My guest today is Randhir Vieira, Chief Product Officer at Omada Health.

(00:44):
Omada has been operating since 2011, and as a mature company with established products, they're playing with an entirely different set of challenges and advantages compared to the new startups that have AI embedded in their DNA. Randhir talked me through different approaches that enterprise leaders can take when implementing AI in both their product offerings and their internal culture, and how those decisions can help established companies navigate into new horizons.

(01:06):
Let's jump in. Oh, by the way, we hold conversations like this every week, so if this sounds interesting to you, why not subscribe? Okay, now let's jump in. Welcome back to The Product Manager podcast. I'm here today with Randhir Vieira. He's the CPO of Omada Health. Randhir, thank you so much for coming and joining us today.

Randhir Vieira (01:25):
My pleasure.

Hannah Clark (01:26):
So we'll start the way we always start the show. Can you tell us a little bit about your background and how you've arrived at where you are today?

Randhir Vieira (01:31):
Alright, I'll start at the end. I am the Chief Product Officer at Omada Health. I've been here about five and a half years. Prior to this, I was at Headspace, but now rewinding a little bit: I started at Yahoo, where I joined a small group that went from zero to 300 million in about four years, and was at Yahoo for about six and a half years.

(01:51):
But since I was based in Silicon Valley, I decided I wanted to at least try the startup life. And so I followed my passion, which was photography, and joined a photography-related startup called Eye-Fi. It was less than 20 employees at the time. One of the first or early internet of things companies, a combination of hardware and software. And through that and through the VC network, met somebody

(02:15):
at a B2B learning management software company called Mindflash. And I was there for a few years leading product, marketing, sales, and engineering at various points. And then, I have a huge passion for meditation, and I had an opportunity to join Headspace to start their B2B. And eventually led product for all of Headspace.

(02:38):
That was my first foray into health and wellness. And then from there to Omada.

Hannah Clark (02:43):
Ah, it's quite a journey. So today we're going to be focusing on approaches for established companies like Omada to start integrating AI into their products and their culture, which is a big topic. So we'll get started with your thoughts on horizon one and horizon two when it comes to AI adoption. Can you tell me a little bit about how you're thinking about these horizons, and why you believe that companies tend to get stuck optimizing their existing workflows, rather than

(03:05):
thinking about that horizon two, what's possible and what's next?

Randhir Vieira (03:08):
I think horizon one is what I describe as the incremental improvements or innovations that people can make with any new technology. And in this case, AI is that hot new technology that we are all excited about. And so the usual place that we all go to is: what are the things we are currently doing that we may hate doing

(03:28):
or believe that AI can do faster, better, cheaper? And those are the things that we will use AI for. So summarizing is a great example; whether it's summarizing content or summarizing meetings, those kinds of things are great places to start. And those are low-hanging fruit, and of course we should absolutely take advantage of them.

(03:48):
The bigger opportunities, though, lie in the horizon two areas. So what are the things that we could just not do before that this new tool or capability can unlock? I think that's where what holds us back is the inertia. It's the inertia of our thinking, of existing processes,

(04:09):
of our business model, of lots of different things that usually hold especially large companies and successful companies back from moving to that horizon two, or revolutionary updates. And I think there are a lot of standard examples if we look at the last big tech shift that we had, which was around mobile and smartphones.

(04:30):
A lot of companies that may have had an online presence, let's say for booking or other things on the web, moved that content or that capability to the phone because it was more convenient, et cetera. But the companies that really stood out were those that used the capability in a way that just wasn't possible before. So the ride share companies like Uber and Lyft, et

(04:51):
cetera, would likely not have existed without a smartphone. So those are the kinds of things that are more challenging for especially successful large companies to reimagine, versus being almost victims of their own success with their existing successful business model and process.

Hannah Clark (05:10):
Yeah, that's a really succinct example. And speaking of examples, I'd like to explore some of the ways that you have seen that kind of transformative application of AI that goes into that horizon two territory, just so we can give a contrast of where that exists in the space right now. Where are you seeing that happening?

Randhir Vieira (05:27):
I think where companies go after some of the foundational constraints that might exist, or go after new capabilities that are unlocked, that's where you see really exciting things. I think we are still in the very, very early days of the AI transformation, especially for those horizon two kinds of scenarios. I'll give you a couple of examples that I've

(05:49):
been fascinated with. So one is an example from hiring: currently we have a lot of people applying for jobs, and we've always had some kind of algorithms helping to screen people, right? At many companies that receive lots of applicants. And some of the new wave of tools that I've noticed, some of them do full-on synchronous, not just audio but also video,

(06:12):
interviews of candidates. Now, what if, instead of that keyword-based screening, et cetera, anyone who applies for a role is instantly offered the opportunity to interview with an AI agent? Would we now have much better selection criteria, more objective versus just what people state in the application? You could actually run a structured

(06:34):
interview process completely AI-based, especially for that first-tier or second-tier interview. So now you don't have the constraint that was forcing you to select a small group of people to put in front of a human interviewer. So I'm fascinated with these kinds of companies that are just reimagining what's possible, because that constraint of having to filter down to a smaller set of people is removed.

(06:57):
Another example might be having a doctor on demand, or at least somebody that you can ask questions to all the time, and there are many companies that are trying to do this in a way that is both safe and complies with regulatory guidelines, et cetera. But again, that was not something that was possible earlier because of cost and resourcing, et cetera.

(07:20):
Fascinating things there around mental health and a lot of areas. They definitely come with some risk, and one has to be careful about that, but it certainly opens up access, affordability, et cetera, in a way that wasn't possible five years ago.

Hannah Clark (07:35):
You've observed that even within the product and engineering teams, AI adoption hasn't reached 100%, despite the benefits that we've just discussed. So why do you think that is? What barriers do you think are preventing full adoption of this kind of technology, and how can we address that from a leadership perspective?

Randhir Vieira (07:53):
I think this observation comes up in many of the product and engineering leadership groups that I'm part of. And I think all of us are surprised in some ways: how can we have anything less than a hundred percent adoption of such an exciting technology? And yet when you step back, you look at the technology adoption
And yet when you step back, youlook at the technology adoption

(08:13):
curve, and it's standard, and I think no different in this case, where you're gonna have some early adopters and then the early majority and the late majority and the laggards. And it requires us as leaders to run our change management process. Some of the things that may encourage this are experimentation, encouraging those early adopters to be

(08:34):
the scouts and show the rest of the organization, or the rest of the team, what's possible; inviting the skeptics in and making them part of the team that is evaluating some of these solutions. None of these tools are perfect, but the pace of change and improvement that we are seeing is something that we have not seen before.

(08:55):
So for every criticism or concern that we have today, those quickly get reduced significantly, if not erased, within a few months or a few quarters. So it is incumbent on all of us to keep trying. It may not be deploy-ready just yet, but at least keep playing with it and see what the current capabilities are, and be aware

(09:17):
of both the pros and the cons. And not look for solutions to jam this technology into, but really see what kind of problems or opportunities would benefit from this amazing technology. It is a tool, after all. It's not a solution that's looking for a problem. It should be a powerful tool that is deployed to the right problems or opportunities.

(09:37):
There are also some companies that I've seen that use the stick. As an example, they might say: if your team does not have a hundred percent adoption of the AI tools that we provide, we are not entertaining any request for additional headcount for your team. So that's certainly an interesting technique. I haven't seen many companies adopt that, but it's certainly

(09:59):
one example of the stick, to really push that adoption when we're not seeing the level of adoption that maybe we want.

Hannah Clark (10:08):
Yeah. I don't think most people respond very well to the stick, especially if they're already set in their ways of, "I don't wanna do this." I had a conversation with someone we're having on the show very soon. She works at a startup that's really inculcated AI curiosity into the atmosphere of the work culture, which I think is a really special way of going about it. It just seems fostering that excitement is the way to go.

(10:30):
So it's interesting to hear that folks are going the other way.

Randhir Vieira (10:33):
I agree. And I think you definitely wanna lead with the carrots and a lot more of this curiosity and excitement, and maybe the stick is the last resort to bring the laggards along.

Hannah Clark (10:42):
Yeah, exactly. If you really can't force curiosity, then you can bring the stick out. So let's walk through the risk impact matrix that you use when you're evaluating AI initiatives. When you think about helping teams move past that low risk, low impact quadrant that we were talking about, when we're talking about horizon one and getting stuck in that phase, how do you move past that as a leader?

Randhir Vieira (11:05):
I think like any typical brainstorm, we want to encourage all the ideas that people have about where they see AI really making an impact. And then usually product teams will have different frameworks that they use; they might use ICE or any number of different frameworks around scope, impact, et cetera. One that we found helpful, and I've seen a bunch of other

(11:26):
companies use this, especially in regulated industries, is the impact and risk matrix. It just allows you to plot all these different ideas that come up in brainstorms on this matrix. And you can use ROI, et cetera, for the impact piece. And then there are elements of risk that come up. So we certainly don't want to spend too much time in

(11:48):
things that are low impact, regardless of the risk. So low impact, low risk: still not impactful, so we ignore that. Low impact, high risk: definitely we wanna stay away from that. And so the easy place to start is things that are high impact and low risk. And typically those are things around summarization, internal tooling, those kinds of things.

(12:09):
And there are lots of projects companies find where this kind of work can be super impactful and is lower risk. It's a great way to get the ball moving and get people playing with this kind of technology and seeing the real business impact of it internally. But usually the horizon two kinds of things are in the high impact, but maybe higher risk, quadrant.
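For readers who want the quadrants concretely, the sorting logic Randhir describes can be sketched in a few lines of Python. The 1-10 scoring scale, the threshold, and the example ideas below are invented for illustration; this is a minimal sketch of the framework, not anything Omada actually uses.

```python
def quadrant(impact, risk, threshold=5):
    """Bucket a candidate AI idea by which quadrant of the
    impact/risk matrix it lands in (scores on a 1-10 scale)."""
    high_impact = impact >= threshold
    high_risk = risk >= threshold
    if high_impact and not high_risk:
        return "start here"   # e.g. summarization, internal tooling
    if high_impact and high_risk:
        return "horizon two"  # mitigate risk, step your way in
    if high_risk:
        return "avoid"        # low impact, high risk: stay away
    return "ignore"           # low impact either way: not worth the time

# Hypothetical brainstorm output with made-up scores
ideas = {
    "meeting summarization": (7, 2),
    "AI-led first-round interviews": (8, 7),
    "auto-tagging old tickets": (3, 1),
}

for name, (impact, risk) in ideas.items():
    print(f"{name}: {quadrant(impact, risk)}")
```

In a real evaluation the impact score would come from something like the ROI estimate Randhir mentions, and the risk score from regulatory, safety, and brand considerations.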

(12:30):
There are ways to try and mitigate the risk. You do assumption testing, you can step your way into it, but at least be open to the fact that that's really where the prize is, and have things in place to move towards it. One of the questions that could be helpful in exploring what those horizon two, high impact but higher risk, items might be is: what if we started our company today?

(12:53):
So we didn't have the legacy of the past. We start our company today with all the tools that are available, AI and non-AI tools that are available today. What would we do and how would we do it? What would the company look like? What would the offering look like? What would the pricing look like? It just gives people space to think about what could be, because you can be sure that there are some new

(13:16):
companies who are starting with exactly that mindset, without even having to do it as a thought exercise; it is their reality that they do not have the baggage of the past.

Hannah Clark (13:26):
And this is why this topic interests me so much: there are really some very clear pros and cons when you're in a startup position versus an established company, when you're dealing with this kind of technology where there's a lot of opportunity and a lot of white space. So when we're thinking about retrofitting AI into established companies that have existing products, there is that kind of possibility that's undrawn and unmapped.

(13:48):
What advantages do organizations have over AI-first startups, and what unique challenges do they face in contrast?

Randhir Vieira (13:55):
I think the advantages that any big company has are the advantages of being a large, successful company, which includes, you know, resources, right? They have the revenue, they have people, they have the resources. They usually have strong distribution channels, so there are a lot of benefits that they have. Similarly, the other side of that coin is because of

(14:15):
those, they've been successful doing what they've done in the past and how they've done it, so they can also be risk averse to changing the formula, messing with the machine. And there can also be inertia about: here's what we've done, here's how we've done it, these are the customers, here's what they like, et cetera. So the same reasons that they are successful, and so many
So the same reasons that theyare successful and so many

(14:36):
of the benefits that theyhave can also hold them back.
And of course, we explore waysto get past those barriers
or friction points at least.

Hannah Clark (14:46):
Okay, I'm gonna switch gears a little bit and talk a little bit more about internal change management. I know we talked a little bit about this with the carrot and the stick, but there's a lot more to it than just incentives and, let's say, consequences. So you mentioned the importance of change management when implementing AI when we were talking previously, but what specific strategies have you found effective in driving that cultural acceptance and enthusiasm

(15:07):
around AI capabilities? If you really lean into the carrot, you know, what are the actual leadership tactics that you can implement?

Randhir Vieira (15:13):
I think finding and nurturing the early adopters in a company and on a team is really important. There are usually always a few people who are very curious and just gravitate to new technologies. Being able to find them and channel that enthusiasm and interest into something that is impactful, and this is where the high impact, lower risk kind of grid can help to steer

(15:35):
those people and the projects. Being able to find and celebrate some of these early wins, especially if they're impactful, can help generate some internal momentum and enthusiasm, countering the usual cautious skepticism toward anything that is hyped. And we've seen this in the past before: "Oh, we've seen this hype cycle before and there isn't anything real there, so let's just wait for the cycle to pass."

(15:58):
Instead, we are able to see that, sure, there is a lot of hype, but there may be something that is useful and impactful for us, so let's see what we can get out of it. So being able to celebrate those early wins can be helpful. The third one is co-development, which I think is super, super important, because rarely are we doing things just for ourselves. So we might be building things for internal teams.

(16:20):
We might be building things for members, but making sure that we are developing these solutions, especially because they're so new, in partnership with the customer, whether that customer is internal or external. And that is what drives not just the success of the feature being shipped but, more importantly, the adoption and the impact of that feature. And the value of that co-development

(16:42):
cannot be overstated. So yes, it may make things feel like they're going a little bit slower during the development process. Usually the result and the adoption are way worth it.

Hannah Clark (16:53):
Yeah, absolutely. And it really seems like that's a really concrete way to mitigate some of the risks that you were talking about before.

Randhir Vieira (16:58):
Absolutely, yes. Having the subject matter experts, having the people from your legal team, regulatory, security, et cetera, involved early in co-developing the solution helps to mitigate a lot of those risks.

Hannah Clark (17:11):
I'd like to zero in a little bit on AI and healthcare specifically. This is a really interesting field. I think that there's a ton of opportunity, as I'm sure the industry is well aware, but also a ton of risk inherent in some of these solutions. What are some of the unique considerations around ethics, privacy, and accuracy that you've encountered that might differ from other industries?

Randhir Vieira (17:29):
I've certainly seen a similar matrix being used, with the higher risk, high impact area helping to guide how careful people should be with solutions. It is heavily regulated, and it's the same for any other regulated industry, whether it's FinTech, et cetera: making sure that you're developing solutions that are within the bounds of the regulation, even as companies

(17:52):
are maybe trying to shape regulation to accommodate this new technology, making sure that the solutions we're deploying are very much in line with the existing regulations. There's obviously, in healthcare, "do no harm," and making sure that we really stick by that, and that can mean lots of different ways to mitigate some of the risks that may come with this.

(18:13):
So make sure that's outlined up front and center. Many companies, before they start creating solutions, will create a set of AI principles: how do we wanna make sure that we are developing solutions? Examples might be to be very clear and transparent to customers and to members on when they're dealing with a virtual agent versus a human agent.

(18:35):
Maybe one example: if there's anything that an AI agent is unsure of, it will triage to a human. So there are lots of ways to mitigate that, but I think starting with those principles for your company is super important and helpful before going down specific areas. Being mindful of bias that may exist with some of the

(18:55):
foundational models, and ensuring that you're doing things to plug those gaps and make it as unbiased as possible for your population, can be helpful. And ultimately, no matter what we do, we have to test a lot. And it's not just testing before deployment, but continuously monitoring and course-correcting based on user feedback.

(19:16):
Your own quality control process that continues post-production is important: making sure that the guardrails you set are working the way that you intend, and that there's no model drift or other things happening, even post-production. So I think all of those can help, but being aware of and naming those challenges upfront, and then trying to mitigate them,

(19:40):
whether you use techniques like pre-mortems or other ones, can be helpful: name all the risks and concerns, how likely they are, and what the mitigations are, especially for those that are high impact. Even if they're lower probability, they can still be important enough to know how we're gonna mitigate them.

Hannah Clark (19:59):
Absolutely. Yeah. I've heard this phrase tossed around in non-healthcare industries or agency settings: "Oh, we're not saving lives, we're not doing heart surgery here." And in this case, you actually might be. So I'm sure you have to have some very robust processes in place to ensure that some of that risk is really tightly regulated and mitigated. Okay, if we take off our strictly CPO hat and we look

(20:20):
into our crystal ball here, just yourself, obviously you've done a lot of thinking about what's possible and where we're at right now: what do you see happening in the next three to five years? What are you excited about in terms of AI developments? And from your professional experience, what do you think existing companies should be doing now to prepare to take advantage of that horizon two?

Randhir Vieira (20:40):
I don't think I'm well equipped at all to try and predict what will happen in three to five years. The pace of change is just so quick; even for technology, three to five years seems like an eternity. However, I think what we do know is that things are changing a lot, and that new capabilities are coming on way better, way faster
(21:02):
than we've ever seen before.
Just in this last week,I've been amazed at the
quality of the voiceinterfaces that we have.
Just not the quality of thevoice, but the inflections,
the tonality, the size.
It's just, it's amazingwhat's happening and we'll
continue to see that evolution,but not just voice, but
this kind of technology.

(21:22):
So I think the way for me to think about how we can take advantage of it is to continue to keep an eye on what problems we are solving, what opportunities we are trying to solve for, and to invest in people, in training, in creating space for these prototypes and proofs of concept, to be able to see what the current technology can unlock in terms of the

(21:44):
problems or opportunities that we are interested in. I think those can be helpful. What constraints could this eliminate? So ask those questions when you see some new capability or new technology come up. Is it cost? Is it time? Is it resources? We talked about the AI interviews as just one example; asking those kinds of questions is helpful in

(22:06):
that creative thinking, in the divergent phase of: what might we do with this capability? What would we do if we didn't have this constraint? That can help unlock some of those opportunities, so that no matter what the specific capability is that may be available in three years or five years, we have people who are ready, available, and excited about bringing it in and plugging it into that solution.

Hannah Clark (22:28):
All really great points, and I think it's a really good idea to always be thinking about that criteria. And I like to hear proponents of curiosity. It sounds like you're leaning a little bit more towards the carrot, I think.

Randhir Vieira (22:39):
Oh, absolutely.
Big carrots.
Many of them.

Hannah Clark (22:42):
Yeah, we're both pro-carrot, it sounds like. Thank you so much for joining us, Randhir. Where can folks connect with you online?

Randhir Vieira (22:48):
LinkedIn is the best place, so just follow me on LinkedIn and you can keep track of what we're cooking up.

Hannah Clark (22:54):
Sounds great. Thank you so much for being here.

Randhir Vieira (22:55):
Thank you.

Hannah Clark (22:59):
Thanks for listening in. For more great insights, how-to guides, and tool reviews, subscribe to our newsletter at theproductmanager.com/subscribe. You can hear more conversations like this by subscribing to The Product Manager wherever you get your podcasts.