Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Galen Low (00:00):
Is there really anything that different about the process of developing a product that has AI native functionality, versus other products that might just happen to tap into existing AI tech?
Jyothi Nookula (00:08):
When you are building an AI native product, you're dealing with three things that are different. First, AI is fundamentally unpredictable.
Galen Low (00:18):
What is the first thing you do as a leader when you notice that a team may not all be at the same level when it comes to understanding, wielding, and even just accepting emerging technologies like AI?
Jyothi Nookula (00:30):
I separate the problem into two distinct issues. The first issue is competency. The second is disposition. If you treat a disposition problem like a competency problem, you'll just make it worse.
Galen Low (00:44):
What are your top things that a product manager interested in developing AI products needs to have on their resume or in their portfolio to stand out?
Jyothi Nookula (00:52):
One is evidence of building something with AI, not just talking about it. The second thing is...
Galen Low (01:04):
Welcome to The Digital Project Manager podcast — the show that helps delivery leaders work smarter, deliver faster, and lead better in the age of AI. I'm Galen, and every week we dive into real-world strategies, new tools, proven frameworks, and the occasional war story from the project front lines. Whether you're steering massive transformation projects, wrangling AI workflows, or just trying to
(01:24):
keep the chaos under control, you're in the right place.
Let's get into it.
Today we are talking about the future of the product manager, what it takes to develop AI products, how AI is being used to streamline the product development and release process, and what team leads can do when their product teams have an uneven distribution of skills and attitudes around emerging technologies like AI.
(01:45):
With me in the studio today is Jyothi Nookula. Jyothi has over 13 years of experience driving AI product and platform innovation at companies like Netflix, Meta, Amazon AWS, and Etsy. She also holds 12 machine learning patents and has mentored over 1500 product managers in their transition into AI roles through her education company,
(02:06):
Next Gen Product Manager.
Jyothi, thank you so much for being with me today.
Jyothi Nookula (02:10):
Hi everyone.
So excited to be here today.
Galen Low (02:13):
I am excited as well, and I've been, like, looking forward to this for weeks now. When you and I first chatted, first of all, when I was looking at your profile, I was like, wow, Jyothi is a powerhouse. There's a lot of brands and a lot of technologies in your profile that are envy-worthy. And then, I'm always a sucker for folks who are trying to help the next generation of any craft level up in
(02:37):
an increasingly technological and now AI-oriented world.
So, and then when we chatted, I was like, wow, we have so much in common. I sit on the project side, you sit more on the product side. But I'm really excited to dive into how things are changing and some of the things that you've learned along the way throughout your journey through AI and machine learning products.
(02:58):
I know that you and I are probably gonna hit some tangents along the way that will be deeply interesting and deeply valuable, so I hope we do that. But the project manager in me created this roadmap for us today. To start out, I just wanted to kind of get one big burning question outta the way, that, like, uncomfortable but pressing question that I think everyone wants to know the answer to. But then I'd like to zoom out from that and
(03:20):
talk about three things.
Firstly, I'd like to talk about identifying and closing skills gaps on your product teams when dealing with products that leverage AI and machine learning features. Then I'd like to explore some examples of ways that your teams have used AI tools in the product development process, whether that's in research, data analysis, design, engineering, user testing, or something else completely.
And lastly, I'd like to explore what the future looks like for
(03:43):
the product management role.
What a resume or portfolio needs to have to even be considered for roles at places like Meta, AWS, Netflix, Etsy, and other heavy-hitting, AI-forward brands.
Jyothi Nookula (03:55):
I love it.
I'm interested.
That's the latest hot topic.
I just love it.
Galen Low (03:58):
Awesome.
Let's start out with that big question. So my big question to, like, frame this all in is, I mean, it's around AI, because you've had a lot of experience working with product teams to develop AI and machine learning based solutions for giants like Meta, Amazon, Etsy, and Netflix. My question is: is there really anything that different about the process of developing a product that has AI native
(04:21):
functionality, versus other products that might just happen to tap into existing AI tech?
Jyothi Nookula (04:26):
That's a great question, because this really gets at the heart of what's really changing in product development right now. And the honest answer is: fundamentally, yes, but it's not in the way that people think about it. So here's what I mean. When you are building an AI native product, you're dealing with three things that are different from
(04:48):
traditional software, or even products that just integrate with AI through an API.
So the first is, AI is fundamentally unpredictable. In your traditional product development, you write deterministic code: like, if this happens, then do that. Or, if I click on this button, it goes to this next screen.
(05:09):
Every single time I click on that button, it will do the same thing. But with AI native products, you're working with probabilistic systems, so your AI feature might work differently each time it runs.
What this means is your entire QA process, your edge case handling, your reliability guarantees, all of that has
(05:32):
to be completely rethought, because you're not just testing if something works; you are testing if it works well enough, and works consistently, across a distribution of outcomes. That's the first thing that's fundamentally different: that unpredictability, that deterministic
(05:53):
versus probabilistic.
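To make that deterministic-versus-probabilistic distinction concrete, here is a minimal sketch (an illustration for readers, not the team's actual test suite) of QA that asserts a pass rate across a distribution of outcomes instead of a single exact answer. The `ai_summarize` stub merely simulates a model's run-to-run variability:

```python
import random

def deterministic_route(button: str) -> str:
    # Traditional software: same input, same output, every single time.
    return {"checkout": "payment_screen", "back": "home_screen"}[button]

def ai_summarize(ticket: str) -> str:
    # Stand-in that only simulates a probabilistic model's variability.
    return random.choices(
        ["refund request", "refund inquiry", "unrelated rambling"],
        weights=[60, 38, 2],
    )[0]

# Deterministic QA: one exact assertion is enough.
assert deterministic_route("checkout") == "payment_screen"

# Probabilistic QA: run many times and check the pass *rate* across a
# distribution of outcomes, not a single exact answer.
ACCEPTABLE = {"refund request", "refund inquiry"}
RUNS = 200
passes = sum(ai_summarize("I want my money back") in ACCEPTABLE for _ in range(RUNS))
print(f"pass rate: {passes / RUNS:.0%}")  # gate the release on this staying above a threshold
```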
The second is, you are now building products on shifting ground, because the underlying models evolve in ways that you can't, or don't, control.
For example, a new model might change behavior. It may not be a breaking change, but it changes behavior ever so slightly, or it expands capabilities or introduces
(06:16):
some new failure modes.
And these happen overnight, at the pace at which things are moving today. So unlike traditional dependencies, where you could version control your breaking changes and document updates, these model updates are shifting faster than we can even catch up with them.
(06:36):
And the third, and this is a big one, is your success metrics have to be different. You can't just measure, like, did this feature execute successfully? Because now you have to measure things like, was the output useful? It's not that it executed, but was it useful? Did it match the intent of the user, the question that the user asked?
(06:58):
And how do you even define good for your use case? So you end up needing much tighter feedback loops with your users, and you have to now bake these evaluation mechanisms right into product development, which is very different from how traditional product development is done.
So those are the three things that are very different. And
(07:21):
products that just tap into existing AI, like if I'm just adding a ChatGPT integration, can still be like traditional integrations. You still have success metrics and things that you have to look at, and evals that you have to keep a tab on. But here, at least there is a contract. But when you're building your product as an AI native product from the ground up, then that's totally different.
(07:42):
And that's where these three things impact your product development even more.
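Since "was it useful, did it match intent" is the metric described here, a tiny sketch of what baking such an evaluation into development can look like (our illustration, with a toy keyword-overlap judge standing in for what would normally be a human rubric or an LLM-as-judge call):

```python
# Each case pairs the user's intent with a model output to be scored.
EVAL_SET = [
    {"intent": "cancel my subscription", "output": "Here is how to cancel your subscription ..."},
    {"intent": "cancel my subscription", "output": "Thanks for contacting support!"},
]

def intent_match(intent: str, output: str) -> bool:
    # Toy judge: did the answer engage with the substantive words of the ask?
    # In practice this would be a human rubric or an LLM-as-judge call.
    return any(word in output.lower() for word in intent.lower().split() if len(word) > 4)

hits = sum(intent_match(c["intent"], c["output"]) for c in EVAL_SET)
print(f"intent-match rate: {hits}/{len(EVAL_SET)}")  # -> 1/2
```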
Galen Low (07:46):
So funny, 'cause coming into that question, I was like, I'm gonna ask this question and she's probably gonna say, yeah, it's kind of the same with a few minor things. But that absolutely changes the game. I love what you said about measuring intent and the measurement side of things, right? Because I come from the project world, you know, we've got our requirements document. It's binary. It's like yes or no: did it do the thing that it said in the functional spec? But then there's all this ambiguity.
(08:08):
A, because it's probabilistic and the outcome won't be the same every time. And B, because the underlying architecture is in some ways a black box, evolving on its own every day, faster than humans understand. And I'm like, that's way different, actually, than how I think most digital products have been created,
(08:29):
you know, in recent history.
But it makes sense. Here's what I'm thinking in my head. You've got over a decade of experience working with machine learning and AI. A lot of folks are just getting started; it's new to them over the past two years or so. ChatGPT kind of lifted the lid on how AI can be used, but it's been in software and in our digital products for a while now. And the thing in my head is, like, 'Because You Watched'
(08:51):
on Netflix, right, I'm like, that looks so simple, and it's probably not, right? That algorithm, the machine learning, like, in the backend, the data processing: it's not just because everything in Netflix is tagged with a certain taxonomy. It's not like related links on a website. It is actually about behavior and intent, and it's easy to get wrong.
It's easy to almost never find out how wrong it is until
(09:14):
you get that tweet, right? That's like, oh my gosh, my Netflix never gets it right, because I watched this, now it's suggesting, whatever, 18 seasons of Barney. Right.
Jyothi Nookula (09:26):
Even with Instagram Reels and TikTok, it happens all the time. I would go and watch that one video of someone skiing, and then I'm flooded with ski videos. Yeah.
Galen Low (09:34):
And I think it's something we take so for granted. I think it would be easy for folks, including myself, to be like, I don't know, that looks pretty tame, pretty easy, same trick, same skillset. But when you really unpack it that way, it's not, really. I wondered if we could maybe zoom out a bit from there, because in addition to your rather impressive background
leading digital products forbrands like Amazon, Meta,
Netflix, Etsy, the ones I'vementioned, you are also the
founder of Next Gen ProductManager, which has among other
offerings, other educationalofferings kinda like a five
week bootcamp to transition fromregular, I guess conventional,
I'll say product managementinto like AI product management.
(10:17):
And frankly, the fact that your course even exists tells me that not all the conventional product management skills are really cutting it these days. There is a next gen, more forward-thinking mindset and skillset that is needed. And while you may have gotten to, like, handpick your teams throughout your career, it stands to reason that, like, not
(10:37):
everyone on your teams has had the exact same competencies and attitudes when it comes to AI and other emerging tech.
I guess my question is, like, what is the first thing you do as a leader when you notice that a team may not all be at the same level when it comes to understanding, wielding, and even just accepting emerging technologies like AI?
Jyothi Nookula (10:57):
Yeah, and this is a very practical, like, real challenge, and something that I have dealt with a lot and constantly continue to deal with. I've been a people leader managing teams of different sizes, all the way from three to five to 10 to 12. I've been through that spectrum, and the skill gap and the comfort
(11:19):
gap with AI is something that I'm seeing is the widest that I have seen for any technology or shift in my career. So here's something that I like to do first when I'm faced with this problem: I separate the problem into two distinct issues. So the first issue is competency, where people don't know how
(11:40):
to work with AI effectively. The second is disposition, where people are skeptical or resistant or even anxious about it. Now you have to diagnose, like, which of these two are you dealing with? Or maybe you have a combination of both. But it's important to recognize which problem you're trying to address, because if
(12:00):
you treat a disposition problem like a competency problem, you'll just make it worse.
So, like, for competency gaps, my first move is to create some sort of a shared context through doing, not learning. And this is the same ethos I teach my students, even in my five-week course on AI product management, where they learn
(12:23):
through doing and not just watching.
So I don't like to, like, send people to training or ask them to watch tutorials. Instead, I immediately, like, embed AI into our actual workflows in low-stakes ways. It's about integrating Claude or GPT to write sprint retrospectives, or synthesize themes, or, like, do
(12:45):
some deep research, or craft user research summaries, something where people see the tool doing real work for them, and not necessarily looking at it as replacing them. Within, like, two weeks, I ask folks in the meeting, like, tell me about something that AI helped you to do faster, or tell me an insight that you wouldn't
(13:08):
have caught without AI. And then those shared experiences become your foundation for building competency together.
So for disposition issues, which is more to do with, like, resistance or fear, I think the best way is to lead with curiosity and not with evangelism there. So usually, like, in my one-on-ones, I ask, what's
(13:31):
your actual concern here?
And I actually shut up and just listen. Because usually what comes out of those conversations is something legitimate, where they'll say things like, I feel like this is going to devalue my craft. Or, I don't trust its outputs because I can't verify them fully. Or, like, I feel like I'm falling behind and it's overwhelming,
(13:54):
and those are real concerns. And that's when I have learned that you can't logic someone out of fear, right? What you can do is acknowledge it, normalize it, and then show them a path forward.
For someone who is worried about things like, say, the craft being devalued, I'll show them how AI can handle the scaffolding,
(14:15):
but how they now have the ability to, like, focus on this nuanced, high-judgment work that they never had the time for before. Or for someone worried about verification, I'd be like, all right, let's build the evaluation frameworks together. So now they're the expert in that AI quality assurance.
The other thing that I have seen work really well is to immediately, like, identify these bridge people. In every team, there are people who are naturally curious about AI. They are experimenting with it constantly, with innovation. They're practical, they're getting results. So I give these people, like, explicit permission to share, like, what's working well. So, like, in a low-key way, like in stand-ups,
(14:58):
do show-and-tell sessions.
So this creates, like, that peer-to-peer learning, because when people see a peer, when they see their own colleague, adopting it and getting results, they are more open to trying it out, versus a leader telling them, go do this, from a top-down perspective.
And the last one, and this could be, like, a spicy thing,
(15:21):
is I move quickly to establish that fluency with AI is
becoming a baseline expectation.
I'm empathetic about the learning curve and I'm very patient with the process, but I'm clear about the direction: this isn't optional.
Just like how we all needed to learn the agile process, or we needed to learn about analytics, or we needed to learn
(15:43):
about, say, design-first thinking. This is part of the job now.
So I have found that combining this high support with high standards, setting that expectation, actually reduces anxiety. The teams that I have seen struggle the most are the ones where the leaders are ambiguous about expectations, and they're also not providing real support.
(16:06):
That's where you get resentment, because there's a lot of helplessness. So the goal isn't to get everybody to the same level immediately; the goal is to get everybody moving in the same direction, with psychological safety and practical tools.
Galen Low (16:22):
I love that.
First of all, I love that separation between competency and disposition. I'm glad you took it there. And then, as you were describing that, I'm like, this is almost like a blend of change management and team building merged together in real time. And you know, we have this tendency to think of change management as, like, a thing you do once, you know, as
(16:43):
a big change is rolling out.
It's almost like this one-time initiative, versus this is almost like change management on the fly, every day, with a team. You know, to your point: supported, not just, go learn this on your own because you have to, without that clarity. But actually, it builds that team, like the collective competency and the collective disposition of the
(17:04):
team, because it's a little bit flatter, it's a little bit more grassroots, there's more clarity, and it's like, I don't wanna say, like, peer pressure, but, like, peer support.
It's that we're sharing knowledge, we're sharing information. And I love what you said about some of the anxieties that are legitimate around AI. You know, feeling like you're falling behind, feeling like there isn't enough time to do things, feeling like it's devaluing your craft. Honestly, like,
(17:25):
probably no better way to at least find a path forward there than to see your peers actually deal with that and put it into practice in your day-to-day, not just, like, theoretical stuff, but actually, you're doing it together.
I really love that. I'm a huge fan, by the way, of, like, learning by doing, so I'm really glad to hear that's a part of the way you teach as well.
(17:45):
And honestly, I think that clarity bit is what really resonated with me. You know, your standards are high, your expectations are high, but the support is there, and the clarity is what will bring us forward. Because the decision of, is this just a passing trend and fad that you can ignore, or is it going to be as normal as typing? We're, like, already past that. Or at least organizationally,
(18:07):
for a lot of the folks that you've worked with, that decision has been made. You cannot ignore AI, so let's get there, but we need to, like, move forward together.
Jyothi Nookula (18:15):
Yeah.
Galen Low (18:16):
You mentioned a couple of things in there about, almost, like, the little experiments, the pilots, to get people hands-on with AI to build their competency and their confidence. I imagine it's quite a wide breadth of how that can expand, and where AI can be used in the development of products that are intrinsically also AI products. I mean, could you give us some examples, maybe, of some other
(18:38):
ways that your teams have been using AI in the way that they design and develop the products?
Jyothi Nookula (18:44):
Yeah.
I can actually give you sort of concrete examples across the entire product development life cycle.
Galen Low (18:49):
Oh, love it.
Jyothi Nookula (18:50):
Starting
with discovery and research.
Right.
So, using AI to dramatically compress insight generation. So, like, what would we do normally? We do a lot of user research, like, say, 15 to 20 conversations. So instead of spending a week identifying those themes, we feed those transcripts into Claude and ask it to, like, identify patterns or contradictions or edge cases.
(19:14):
But here's the key: like, you don't just take the output alone. We would still, like, review it with the researcher, who reviews it, who challenges it, who refines it, so that human in the loop is very important.
What AI gives is, like, a very strong first draft in an hour instead of, like, a week. And the researcher would spend their
(19:35):
time on this high-judgment work: deciding which insights actually matter, or which one challenges our assumptions, or what we should dig into next. The user researcher and product manager can, like, work together to identify those patterns.
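For readers who want to picture that "strong first draft in an hour" step, here is a minimal sketch using the Anthropic Python SDK; the file name, prompt, and model alias are our assumptions, not the guest's actual setup:

```python
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set in the environment

# Hypothetical file holding the 15-20 interview transcripts described above.
transcripts = open("user_interviews.txt").read()

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # assumption: substitute whatever model is current
    max_tokens=2000,
    messages=[{
        "role": "user",
        "content": "Identify recurring themes, contradictions, and edge cases "
                   "in these user interview transcripts:\n\n" + transcripts,
    }],
)

first_draft = response.content[0].text
print(first_draft)  # a draft for the researcher to challenge and refine, not a conclusion
```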
And the same thing with, like, support tickets, right? If your product is already out there in the market, the support
(19:57):
volume is like crazy, always.
So instead of having to, like, manually review these tickets to figure out those patterns, we analyze thousands of customer support conversations to identify: what are the pain points? How big are these pain points? What's the reach of these pain points?
And AI surfaces these patterns in a way that we wouldn't have
(20:19):
gotten manually, just because of the sheer volume that comes through the support channels.
And then, as a product manager, now you see the user research, you see the support volume patterns. You are in a more empowered place to figure out what is a priority to go and fix, which features you should actually prioritize on your roadmaps.
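As a rough sketch of that ticket-mining idea (our illustration; `classify_pain_point` is a toy keyword matcher standing in for an LLM call that would tag each ticket against a fixed label set):

```python
from collections import Counter

def classify_pain_point(ticket: str) -> str:
    # Toy stand-in for the model call: a real pipeline would ask an LLM to
    # tag each ticket against a fixed label set instead of matching keywords.
    text = ticket.lower()
    if "refund" in text:
        return "refund-confusion"
    if "checkout" in text or "payment" in text:
        return "checkout-errors"
    return "other"

def rank_pain_points(tickets: list[str]) -> list[tuple[str, int]]:
    # Tag every ticket, then count: frequency is a rough proxy for reach,
    # which feeds the prioritization call the PM still makes by hand.
    return Counter(classify_pain_point(t) for t in tickets).most_common()

print(rank_pain_points([
    "My refund never arrived",
    "Checkout keeps failing on the payment step",
    "Where is my refund?",
]))  # -> [('refund-confusion', 2), ('checkout-errors', 1)]
```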
(20:39):
Even, like, say, all the way into, like, documentation, for example, or communication: AI is actually eliminating that grunt work. We have seen folks using AI to write PRDs, summarize sprint reviews, and even create these stakeholder updates. And these are all things where AI can create
(21:00):
strong first drafts.
One of my PMs said, like, I used to spend, like, 30% of my time writing documentation, and now I spend 30% of my time editing and refining that documentation, which is way more valuable, because now you have a good starting point to build on top of. And we're also using AI to make documentation more accessible.
(21:22):
So someone can ask, like, what did we decide about this, like, say, the payment flow redesign, and get an answer synthesized from, like, three different Slack threads, two meetings, and some PRD, right? So AI is also helping make that documentation more accessible.
And so, across the product development lifecycle, right?
(21:43):
From, like, experimentation, to, like, documentation, to even, like, how we interact with engineering. Where initially, like, as product managers, we used to own and write PRDs. And PRDs are not going anywhere. PRDs still continue to exist, but there's this new component of a prototype that gets added, because why just give a list of stories when you can actually try it out and prototype it?
(22:07):
Get to a simple product-market fit, and hand over the prototype as an idea to engineering. So the new way of PRDs is: you have a PRD that captures the vision, the evals, how it should work, when it should work, what does good look like, what does bad look like.
But then you also have this prototype that talks
(22:27):
about the interaction, the interactability, and how the product needs to be imagined.
So we are seeing it across the product development lifecycle.
Galen Low (22:36):
I love that you brought it there, because I was talking with some product people just the other week, and we were debating whether PRDs, the product requirements document, is it dead? And the person I was chatting with, you know, she was saying no, because you still have to have those thoughts, and they still have to be good thoughts. And she had built a little app that helps you come up with those good thoughts and the basis for why your
(22:56):
product fits the market, and what features it should have, and what the priorities are.
And then we tie that into prototyping. Again, it still has to have that vision. Like, AI isn't going to come up with all the creative answers on its own and have that be good enough. You still need to have that strategic vision for what the product does, how it serves the needs of its users, how
(23:16):
we're taking it to market.
But I like that idea of, a) the shift in somebody's job, right? Because I know a lot of product managers, and yes, it's, like, documentation heavy, which also has become a superpower, because as you say, now they're spending most of their time editing it. But it was already part of the process to have documentation in the first place, which means you can train AI on it. Versus all those people who didn't write
(23:38):
anything down before.
Now they have to start documenting in order to actually benefit from it.
So in a way, product managers are ahead. And I just love that idea of the, sort of, like, first-draft machines. I had a question though, because, to kinda, like, double-click on it: in the project world, I know we have a lot of folks who are like, yes, it's great for a first draft. Are you finding that your teams are using just that first draft
(24:01):
bot and then moving on to, like, the other parts of the chain? Or are they feeding it back so that they can improve that machine, so that it might actually be a better and better first-draft bot, by feeding the results, their edited versions, back in?
Jyothi Nookula (24:15):
Yeah, so we call this one-shot. People assume that it's a one-shot thing with AI, where you feed it and then it gives you a report and then you can run with it, which is seldom the case. It's usually that you get a strong first draft, but then you iterate: you put your thoughts into it, and then you feed that in to, like, critique your positions, or
(24:35):
find those areas that could improve with refinement. And then it gives you its thoughts, and then you'd be like, this bit makes sense, but this other bit I don't agree with. And it's like having this partner that you can continue to refine with until you feel really good about where you're landing.
If you treat AI systems as a way to, like, give it something, get an answer, one single shot, and then just go
(24:57):
with it, that rarely works.
Where I've seen value improve is when you iteratively work through it and improve it, which is why we say it's easy to teach how to do, but it's very hard to teach taste. Taste is still something that a product manager needs to own.
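The iterate-don't-one-shot loop described here has a simple shape. A minimal sketch, assuming a hypothetical `llm` callable (prompt in, text out) wrapping whatever model you use; the loop shape, not the function, is the point:

```python
def refine(llm, brief: str, rounds: int = 3) -> str:
    draft = llm(f"Write a first draft of: {brief}")
    for _ in range(rounds):
        critique = llm(f"Critique this draft. Where is it weak or unsupported?\n\n{draft}")
        # Human judgment belongs here: keep the critique points you agree
        # with, drop the ones you don't. Taste stays with the product manager.
        draft = llm(f"Revise the draft to address this critique:\n{critique}\n\nDraft:\n{draft}")
    return draft
```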
Galen Low (25:15):
I like that: taste. Do you find a lot of folks are, you know, it's like, okay, now 30% of my job is not just editing documentation, but actually, like, talking to a robot? Versus, like you said, a lot of the product manager role is quite human, right? You're doing user interviews and research, and yeah, you're gonna have, like, swaths of data where you won't have an individual relationship with a human, and AI can definitely help there.
(25:36):
But do you find that there's, like, a loss of the humanity in product management, from the perspective of somebody, you know, as a disposition? Maybe one of the, like, anxieties or concerns there is that, well, I spend most of my time talking to a robot, teaching a robot, and it's just not the same as, like, talking to a human.
Jyothi Nookula (25:53):
I don't know if it is as much as talking to a robot, because, like I said, if it is just you and that AI system talking, then that's different. But you still have to, like, convince your stakeholders. You have to talk to your engineering team. So product managers are in this hub where they have to connect with different teams. And so, in a way, it's more like having a helpful assistant with
(26:17):
whom you can, like, brainstorm. And I've seen my product managers, my leaders, and my peers actually use that. Like, they're running and they're just talking with it to, like, brainstorm ideas. So it's less of, like, that robotic thing, and more of, like, let's use this in a way to get started, and figure out
(26:39):
that you have this one companion who you can, like, always, 24 by seven, go and talk to.
Galen Low (26:45):
To play the devil's advocate: you worked at some places where I am reasonably certain that someone has come and asked you, Jyothi, why can't we just automate this process? Why does there need to be a human in the loop? Couldn't we just grab all of those support tickets, run them through an agent, have it come up with the prioritization on new features, develop the new features, and release it without anyone being involved?
(27:06):
A, you don't have to tell me if that's happened or not, but you can. And also, like, I guess maybe my question is, how do you push back against some of the, like, tech-first approach, rather than a human-first or user-first approach, especially at that level that you've played at, right? In some of these big tech enterprises, where there is pressure to be like, yeah, but couldn't we just use
(27:30):
this tech and then figure out what to do with it later? How do you even go about pushing back against that and making the case for some of the human-in-the-loop stuff?
Jyothi Nookula (27:39):
Yeah, and this is something I encounter a lot, even when I'm consulting with a lot of these companies, as well as with my students, who say, hey, this is the situation I'm in, how do I work through this? This is like a disease right now, where everyone wants to start with AI, and for better or worse, I feel like we are forgetting about users and their problems, and we are starting
(28:01):
with technology, which is very counterintuitive, right? The hardest part is, it's really hard to push back, because the institutional incentives are all pointing the wrong way. Because you've got these executives who have read the same three articles about AI being existential.
You have investors who are asking, what's your AI strategy,
(28:23):
on every earnings call?
And engineers who are genuinely, like, excited; you see these hackathons. So the pressure to do something with AI is immense. But here's what I say: it's, like, going back to the product first principles, starting with the user and the problems
rather than the algorithm.
(28:43):
So I always say this, even in my class: it's, like, users before algorithms. When someone comes in really excited, or a VP says, hey, there's this new AI capability, or whatever it is that's multimodal, or, like, voice, or whatever the new thing is, you can't win by saying no. But instead I rephrase; I flip this to, like, oh, that's a very interesting technology.
(29:06):
What problem are we trying to solve with this?
And I make them do the work of articulating what the problem is, and not in, like, a gotcha way or anything, but more like, walk me through your user's day. Where does this fit in? And what are the users trying to do today? Or, like, what's the alternative if this doesn't exist?
(29:26):
And so usually one of three things happens. One is, they realize that it's a solution looking for a problem; the energy fizzles out naturally then, because they're not able to, like, articulate a compelling user need. So no confrontation needed. Or, they would find that it's a real problem, but the technology, like, the AI tech, isn't
(29:48):
probably the best solution. Maybe it's, like, better solved with better UX, or better onboarding, or fixing some other broken process, or not necessarily AI, but an easier, like, deterministic version of a technology. That AI is probably overkill is something they get to realize when we do this exercise. And the other way it goes, and this
(30:09):
is the best case, right, is they find a real problem where AI actually unlocks something new. And this is gold, and this is where innovation really happens.
And at Amazon, one of the things that we famously do is called the working backwards process, where we write a press release. So I make my PMs, or whichever peers I'm working with,
(30:32):
actually write a press release for this product, where they need to, like, talk about the impact of this product, maybe add a user testimonial in place, and it's really hard to fake something as amazing when you go through that exercise. So that, I have seen, is really helpful in navigating those conversations.
(30:53):
But there will also be times where it's coming from the top down, like, say, the CTO has decided, and it's really hard to go to a CTO and fight with them over it. In those cases, I have learned to reframe it, to say, all right, if we are going to do this, let's at least do it in a way that's useful. Let's go find and solve a real problem, versus
(31:14):
having to fight against this particular use case. So then I shift it to, like, this is great, but here is this other problem where it could be really useful. We have the data for it, we have a use case for it, the ROI; like, I work through all of those proposals to say, yes, let's use it, but this other problem is more urgent. So these are, like, probably some techniques that have worked
(31:36):
for me. Early in my career, I used to, like, think that my job was to, like, protect the product vision from distraction.
Now I realize my job is to actually channel energy: like, when people bring enthusiasm about new tech, even over-enthusiasm, channel them towards using it the right way, all towards these outcomes that really matter.
(31:57):
The issue is not about the enthusiasm; it's about how we channel that into a way that's helpful for our users. So, bringing it always back to those product first principles: think about the user, the problem, versus starting with technology. Like, you wouldn't start by saying, I opened a Word doc today, what should I write? Like, just start writing, or something.
Galen Low (32:19):
Where's Clippy when we need him? I think, honestly, that was like a masterclass in a nutshell for navigating product politics, because your perspective on the role, I think it's really useful. I do know a lot of product people, and as a project person I can also relate: where you feel like you're the gatekeeper, the guardian, you're like the defense
(32:40):
and the pusher-backer, right? The person who's going to say the gotchas and, you know, really, like, put people in their place, so that we can maintain whatever it was that we set out to do.
I really like that idea of channeling energy, though, because it can lead to good things. I was interested in your third point: that, okay, we're doing this, let's at least make it useful. I think it's, like, it's not a
(33:02):
perspective I hear often, but I think it's refreshing, in a way, that, like, that's how I know businesses to work, at any scale. Sometimes you aren't given a choice, and it doesn't always have to be, well, let me get that decision in writing so I can tell you 'told you so' later; it can be constructive as well. It's like, the energy can be a benefit. And I come from a human-centric design background, so, like, I'm always excited to hear
(33:24):
people have conversations where they're bringing it back to the user and user benefit. I think it's such a useful reframe, right? To be, like, just that gentle, what problem are we trying to solve, and get them thinking about it too. It's even, like, collaborative, you know? Instead of, like, this defensive, how-do-I-find-a-way-to-say-no, cloak-and-dagger politics, it's actually, okay, well, let's, like, reason through this
(33:45):
and find the best outcome.
By the way, speaking of outcomes: that press release thing, I'm stealing it. Like, it's 1000% the thing. Like, we do like to work backwards, but not to the extent where we get to the press release, but that is, like, the pinnacle of describing what you did and why, and its impact. What a useful tool to get people thinking about the outcome,
(34:08):
not just getting it released. Not just taking orders and getting the job done, but actually envisioning what outcome we're driving towards. What's it gonna do for people? What are we gonna be proud to say when we're done?
That's super cool.
I love that.
Jyothi Nookula (34:22):
Yeah, I love that press release concept. I have always used it, even after leaving AWS so many years ago. I still continue to, like, use that in my work.
It brings a perspective.
Galen Low (34:35):
I love it.
It's so useful.
I wondered if maybe we could land this out by talking about the future a little bit, because throughout this conversation, I think it's pretty clear that the product management space is shifting quite a bit. The products themselves are changing, the tools and methods are changing, and the expectations around technical understanding and business understanding and delivery strategy;
(34:56):
that's also changing.
What are your top three or four things that a product manager interested in developing AI products needs to have on their resume or in their portfolio to stand out?
Jyothi Nookula (35:07):
Yeah.
Thanks for actually asking this question, because this is a very practical question. And honestly, what I look for in a resume has changed dramatically in the last, like, two years. So here's what actually makes someone stand out right now. One is evidence of building something with AI, not just talking about it.
(35:27):
I, as a hiring manager or recruiter, we have a spot to fill. It's not, like, a space for research. So I want to see that you've actually shipped an AI native feature or product. Not, like, something to say, like, I've worked on a team that used AI, or, I contributed to a strategy. What I'm looking for is, like, what problem were you solving with this AI product?
(35:49):
What was the AI actually doing, and how did you evaluate it? What are some learnings that surprised you? And for people who have shipped AI products, I think this makes sense. But a lot of them haven't shipped AI products, and they're like, how do I communicate this? That's why I say, go build yourself some side projects. And this is something I tell even
(36:10):
in my class, and we do a lot of hands-on projects. They actually build, like, a full portfolio kit by the time they come out. And I actually tell them: we start them as projects, but you have to convert them into products. It's not like you finish it and then you close your laptop and go away; that's a project. You have to convert it into a product: share it with your
(36:31):
friends, with your community; have them use your product. Ask them to give you feedback, and then you go and improve upon that feedback. Maybe slap a Stripe integration on it and charge $1 or 50 cents, doesn't matter, but make it a revenue product.
If that's how your product is slated to work, convert it into as close to a real product as possible. That creates a lot
(36:54):
more impact on your resume than going and just talking about, like, I worked on an AI project, for example. Which is why: many people could be doing projects; you have to build products, whether that is at work or away from work.
The second thing that's important is showing that technical fluency.
(37:14):
Now, what I mean by that is, you don't need to be an ML engineer. I don't need you to code, but I need to see that evidence that you can have credible conversations with engineers about how these systems work. So on your resume, this might show up as, like, the evaluation frameworks that you have used, or how you have gone about doing your testing.
(37:34):
What are the trade-offs that you have navigated?
Be it latency versus quality, or cost versus capability, or what specific model types or architectures you have worked with. So the language you use matters, because if your resume says, I leverage AI to improve user experience, that tells me nothing. But instead, if you said, we have built this RAG architecture
(37:56):
to reduce hallucinations in customer support responses, and it has improved accuracy from X to Y, now I know what you're building. And the test I usually use is: can you explain to an engineer why we should be using technique A versus technique B for this use case?
And similarly, could you explain to a business
(38:19):
stakeholder why that technical decision matters and how it impacts business outcomes? 'Cause you, as a product manager, are at this intersection, and your main role is translating technical possibilities to business outcomes and business value, and converting that back into technical frameworks. That's the
(38:40):
AI PM that performs the best.
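To unpack the RAG wording in that resume example, here is a toy sketch of the retrieve-then-generate shape (our illustration, not the guest's system; real pipelines use an embedding model and a vector store, with word overlap standing in for similarity here):

```python
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Score each support doc by word overlap with the query and keep the top k.
    # A real system would use an embedding model and a vector store instead.
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)[:k]

def grounded_prompt(query: str, docs: list[str]) -> str:
    # Grounding the generation in retrieved passages is what pushes down
    # hallucinations: the model must answer from this text, not from memory.
    context = "\n".join(retrieve(query, docs))
    return f"Answer using ONLY this context:\n{context}\n\nQuestion: {query}"

kb = [
    "Refunds are processed within 5 business days.",
    "Password resets are emailed within a few minutes.",
]
print(grounded_prompt("how long do refunds take to process", kb))
```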
Last but not the least is having that experience to navigate ambiguity and this rapid iteration. So, like I said, AI products are different. Models get updated, capabilities evolve, so you need to be comfortable with ambiguity, and that should show up in your resume.
(39:01):
Like, for example, if you tell me that you launched zero-to-one products in fast-moving environments, now that tells me: zero to one means you have to go through that ambiguity. Or the experimentation mindset: that rapid prototyping, testing, and learning, even for your side projects. When you converted them into a product, that's when all of this
(39:21):
also starts to formulate.
So these are, like, a few capabilities that I think will stand out, or help an applicant stand out.
What I'm not looking for: definitely not a buzzword soup, right? Like, I don't need anyone to say, like, I leverage AI/ML to drive synergies and optimize; that tells me nothing.
(39:42):
Or going and doing six different certifications on Coursera, or 'passionate about AI.' Like, these are not things that will stand out.
So, of all the things, if I have to tell you what the pattern is, the pattern is learning velocity. How fast can you move? How deep and technical do you go? Do you understand how these systems work together?
(40:04):
So if you're trying to break into, like, AI product management and don't have direct experience, you can create it. You can build something. Nobody's stopping you from building anything. It's okay if your work doesn't have those AI opportunities, but nobody's stopping you from building. Or you can write about what you're learning as part of what you're building.
(40:24):
There are so many challenges when you start building. And contribute: create case studies of AI products that you admire, do teardowns, and then figure out how you would improve them. Because the barrier to entry is really low now. You don't need to be a coder to go build your idea.
Right?
And also, in the industry, not many people have 10 years of AI experience or something like that, so everybody's
(40:46):
figuring things out together.
What really separates people is who is actually doing the work to figure it out, versus who is waiting for permission.
Galen Low (40:56):
That was such
a good description of,
you know, that conundrum.
I'm sure you hear it from people all the time: oh, but I'm a product manager, like, I shouldn't be expected to code. But there's this middle layer, and I think you explained it really well: like, no, maybe you don't need to code, but you do need to have the vocabulary to do this translation, to understand the sort of business and user need and the sort of technical implications.
(41:18):
And you do need to have this sort of builder's mindset, where you understand the technology enough to know where there are friction points and what could go wrong.
And it's not just, you know, I did what I was told working at Big Tech Company X, therefore I'm a good hire. Because, to your point, as a hiring manager, you're like, I don't have any idea if you can slice through ambiguity, or if you understand
idea If you can slice throughambiguity or if you understand
the user, or if you can speak the language of the cross-functional teams that you are working with. All I know is that you worked for big company X in a certain capacity, and you could have just been the person who's just kind of following somebody else's lead, and not necessarily being daring and being bold and learning at speed, and
(42:01):
having that velocity and openness to just engage with it and let it improve your craft, instead of being like, that's somebody else's problem.
Jyothi Nookula (42:09):
That's
why I say, don't wait for permission, just do it. The barrier to entry is very low now to try.
Galen Low (42:16):
Jyothi, thanks
so much for spending
the time with me today.
It's been so much fun.
Before I let you go, where can folks learn more about you?
Jyothi Nookula (42:23):
Yeah, you
can find me on LinkedIn
under Jyothi Nookula.
You can also visit nextgenproductmanager.com, where you can learn more about the courses I offer across AI product management, agentic AI, and PM Accelerator.
Galen Low (42:41):
Amazing.
That's awesome.
I'll also include those links in the show notes for folks listening, or in the description for folks watching.
Jyothi, thanks again so much.
Jyothi Nookula (42:49):
Thank you so much.
I had so much fun.
Galen Low (42:52):
Alright folks, that's
it for today's episode of The
Digital Project Manager Podcast.
If you enjoyed this conversation, make sure to subscribe wherever you're listening. And if you want even more tactical insights, case studies, and playbooks, head on over to thedigitalprojectmanager.com.
Until next time, thanks for listening.