
August 13, 2025 · 42 mins

What if your next competitor is not a startup, but a solo builder on a side project shipping features faster than your entire team?

For Claire Vo, that's not a hypothetical. As the founder of ChatPRD, formerly the Chief Product and Technology Officer at LaunchDarkly, and host of the How I AI podcast, she has a unique vantage point on the driving forces behind a new blueprint for success.

She argues that AI accountability must be driven from the top by an "AI czar" and reveals how a culture of experimentation is the key to overcoming organizational hesitancy. Drawing from her experience as a solo founder, she warns that for incumbents, the cost of moving slowly is the biggest threat, and details how AI can finally be used to tackle legacy codebases. The conversation closes with bold predictions on the rise of the "super IC," who can achieve top-tier impact and salary without managing a team, and the death of product management.


Follow the hosts

Follow Atin

Follow Conor

Follow Vikram

Follow Yash


Follow Today's Guest(s)

Connect with Claire on LinkedIn

Follow Claire on X/Twitter

Claire’s podcast How I AI


Check out Galileo

Try Galileo

Agent Leaderboard


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
I look at the speed at which people are able to build things, and I think velocity will be a massive, massive differentiator. I think teams have got to get on this train, because they're going to be competing on the ground, feature for feature, capability for capability. And if you don't embrace this, I just cannot imagine you don't get left behind.

(00:27):
Welcome back to Chain of Thought, everyone. I am your host, Conor Bronson. Our guest today is Claire Vo. Claire is the Chief Product and Technology Officer at LaunchDarkly, the founder of ChatPRD, and also the host of the How I AI podcast. Claire, welcome to the show. It's great to have you here.
Thanks for having me, I appreciate it.
I love that you've got this

(00:49):
diverse perspective and approach across AI. Not only are you diving deeper with other folks and exploring and creating content, not only have you founded your own AI-enabled application, but you're also building AI products as a leader at a scaling company like LaunchDarkly. And I have to imagine this gives you a varied perspective across

(01:11):
the space. A unique vantage point, even. And that's exactly what I want to explore with you today, from the incredible product development velocity that AI enables, to how it changes the equation for risk, and why leaders need to be building their own vibe-coded apps just to keep up. And by the way, I'm doing a terrible job of this, so I'm feeling held accountable at this

(01:33):
conversation already. So let's start, though, with this demand that we're seeing around agentic AI, something we've talked a lot about on the show, but I think it's really important to continue to dive into. And in particular, I think your perspective is what I want to understand more deeply. With ChatPRD, you're seeing a significant demand for more agentic AI experiences over

(01:54):
copilot-like models. From your vantage point, what are the key drivers behind this growing hunger for agents within AI that perhaps didn't exist 6-12 months ago?
I think there's a couple things that probably contribute to this rise of demand for more agentic experiences. I think the foundational one is people are just much more

(02:16):
comfortable with the concepts of generative AI and AI products,
and so they're able to wrap their heads around the things
that were keeping them from adopting any AI products, let
alone agentic ones. You know, where's my data going?
How are these responses being generated?
Who can I trust? Is my data secure?
Does it have enough context? Is it going to hallucinate?

(02:38):
If you don't have that foundational understanding of how these products work, you're certainly not going to embrace a form factor of the product that's a little bit more independent, a little bit more asynchronous, and a little bit more connected to your data and your products. And so I think, one, baseline comfort and understanding with AI definitely helps here. And then I think the second

(03:01):
thing that we're seeing is: working with AI tools is still work. You still have to, like, sit in front of some sort of tool and figure out, what am I going to prompt this thing? What can it do for me? What can it do well? It's still really coming from a push from the human in order to get these outputs from

(03:22):
AI. And I think, you know, what people are really wanting when they come to an agentic experience is: I want to discover what you can do for me. I want you to be able to do a broad set of tasks for me. And once I set you off on a path, I want you to take it to its logical conclusion. And so I think there's this form

(03:43):
factor, or UX, of the agentic experience that is actually a little bit easier to adopt than other kinds of AI products that I've seen. And so I think it's a little bit of the user experience as well that makes a difference.
While the user experience may be superior, and I think will continue to become superior, there are considerations on the

(04:04):
back end, as far as creation, that you therefore have to take on. You've, I guess, shifted left the challenge: instead of having it be in the copilot experience, it's earlier on in the development process, in the guardrailing, in how you evaluate and improve those systems.
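(A note to make that concrete, since the episode itself doesn't name tooling: "shifting left" on agent quality usually means running a fixed set of prompt-and-guardrail checks before changes ship, rather than relying on a human in a copilot loop. The sketch below is purely illustrative; `call_agent` and both checks are invented stand-ins for whatever model endpoint and policies a team actually uses.)

```python
# Illustrative only: a tiny "shift left" eval harness for agent output.
# call_agent and the checks are invented stand-ins; a real team would point
# this at its actual agent endpoint and guardrail policies, and run it in CI.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class EvalCase:
    name: str
    prompt: str
    check: Callable[[str], bool]  # guardrail predicate over the output

def call_agent(prompt: str) -> str:
    """Stand-in for a real model/agent call."""
    return f"echo: {prompt}"

def run_evals(cases: List[EvalCase]) -> int:
    failures = [c.name for c in cases if not c.check(call_agent(c.prompt))]
    print(f"{len(cases) - len(failures)}/{len(cases)} eval cases passed")
    for name in failures:
        print(f"  FAILED: {name}")
    return len(failures)

if __name__ == "__main__":
    # With the echo stand-in, the grounding case fails on purpose:
    # the point is that failures surface here, not in front of users.
    run_evals([
        EvalCase("non-empty output", "Summarize: the sky is blue.",
                 lambda out: len(out.strip()) > 0),
        EvalCase("basic grounding", "What is 2+2? Answer with a digit.",
                 lambda out: "4" in out),
    ])
```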
And yet we're still in the early days of AI adoption.
Yeah. This may not be the form factor

(04:24):
that sticks long term. We don't know yet. Can you talk about the characteristics that you're seeing in leading-edge teams that are comfortable offloading more and more tasks to low-supervision AI like agents? Like, what are those characteristics those teams have?
Yeah, I think one is risk tolerance. And I don't think this has to do

(04:45):
exclusively with AI. I think there are company cultures that are just much more risk-tolerant and have much more of an embedded experimentation mindset. So if you are coming to this age of AI with a culture where you know what the appropriate level of risk to take is, you're willing to let people experiment and fail and optimize, and you want to work towards outcomes, and

(05:09):
you're willing to tolerate learning and sort of, like, that growth path through that process, you're going to be set up really well. Because foundationally, the number one thing I see with leading-edge AI adoption teams is they're just open. They're just more open. When someone says, hey, can you try this AI tool, they say, already have it. So cool.

(05:29):
Or, I can't wait to do it. As opposed to other companies, where they say it's never going to work, it's never going to work in our code base, never ever and ever. And you spend so much time up front objection handling and not enough time actually figuring out what works. So I think that risk tolerance is really important.
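(A side note on experimentation mechanics: this mindset maps naturally onto feature management itself, the kind of product LaunchDarkly sells. The episode shows no code, so the following is only a hedged sketch of the pattern of gating an AI code path behind a flag so it can be dialed up, or killed, without a deploy. The SDK key, flag key, and `ai_summarize` helper are all invented; the calls follow the general shape of LaunchDarkly's server-side Python SDK.)

```python
# Hedged sketch: gate an AI code path behind a feature flag so rollout is
# an experiment you can dial up or kill instantly, not a one-way deploy.
# SDK key, flag key, and ai_summarize are invented placeholders; the calls
# follow the general shape of LaunchDarkly's server-side Python SDK.
import ldclient
from ldclient import Context
from ldclient.config import Config

ldclient.set_config(Config("YOUR_SDK_KEY"))  # placeholder key
client = ldclient.get()

def ai_summarize(text: str) -> str:
    # Stand-in for a real model call.
    return f"[AI summary of {len(text)} chars]"

def summarize(ticket_text: str, user_key: str) -> str:
    context = Context.builder(user_key).build()
    # Default is False: if anything is misconfigured, users get the old path.
    if client.variation("ai-ticket-summaries", context, False):
        return ai_summarize(ticket_text)
    return ticket_text[:200]  # existing non-AI behavior
```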

(05:50):
And then I think there really is operational maturity, at least at large scale. You know, if we're talking about startups, startups have the advantage of being small: their code bases are relatively less complex, their customers are probably fewer and lower risk, they can do a lot more. But if we're talking about any company of scale that's adopting AI in a meaningful way, they're operationally mature in a way that their engineering practices

(06:13):
already protect for quality, already protect for velocity, are already set up to make engineers productive. When you have that foundation, it's very easy to add in agentic engineers or AI IDEs or any of these sorts of automations that then accelerate, because the same practices, the same

(06:36):
technologies, the same operations apply when you're adding these tools to your stack as when you're adding, you know, engineers to your stack, when you're adding new tools to the stack. It all applies. And so I do see this combination of risk tolerance, operational maturity, and then, honestly, top-down, or at least centralized, focus and

(06:57):
accountability on adoption. So somebody has to say: it's my job to make sure we become the leading AI-powered, you know, engineering organization. And we've seen this lately at the CEO level. You know, we've seen all these, like, CEO edicts: we're now an AI company. You know, that needs to happen somewhere in the engineering

(07:18):
organization, and at LaunchDarkly, it started a little bit with me. Bless them, they're stuck with me. So we're AI native whether they like it or not. But honestly, the shift that made the biggest difference in engineering adoption was we made one of our most senior, tenured engineering leaders and engineers the, like, sort of AI czar in the

(07:42):
engineering organization. And that centralized accountability and week-to-week execution just makes the practical adoption of these tools a lot, a lot easier.
So how did you decide who the right person was for that AI czar, and what did you do to enable them to be successful?

(08:02):
You might be surprised by the answer, which is I picked the AI skeptic.
I like that.
I picked, you know, I wouldn't say, like, the AI skeptic, but certainly somebody who wasn't as naturally inclined as maybe I am to be bullish on the AI opportunity. I picked somebody who had, one, a

(08:24):
really good, robust sense of our architecture and code base, somebody that I knew kind of knew everything about our core monolith and knew how our engineering organization worked and understood, you know, where there are dragons, as we say. And so somebody with a good foundational understanding of our code base is really important. Secondarily, somebody senior enough, with enough internal

(08:48):
tenure, to both have credibility in the organization when they say, hey, this really works or this doesn't (we trust that person), but also be able to sort of foretell and avoid some of the big, like, land mines in adopting AI. And then the, you know, the third category is I

(09:08):
really wanted to give this person, who has done so much for the company and is a wonderful leader and a wonderful engineer, a win, honestly. So part of it was motivated by them having the right attributes to be the technical leader for this. And part of this was a career development opportunity I wanted to give them, saying: you've been here for a while, you need to figure out what your

(09:29):
next level, next wave of impact is going to be. Congratulations, I am plucking you the plum prize: you get to put AI all over your resume and you get to be the leader there. And so those were kind of the three reasons why we picked this person. Zach, thank you very much for doing it. And it's been exceptional, because he's not, you know,

(09:53):
overly enthusiastic. He's not going to say everything works, but he's also opened up his mind to what does work, what doesn't, identified some technical places where we can invest to make the adoption easier, and we know who to go to for questions. Zach, how do I get access to X? Zach, how do I figure out how to get this product to work with Y?

(10:15):
We have a centralized person that makes it easier for the team to kind of understand where to go when they're trying to adopt these new technologies.
Do you think leaders broadly need to rethink their approach to innovation and risk with AI, given the opportunity that's ahead here?
Yes, 100%. I just, you know, the thing that keeps me up at night is some hot upstart company with a clean

(10:44):
code base who's just ripping through features with AI. It really, really terrifies me. I look at the speed at which, you know, people are able to build things, and I think velocity will be a massive, massive differentiator. And I think people are going to wait too long. I'm massively, like, super paranoid about this.

(11:09):
And so I think teams have got to get on this train, because they're going to be competing on the ground, feature for feature, capability for capability; they're going to be competing for talent. And if you don't embrace this, I just cannot imagine you don't get left behind, especially when these types of teams can access massive rounds

(11:32):
of funding. You take the combination of, like, AI-native, super high velocity, funded: that makes me paranoid. And so, we have an incredibly healthy company, great brand, amazing engineers. Like, imagine if we had the guts to say we're just going to operate completely differently and we're going to embrace this.

(11:53):
We're going to go as fast as possible. We're going to build some really cool stuff. I think the teams that can embrace that, and say that's possible and that is for us, are really going to remain relevant in this next stage. And I think folks that don't maybe will not.
I will ask about one point of what you said here, because while I broadly agree with you, and think any team that's not at least attempting to adopt AI is

(12:16):
likely to be left behind, you did mention a team that has a clean code base and a high velocity of AI development. I'll just ask about that, given that that's one of the key areas that I think many skeptics will push on.
Yeah, yeah. I mean, look, it is very different. You know, you mentioned at the beginning I have this diverse, you know, perspective.

(12:38):
I get two things. I get my darling, petite little ChatPRD repo. I've, you know, written every line of code in it with Cursor. I, like, know that whole thing like the back of my hand. It's not that big, and it's built on a modern stack, and it's built on a stack that AI knows how to write for. And it's just a dang delight to work in.

(12:59):
It's so, so great. And then I have LaunchDarkly, which is an amazing scaled, proven, production-grade enterprise product that has been built over the course of ten years, that, you know, maybe wasn't built optimized for the languages and frameworks that AI seems to do the best with, that is complex, that, you know, has some tech debt.

(13:23):
Those are very different, very different situations. Now, what I think people underestimate, though, in the situation where you have a legacy code base is a couple things. One, the ability for AI to accelerate cleaning up your gnarliest parts of tech debt. You don't like your front-end framework? It used to be, I promise you (I've done this two or three times), 18 to 24 months of

(13:47):
ripping out whatever JavaScript framework decided to deprecate that year and, like, moving to the new hotness. I've done that at almost every stop of my career. This makes it a lot easier. I know somebody who, at their startup, just decided: we're ripping everything out. We're replacing it with Tailwind and with shadcn, and

(14:07):
we're just going to have these, like, beautiful, simple components, and I don't care. Let's just rip and replace everything. And now they can move very fast. So one, I think you can re-platform some parts of your product a little bit faster and clean up some tech debt. Two is, you can do purpose-built things to make your repo better to work with for AI. And the bonus of doing those

(14:30):
things is you make your repo better to work with for everyone. If an agent is having trouble running your code base locally, an intern is going to have trouble running your code base locally. A senior engineer is going to have trouble running your code base locally. Like, if it takes three days to

(14:50):
get your local environment set up, that sucks for AI and that sucks for humans. And if your code base is not well documented, that sucks for AI and that sucks for humans. And so I do think one of the most effective tactics I've heard my peers use, that we also try to embrace at LaunchDarkly, is do a spike on how we can make this code base

(15:12):
better for AI to work with. And that actually pays out dividends in terms of efficiency.
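(What such a spike produces will vary by team, and the episode doesn't prescribe tooling. Purely as an illustration of the "good for agents, good for humans" point above, a common first deliverable is a one-command environment check; every file name and check below is an invented example.)

```python
#!/usr/bin/env python3
"""Illustrative scripts/doctor.py: a one-command local-environment check.

The idea from the conversation: if it takes three days of tribal knowledge
to run the repo locally, that blocks agents and humans alike. Fail fast
with actionable hints instead. Every check here is an invented example."""
import os
import shutil
import sys

CHECKS = [
    # (description, passes-check callable, hint shown on failure)
    ("git available", lambda: shutil.which("git") is not None,
     "Install git and re-run."),
    (".env file present", lambda: os.path.exists(".env"),
     "Copy .env.example to .env and fill in local values."),
    ("dependencies installed", lambda: os.path.isdir("node_modules"),
     "Run `npm install` first."),
]

def main() -> int:
    failures = 0
    for description, check, hint in CHECKS:
        ok = check()
        print(f"[{'ok' if ok else 'FAIL'}] {description}")
        if not ok:
            print(f"       hint: {hint}")
            failures += 1
    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(main())
```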
And maybe the last thing that I would say is: a lot of people, a lot of people say it'll never work. It'll never work in our disgusting, disgusting old repo. It's just impossible. And when I hear that: one, that sounds like

(15:36):
a you problem; but two, like, have you tried? Like, have you given it a real go? Because I think people maybe try one PR with Cursor, or they try one task with Devin, and it doesn't do well, and they don't learn why it doesn't do well. And they don't take the

(15:56):
accountability of, like, maybe my prompt was bad, and then they give up. As opposed to what I think we did at LaunchDarkly, which is, I said: just go experiment and report back what works, what doesn't. And for every one total dud, we got two helpful wins. And on the net, that's positive. And so I do think there are things you can do in legacy or

(16:17):
more complex code bases to make it work, both technically and operationally. You just have to give it a go.
I completely agree. I think there's a broad expectation misalignment, based off some of the marketing of AI, around "this is just going to magically solve your problems." And it actually does solve a few of them. But there's still setup work that needs to be done.

(16:38):
There's still iteration that you need to do to make sure that your infrastructure is in the right place, that you have the context provided to Devin or Cursor that it may need. And if you spend the time to do that, the dividends that you will receive are fantastic. And I think what you're saying out of all this is essentially that leaders are still

(16:59):
underestimating the opportunity with AI and underestimating the risk involved with sticking with a human-only engineering plan.
Yeah. I mean, what I think is, people are so worried about "what if AI ships a bug" that they decide not shipping anything is better. And that is just such a backwards way, from a business perspective, to look at

(17:21):
development. And we've all shipped bugs with humans, like, yeah. We ship bugs with humans, and we ship them quite slowly. And so I just think people really underestimate the opportunity cost of moving slower than they could. And I also think leaders really

(17:43):
underestimate how irrelevant they will become if they do not know how to do this in large organizations. You know, as I said to the kind of engineer that is leading our AI initiatives at LaunchDarkly, it was like: I gave you the career gift, at LaunchDarkly and beyond. Congratulations. In 2025, you led the

(18:05):
transformation of an engineering organization from one that operated in a legacy way to one that's operating in an AI-enabled way. You have all the learnings; you know what works, you know what doesn't. You have the success stories. If you as a CTO, VP of engineering, engineering manager, staff or principal engineer are not developing those stories for yourself, I

(18:26):
guarantee you, in two years when you go into interviews, you are not going to be at the top of the list if you say, "we just didn't really worry about that, that wasn't ever going to work for us," or, "I did a couple things, but I don't really know those tools super well." You are just not going to have the hard skills to do the job. And I really do think it's a hard-skills issue right now. It is a new type of engineering

(18:50):
skill you need to develop.
Can you say more about that? Like, give me some more depth into what a hard skill looks like.
Yeah, I think there's a couple things. So one, you know, using all the tools available to you. So I think coding is going to go to AI-enabled IDEs, just as it's

(19:11):
just better, it's a better way to live. You know, it's already happening. And so if you do not know how to manage context effectively, prompt, set up rules, access MCPs, all those things; if you have not set up your toolkit for how do I use this new set of engineering tools, well, and to an advanced degree,

(19:34):
you're not going to have the hard skills to be on a software team in a couple years, because you will just not know how to use the toolkit.
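(For readers unfamiliar with the acronym: MCP is the Model Context Protocol, a standard for exposing tools to agents and AI-enabled IDEs. As a hedged sketch only, here is roughly what a minimal server looks like with the official MCP Python SDK; the server name and the single tool are invented examples of giving an agent a safe, repo-specific capability.)

```python
# pip install "mcp[cli]"  (the official Model Context Protocol Python SDK)
# Server name and tool below are invented examples, not from the episode.
import subprocess

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("repo-helpers")

@mcp.tool()
def run_tests(path: str = ".") -> str:
    """Run the project's test suite and return the last summary line."""
    result = subprocess.run(["pytest", path, "-q"],
                            capture_output=True, text=True)
    output = (result.stdout or result.stderr).strip()
    return output.splitlines()[-1] if output else "no output"

if __name__ == "__main__":
    mcp.run()  # stdio transport by default, for IDE and agent clients
```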
And so I think that's one very specific example. As an engineering leader, if you do not know how to integrate and operationalize the use of coding agents or automations, either in your DevOps or in your

(19:57):
engineering operations; if you don't have a sense of how those things have or have not increased overall velocity in your team; if you have not spearheaded initiatives to, as we talked about before, make your code base better for the entire organization to work with using AI, then you're going to be sitting next to and interviewing against people who have done those

(20:19):
initiatives, who have figured out those operations. And so again, I think this is just as we look at how we evaluate the progression of engineers from, you know, SWE 1 all the way through principal engineer. If we think about what it means to be an engineering manager, director of engineering, VP, CTO, I think we need to add AI

(20:39):
fluency into that list. And I think people need to come up with a very specific list of skills that they evaluate for, both in terms of promoting people and hiring new folks.
I'm already seeing, in some of the hiring discussions I'm in, where people that would have been fantastic candidates two years ago are not as well rated because they haven't dived into AI feet

(21:04):
first.
Yeah. And I expect that to happen even more so over the next couple of years, and not just in technical roles: in marketing roles and sales roles. If you are not embracing this technological revolution, you are at risk of being left behind. And therefore, I think what you're doing at LaunchDarkly, of

(21:24):
building this culture, cultivating this culture where the risk-reward of velocity is seen as a net positive and where AI is embraced and experimented with, is so important. So, you know, you mentioned appointing an AI czar and helping them to transform the organization.

(21:45):
What other steps have you taken to really create a culture where everyone within the R&D teams is enabled to take this on?
Yeah, we do a couple things. I think the first thing is very tactical, which is you have to get finance and security out of the way. And by out of the way, I don't mean you don't go through finance and security. I mean you have a very simple framework for evaluating tools

(22:08):
and getting budget for them. And so, yeah, I established very early on: we're going to spend some money on AI. It's going to be totally net positive. I know it in my soul, and we've just got to figure it out. And then InfoSec: I need, like, a fast-turn evaluation, and I need you to be cool. Like, be cool, be risk-aware. Do not put our customers at risk.

(22:29):
Do not break any of our contracts or compliance codes. But, like, otherwise, you guys, you've got to be cool. Like, we've got to be able to try stuff. And so I think having those two teams deeply aligned to this being something we're going to do is great. Luckily we had no friction there. And in fact, the finance teams are always excited, because they see the potential efficiencies gained with these

(22:50):
kinds of tools. So that's the first part. Then I really believe in this building-in-public culture when you're trying to adopt AI. So we created the Slack channel; it's called Project Building with AI. It's got, like, 200 people in it, and every time you do something with AI, when it works, when it doesn't, related to work, not related to work, dump it in the channel so people

(23:13):
can see. Hey, I did this PR with Cursor and it totally blew me away. It built all my tests for me; I was super happy. This is great. To, you know, public chats with Devin or another agent where it, like, really ate it on this one, and everybody's, like, yelling at the agent to go to sleep, and it's very, very funny.

(23:34):
So we just put it all in public. And the benefit of putting it all in public is, one, you normalize it. You say: this is not something we hide, or we're ashamed of, or that is wrong, or that's not allowed. It's all open, in public. Two, you get this nice, like, learning across the organization. It's the best way to socialize learning across

(23:55):
the organization. So I love that public channel. It's my favorite channel. It's super fun. And then another thing that we do, kind of related to building in public, is we have kind of this, like, AI Friday Power Hour. It's basically like a Twitch stream of internal people using AI tools. So we all get on at 10 in the morning on Friday.

(24:15):
Everybody's in a good mood, because it's 10 in the morning on Friday. And, you know, two or three people try something live with AI or show an AI workflow that worked for them. And so that is also something where you can just kind of, like, look live and watch them, in our own code base, try to figure things out, explore new tools,

(24:36):
evaluate the quality, get some, like, champions out there. And so I found that to be another really effective tactic.
I like that a lot, and I have to say, I've really enjoyed the various vibe-coding livestreams that I've had the opportunity to watch. There was one at Microsoft Build a few weeks ago with Brendan Burns and the team at GitHub

(24:57):
that I really enjoyed, where it's like: oh, we're just going to build this reminder app for my family, let's just spend two hours and kind of knock it out, you know, a quick vibe-code session. And I think seeing others, you know, essentially pair programming with an AI-enabled approach is so valuable, to give

(25:23):
the context of how others are doing it and to build in public and share these learnings. But I know it can be hard. There's often hesitancy. People are nervous about not being as good at something initially, or not wanting to show things off in public. Are you finding more hesitancy with maybe senior-level engineers, who are like, oh, I've been doing this for years and I

(25:48):
know what I'm doing? Or are junior-level engineers nervous about not excelling? Where are things sitting?
You know, it's hard. My heart wants to say, like, you know, you get old and curmudgeonly. You get stuck in your ways and you get a little paranoid. And, you know, the more senior you are in your career, ultimately, like, the more it becomes your problem if

(26:09):
something goes wrong. Of course my directors of engineering are, you know, a little bit less risk-tolerant for this, because, you know, who gets paged in the middle of the night, who gets yelled at when we have a sev zero? Our directors of engineering, of course, because accountability rolls up. And so I think they're appropriately not skeptical, but just, you know, risk-adjusted

(26:32):
for some of this stuff. That being said, you know, I do think across the board there does still exist AI hesitancy, for a couple reasons. One, as I said, you're asking people to learn a new hard skill, and people just do not have time to learn anything new. Like, OK, I could spend two hours spinning up this agent and installing a new IDE and,

(26:56):
like, blah, blah, blah. Or I could just, like, knock out this PR. Which would I rather spend my time on? And so, like all L&D initiatives, you have to have the time, to carve it out. Two, AI is not one-shot, 100% totally accurate all the time. And so of course you are going to get these instances where you

(27:19):
work with AI and you get bad outcomes. And that to me is expected. It's the cost of the game. I think it nets positive, but that can be a real detriment to adoption. And then the last piece that I found really interesting is stylistic, right? There are both individual coding styles as well as

(27:41):
organization-wide best practices and styles that teams have just gotten used to. This is how we write tests. This is how we document things. This is how we do our front end. And when an AI says, "I could do the same thing, I'm just going to do it differently than you," like, people get frustrated. So I think there's a lot of reasons for folks to be skeptical.

(28:03):
And then I do think leaders have this really challenging line to walk, which is: look, we have to get more efficient. We have to get more efficient because the market is getting more competitive, and sucks to be us, but that just means we have to do more with less. And I think that has been the case for several years. Nobody is saying, like, "I have way more headcount than I used to." And everybody's saying, like, "just hire to solve your

(28:24):
problems"? No one's saying that. And so I do think there's this reality that there is this efficiency, you know, program to some of this. And when you say that out loud, people say, "you're replacing my job with AI, and I don't like that." And so I do think it's very complex. You have to have a very healthy culture in order to, like, put your arms around this, make people feel like it's part of

(28:47):
developing both their personal career as well as the value of the company, which benefits them from a financial perspective. And so I think there are ways to get over the hesitancy. We have to be really precise about what the hesitancy is, address it head-on, and then kind of, as we call it, name the stinky fish in the room, which is: people are afraid they're going to be replaced with AI.

(29:08):
People are afraid that they're just going to be asked to grind out more work and more PRs and more features and more, more, more, more with less, less, less, less. If you say those things, you can address them, you can hit them head-on, and then hopefully get over some hesitancy and get back to building.
I love it. And I've also heard you describe junior engineers with AI as perhaps a loaded gun. Can you expand on that a bit?

(29:32):
Yeah. Look, I love them. Give me, all day, like, a kind of junior, early-career engineer who's all in on AI, who knows every prompting trick in the book, who has tried every open-source coding agent before you've even heard of it, who is, as I very gently say, like, too dumb to know better in

(29:53):
terms of, like, what they can bite off and what they can't. Give it to me all day. You need those on your team. You need big early-career energy, because, you know, sometimes you get some magic out of that, and it keeps the rest of the organization on its toes. And for folks that are early in their career that have those attributes, the advice that I

(30:16):
would give to you is: you have been given an incredible opportunity. And it is also wise to know what you don't know. It's just, like, super wise to know what you don't know. And so if you can go in and say, you know, "I was bored last night, so I built an entire MCP for our app. I want to put it on our public

(30:37):
repo, but, you know, I'm not sure I handled auth right. Or, is this going to be maintainable for the labs team?" Or anything like that: just knowing what you don't know. If you come to these, you know, code reviews, or come with these proposals, without a good sense of where you need to look around the corner, where you need advice, where you can

(30:58):
learn, you're just not going to do well. I mean, I've heard plenty of my peers who have hired that sort of, like, cracked AI engineer who in an interview is like, "I can build this and that," and answers the questions well, and you're fine if they use Cursor, because it's great. And then you realize they're just shipping a bunch of code they do not understand and don't care to understand. It's just a bad, bad situation.

(31:19):
And so: love early career, love a good AI-powered YOLO, and, like, know what you don't know, and know how to grow your own skills. You have this great engineering tutor in AI, but you also have great mentors on how to work with the team, how to work in a big code base, how to solve scale problems, how to solve technical challenges.

(31:42):
And I think you should take advantage of it.
I really appreciate you bringing this broad perspective across how to enable an AI-first team and how you've approached the transformation at LaunchDarkly. But perhaps most interesting for me is how you've been solo building an AI startup on the

(32:03):
side, ChatPRD, and you're moving at a velocity that would seem impossible to someone who was trying to do this while also having their main full-time job a couple of years ago. And you've said that everything you think of, you build in a week. You don't really have a product

(32:24):
road map because, hey, you're shipping. What's that experience been like?
I think it is such an important experience. Again, it's probably the source of what makes me, as I said earlier, so paranoid. Like, what I stay up at night and think about is: what if there's a Claire out there that is just ripping in our product space?

(32:46):
That makes me paranoid, because, as somebody who has built this myself and has a career that spans over two decades (like, I've done a venture-funded startup myself, I've worked at many startups, I've worked at large organizations), it is different. I raised capital 10 years ago to build a product, and I swear on my life I could probably build

(33:07):
that product before lunchtime today if I needed to. Like, it's just totally different right now. And if you as a leader do not take a minute to really feel how different it is. Not, can I get Cursor adopted by, like, finance and my engineering organization. Can I get my PMs to write PRDs

(33:29):
in ChatGPT? Like, not that. Like, put your hands on a keyboard and feel how different it is. Put your hands on a keyboard and try to rebuild your own product. Until you feel that moment of, like, "holy moly, it's so different right now," you really are just not going to be prepared for what's coming next.

(33:50):
So I think that has been the most valuable thing about ChatPRD. I tell everybody: I love ChatPRD, it's doing exceptionally well, better than I could ever expect, and if it goes to zero, it will have been worth it, because I've learned this lesson. So this is, like, my number one piece of advice to people: learning how to build something like this, and what it really takes and what it really doesn't take, is super valuable, even if

(34:12):
you remain in larger organizations and bring those learnings to your career in a larger org.
Let's extrapolate this experience you've had and align it to the concern that you say it brings up for you of, hey, what if there is a Claire out there who's rebuilding our product right now? What are the biggest operational

(34:32):
disruptions that you believe small AI-native teams will cause for larger incumbent organizations?
I think price disruption can be one of them, right? You can offer some large percentage of feature capability for some small percentage of cost; that can be one. I think perceived innovation velocity is another one.

(34:55):
If you are just perceived as innovating at a lower pace than your competitors, whether or not your competitors are really operating at any scale in the market doesn't matter. Optics do have an impact. Then you're going to be perceived as, you know, a laggard company. I think that's something to really consider. And then talent attraction is another one, which is: for as

(35:18):
many AI skeptics as you have in an engineering organization, you have just as many people who want to build, you know, modern, best-in-class engineering skills. And if your organization does not provide those for them, then they're going to go look elsewhere.
Yeah. I think you're absolutely right that if you're not enabling folks to have the opportunity to learn, they will either be doing it on the side, and maybe not

(35:42):
bringing those learnings to work (maybe they will), or they're going to look for a new organization that's going to enable them. Because the best engineers out there right now are fully aware of what is happening, and they are seeing this opportunity and can't afford to let it pass them by, because most of them aren't retiring next year. Most of them have several years left in their career, and they want to continue to be great.

(36:04):
And many of them simply are curious and excited, if not both. So what would your advice be, as we wrap up this conversation, to the different categories of folks within their career? Let's say maybe, you know, more junior engineers who are getting started; leaders who are farther along, maybe they're director-plus level; and then the

(36:26):
folks who are at that senior IC to, like, maybe engineering manager or team lead level. How would you advise those different groups to approach this AI movement?
Yeah. So for early-in-career, I would say, you know, embrace your natural enthusiasm for the new and share your learnings. I think the best thing that

(36:49):
maybe early-in-career folks bring into an organization is an experimentation mindset, a sort of fearlessness in trying things that maybe will require some work on the back end but at least can get to a prototype version, and then really staying in touch with, like, the new hotness. What's new out there? Share it. We want to know. I think for leaders, these are the

(37:13):
folks that I really want to speak to, which is: close your eyes and imagine what an engineering organization is really going to look like in five years. What is it really going to look like if you just cast all this forward? What's the shape of it? What are engineering managers going to do? Are we going to have PMs? What tools are you going to have? How is software going to be built? How are you going to attract talent? Cast forward to that, you know,

(37:35):
five-year future, and then start preparing to get your organization there now. I think that's so important. Yes, you have to worry about the day-to-day today, but you really need to figure out how this is all going to shake out in a couple of years and get it figured out. And then those kind of, like, senior ICs: love 'em, my favorite group. So I think you all are going

(37:56):
to be the highest impact in this new era. Like, I've actually said this for a while now: I think this is the era of the super IC. Like, you're going to be able to get so much stuff done; you're going to have so much impact. You're going to be able to command a very high salary, because you have a combination of experience and, like, breadth of impact powered by tools.

(38:19):
If you just lean in, like, this is your time, and guess what? Bonus: to make more money and get promoted, you don't have to manage people. What a treat. Like, what a treat. You don't even have to have one-on-ones if you don't want. You don't have to do, like, performance reviews; you don't have to deal with people's complaints. You can just, like, build stuff.

(38:42):
And so I do think this is, like, the era of the super IC, especially senior ICs. I think leaders out there, you've got to figure out a path to pay them more money and give them better, bigger titles without forcing them to take on teams. And so I would say, like, embrace that era and figure out what you want the shape of your career to look like during that time.
You mentioned something which I want to drill down on, which is:

(39:05):
are we going to have PMs in a few years? And we're already seeing this transition into AI PMs. And obviously it's a bit of a nebulous term so far, but it certainly involves creating MVPs a lot faster. It certainly involves moving a lot faster and changing the approach. How do you see the role of a PM evolving within engineering

(39:26):
organizations, or disappearing?
I mean, I famously murdered the career of PMs at Lenny's conference last year with my "PM is Dead" talk that rattled a bunch of people. Look, I think the role is going to change. I just fundamentally think the role is going to change. I think there are going to be sort of two archetypes of product managers. I think they're going to start

(39:47):
to come from very different practices. I think you'll have, like, the prototype manager, that is much more of this, like, combo UX-engineer-PM, who, like, defines product experiences and can get you to a high-fidelity sense of what that product experience looks like and how it needs to technically operate, very quickly. I think that's one attribute. And then I think, for those that

(40:09):
maybe are not that attribute, or in addition to that attribute, you're going to have these, like, very commercially minded, GM-style PMs who think a lot more about: what market am I selling into, how am I making money, what is the positioning, all those sorts of things. And so I just think this, like, middle ground of, like, I'm the keeper of what users want,

(40:30):
and, you know, I'm a people person, damn it; sort of, like, I just talk to the engineers because the engineers can't talk to the designers, and the designers don't really want to talk to the executives, and the executives don't really talk to the humans. Like, I just think that piece, it's just not a real, robust enough job with enough impact when you take into consideration these tools.

(40:51):
And so I do think the product manager role is going to shift. I am building a product manager agent with ChatPRD that I think can take a lot of those tasks off people's plates, and then let them focus on things that I think humans are really good at: talking to other humans, figuring out what they want, selling, creative inspiration,

(41:14):
unique user experiences, like, special insights. I just think the more we can clear our minds of the kind of, like, tactical, day-to-day operational stuff, and the more we can focus on, like, depth of creativity, the better our products are going to get. So I think it'll change. I will definitely be wrong for many years, and then suddenly I'll be right. So I look forward to that.

(41:35):
I love the confidence, and I highly recommend folks go check out chatprd.ai and explore more of Claire's work. Claire, where else should our listeners go to learn more about you and to follow what you're up to?
Yeah, I'm on X at clairevo; also LinkedIn, that's my name. I'm on TikTok. We're reviving the TikTok.

(41:56):
You heard it here first. I have "chief product officer" on TikTok, so look out for that content. And then tune into the How I AI podcast, where I talk to other people about how they use AI.
Fantastic. Well, Claire, it has been a distinct pleasure having you on the show. Thank you for an entertaining and wide-ranging conversation. I'm excited to think through some of your advice and implement it myself.

(42:18):
So it's been a ton of fun, and we really appreciate having you on.
Thanks so much.
And for everyone listening, make sure you check out our YouTube to see so much more behind-the-scenes content. You can find it at Run Galileo on YouTube. There's demos, webinars, many more incredible podcasts with guests like Claire, and we'd

(42:40):
love to have you there. Thanks so much for listening and
we'll see you next week.