
June 17, 2025 29 mins

The “AI is coming for your job” headlines have been exhausting—especially if you're a software engineer. But as Cortex.io founder Anish Dhar explains, engineering isn’t dying; it's evolving. In this episode, Hannah sits down with Anish to unpack what engineering excellence actually means in 2025, why measuring developer productivity is still wildly misunderstood, and where AI coding tools fit into the real world of production-scale systems. Spoiler: you can’t vibe-code your way to a million users.

Anish draws from his time at Uber and Cortex to break down how engineering leaders can better align technical initiatives to business outcomes, adopt AI without sacrificing code quality, and avoid getting swept up in hype cycles that don’t serve the organization.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Hannah Clark (00:02):
It's become a joke how the era of AI has led to the demise of every job in tech. In fact, so many jobs have died this year, I'm surprised I haven't been invited to more funerals. Product management is dead, user research is dead, and most egregious of all, software engineering is dead. I'm probably preaching to the choir here, but anyone who really believes that software engineering is dead because

(00:24):
your friendly neighborhood LLM can write code is definitely not an engineer themselves. My guest today, Cortex.io founder Anish Dhar, would even argue that engineering is definitely not dead. It's just growing up. Formerly an engineer at Uber, Anish founded Cortex to make it easier for engineers to understand complex code bases. As an engineer himself, and someone whose users

(00:45):
are engineers, what he's seeing in this space is actually an evolution in engineering excellence and a disconnect between new and old ways of measuring it. We share next-gen thinking on measuring and evaluating excellence in engineering, and a hot take on how the vibe coding craze fits into the conversation. Let's jump in. Oh, by the way, we hold conversations like this every week, so if this sounds interesting to

(01:07):
you, why not subscribe? Okay, now let's jump in. Welcome back to The Product Manager podcast. I'm here today with Anish Dhar. He's the founder of Cortex.io. Anish, thank you for making time to talk to me today.

Anish Dhar (01:19):
Thank you so much for having me.

Hannah Clark (01:20):
Yeah, so can you tell us a little bit about your background and how you arrived at your role today?

Anish Dhar (01:24):
Yeah, absolutely. So I'm the co-founder and CEO of Cortex.io. We started the company about six years ago, but before that, I used to work as an engineer at Uber. I really started my career there, and a lot of the problems that I faced as an engineer at Uber actually inspired all of the reasons we started Cortex. I started it with two really close friends of mine. Uber has this massive internal service architecture.

(01:45):
As an engineer there, it was really difficult for me to understand different parts of the code base, especially when I joined. There were so many different services being built. It added this intense complexity, and I was talking with a really close friend of mine who was an engineer at a very small startup called Lend. They only had a hundred engineers while Uber had over a thousand. But we were both facing similar challenges around

(02:06):
organizing and understanding our service architecture. And so that just rang these alarm bells that, okay, if Uber is on one end of the spectrum in terms of scale, and this other company that's just starting its journey on microservices has the same problems, it's clear that this is a big problem in the industry. And so we ended up starting the company. We went through the Winter 2020 Y Combinator batch.

(02:27):
Then, yeah, fast forward to today. We just raised our Series C, and we work with a few hundred different enterprises who use Cortex to manage their complexity.

Hannah Clark (02:34):
Cool. That's an amazing journey, and it's always wonderful to hear when a company comes outta the ashes of an issue that you know intimately. And speaking of which, we're gonna be talking about engineering excellence and what that looks like in today's tech landscape on this episode. So of course this is an issue that you've been very close to throughout your career. So to kick us off, how do you define engineering excellence in 2025, and why is it becoming such a

(02:55):
critical focus for CTOs and VPs of engineering right now?

Anish Dhar (02:59):
Yeah, absolutely. It's a great question. So what we found at Cortex is that for a long time, the conversation was really focused on developer experience. And developer experience is a really critical part of any engineering organization, right? It's simple things like making sure that when developers join a company, it's really easy to get their internal systems

(03:19):
set up and they're connected to GitHub and the various tooling that they have, or when they're maybe deploying a service, the infrastructure is set up in the right way so that there's not a lot of troubleshooting or steps to getting there. But what we found over the last couple of years especially is that the conversation has shifted a lot more from just developer experience to what we call engineering excellence. And I'd say the big difference between the two

(03:41):
is that engineering excellence is really the focus of different teams within your organization. So you can think SRE, security, developer productivity, even developer experience, but it really aligns them to actual business outcomes. And I think that's the key difference here, where a lot of engineering excellence is thinking about how the work I'm doing actually impacts the business and how it actually moves forward

(04:02):
the goals that we have, all the way from the CEO's organization down to the specific SRE, for example, that's working on it. A good example of that could be: as an organization, we're really focused on improving our customer experience, and we want, when customers use our product, it to be reliable. And a better customer experience leads to more revenue because people are using our product more.

(04:23):
So as an engineering excellence initiative, your SRE team then might have a production readiness checklist that they're trying to implement, because before services are deployed, they want to make sure that all services are meeting the standards of the organization. And so you can see how an initiative that starts with the SRE team drives back to this real business outcome that the organization cares about. And I think it's really critical for teams to think

(04:45):
about their initiatives in this way, because it reaffirms the value and aligns the technical initiatives with something the business cares about, which is, I think, what engineering excellence is all about.

Hannah Clark (04:55):
Interesting. And this, I'm sure you've described many times, is kind of a never-ending journey, which involves many disciplines working in tandem. Tell me about this framework that you've developed. What do the key pillars look like? How do you look at this from your own organization?

Anish Dhar (05:08):
You're absolutely right. It really is a never-ending journey. I think with a lot of the companies that we work with, especially depending on the size of the company, or if they're a large enterprise, there's a lot of legacy infrastructure, versus maybe a new company that's built on newer technologies and is maybe even AI-first. There are still different initiatives that revolve around engineering excellence, and they really require, I think,

(05:29):
a really thoughtful approach across all these different teams about what the work is and how it ultimately impacts the business excellence goals we have. And so from a framework perspective, I think one of the things that we have really worked on is, yeah, how do you define engineering excellence for your organization? And I think the way that we think about it is it starts with business excellence, right? There are different goals that you have as a

(05:50):
leadership team, and they can be around things like unlocking innovation and reducing time to market. It could be maybe lowering costs and increasing efficiency. And then usually the third one we see, which I just mentioned, is how do you improve quality and customer experience? And then underneath that are really the pillars of engineering excellence. And these are the different teams and practitioners that kind of make up the initiatives that drive these eventual goals.

(06:13):
And that's things like velocity, efficiency, security, reliability. Even within those subcategories, you'll have initiatives like maybe there's a security migration or a production readiness checklist. Maybe there's an incident management process that you're trying to implement. Or just something as simple as: we want to track DORA metrics to understand, from a productivity standpoint,

(06:33):
how is our engineering team actually performing? And then the foundation of any engineering excellence initiative really comes from what we like to call the four Cs. It's essentially complete visibility, continuous improvement, consistent developer experience, and of course clear ownership, because without ownership and understanding the different parts of your code base and all the services, it's really hard to actually drive

(06:54):
these initiatives forward. And so typically we find that without that foundation, it's really hard to drive any initiatives. And typically we also see IDPs, or internal developer portals, are a really strong way to build that foundation. They can also be built through internal tools, but you just need some sort of system to be able to understand what people are building so you can drive these engineering initiatives forward.
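
To make that foundation a little more concrete, here is a minimal sketch in Python of the kind of catalog record an internal portal might keep so that ownership and visibility questions have an answer. The service names, fields, and URLs are illustrative assumptions, not Cortex's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceEntry:
    """One hypothetical catalog record: the minimum needed to answer
    'who owns this?' and 'where does it live?' for a service."""
    name: str
    owner_team: str                # clear ownership
    repo_url: str                  # complete visibility into the code
    on_call_rotation: str          # who gets paged when it breaks
    dependencies: list[str] = field(default_factory=list)

# A tiny in-memory "portal": just enough to spot services with no clear owner.
catalog = [
    ServiceEntry("payments-api", "payments-team",
                 "https://github.com/example/payments-api", "payments-oncall"),
    ServiceEntry("search-indexer", "",
                 "https://github.com/example/search-indexer", ""),
]

unowned = [s.name for s in catalog if not s.owner_team or not s.on_call_rotation]
print("Services missing clear ownership:", unowned)  # -> ['search-indexer']
```

The specifics matter less than the idea: once every service has a record like this, "who owns this?" stops being the thing that blocks the initiatives built on top of it.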

Hannah Clark (07:15):
Okay. So I wanna dig into something that you mentioned there about measuring performance, because I know that there's a little bit of tension around measuring developer productivity, and metrics like lines of code can be a little bit controversial among engineers. So how should engineering leaders think about measuring productivity in a more holistic way that takes into account all these Cs and that kind of thing?

Anish Dhar (07:34):
Yeah, absolutely. I think the interesting thing is a lot of developer productivity over the last few years has been about lines of code or DORA metrics, and there are a lot of different frameworks that get created to simplify how to think about productivity, and there's some truth to how those metrics are calculated, right? Yeah, lines of code isn't a good indication of whether someone is

(07:55):
being productive or not. But if you're delivering zero lines of code consistently, quarter to quarter, there's something clearly wrong with the output. Or even comparing team over team, it's interesting sometimes to see those data points. But I think the conversation has really shifted from, okay, I have this data, to, how do I actually get engineers to think

(08:15):
about that data or improve it? If you really break it down, it's a completely different problem, and a way more difficult one, because anyone can go in, hit your GitHub API, and get these metrics and get a snapshot of how your team is doing. But just because I show a set of metrics to an engineer and say, hey, we have to improve this metric, as an

(08:35):
engineer that doesn't really mean anything to me, right? I'm focused on building software for the business, and I'm focused on doing that, typically, in the most efficient way possible. But I think the conversation has really shifted, especially with the CTOs that we work with, to, okay, I have these metrics, now how do I translate that into something that an engineer cares about? And I actually think that's where engineering excellence

(08:56):
plays such a critical role. Because I think engineers, especially ones who work at fast-moving companies, want the business to grow. They're building products because they wanna see the impact of their work with customers. I think developer productivity has really shifted from "here's just a bunch of metrics" to "as a business, these are the things that we care about, and these metrics tell a story as part of that."

(09:19):
But for an engineer, it's: how do you translate that into something where the work I'm actually doing means something? And so I'd say that's the big shift that we've been seeing.

Hannah Clark (09:28):
So have you noticed that there are some kind of outdated evaluation methods or KPIs that people are starting to move away from? What would you say is the new school of evaluation, or do you have any specific examples you could share?

Anish Dhar (09:40):
Yeah, absolutely. So I would think about productivity as input and output metrics. Output metrics are all of the classic frameworks that you see today to track developer productivity. One of the most famous and popular ones is DORA metrics, which is a set of metrics that are supposed to give you this holistic view of how your

(10:02):
engineering team is performing. I would say that most engineering organizations today, while they want to see these output metrics and try to capture them, right, it goes back to what I was saying a little bit earlier: okay, then how do I actually influence those metrics and see them move? And so what we've been seeing, especially with our customer base, and I think why we've seen this interest in

(10:23):
Cortex grow over the past few years, is because there's this whole concept of input metrics that influence output metrics. For example, let's take something like deploy frequency, right? Deploy frequency is a great metric to look at because the rate at which your engineers are deploying software is probably a good predictor of how fast you're shipping product, which at the end of the day is how you beat your competition to market, and how you just

(10:45):
move faster as a business. So maybe as an organization, you've decided that deploy frequency is the main metric that you wanna track. Okay, if I have a dashboard of deploy frequency, I can make it my OKR, pull it up, and show the whole engineering team: hey everyone, we're deploying two times a week. We wanna get that to four. Okay, as an engineer, how am I actually supposed to think about that as it relates to my work and my part of the business, or

(11:07):
the services that I own, right? Obviously deploying faster means maybe you have to ship more, but does that lead to increased bugs? Because I'm shipping more, is reliability gonna go down? So there are so many different variables that go into this, and I think that's where input metrics become really critical, because the input metrics ultimately influence the output metrics. And so maybe for deploy frequency, maybe there's a

(11:28):
process that you put into place to actually see that go faster. So I can give you an example. One of our customers had a very similar initiative, and they were tracking deploy frequency using one of our engineering intelligence dashboards. What they found was: okay, we want our engineers to deploy faster, but the way we're gonna do that is by putting really important guardrails on how engineers deploy and giving

(11:51):
them really clear guidelines on what a good deploy looks like as it relates to our reliability guidelines. Because what was happening is engineers were trying to deploy faster, but it would lead to bugs, or things would break. So there was this hesitance to actually move as quickly as possible, because there was all this customer impact that was happening. And so going back to those input metrics, they basically

(12:12):
came up with this production readiness checklist, and it was a list of eight or nine different input metrics that as a whole give a really good understanding of: is our deploy process actually healthy or not? And so these are metrics like: is our on-call set up correctly? Is our build process actually passing? Do you have tests that are passing on your services?

(12:33):
And what putting those input metrics in place did was give engineers a really clear guideline: okay, I own these 10 services, this is the process, and these are input metrics that actually mean something to me because they represent the services and how they work. What they saw was a gradual increase in deploy frequency from two times a week to three or four, especially with

(12:53):
the critical services, and also a reduction in things like incidents. And so going back to your original question, I think a lot of enterprises are thinking about, okay, yeah, we have those output metrics, but then how do I translate that into something engineers care about? And they feed into each other in a very meaningful way. I think you have to be thinking about your developer productivity and metrics in general as a

(13:16):
complete story between the two.
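
As an illustration of how input metrics can sit underneath an output metric like deploy frequency, here is a minimal Python sketch of scoring a production readiness checklist per service. The conversation mentions on-call, build, and test checks; the field names, data, and threshold logic here are assumptions for illustration, not Cortex's implementation.

```python
# Hypothetical production-readiness checks; beyond on-call, build, and tests
# (which the conversation mentions), every detail here is illustrative.
READINESS_CHECKS = {
    "on_call_configured": lambda svc: bool(svc.get("on_call")),
    "build_passing":      lambda svc: svc.get("build_status") == "passing",
    "tests_passing":      lambda svc: svc.get("test_status") == "passing",
}

def readiness_score(svc: dict) -> float:
    """Fraction of input-metric checks this service passes (0.0 to 1.0)."""
    results = [check(svc) for check in READINESS_CHECKS.values()]
    return sum(results) / len(results)

services = [
    {"name": "payments-api", "on_call": "payments-oncall",
     "build_status": "passing", "test_status": "passing"},
    {"name": "search-indexer", "on_call": None,
     "build_status": "passing", "test_status": "failing"},
]

for svc in services:
    score = readiness_score(svc)
    # Services that meet the bar are the ones you ask to push deploy frequency;
    # the rest get a concrete input metric to fix first.
    status = "ready to push deploy frequency" if score == 1.0 else "fix checklist first"
    print(f"{svc['name']}: {score:.0%} of checks passing -> {status}")
```

The output metric (deploy frequency) stays the goal; the checklist gives each engineer a concrete input metric to act on, which is the translation step described above.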

Hannah Clark (13:18):
Yeah, that makes it a lot more holistic, which I think is a really good way of looking at it. Switching gears just slightly, but staying on the same topic of deploying more, more quickly: we can't have a conversation about engineering in 2025 without talking about vibe coding. Let's talk about these AI tools that are transforming coding workflows right now. I've heard you say before that you can't vibe code your way to a million users per day.

(13:38):
Maybe a hot take, maybe not. So what would you say is the reality versus the hype when it comes to using AI in production-scale environments?

Anish Dhar (13:46):
Yeah, it's certainly a very hot topic, and I think that every engineering organization is thinking about AI or has adopted some sort of AI coding assistant, right? There are several really popular ones out there in the market, including Cursor. Even on the engineering team at Cortex, almost all of them are using some sort of AI coding assistant to help them with their day to day.

(14:06):
But what we've found, talking to engineers on our team and also working with our customers, most of whom have been thinking about similar sorts of initiatives, is that AI coding assistants are great for when you have an initial idea and you want to quickly validate something. Or even if you're a front-end engineer and you wanna quickly mock something up, going from an idea to how something could look and feel.

(14:28):
What we've seen is that from an engineering development process, it's perfect for something like that. You want something quick and dirty, something to just show people how something can work. Or if you have an idea as an entrepreneur, you wanna quickly validate it. I think you're seeing such amazing growth with things like that. But the reality of where coding assistants and vibe coding are

(14:49):
today is that you could never trust code that is shipped from vibe coding to actually power a production system that is then being used by millions of different users. And that's just experience from what we've seen, right? At the end of the day, I think vibe coding is at best a junior engineer who has really just learned how to code,

(15:11):
versus senior and staff engineers who understand system design and who understand how infrastructure is actually deployed at scale. We are just not nearly there, and I'm not saying that it can't get there. I think the rate at which AI is developing is unbelievable. And I think it'd be stupid to say that there's not a world in which AI systems can understand actual production

(15:31):
instances. But the reality of it today is, I can tell you for a fact, there's no enterprise out there that's relying on vibe coding to power any production system that handles millions or billions of users, because of the level of technical expertise you need to set up those systems and diagnose them and make sure that they're scaling.

(15:52):
It's just not even close to that. But as a whole, I would say productivity has improved with coding assistants. It's just in different areas than I think the market likes to talk about, or it's sexy talking about vibe coding. But the reality is it's just not ready for enterprise production systems.

Hannah Clark (16:07):
I think that's very relatable to a lot of professionals who feel like folks outside of their profession are getting excited about the accessibility of their profession using AI tools, but that doesn't mean it's necessarily coming right behind you. But this is an important thing to talk about. We have leaders listening to the show, so for engineering leaders trying to balance investment in AI tools

(16:28):
versus their headcount, from your perspective, what should they be considering? How do you evaluate the actual impact that AI is having on a team's productivity, and how do you take that into account with your budget?

Anish Dhar (16:37):
It's a great question. I think there are a lot of different facets to this question. At the end of the day, even just taking it from a very macro point of view, I think engineering teams or engineering leaders that prevent their teams from looking at these tools, or prevent their teams from accessing, for example, things like Cursor or GitHub Copilot or whatever,

(16:58):
I think it's a big disservice to the overall health and quality of your engineering team over the very long run. Because I think the reality of it is, in the next 10 years, a lot of the systems, or at least the initial code that people write, will be AI-assisted, just because, kinda going back to what I was saying earlier, if you're just

(17:19):
starting a company, or you're starting an idea, or you want to quickly iterate and test something, the reality is it's just 10 times faster using something like Cursor, because you can so quickly iterate on your ideas and you don't really need to think about scale or how things work. And so even if you take a very long-term view of it, I think that's why engineering teams that are adopting this

(17:40):
AI tooling and are learning how to use it in different parts of their coding lifecycle, even in the largest enterprises where you need production-scale systems, those engineers as a collective whole are gonna be at an advantage compared to people who are saying, oh, it's just hype. So just starting there, I think it's very important that engineering leaders let their teams explore

(18:01):
these types of things, and even give their non-technical users access to these tools. Because that's probably the most interesting innovation that I've seen, especially from our customer base: product managers and TPMs and data scientists who understand technical concepts but maybe didn't have the expertise to code, who can take ideas and share them with the engineering team in a lot more powerful way

(18:22):
because they can actually spin up code and things like that. I think from that perspective, it'd be really foolish not to create budget to give your engineers access to these tools. Now, I think the million-dollar question is: how much productivity are we actually gaining from this? And honestly, every single enterprise is trying to answer this right now. We see it all the time, right?

(18:43):
A lot of times customers will buy our product in conjunction with something like GitHub Copilot. And then the first question they ask us at Cortex is: okay, we have this tool that is supposed to 3x or 4x the output of our engineering team. We actually wanna figure out, is that actually happening? And I think it just comes back to the earlier system of input and output metrics I

(19:04):
was talking about, right? It's not enough to just pull up deploy frequency and ask, okay, did GitHub Copilot have an impact on that? Because maybe GitHub Copilot is making you ship faster, but it's really bad code, and that leads to reliability issues, right? So you have to take almost a 360 view of it and look at different metrics. And I think it comes back to engineering excellence at

(19:25):
the end of the day, right? It's: as a business, forget vibe coding, forget everything else. What are we focused on as a business right now? What are the things that are blocking us from hitting our next scale of growth, or whatever it is as a business you care about? I think you have to think about: at the end of the day, is the introduction of the coding assistant or AI tool actually making a difference or an

(19:46):
impact from that perspective? And I think the reality of it is, for a little bit of time, you might not really see an impact or difference, right? Because I do think it takes some time to understand where these kinds of tools will have an impact on your business. I think maybe the mistake that I see a lot of enterprises making is buying into the hype immediately,

(20:08):
just buying 5,000 licenses of whatever cool tool they see, and then six months go by and they're asking, okay, is there actually an impact? And you might have a very negative reaction, I think, without having a very thoughtful strategy on why you're introducing these types of tools. And maybe the strategy is what I was talking about earlier: we don't wanna be left behind, we want

(20:28):
engineers to continue to feel like this is a cutting-edge place to work, and we just think that it's advantageous for us to have AI-assisted tools on our engineering team. Even if it's something as simple as that, at least you know the intentions of why you're buying it. And I think maybe that's the mistake: just understand your intentions, and then you can do gut checks along the way of, is this actually doing what we thought?

Hannah Clark (20:48):
I tend to agree. I think that right now there's so much pressure to have mastered these tools and figured out where they fit into the workflow, and that's helpful. But I tend to agree that right now, what we're running into across many departments is that the difference between quantity of output and quality of results is vast. And we're really having to be

(21:10):
intentional, not just in the engineering departments: are we applying these tools in the right context in order to facilitate and empower the best of our headcount, rather than just blindly assuming that next year's headcount is going to be lower because AI tools will be able to replace it? So I think right now it's interesting to see everybody

(21:31):
figuring this out in different ways at the same time. And it's a very disorienting time to be in tech. Speaking of AI tools becoming more prevalent in development workflows, and speaking of quality also: how do we think about finding that balance of maintaining code quality and security standards and reliability, while also wanting to make sure that we are being cutting-edge

(21:53):
workplaces and taking advantage of these tools to the best of their function? So in your organization, for example, how have you guys approached that?

Anish Dhar (21:59):
Yeah, it's a really good question. And just really briefly, I will say, I think the idea that AI is replacing engineers is so far-fetched and silly. I think what is true is maybe when you're first starting out, instead of hiring like 15 engineers, maybe you can hire a few less than that,

(22:20):
just because in the early days of iterating and trying to find product-market fit, you can just do so much more with a tool like Cursor than you previously ever could. You can try out 10 different ideas really quickly. And I think that speed of iteration is really powerful. But yeah, I think once you're actually deploying production systems, in my opinion, enterprises that are saying,

(22:42):
oh, we don't have to hire as many engineers because of AI, are just trying to get some nice media attention or something. There's just no way that's true. I know that for a fact.

Hannah Clark (22:52):
Without naming names.

Anish Dhar (22:53):
Yeah, all of them are gonna continue to hire; engineers will never not be in demand. But kinda going back to your question about how AI tooling and AI coding systems are gonna impact some of these key pillars around reliability and security: honestly, I think that's the big open question right now. And I think that's probably the biggest thing that scares these teams, teams like SRE or security

(23:15):
or operational excellence, because I think the introduction of these tools ultimately creates more surface area, because people are just shipping more code. And the more of your system that is built with AI, the more likely it is that you don't really understand how

(23:36):
everything works internally. And what that means is, when there's inevitably a reliability issue, because that is a forever constant, no matter how much you prepare, one day there will be something that happens, whether it's an influx of users that you didn't anticipate or a part of your code base that you didn't fully understand that blows up.

(23:56):
The more of your system that is AI-assisted, the more difficult a place you're in, because as an engineer, how do you actually go and traverse all the different parts of your code base if you don't fully understand it? And I think that's maybe the biggest downside I see right now, and why, actually, I think it becomes more critical than ever.

(24:16):
I was talking about that foundational layer of engineering excellence, right? Complete visibility and clear ownership are really key pillars of any engineering excellence initiative. And the more of your systems that are created through AI, honestly, the less coverage you have on those things, because who actually owns it, who actually understands it and has that visibility? I think those are the

(24:38):
things that scare a lot of security teams and reliability teams. And honestly, we see it with some of our customers, and we even see it with our own engineering team, right? When we do publish code with AI, or something is created with AI, engineers are very careful to mention that, hey, some of this was a hundred percent generated through Cursor or whatever, and we take an extra look at those systems.

(25:00):
I've seen this huge trend around AI-assisted testing. So there are a lot of companies right now that are actually creating an AI engineer that reviews your tests and stuff. Sometimes I see that and it's a little bit dicey, because at the end of the day, I think AI systems are extremely powerful and are doing more good than

(25:20):
harm in engineering teams today. But it's just scary to think about a world where 80% of my system is written through AI. It's gonna lead to reliability issues, and incidents would take longer to resolve. Security incidents might go up, because you just have less visibility, and I think that's the thing that you have to watch out for.

Hannah Clark (25:38):
I tend to agree, and I think that's the less talked about logistical problem that a lot of teams are dealing with when it comes to enhanced output, which also means an enhanced demand for oversight. We just had a conversation last week with the SVP of product management at Mastercard Gateway, and he was talking about how a lot of AI tools are really accelerating the ability

(25:59):
to complete things like forms in order to enter new markets, the inordinate amount of paperwork that you need to do in order to enter a new market and comply with all the regulations. AI is allowing a lot of that to be completed at a really rapid pace. But then there also has to be that level of oversight to ensure that there's ownership

(26:19):
over those submissions. And so there's that kind of push and pull of, you can complete it quicker, but you also need to be able to be accountable at the same speed. And that, I think, is a huge logistical bottleneck for, yeah, it sounds like a lot of engineering teams, as well as other kinds of folks who are using these tools to accelerate. So I think that's a layer that bears underscoring a little bit: sure, we can ship

(26:41):
fast, but can we also, at the same speed, say, yeah, I'm accountable to that work? This is so interesting to talk about. We've talked now to leaders in all kinds of departments and all kinds of disciplines, and it's very fascinating to see how a lot of these concerns are so parallel to each other, despite the fact that the disciplines themselves are so different. So I really appreciate all the insights you shared today.

(27:03):
That concludes our episode. Where can folks follow you online, Anish?

Anish Dhar (27:06):
Yeah. I would say that you can follow me on LinkedIn or Twitter. If you just search my name, you'll find me. And then Cortex as well. You can find us on LinkedIn. We're always posting, and we are actually hosting engineering excellence summits all across the world. And so if there's one in a city near you, definitely come through. We just like to have a community of people who think about these types of things.

(27:27):
And every company's thinking about them, so it's really cool to see such a great community come from that. And then we also started hosting our conference called IDPCON, and it's really centered around engineering excellence. We have leaders from different types of enterprises who come, and developers who are just interested in connecting with other developers around engineering excellence. Lots of great talks. It'll be in New York City in October, so hopefully we'll

(27:49):
see you there as well.

Hannah Clark (27:49):
Yeah, that sounds fantastic. Thanks for letting us know. We'll make sure to plug that in the description box, and thank you so much for making the time to join us.

Anish Dhar (27:55):
Yeah, thanks for having me.
It was great.

Hannah Clark (27:59):
Thanks for listening in. For more great insights, how-to guides, and tool reviews, subscribe to our newsletter at theproductmanager.com/subscribe. You can hear more conversations like this by subscribing to The Product Manager wherever you get your podcasts.