
April 16, 2024 44 mins

This week, our host Dan Lines sits down with Tara Hernandez, VP of Developer Productivity at MongoDB. Together, they explore the nuances of developer productivity and the impact of AI in engineering environments.

Tara emphasizes that achieving developer productivity requires a focus on outcomes, reduced noise for developers, and a healthy balance between technology, processes, and communication. She also touches on the strategic framework of the 'three horizons' for conceptualizing your investment breakdown across different projects and how to maintain focus on meaningful development work.

Episode Highlights:

01:43 How should you think about developer productivity?
09:09 Three pillars to improve developer productivity
16:07 Automated does not equal autonomous
24:46 Making the golden path the easy path for developers
27:51 What’s exciting in developer productivity and AI?
29:57 The three horizons
38:34 Developer performance vs productivity data
40:23 What is the right way to think about goal setting for noise reduction?


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Tara Hernandez (00:00):
And one of my other sayings is, automated does

(00:02):
not equal autonomous, right? And so, yeah, if you have automation that goes nuts, it probably is doing the reverse of what you would hope. I think a lot of what things boil down to actually in our world is, what outcome are you trying to achieve? And really start from there and then work backwards. Because a lot of times you think, oh, well, hey, if we had

(00:23):
a bot that was, you know, like Dependabot. My dependencies are out of date, go fix, right? Huge. Yeah. But if you multiply that out five times, absolutely, you could get in a situation where it could take three engineers all day long, every day, to keep up.

Conor Bronsdon (00:35):
How can you build a metrics program that not only measures, but improves engineering performance? What's the right metrics framework for your team? On May 2nd and 7th, LinearB is hosting their next workshop, where you will learn how to build a metrics program that reduces cycle time by 47% on average, improves developer experience, and increases delivery predictability.

(00:56):
At the end, you'll receive a free how-to guide and tools to help you get started. You can register today at the link in the description. Hope to see you there.

Dan Lines (01:05):
Hey, what's up everyone?
Welcome to Dev Interrupted.
I'm your host Dan Lines, COO and co-founder at LinearB, and I'm delighted to be joined by cloud curmudgeon, nerd manager, process wonk, and of course, VP of Developer Productivity at

(01:25):
MongoDB, Tara Hernandez. Tara, welcome to the show.

Tara Hernandez (01:30):
Hello, I'm glad to be here.

Dan Lines (01:32):
Totally awesome to have you on today. And of course, we have a crucial topic. It's developer productivity. Very, very popular today in engineering teams, engineering leaders. Everybody's talking about it. And you probably can't talk about developer productivity

(01:53):
without talking about AI. So we'll dive into that a bit as well. And, you know, I'm really, really excited to talk with you. I think you have a great approach to improving efficiency and productivity, and we're going to get into all of that. How do you think about the problem of increasing developer

(02:18):
productivity?

Tara Hernandez (02:21):
I want to start with, this is just like the latest name. It's like marketing, it's branding, right? When I first started, it was like, oh, there's the build and release engineers, and then there was the infrastructure engineers, and then what was this whole platform engineering thing? And then Google has the whole engineering productivity, and now there's developer productivity, and it's

(02:41):
all the same thing, which is, there's just going to be a category of people that help their fellow developers do their jobs, right? Whatever we call it. And, you know, DevOps and Agile methodology came out around that. And now we've got the SPACE framework and, you know, other forms of new research. And really, the challenge is, is

(03:03):
the industry evolves so fast, right? You went from, you know, local computers with floppy disks, and then there was optical media, and then there were the beginnings of hosted SaaS-based solutions, and now everything is cloud. And so it's really just this constant evolution of, how do we support the industry needing to go as

(03:25):
fast as it does without the developers losing their minds, right? Um, you know, the DORA metrics, which is a very popular thing and invaluable, I think, talk about time to deploy and time to restore, and our appetite for that to be as close to zero as possible as an industry for our customers is, you know, probably unreasonable now, just,

(03:47):
you know, how fast we expect things to be. And so I think that's the big challenge about, you know, trying to be an organization supporting the facilitation of software development within that realm, right?

Dan Lines (03:59):
Yeah, you made me think of a bunch of things there. All right, hit me. One of them that stood out to me is like driving developers a little bit crazy. Maybe... I remember that there's all these different movements. And then you said, kind of like, the marketing terms behind them. Like, okay, we're doing platform engineering, we're

(04:20):
doing DevOps, we're doing Agile methodologies. We're doing pair programming. No, we're doing Kanban. We got AI coming. We got bots. We got, it's not exactly the same, but like, SRE movements. And then there's like, hey, you're going to be a full stack engineer. We're going to remove QA. Everything's going to be automated.

(04:40):
You're going to do the QA, and you're going to do... And, you know, right now, I think probably the popular terms that I'm hearing are, like, platform engineering, developer experience, you know, AI. And I'm thinking to myself, do you think it is all the same thing? Like, from the beginning of time, it's kind of like, hey, we

(05:02):
want to ship our product code faster? Or do you think some of these marketing names, or some of this terminology we're using now, platform engineering, developer experience, AI... Do you think it matters? Like, is there meaning behind it?

Tara Hernandez (05:16):
I mean, yes, mostly, but also no. Right, and let me tell you what I mean by that. At the end of the day, as a human being, you know, we've evolved over, depending on who you ask, some 3 million years from Lucy or whatever, right? And there's a certain, like, progression of how our brains

(05:38):
have evolved and all that stuff. But you think about technology, and you think about where were we 50 years ago, 20 years ago, 10 years ago, last year? Right? Uh, from the, you know, I think the Industrial Revolution is a nice sort of inflection point that people like to talk about. Like, the amount of change around us is, like, hundreds of

(06:01):
orders of magnitude faster than the previous, you know, 2.9999 million years or whatever, right? And so, if I were to describe what is the biggest value of anything around developer productivity or the developer environment, it's to reduce the noise. Right? It's to allow the developer to be able to focus on

(06:24):
what they need to do by removing, hopefully through automation or telemetry or insights or whatever, the things that they don't have to focus on, because we've helped them prioritize the things that they do. Because we just can't, our brains, you know, that's how we burn out. Like, we drive ourselves nuts trying to keep track of everything. Well, let's make it so that we don't actually have to have our developers keep track of everything.

(06:46):
Let's make it so that the system allows them to stay focused on the things that are most important. And then as they finish a thing, they can move on to the next thing. Because we have a process that supports it, right? And I think that part is really hard. And it's, you know, my way of looking at it. Maybe not everybody's way of looking at it. But what I think is adding to the noise, and you touched on this, is, you know, back in the day when I

(07:08):
first got into it in the 90s, the build and release engineers were the... you weren't the real engineers, you weren't working on the product, you were working on the makefiles and the shell scripts and the Perl code and whatever. And like, fine, okay, whatever, you're wrong, but sure. But now we've recognized over the subsequent decades that infrastructure is actually a

(07:29):
valid thing. You know, look at GitHub. That's what GitHub is. It's an infrastructure company. CloudBees, CircleCI, um, half of all of the hyperscalers, it's developer tools, right? And so now there's a market. And we're trying to sell into that market. And so now there's a lot more marketing around it. And like, oh, Atlassian, I think, has made tons of money on, like,

(07:50):
hey, we're going to solve all your problems by giving you a wiki and a bug tracking system and a CI, all of that stuff. And just use our system and you're great. Right? And not to bag on Atlassian at all. They recognized it probably sooner than most. And so now I think that has added to the noise, right? Now it's like, oh, if only I have the solution, or if only I have this tool or that tool, or, you know, a GitHub stack, PR

(08:10):
stacks is one that's coming up for me lately. Everything will be better, right? And so, you know, the job of my team is to try and distill the signal from the noise. Will that actually help us? Right? Or not?

Dan Lines (08:21):
Yeah, I love the way that you frame it. And there's always pros and cons. I think that's definitely... in, like, a capitalist society, the one that we live in, we get better through competition. That's the way we work, at least in the, you know, the U.S., and for a lot of people on the planet today. And again, there are pros and cons to that.

(08:43):
But one of the things that I am seeing is that businesses, so if you think about business leaders, even CEOs, but like heads of engineering, you know, now we have heads of, you know, productivity and all of that, they've recognized that. And I'm not saying it's not to make money, I know everyone wants to make money, but at the end of

(09:05):
the day, leadership has recognized, hey, if we reduce the noise, if we streamline what they're doing, if we make their day to day better, better things happen to our business. So I think there's a realization of that, which I think probably overall is a good thing. And you have some of these metrics you called out, like the

(09:27):
DORA metrics, right? They've been around for a while. At LinearB, we offer these for free. It's like, you know, you get your cycle time, you get your deployment frequency, your change failure rate. But then you start looking at those at a business level, and you say, hey, our cycle time is eight days or whatever it is.

(09:47):
And it would be really cool if it would be like five days. And I think that I could get there by making the lives of the developers less noisy. I think it's cool. I can buy into that. But I also agree, on the other hand, there's a lot of terminology out there.

(10:08):
Because businesses are trying to make money. I have seen, hey, we work with a lot of companies that work on reducing cycle time, it helps, and a lot of the ways that they are doing it is through reducing the noise. And one of the areas that we see is around, you know, reducing the noise in

(10:29):
terms of pull requests and the review process there and streamlining it. It was really cool that you kind of gave a history of this that you've seen throughout your career. And now that we're at the point of, I think you said, noise reduction, what are the top things that you're seeing now, like today, in order to either reduce noise or measure it?

(10:52):
Like, where do you stand with all of that?

Tara Hernandez (10:54):
One thing that I think will always be true, and it's a tension that we have as an industry, is balancing what are the things that are industry standards that we should conform to and tools that we should adopt, versus where are the things that are going to be really relevant to a particular company and the engineers within that company, right?

(11:14):
Because it does matter, and I will go on and on about this: to me, there's three key pillars to developer productivity. The first one is the obvious one. It's the technology. Like, what are the tools that you use? But the next two are the processes that you use, you know, the mechanics of how you organize yourselves,

(11:38):
and then the third pillar is the communication, which is both how do you talk to each other, but then also how do you have transparency around the metrics or around the key insights that you need to elevate, right? And the three of those things are the three legs of your stool. But notice, if you go back, only one of those is tech. Right? So people, I think, is ultimately the most important aspect of

(12:01):
good developer productivity. It is always going to come down to people. How do you engage and understand what is going to enable your developers? It's invariably not going to be a tool, or not solely a tool. And I think that's the thing that's really important to recognize, but it's also the hardest thing to quantify, right? Because it's intangible, or is often intangible.

Dan Lines (12:24):
Right, so two thirds... so if we recap, you have tools, and then the other two thirds have to do with people. You have process, and then is the third one communication?

Tara Hernandez (12:36):
Well, it's communications. Like, how do you talk to each other? Like, what are the forms, you know, if you have an idea or you need to give feedback? But also, like, how do you communicate? And then the other way around, how do you, as a team, communicate how you're doing against your little piece of those goals?

Dan Lines (12:55):
Right. So let's do this. Let's talk about measuring. Maybe we can talk about measuring the three areas, or if you want to dive into the people side, whatever you're comfortable with. Like, how do you measure success? And then we can move into, you know, what approaches have you seen, or are we taking, to actually improve the experience and efficiency? So let's start with the measurement side.

Tara Hernandez (13:18):
So one of the things I love... when I got to MongoDB, it's almost two years now that I've been here. MongoDB has had an incredibly invested culture around testing, right? From the get-go. It was very much an early adopter, to the point where we've actually implemented our own CI system, purpose-built to build and test a distributed document database, right?

(13:38):
It's called Evergreen CI. You can see it in GitHub. And that's great, right? Usually when I go to a company, I have to convince them to write more tests. Well, at MongoDB, we actually have so many tests, it's like, I want to know what tests are providing the most value, right? So we need telemetry to tell us, like, these tests are actually providing us the most value.

(13:59):
Let's make sure we really make sure those tests are getting the most visibility. These tests maybe are not providing that much value, or no value at all. Let's stop running them, because, you know, it's taking up thought space, right? Or it's taking up consumption time. So code coverage, statistical analysis, there's tools that you can use to try

(14:21):
and figure that out, right? But then there's other parts, like, okay, you have an idea or you've been requested to do something. Well, now you need to understand: what are you going to do? So there's review time, design time. How long does that take? Right? And then we're actually getting into DORA metrics there, right? You know, from business insight to implementation, and then from implementation to deployment. But here's an interesting one, which is, we don't have one thing to measure here, because

(14:43):
at MongoDB we have MongoDB that you could download and run on your local Debian instances, right? You could self-host it. But then we also have Atlas, which is a SaaS, right? So obviously we're not going to say, oh, well, for our distributed solution, we're going to have a two-hour turnaround time, because our customers do not want to install a new version on their operating system every two hours, right?

(15:05):
But our Atlas folks probably want the latest and greatest, you know, as quickly as possible if there's bug fixes and security remediation, right? And so then we have to have different metrics for different types of products, and then have that, you know, be a thing. And then communication: like, if my boss, who's Jim Scharf, the CTO of MongoDB, needs to be able to answer a question, how fast

(15:28):
can he get to that answer? Does he have to come find me, or can he just go to a dashboard and go, oh, look, I need that. Next time I talk to Tara, I'm going to ask her about that, right? Or the TPMs, you know, are they able to report back to the product team how well we're doing? So there's all these different tendrils of information, and how much, and this comes back to tooling again, how much can we automate that, right?

(15:50):
And make those things be automatically discoverable, so that we know they're therefore always up to date, right? So that's another kind of side challenge around information flow.
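
As a rough illustration of the test-value telemetry Tara describes, here is a minimal sketch in Python. The per-test fields, the scoring formula, and the threshold are all hypothetical, not MongoDB's Evergreen implementation: it ranks tests by real defects caught per compute-minute and flags low-value ones as candidates to demote or retire.

```python
from dataclasses import dataclass

@dataclass
class TestStats:
    name: str
    runs: int          # executions in the analysis window
    failures: int      # real defects caught (flaky failures excluded)
    avg_secs: float    # average runtime per execution

def value_score(t: TestStats) -> float:
    """Defects caught per compute-minute: a crude proxy for test value."""
    minutes = (t.runs * t.avg_secs) / 60
    return t.failures / minutes if minutes else 0.0

def triage(tests: list[TestStats], floor: float = 0.001) -> tuple[list[str], list[str]]:
    """Split tests into keep-and-prioritize vs. review-for-retirement."""
    keep, review = [], []
    for t in sorted(tests, key=value_score, reverse=True):
        (keep if value_score(t) >= floor else review).append(t.name)
    return keep, review
```

The point of a score like this is the decision it feeds: the "review" bucket is exactly the set of tests taking up thought space or compute time without paying for it.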

Dan Lines (16:00):
Let's finish the note, because you had me thinking about something. We were talking about tooling, right? Then we were talking about process, and then we were talking about people, essentially. I think you said collaboration...

Tara Hernandez (16:16):
Communication, collaboration.

Dan Lines (16:17):
Communication. Okay. We're working with a customer... I won't say who the customer is, but we use cycle time to kind of benchmark where they stand. And in our cycle time, the way that we do it at LinearB, is we have coding time. So how long am I spending in coding?

(16:39):
We have, once I put a pull request up, how long does it take for that review to start? So how long am I waiting for feedback? And then once the review starts, how long does it take to complete the review? So that's kind of looking at the reviewer there, and like, is it a big PR? And then how long does it take to get merged and deployed?

(16:59):
Right. So that's our cycle time. And what we saw at this company is they started using a lot of bots. So it's starting to get into... it's not like pure AI, but they had a lot of bot activity, and these bots were opening up PRs. Actually, 20 percent of all their PRs were coming

(17:19):
from a bot. Now, some of these are the more common bots, like Dependabot, but some of them have to do with localization and accessibility, and there's more and more bot-generated code. So that's on the tooling side: they're using a bunch of bots. But what was happening on the process side is all of this was

(17:40):
getting put onto humans for review. And not all of it actually needed a human review, because sometimes these bots are doing little version bumps, or tiny, tiny changes. So they're creating a ton of PRs, each with a tiny change. And then on the communication side, it was kind of like, okay,

(18:01):
who's going to do this review? I have a lot of work on my plate. It's not coming from a human. And these PRs were just sitting there. And now your cycle time is increasing. And so, you know, we worked out a way with them where we say, okay, we look at what the bot was actually changing, and we make a decision if we need a human review or an automated

(18:22):
review, and so on. We, you know, we have a solution. But I wanted to just see what you thought about that, because that kind of reminded me of putting all of those pillars together.
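
As a rough sketch of the cycle-time breakdown Dan describes, here is a minimal Python version under assumed PR timestamp fields (the field names are hypothetical; LinearB's actual model will differ): coding time runs to PR opened, pickup time to first review, review time to merge, and deploy time to release, with the bot-authored share reported alongside.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class PullRequest:
    author_is_bot: bool
    first_commit: datetime   # start of coding
    opened: datetime         # PR published
    first_review: datetime   # review picked up
    merged: datetime
    deployed: datetime

def phase_hours(pr: PullRequest) -> dict[str, float]:
    """Hours spent in each cycle-time phase for one PR."""
    hours = lambda a, b: (b - a).total_seconds() / 3600
    return {
        "coding": hours(pr.first_commit, pr.opened),
        "pickup": hours(pr.opened, pr.first_review),   # waiting for feedback
        "review": hours(pr.first_review, pr.merged),
        "deploy": hours(pr.merged, pr.deployed),
    }

def summarize(prs: list[PullRequest]) -> dict[str, float]:
    """Average hours per phase, plus the share of PRs opened by bots."""
    totals: dict[str, float] = {}
    for pr in prs:
        for phase, h in phase_hours(pr).items():
            totals[phase] = totals.get(phase, 0.0) + h
    summary = {phase: round(total / len(prs), 1) for phase, total in totals.items()}
    summary["bot_pr_share"] = round(sum(p.author_is_bot for p in prs) / len(prs), 2)
    return summary
```

A breakdown like this is what surfaces the pattern in Dan's story: bot PRs sitting unreviewed show up as a growing pickup phase, not as slower coding.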

Tara Hernandez (18:32):
I have many sayings, right? I'm half Irish. I can't help it, right? We have the bardic tradition. And one of my other sayings is, automated does not equal autonomous, right? And so, yeah, if you have automation that goes nuts, it probably is doing the reverse of what you would hope. Right? Yeah, exactly. And so being thoughtful about what's the outcome.

(18:53):
And I think a lot of what things boil down to actually in our world is, what outcome are you trying to achieve? Right? And really start from there and then work backwards. Because a lot of times you think, oh, well, hey, if we had a bot that was, you know, like Dependabot. Dependabot is one of the greatest, you know, inventions of our modern time. Like, oh, my dependencies are out of date, go fix, right?

(19:15):
Huge. Yeah. But if you multiply that out five times, absolutely, you could get in a situation where it could take three engineers all day long, every day, to keep up, right? So, yeah.

Dan Lines (19:23):
Piling up and distracting me from the work that I wanted to do at this company, which is, like, I don't know, maybe build cool features.

Tara Hernandez (19:32):
Exactly. Yeah. And they're not going to last very long, right? Because they're like, I'm tired of just reviewing Dependabot PRs. So yeah, that is a great example of where you can kind of get yourself into trouble, and why I talk about the three pillars. Like, technology is one, but only one of the three. But they all do interact, because it is a complete system, right?

(19:55):
The technology enables processes, right? But then the communication comes in, and like, is the right information happening? And here you can see where those things are out of balance. To follow the analogy, your stool is wobbling, right? It is not.

Dan Lines (20:09):
Yeah, the tooling is taking over. There's a bot army doing a bunch of work and, you know, negative

Tara Hernandez (20:16):
Impact on your process.

Dan Lines (20:18):
And the interesting thing is, you know, and I do want to hit on the three horizons, but maybe we can go into the world of AI. Like, it seems to me more code is being generated faster, if that makes sense. That's what the data says, at least the data that we have in benchmarking. Like, more stuff is

(20:41):
happening, let's say. But that's why I think you said, like, automated doesn't mean autonomous. It's like, more stuff is happening, but it doesn't mean that the experience is better, and it doesn't mean that we're more productive.

Tara Hernandez (20:54):
Right? And it doesn't mean that the outcomes that we're getting are actually what we want. And let me give you two great examples of the things that we've seen around the use of AI. And by the way, MongoDB very much is pro-AI, right? And as a platform, you know, sticking your data into our databases, we're all about that. And we're trying really hard to make that be awesome.

(21:15):
And there's a lot of customers that are using us for that purpose. But you really want to be thoughtful about how you use it and why. So here are two areas of concern, from my perspective, from a developer productivity perspective. One is, going back to automated does not equal autonomous: if you say, you know what, I'm eliminating 50 percent of my engineering resources because

(21:37):
of this AI thing, I am so confident, right? It's going to just generate all the code we need. We're good. You have two problems from that. One is, what happens when it fails, right? The customer is not calling

Dan Lines (21:50):
you.
AI did it.
I don't know.

Tara Hernandez (21:54):
Exactly. That is going to fly... not at all. Right. You know, the customer is going to drop you so fast if you were to say something, you know, as ridiculous as that. So...

Dan Lines (22:02):
Yeah.

Tara Hernandez (22:04):
There's a joke I saw floating around. It's like, you know, AI lets you implement code ten times faster, but the debugging time is now a hundred times slower, because somebody's going to have to go in there and figure it out, right? In a worst-case scenario. I mean, obviously that's extreme, but yeah, you can't go to

Dan Lines (22:19):
a customer and say, yeah, AI did it. Let me... well, at least now, let me ask AI what... It's like, no, I'm done trying to talk to you as a human. What? We don't know who wrote that code.

Tara Hernandez (22:31):
Yeah. So that's not going to fly, right? And so, AI as an assistant to work? Yes, absolutely. I think that is a great thing. But so many people, I think, are over-committing on what AI actually provides as it stands now. And here's the other thing around AI that's kind of interesting: copyright law, right?

(22:52):
If you have AI-generated code, by its very definition it is learned from another source. Therefore, by its very definition, it is prior art. So if you have a whole bunch of generated code in your corporate intellectual property, you may not be able to retain your copyright.

Dan Lines (23:12):
Oh, that's interesting.

Tara Hernandez (23:14):
Right? Like, it's legally untested. So for the things that our customers are doing in MongoDB and Atlas, you know, they're using vector search, and they're trying to do very specific things that keep them safe, right? But as a company, if you put too much AI into your actual product code, you might get into a little bit of trouble. You know, you might lose your copyright.

(23:34):
Someone might sue you. And we saw that, right? When Copilot rolled out, there was all kinds of lawsuits that instantly started taking place because of, you know, the failure to attribute code that was not licensed to be scraped by a machine learning model. So it's a really challenging thing. And as companies, you know, engage with AI, as we should, the genie is out of the box, it's not going away.

(23:56):
We have to be really thoughtful about it. And so part of developer productivity's task, I think, is to help keep our engineers safe, right? To, again, say here's where you can use it, here's where you can't, and we're going to make it so that you're not going to accidentally shoot yourselves in the foot, right? That's another form of noise, to be quite honest.

Dan Lines (24:16):
Yeah, well, it's another thing they have to keep in mind. It's not a good thing, I think, for developers if it's like, hey, go use AI-generated tooling, but also you have to keep all this other stuff in mind while you do it. Like, to me, then, that would be like, wait, what? I thought this was supposed to make my life easier.

(24:37):
Like, why are you putting more on my plate?

Tara Hernandez (24:39):
So going back to my three pillars, right? There is a tool, but then the process and policy and, you know, the communication around that help create guardrails so that they don't have to think too hard. And that's where the three things work together in a challenging way. Right? Here's where the technology has gone faster than the industry was probably ready to absorb it.

(25:02):
Right? And so, for each company, we have to figure out, what is our safe space here? You know, how do we keep our developers safe, and then how do we keep our customers safe? And that's, I think, the responsibility of every company that's getting into the AI space right now.

Dan Lines (25:15):
I like that you mentioned the guardrails. And I'll just add one, because I've been thinking about this a lot. We're experimenting with some of that kind of stuff, like guardrails and policy. At the end of the day, like, I'm an AI proponent, I think, like you are, and I'm trying to say, okay, how can we do this but actually increase efficiency? Because it's not correlating yet.

(25:36):
And on the guardrail side, if I'm using the terminology correctly, I'd like to see that be automated. I'll try to explain why. Again, I don't think the developer should have to keep in their head what all the guardrails are, and when they should do this and when they should do

(25:58):
that. Otherwise... so think about if we had, now, a rule engine. The rules are the guardrails, and you can tell the developer, yeah, go use our approved AI tooling to your heart's desire. Now, once you put that pull request up, we have a set of

(26:19):
guardrails and policies that automatically kick in for you. Depending on what code was changed, sometimes it will need a human reviewer. Sometimes another AI could actually do that review, and you'll be okay. Sometimes it will automatically call in, you know, the security team or legal. It's like, that's the way I think about the

(26:40):
guardrails. They need to be automated, so the developers don't have to keep all of that in their head. Otherwise, I think it will be crazy.
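
As a rough sketch of the rule engine Dan describes (the rules, lanes, and path patterns here are hypothetical, not LinearB's implementation): each guardrail maps changed file paths to a review lane, and the pull request takes the most restrictive lane any of its files needs.

```python
import fnmatch
from dataclasses import dataclass

# Review lanes, ordered from least to most restrictive.
LANES = ["ai-review", "human", "security-team"]

@dataclass
class Rule:
    pattern: str  # glob over changed file paths
    lane: str

# Hypothetical policy: auth code always goes to the security team,
# lockfile-only version bumps can take the automated lane,
# everything else defaults to a human reviewer.
RULES = [
    Rule("src/auth/*", "security-team"),
    Rule("*.lock", "ai-review"),
]
DEFAULT = "human"

def lane_for(path: str) -> str:
    for rule in RULES:  # first matching rule wins for this file
        if fnmatch.fnmatch(path, rule.pattern):
            return rule.lane
    return DEFAULT

def route(changed_paths: list[str]) -> str:
    """The PR takes the most restrictive lane any changed file needs."""
    return max((lane_for(p) for p in changed_paths), key=LANES.index)

# route(["yarn.lock"]) -> "ai-review"
# route(["yarn.lock", "src/auth/token.py"]) -> "security-team"
```

With a policy like this, a lockfile-only version bump flows to the automated lane, while anything touching auth code pulls in the security team, without the developer having to remember the rule.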

Tara Hernandez (26:48):
A hundred percent. You cannot expect the developers to keep, you know, a thousand things at the forefront of their mind when they're doing their day-to-day work. Right? It's just not going to work. My staff engineers say this to me a lot, and I don't know where it came from, but I believe it fully, which is, you know, our job is to make the golden path the easy path. Right? And to make it so that you will naturally go the direction

(27:12):
we want you to go, because that's the easiest way to do it.

Dan Lines (27:16):
Yeah, I like that.

Tara Hernandez (27:17):
We have to work harder to go out of bounds, right? Because people go out

Dan Lines (27:21):
of bounds when it's easier.

Tara Hernandez (27:23):
Right, exactly.

Dan Lines (27:24):
Yeah.

Tara Hernandez (27:26):
This is a challenge that... you may recall, I mean, this is probably still true now, but it was certainly true when cloud hyperscalers started to become more and more of a thing, um, you know, security controls, right? It was really easy to have super bad security in the cloud, because we were used to everybody sitting behind a firewall, right?

(27:47):
And so behind the firewall, our security profile would be Swiss cheese. And that was okay, because, all right, the network engineers will make sure that the firewall is in place. Well, now we're in the cloud. And we have developers that are, you know, in the early days, spinning up random EC2 instances. And now they're spending $15,000 a month, and it's wide open to the internet.

(28:08):
Because they could. Right? And so then we had to put in controls. So, you know, another aspect of humans is that we tend to learn our lessons the hard way.

Dan Lines (28:19):
Yep.
And I think we said some of the concerns, but we talked about using guardrails and policies and automating that to extract the efficiency out of AI, but send it in the right direction. But what excites you on the other side, in terms of developer productivity and AI?

Tara Hernandez (28:39):
Anything that improves developer velocity in a safe way and in a sustainable way, I'm going to be in favor of, right? Like, that's my job: to help my partner teams go fast with high quality. Right? Because that serves our business ultimately. Right? And so, as much as we can incorporate AI such that we

(29:02):
improve our time to market, and we improve our sustainable product quality, and we improve our ability to maintain the stuff that we've deployed, you know, reduce our call rate into support, for example, that's another type of metric that matters, right? Then we're being successful. And what are the different ways that we can do that?

(29:24):
And then maybe now is a time to kind of talk about the whole idea of three horizons, because it kind of comes into play around this idea, right?

Dan Lines (29:32):
I would love to talk about the three horizons.
I have one point that I'm going to make, and then we're going to move to the three horizons, because you made me think of something else. Okay. One thing that we're doing right now, at some of these larger companies, is we're measuring the adoption of Copilot. So that's something that we do at LinearB.

(29:52):
So who's using it, how much usage, how much code is in the PRs, that type of thing. It's great. Yeah, and it hit me, because the other thing that we're then doing is saying, how is that impacting your cycle time, your change failure rate, bugs found in production, MTTR? And I think

(30:14):
that kind of just rounds out the point that you were making of, yeah, I want all of this to happen, but the point is that my success metrics, these KPIs, improve, right? So let's talk about the three horizons. They're related to software infrastructure. Do I have that correct?

Tara Hernandez (30:32):
Well, so the original model was the three horizons of business. And I used to think it was an Intel or an IBM thing, but actually it came out of McKinsey, I don't know, 20 years ago or so. And the idea is, how do you do business investments, right? Your first horizon is your core business. That's what's making you money now, right? Yes. So if you're Intel, to follow that example, you know, it's your Pentium chips back in the

(30:54):
nineties. Your second horizon is what's about to make you money. So your Xeon chips, or, you know, whatever the next generation is. And then the third horizon is, you know, looking down the road five years from now, where do you want to be? Right? And the model is, how much do you invest in each thing?

Dan Lines (31:10):
Oh, perfect.
Right.

Tara Hernandez (31:12):
So your core business, hopefully you're not actually putting a lot of investment in it, because that investment's already been made. That product is released. It's making you money. And hopefully the cost of maintenance is low enough that it's, you know, mostly profit. Your margins are beautiful. Right? The second horizon is where you want most of your investment to happen, because that's the thing that's going to take over your first horizon. That's going to become the next horizon one.

(31:32):
So that's where most of your focus is. But you want to still have some focus on that horizon three, because in order to sustain your business, right, you have to have that, say, quantum computing, or, you know, Q chips or whatever it's going to be for Intel. So I love that idea. And I took it and I adapted it for software development,

(31:52):
and my focus is infrastructure, because that's who I am, but, you know, any software team could think about it. And the idea is, what does it cost you right now in engineering resources and budget, if you're a budget person, to just keep the lights on? You've got your products out there.

Dan Lines (32:09):
KTLO, they call it.

Tara Hernandez (32:12):
KTLO, absolutely, right?
And so, you know, if 60 percent of your engineering effort is answering support tickets, that's your Horizon 1, right? That means at most you have 40 percent going into your Horizon 2, and probably zero, if your ratios are that off,

(32:32):
into your Horizon 3. So you're pure tactics. You are hanging on by your fingernails, right? And so, as a leadership team, you want to think about, what do we need to do to get that number down? We probably have technical debt that desperately needs to be addressed, right? Maybe we have documentation improvements.

(32:53):
Maybe we need to get our DevRel folks out there building more demos. Like, I don't know, right? Whatever it is from a business perspective. In my case, it's like, okay, my support engineers are having to answer way too many questions about, you know, using this particular service. Well, clearly that service is not where it needs to be. So, hey, director who owns that service, your next round of

(33:15):
quarterly planning better involve how you're going to make that number drop by half, right?

Dan Lines (33:19):
Right.

Tara Hernandez (33:19):
In an ideal world, you think about what you want your ratios to be. To me, 30 percent on average across all of developer productivity, and I've got 50-something engineers, right? 30 percent would be really good, right? My range is somewhere between 25 and 80 percent depending on the team, and that's okay. Right? Because it depends on what you're doing, but I want to make space for that

(33:40):
what's-next. And for us, that what's-next is, how do we up-level, right? How do we improve the type and quality of information we're giving our developers, so that they can improve their productivity? They can increase their velocity. Right? So that they are then supporting the business goals of getting those features out, getting those bug fixes out.
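
As a rough sketch of the bookkeeping behind these ratios (the team names, buckets, and targets are all hypothetical): track each team's effort share per horizon plus KTLO, and flag anyone whose keep-the-lights-on share blows past the target, so the next planning round can push it down.

```python
# Effort share per bucket, as fractions of total engineering time.
teams = {
    "query-engine": {"ktlo": 0.12, "h1": 0.28, "h2": 0.50, "h3": 0.10},
    "internal-ci":  {"ktlo": 0.60, "h1": 0.30, "h2": 0.10, "h3": 0.00},
}

KTLO_TARGET = 0.30  # a service-heavy org; a pure product org might aim nearer 0.11

def review_allocation(teams: dict, target: float = KTLO_TARGET) -> None:
    for name, buckets in teams.items():
        # Sanity check: the four buckets must account for all effort.
        assert abs(sum(buckets.values()) - 1.0) < 1e-6, f"{name}: shares must total 100%"
        ktlo = buckets["ktlo"]
        if ktlo > target:
            print(f"{name}: KTLO at {ktlo:.0%} vs {target:.0%} target -> "
                  f"plan debt/docs work to free up horizon-2 capacity")
        else:
            print(f"{name}: OK ({ktlo:.0%} KTLO, {buckets['h2']:.0%} on horizon 2)")

review_allocation(teams)
```

The aggregate matters more than any one team's number, which is Tara's point: a 25 to 80 percent spread can still be healthy if the organization-wide average leaves room for the what's-next.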

Dan Lines (33:57):
Got it. I love this. We did some benchmarking around this, specifically around investment into future value. So I think that lines up with the second one that you said. It's like, where will my money come from next? Investment into, let's say, enhancement to what I have

(34:18):
today. That was your first category. And then I think you had a category that was kind of like future-future, you know, like five years. Okay. I love that. And then there's things like KTLO, keeping the lights on. And then we had another category, at least when we have this module called, uh, Resource Allocation, we had an

(34:42):
investment profile. We had another one that we called developer experience. That's what we said in our product. But at the end of the day, what it means is, how much investment are you internally putting in to reduce the noise for your developers? And I wanted to see, do you have benchmarking in your head

(35:02):
on this? I think our report said for KTLO, we wanted it to be like 11 percent or less, keeping the lights on. Like, if you're spending more than that, it means that you can't invest in the other areas. Do you have any benchmarks on these?

Tara Hernandez (35:18):
Yeah, so I think for our development team, 11%, 11 to 15% maybe, is a great number for KTLO, right?

Dan Lines (35:26):
Yeah.
Like a lead.

Tara Hernandez (35:27):
Yeah, my team is both an engineering team and a service team, right? So we're always going to be in a position where we're answering questions, you know, fixing bugs, helping the developers out. So to me, 30 percent is probably a more realistic goal, right? Again, aggregated across the whole organization; some teams will have more support load than others,

(35:48):
because that's just the reality of our function, right? And so that's why, also, it's like, you know, our tech support organization, it might be that 60 percent would be okay for them, right? Because that's their core business.

Dan Lines (36:02):
But then they

Tara Hernandez (36:03):
also want time to create tools to help them get better at what it is that they do, right? So I think every leader can think about, you know, what is the right truth for us, and then how do we get there?

Dan Lines (36:14):
Yeah, for us, I'll see that we can include our benchmarking reports in the pod and all of that. But I think the way that we're looking at it would be your entire software engineering organization combined. As opposed to, you know... yeah, if you're a service team, it's going to be much higher than 11%.

Tara Hernandez (36:36):
Yeah. And there's so many nuances, right? And I always hate talking in absolutes, even though they're easier, ultimately, right? At a certain level. But, you know, I actually wanted to circle back to something else that you had been talking about, which is, you know, understanding, for example, on an individual level, how long does it take to get the code written and get the PRs reviewed? And those are all, like, individual actions.

(36:57):
And one of the things I worry about, I won't lie, is the misuse of data, right? It is very easy, I think, to even inadvertently end up weaponizing some of that stuff, right? Because, again, you're trying to quantify something that has a lot of

(37:18):
sort of intangibles around it. Like, one engineer who's really good might land fewer changelists, but they're bigger, right? So they take more time. Versus someone else who's really dialed into small check-ins, you know, small commits are better, right? And so, you know, the numbers will look different, but the ultimate outcomes could be identical. But how do we measure the differences between those two,

(37:39):
right? And also, you don't want to inadvertently disempower the leadership. Right? Like, a frontline manager's job is to kind of have a handle on how the team is operating and how the team works as a system. Like, the human systems are as important as the technical systems, right?

(37:59):
I love the companies that are coming out with good tooling around metrics. The ones that are super focused on using language around, you know, help identify your underperformers, I'm like, whoa, okay, that's not the end I want to follow. But the ones that are like, how are your teams doing, but then provide to the manager an additional set of information that could be helpful to them,

(38:21):
because now they have the context, right? You don't want the individual developer to think, oh my God, I'm going to be fired based on these numbers, right? And so that's another part of... that's the culture element. What kind of culture are you building throughout, you know, the tools, the processes, the communication, that is going to incentivize and inspire your engineers, right?

(38:44):
Which ultimately is intangible, but it's the thing that we have to think about as leaders throughout all of this. Because engineers that feel inspired and motivated and rewarded work better, right? You know? Absolutely. Yeah, I mean, I think that's as important as any tool. More important, honestly.

Dan Lines (39:00):
The culture needs to be there in order to, I would say, deploy data correctly. When we're talking about productivity data, I think I want to be very clear that we're not talking about developer performance.

Tara Hernandez (39:18):
Yes.

Dan Lines (39:20):
Nothing in this podcast is about developer performance. What we're talking about is productivity of the engineering team. Yeah. And I think that that's the difference. Yeah.

Tara Hernandez (39:32):
And sometimes they get conflated.

Dan Lines (39:34):
And they can get very easily confused.

Tara Hernandez (39:38):
Yeah.

Dan Lines (39:38):
Now, when we're looking at data, you can see signals where there's a bottleneck in the process, the tooling, and maybe the way teams are communicating together. Those were your pillars. And using the data to find bottlenecks and then go implement solutions, whether

(40:00):
it's automation or it's not automation, or it's, you know, something else, that's great. That is way different than saying, I'm a performance management tool from, like, an HR perspective. And that is not what we're advocating for here. So just, you know, I've seen...

Tara Hernandez (40:19):
But I think it bears repeating, right? Because, going back to it: in the same way that AI is not going to assume the responsibility for an outage to a customer, right? An AI, or an AI-enabled analyzer,

(40:41):
cannot assume the responsibility of a people manager, you know, viewing people performance. And so that's where we have to understand: this falls on that side of that boundary. This is a people thing, and it needs to stay that way.

Dan Lines (40:53):
I think a good topic, maybe, to kind of round out the pod: there's a lot of leaders today, and I consider you a leader... there's a lot of really smart people today that are trying to improve, you know, developer productivity. They want to set goals around this. They want to use numbers.

(41:14):
How should leadership think about goal setting for noise reduction and all that? Like, what is the right way to do it, in your opinion?

Tara Hernandez (41:25):
I mean, to me, I said it before, it always comes down to outcomes. What are the things that, at a high level, we state are really important? Whether it's, you know, getting features out the door, hitting revenue targets, whatever. We need to have clearly defined outcomes. And then you need to empower the teams, as you go down the

(41:46):
org chart, to figure out what their piece of that is, and then be able to define success. Right? At the end of the day, it's outcomes and success criteria. And the more you can make that be really clear and concrete, the more successful you're going to be, because you have something to measure yourself against. The lowest-level engineer should be able to know, why does

(42:10):
what I'm doing right now matter, and how am I moving the needle? Right? That to me is the best scenario.

Dan Lines (42:17):
I love it. Do you have anything, for those outcomes, that you have used or seen teams use that are more concrete? You talked a lot about, uh, noise reduction; maybe an outcome can be like, we want to increase

(42:41):
developer focus time by 20 percent, and we're going to do that by reducing meeting time by 20%. Like, is there an outcome that you've seen work, or not work as well?

Tara Hernandez (42:55):
Um, yeah, I mean, there's lots, right? And again, it's going to be, what's most relevant to your organization? For me, a great goal would be: a development team could start from scratch, get a repo set up, get their CI set up, get a preliminary set of automated tasks, get performance analysis, get security checking,

(43:15):
all those other things, without having to go ask for help.
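
As a rough sketch of that outcome as a paved road, here is a minimal bootstrap script in Python; every file name and template in it is hypothetical, just to show the shape of a one-command setup that needs no help ticket.

```python
import subprocess
from pathlib import Path

# Hypothetical golden-path templates: CI, security scanning, and a
# performance baseline come preconfigured, so the easy path is the safe one.
PAVED_ROAD = {
    ".github/workflows/ci.yml": "# lint, unit tests, coverage gate\n",
    ".github/workflows/security.yml": "# dependency and secret scanning\n",
    "perf/baseline.json": "{}\n",
    "README.md": "# New service\n\nBootstrapped on the golden path.\n",
}

def bootstrap(repo_dir: str) -> None:
    """Create a new repo with the paved-road checks already wired in."""
    root = Path(repo_dir)
    for rel_path, content in PAVED_ROAD.items():
        target = root / rel_path
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(content)
    subprocess.run(["git", "init", str(root)], check=True)
    print(f"{repo_dir}: repo, CI, security, and perf checks ready")

bootstrap("my-new-service")
```

The measurable side of the goal is then exactly what Dan suggests next: count how many help tickets or Slack requests a new team still has to file after running it.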

Dan Lines (43:19):
That's cool.

Tara Hernandez (43:20):
That would be a stupendous outcome, right?
What do we need to do to do that? We start working backwards and figure out where the gaps are, right?

Dan Lines (43:29):
And I would just say, for the leaders listening, and then we'll wrap it up here: what I really liked about the way that you said that is it was really tangible, with the example of what we're trying to do. And the outcome is, you know, hey, a developer doesn't have to ask for help. So let's measure, maybe, how many internal help

(43:50):
tickets we get, or whatever it is...

Tara Hernandez (43:53):
Slack requests, whatever they are. Yeah.

Dan Lines (43:55):
So, you know, we're up on time here. But Tara, it's been awesome. I really enjoyed having you on the pod and speaking with you.

Tara Hernandez (44:07):
Uh, this has been fun. I always like nerding out on this stuff. I've been doing this job for 30 years. It never gets old to me.

Dan Lines (44:14):
Amazing. And thank you, everyone, for tuning in with us today. And remember, if you haven't given us a review on your podcasting app of choice, it does mean the world to us if you could take 90 seconds to give us, uh, a review on this pod. Tara, thanks again for coming on, and, uh, we'll talk again

(44:38):
hopefully soon.

Tara Hernandez (44:39):
Sure thing.
Thanks, Dan.