Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Rob Zuber (00:00):
And the first time I
did continuous deployment, which
(00:02):
was like 2011.
I was terrified. Deployment used to be seven gallons of coffee and four all-nighters in a row to try to get the thing to work.
And I was like, we're going to do that every day?
That sounds awful, right?
But then we tried it and I was like, this is amazing.
I will never not do this, right?
I will never stop delivering value to customers as quickly as
(00:22):
I think about it.
Here's my idea.
Now it's in the hands of customers.
Like there's a little typing involved, but that's going away, apparently.
Is your engineering team focused on efficiency, but struggling with inaccessible or costly Dora metrics?
Insights into the health of your engineering team don't have to be complicated or expensive.
That's why LinearB is introducing free Dora metrics for all.
(00:42):
Say goodbye to spreadsheets and manual tracking, or paying for your Dora metrics.
LinearB is giving away a free, comprehensive Dora dashboard packed with essential insights, including all four key Dora metrics tailored to your team's data, industry-standard benchmarks for gauging performance and setting data-driven goals, plus additional leading metrics, including merge frequency and
(01:03):
pull request size.
Empower your team with the metrics they deserve.
Sign up for your free Dora dashboard today at linearb.io/dora, or follow the link in the show notes.
Conor Bronsdon (01:13):
Hey everyone.
Welcome back to Dev Interrupted.
I'm your host, Conor Bronsdon, and today I am joined by the CTO of CircleCI, Rob Zuber.
Rob, welcome to the podcast.
Rob Zuber (01:21):
Glad to be here.
Thanks for having me.
Conor Bronsdon (01:23):
My pleasure.
It's, uh, really great to have a technical leader like yourself here.
You've been a two-time founder, obviously doing massive work at CircleCI, and it's given you this in-depth perspective on when efficiency is paramount and technical, uh, expertise.
You know how businesses grapple with aligning technical direction to business goals, and avoiding wasted time and resources and missed
(01:46):
opportunities when you're leading.
And as an expert in crafting technical strategy, I know you're giving a talk today around how to do so effectively, but I'd love to understand the risks of failing to do so.
Because I know so many companies struggle with this.
Rob Zuber (02:02):
I think it's a great thing to think about, and to be super clear, like, I would not call myself an expert.
Conor Bronsdon (02:07):
I'll do it for
you, don't worry.
Rob Zuber (02:08):
Well, I mean, all of that comes from trying many, many different things and learning and, you know, trying and maybe we'll call it failing.
Having a hypothesis, seeing what works, what doesn't.
All that stems from exactly what you're asking about, which is the risks of being misaligned, the cost, the overhead of maybe building things in a disparate direction.
(02:29):
So I would say a couple key things in there.
Aligning folks across an organization so that you get more leverage out of the work that you do.
Often the same problems are being solved in pockets around organizations.
And you don't necessarily want everyone waiting for one person to solve that problem, but as a leader you often have visibility that individuals or maybe teams don't have, where others are
(02:52):
facing similar challenges.
And so helping lift those up, surface them, identify, hey, these folks have a really great solution.
Maybe you can learn from them, take from them.
Maybe we can build something that others can use, et cetera.
That duplication is a trade-off.
There's sort of individual velocity or throughput, right?
(03:12):
But over time, that tends to build up as sort of debt across your organization where you're paying for the maintenance of many different implementations, let's say, of a solution to a similar problem.
I think engineering leaders as a whole, once you get to, let's call it a director level, which is sort of the audience here, are really not just engineering leaders, right?
(03:33):
They're, they're business leaders.
The responsibility is to use technology to drive the goals and the outcomes of the business.
And so one of the challenges that we face as engineering leaders is that the investment time horizons for technical strategy tend to be longer than the time horizons that we have clarity about the business, right?
(03:53):
I mean, particularly look at the last few years.
So many things have changed so quickly.
Feels like ChatGPT and, like, generative AI showed up overnight, right?
I mean, it was like 60 years in the making, but it still feels like November 30th, 2022 was like the day, sort of thing.
And so being in a position to adapt is a big part of what you should be investing in, technically.
(04:14):
And I think people tend to get focused, engineering leaders, and again, I'm speaking for my own mistakes, too focused on sort of a precise view of where the business is going and what you're trying to achieve, versus what I now refer to as the sort of set of most likely outcomes, right?
Like the, the span of those possible futures, if you will, and putting yourself in a good position to adapt.
(04:35):
I mean, from a software delivery perspective, we've spent the last 20, 25 years moving from waterfall to agile and then CI and CD and, like, progressive delivery.
All of these tools are designed around the assumption that we are wrong.
That we won't know what to build.
That that MRD, PRD, whatever, was not right.
(04:57):
And so how can we quickly get feedback?
And so if you assume that we're going to be wrong next week, you definitely have to assume that we're wrong about next year and the year after, so how do I build systems and strategies that support the ability to change, not support this very specific view that we had at this point in time?
Conor Bronsdon (05:16):
Yeah, that accounting for variability is a really crucial skill I see a lot of engineering leaders develop as they go through things.
If I look more broadly, absolutely.
Maybe the feature we're building, we're gonna get customer feedback that we need to adapt it.
Like, I feel like every time I've ever built something, once
(05:38):
it's in the field, whether it's a piece of content or, or you know, a new app or a new feature, customers go, oh, I use this differently.
Yeah.
Like, yeah, I like the way you thought about this, but no, no, no, no.
Mm-Hmm, the triangle goes in the square hole.
'Cause the square hole's the easiest one.
Yeah.
Yeah.
And I kind of hear this story from you about something we think a lot about, which is the dual mandate that faces engineering leaders.
(05:59):
So for a long time, I think engineering leaders have really just thought about efficiency.
Like, how can I make my team more efficient?
How can we move faster?
How can we solve these software delivery problems that you bring up?
And now we've really realized, and of course some folks have known this for a while, but we've fully accepted as an industry that, if we fail to be business leaders, we're gonna
(06:24):
see cuts hit us in, in ways that are maybe not conducive to long-term success.
Mm-Hmm.
And so this dual mandate for engineering leaders now is like, yes, operational efficiency.
Yes.
Be great at delivering software, but also deliver the right software.
Make it something that's impactful for the business.
And to your point, that's a, it's a hard thing to learn when often engineers are promoted for their technical skills.
Mm-Hmm.
And then have to learn the business side.
Rob Zuber (06:45):
I think that's, I think that's very true.
I think it's a, a big growth step as you move up through engineering leadership, you know, from a manager to director to VP sort of thing.
But I would say, even as a director, like that level, think about your peer group, your first team, which is an expression that we use a lot.
Like, I think directors think about their peer group as other engineering directors maybe, versus all of the directors in
(07:06):
an organization.
Conor Bronsdon (07:07):
Oh.
Rob Zuber (07:07):
When realistically, how marketing is trying to achieve results out in the market, how your sales team is meeting with customers, what customer success folks are hearing back from customers, all of that stuff is critical.
I mean, yes, to the feature that you're building now, but more importantly, to your ability to set yourself up to adapt as things continue to change.
And I think when you talk about engineering efficiency,
(07:29):
delivery, et cetera, you have to think about why we want that, right?
It's so we can be more prepared for change, right?
Like things are constantly changing in the market.
The only consistent thing is change, right?
So, you wanna set yourself up to be able to change.
One option is to make everything perfectly abstract, right?
And what engineering team that made everything perfectly
(07:50):
abstract can we even name, because they're not in business anymore?
Yeah.
Right?
So you have to make choices about where you're gonna have that flexibility.
And in order to make the most informed choices, you want to be as informed and knowledgeable as possible about the business, about the market, about how you play in that market, not just about the feature that your PM handed to you to go implement, sort of thing.
(08:10):
And so maybe as an IC, you, you have to be able to focus on that thing that we're doing right now.
Um, but even as you get more senior as an IC, you want to have that visibility out into the, out into the market so you understand where the value and flexibility is coming from.
Yeah, totally.
Conor Bronsdon (08:25):
Great staff-plus engineers have context on what customers want.
Mm-Hmm.
have context on the business needs.
They understand the priorities of the business and they help mentor and guide others in that.
I think a lot of folks hear this and they go, oh, great general advice.
Like, I think you had a great nugget in there about making sure that you're, you're communicating with your first team, not just within engineering, but also, you know, marketing, CS, product, you know, product marketing.
(08:48):
What are other ways that you think engineering leaders should work to find that business context and understand what's happening in the rest of the business or the needs of the customer?
Rob Zuber (08:56):
Talk to customers.
Conor Bronsdon (08:57):
Great advice.
Rob Zuber (08:57):
We're probably in a bit of a unique position because we sell to developers, right?
Like our customers are basically peers.
Yeah.
To many of those folks.
So there's no excuse for us.
I mean, we're talking, I'm, every engineering leader I'm talking to here is either a customer or a potential customer, right?
So they're thinking about the problems that we're thinking about.
So that's, that's probably easier for some, but all of the indirect feedback that you're getting is passing through
(09:20):
filters that prevent you, I think, from being able to apply your own creativity to a problem, right?
Like if I hear a customer has said something to a salesperson who passed it to a sales leader who passed it to me, sort of thing, then I'm getting a very prescriptive view of what the customer is looking to do, but if I talk to that customer directly, again, particularly in the case where they're more like
(09:42):
me than they are like any of the people in that chain in terms of what their interests are, I have a much better chance of understanding the problem they're trying to solve, and maybe even how that problem is going to evolve in the future.
That allows me to better think about how do we build the system that enables us to solve all of those problems, as opposed to how do we implement the feature that they asked for right now.
(10:03):
And I think that's true across many spaces.
I mean, I haven't always worked in developer tools, so I would give that advice to anybody.
And even if you can't talk to them directly, like we, we use Gong, which is, like, I think many people do, right?
Call recorder, you have some kind of call recorder, you have access to this feedback.
And sitting and watching those calls as an engineering leader is fantastic for me, because I hear the phrasing and sort of
(10:24):
nuance in the way people describe things that maybe doesn't necessarily get translated by the time it would get to me some other way.
Conor Bronsdon (10:29):
Yeah, it's interesting you bring up this idea of reducing the game of telephone that occurs so often in organizations.
But there's multiple layers of communication within that where things can get adjusted and perspectives come in.
(10:50):
Mm-Hmm.
I mean, this is true of any sort of communication and I, I love this advice.
You know, whenever I hear amazing engineering leaders give it, uh, they say, look, talk directly to customers.
Listen directly to customers.
So, you know, Gong is a great resource.
Um, finding, finding ways to be the customer, frankly, like on this podcast, it's one of the reasons we do it is we're like, let's get this qualitative signal.
Understand what's on people's minds.
(11:11):
And I think it not only provides this information benefit and gets you directly to the source.
But it also is something where your customers end up appreciating it.
They're like, yeah, thank you for listening, thank you for having this communication.
And you end up with better product results.
How do you retain that information, though, as you're building out longer-term technical strategies?
Is it more of a set of signposts and guideposts so we make sure
(11:34):
we're getting this feedback repeatedly and adjusting?
Or what do you think about as far as building out the longer-term strategy?
Rob Zuber (11:48):
So it's less like we're trying to solve this specific problem and more, there's a set of problems or a type of problems that our customers are consistently running into.
And we're not sure how we're going to solve them.
So how do we build a system that will make solving them easy, as opposed to how do we solve that specific problem, right?
And then, what you're looking for is, okay, we believed that
(12:09):
would put us in a good position to solve these problems, and then when we got to that problem, it was hard, right?
We were cutting across the grain, if you will, of our system design.
Okay, that tells us something new about our system design.
Can we incorporate that in a way that, you know, continues to make our system design appropriately flexible?
Like, that's the way that I would think about it.
Again, if it's abstracted in every possible way, it becomes
(12:31):
so generic, it's useless.
But I want to be able to make this kind of change rapidly.
So how do I ensure, for example, that everything related to that change is sort of consolidated in one area or easy to make by the folks trying to make it, right?
If every project is a 10-team cross-functional, you know, Gantt chart, then your likelihood of delivering what
(12:53):
your customer's looking for gets very, very low.
Conor Bronsdon (12:54):
Yeah.
Just hearing you say that, I'm kind of like over here, I'm getting goosebumps, like, oh no, this is a problem.
Right?
Because it creates such an issue with prioritization and the ability to pivot when needed.
So, I think we hear this advice a lot of like, okay, get to customers, learn more, have more direction.
I think a challenge a lot of engineering leaders face is,
(13:15):
okay, how do I then prioritize that feedback and actually apply it to make decisions?
Rob Zuber (13:19):
Well, there's a tough balance in there.
So yes, I'm trying to set myself up for sort of future success.
You can, you can ratchet that down or sort of focus in or zoom in maybe a little bit, and this is down to the IC, right, even below the, the engineering leader, um, in that everything you're doing, everything you're building, everything you're
(13:41):
implementing, you want to consider what's going to be easy to change or hard to change, right?
Simplicity, I think, would be the first thing that I would point to.
Again, with a, with a sense that we have a really clear direction and we're gonna, we're obviously gonna be on this path for a while, we tend to, like, orient specifically around that direction.
(14:02):
I'm going to use hard-coding as, like, a metaphor more than specifically, but say, you know, everything's going to move through our system in this one path, and changing that path gets hard.
Whereas we don't really know, like really accepting that ambiguity and saying, okay, again, like all the way down to the way that I implement specific parts of the system.
How do I make this easy to understand, easy to change?
(14:26):
A lot of that comes down to simplicity for me.
Yeah.
In terms of everything from, like, simple function structure to how services communicate with each other, and then sort of like that change co-location that I talked about before.
Conor Bronsdon (14:37):
I think this concept of applying simplicity is a great one, and it's really useful, I find, for engineering leaders to have, like, a clear example in their mind.
Do you have an example from your career, whether CircleCI or previously, that you could describe about how you've applied this concept to drive that success?
Rob Zuber (14:52):
Is it easier if I
talk about where it hasn't
worked?
Conor Bronsdon (14:55):
I mean, that's,
that's a great option too.
Rob Zuber (14:56):
Yeah.
So I mean, a very sort of long story of my arc at CircleCI has been, in order to get to market in the early days of CircleCI, we leveraged our relationship with GitHub, right?
Which was, you've already got an organization, it's got users, you've got a relationship between those users and the projects, et cetera.
And so we were able to take advantage of a lot of things to
(15:18):
get ourselves into the market.
But what we unfortunately also did was sort of include that understanding in many parts of our system instead of pushing it all to the boundary and then making the internals of our system, I guess, simpler and less coupled to, to something else, right?
So loosely coupled systems are something you hear a lot when
(15:40):
you talk about simplicity, right?
Yes.
Over time we've changed that. I mean, there are many providers of Git, there are many people building things today, like prompts for AI and ML, that have nothing to do with Git repos.
And so we now are effectively disconnected from that, but we had to take a lot of pieces and push them out to the boundary
(16:00):
and define our own internal model.
And if we had thought about that future, which is not really that surprising now that we're in it, we would have been in a much better place to make some of those changes faster.
So, we had to go back and do the work to basically say, okay, this is not the full set of ways that people are going to use our system.
And therefore, we have to abstract ourselves away from
(16:21):
that and push that again to the boundary.
And I think, even on that first implementation: if you are talking to an external system, make all of your understanding of that external system isolated to a single spot, so that changes in the external system don't impact you, right?
And so that you can connect into a different one, or, like, change the way that your internal systems work without worrying
(16:43):
about any of the understanding of that external system.
And your ability to make change, right?
Or your ability to respond to change, like, when someone else is changing their API, one of the third parties that you depend on, you want that very clearly isolated in one spot.
I mean, they will give you notice and all those other
(17:04):
things, but if you're... yeah, exactly.
Okay, sometimes.
Let's just say... But you wanna make sure that you have a very easy path to deal with that.
Yeah.
And it's not this scramble where 17 teams are trying to fix things right now because one external API changed, right?
It should be really easy and isolated.
Conor Bronsdon (17:21):
To pick a
example outta a hat here, like
look at what happened withReddit earlier this year where
we've seen massive protestsagainst the changes of the API
Mm-Hmm.
Part of that's that so much ofthe systems that we're relying
on were directly relying onReddit.
They weren't using otherplatforms.
But, it's so easy for what youbuild to be fantastic and work
incredibly well until one thingchanges with this key system
(17:42):
that you're integrating with,and it can completely destroy
the infrastructure of whatyou're trying to do.
So, yeah, I think this is a, asmart approach to ensuring that
you have reliability and controlof the technical approach you
have.
However, I'm also, from whatyou're saying, hearing this
thing that from the start, maybethat wasn't a priority.
You had to just get into market.
(18:02):
What did you do to take thisretrospective and almost like
post mortem approach to decidehow you needed to adapt?
Rob Zuber (18:10):
In that particular example, there were many processes.
It's been a long ride, but there was effectively an early realization that we, you know, we wanted to talk to other systems.
And we put some initial implementations in place that, still, you know, they were more abstract, but still kind of in multiple parts of our infrastructure or code base.
(18:31):
And ultimately looked at that and said, okay, this is actually getting more expensive, not less expensive, to make changes.
So how do we push all this stuff?
I keep calling it pushing it to the boundary.
And in DDD (if you want real advice, just learn about DDD) it would be called an anti-corruption layer.
And so how do we take this and push it out to a boundary where we then translate it effectively into our own representation of
(18:54):
how we want all these things to work?
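To sketch the kind of anti-corruption layer Rob is describing, here is a minimal, hedged example in TypeScript. It assumes a hypothetical internal Project model and a GitHub-shaped provider response; the names and the adapter are illustrative, not CircleCI's actual code.

```typescript
// Internal model: the only shape the rest of the system ever sees.
interface Project {
  id: string;
  name: string;
  defaultBranch: string;
}

// The shape one specific provider (a Git host) returns.
// This type is only referenced inside this boundary module.
interface GitHubRepoResponse {
  node_id: string;
  full_name: string;
  default_branch: string;
}

// Anti-corruption layer: translate the provider's representation into
// the internal model in exactly one place. If the provider's API changes,
// or a new provider is added, only this module changes.
function fromGitHub(repo: GitHubRepoResponse): Project {
  return {
    id: repo.node_id,
    name: repo.full_name,
    defaultBranch: repo.default_branch,
  };
}

// A provider-agnostic port that the rest of the system depends on.
interface ProjectSource {
  getProject(fullName: string): Promise<Project>;
}

// One adapter per external system lives at the boundary.
class GitHubProjectSource implements ProjectSource {
  constructor(private readonly token: string) {}

  async getProject(fullName: string): Promise<Project> {
    const res = await fetch(`https://api.github.com/repos/${fullName}`, {
      headers: { Authorization: `Bearer ${this.token}` },
    });
    if (!res.ok) throw new Error(`provider error: ${res.status}`);
    return fromGitHub((await res.json()) as GitHubRepoResponse);
  }
}
```

The rest of the codebase depends only on ProjectSource and Project, so a change in the provider's API, or support for a second provider, touches one adapter instead of triggering the 17-team scramble Rob mentions above.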
I think from a, a retro perspective, like, some of this was obvious in terms of how the market was shifting, and then also in terms of our own ability to move.
I think there was some natural progression, but then along the way, if we're going to be naturally progressing, this is kind of where it comes back to technical strategy.
(19:14):
Like if there's a natural forcing function, let's say, to change the way this works, let's actually stop and have more people, just for a minute, think about what we think this needs to look like, versus solving the next day's problem and the next day's problem.
This is where that time horizon sort of situation comes up, which is, yes, we could add the next thing right now, but what
(19:36):
do we think the next 5 or 10, or does this turn into 300, look like?
And in that world, what would we need to build?
Okay, we need to build this.
Great.
Let's start.
And now I need to back that down to, like, what are the first steps?
Like, how do I take useful steps?
And one thing that I think about, you know, kind of coming all the way back to tech strategy is, do I have something in one of these projects that is an easy off-ramp?
(19:59):
Like, if we actually only end up being able to invest two weeks or four weeks or two months or whatever it might be, have we made real progress?
Or were we still, you know, drawing designs and talking about new systems, and now we've put a little bit of a new system in place but we're still using the old system and we're actually net worse off?
And so, you have to then draw on all of your skills from an
(20:21):
agile, incremental, however you want to think about it, perspective to evolve towards that outcome, and not just say, oh yeah, in the future we're going to be here.
And that's gonna be awesome.
And then try to build the future.
Yeah.
Right.
There was actually a talk yesterday at LeadDev about a multi-year, kind of, rewrite of an entire system, and kudos to
(20:41):
them for getting through it, but that is an extremely, extremely dangerous place to be.
And so looking at, how can I do these things incrementally?
Like of course we think about that with product, because we assume we're going to be wrong, but like technically we're going to be wrong also.
There was another talk actually yesterday, later in the day, about a project that just totally went off the rails and they had to start over sort of thing, because they were wrong
(21:03):
about some of the decisions they made.
Which is what's going to happen.
You know, what is the thing that we can do to get better information, right?
If we're unsure about this, let's not make a guess and charge ahead.
I mean, unless it's like both of these options are totally fine, but rather, how do we get started and learn something new?
(21:23):
What's the smallest thing we could do to get better information so that we can make a better decision, if it's something that's going to take a big investment?
Conor Bronsdon (21:30):
Yeah, this key through line that I'm hearing in what you're saying is learning from your mistakes and learning quickly and iterating.
This is something that we talk about on a technical side, of like, oh, iterations, we should learn, we should get feedback.
We can keep doing this.
But it's also something that is really important to apply as a leader.
Mm-Hmm.
Whether you're looking at the strategy side or just managing your team.
This, like, fast feedback.
I mean, you'll hear folks talk about it.
(21:52):
Uh, obviously continuous learning, that phrase comes up a lot.
Or you'll hear a growth mindset, which really just means learn from your mistakes, learn from things you're doing, and keep improving.
Do these postmortems, keep evaluating, and to your point, break that unit of learning down as far as you can so you can be doing it as often as possible, because that's how you outpace competition, so to speak, by improving your knowledge.
Rob Zuber (22:10):
I think that that, you know, your point about mindset is huge in there, which is: we don't know.
There are so many sayings about this, but as we grow as leaders, I think we tend to, like, realize that more and more, that it's not about having the right answer.
It's about having the right set of questions.
Like, how can we get better information?
How can I help people get better information so that they can make good decisions?
Right.
(22:30):
And how can I grow them?
How do you get them to the place where they realize, I don't have to have all the answers?
What I have to have is an approach to get some answers so that we can move forward.
Conor Bronsdon (22:39):
The other key piece that I'm hearing from you about how to adapt strategy and priority based on conditions obviously applies to that continuous learning mindset.
But I'm also hearing this idea about looking around corners, trying to project ahead and say, let me be a little prepared for some of these risks.
Let me understand some of the directions it can go.
(23:02):
Let's say something goes wrong, you're like, oh yeah, I thought that might happen.
Like, Mm-Hmm, well, okay, now we can learn from it and apply it.
And instead of going into the moment and saying, oh God, something went wrong.
What do we do?
You have at least a general idea of how to apply it.
Rob Zuber (23:14):
Yeah.
I think that notion of being prepared for something going wrong, something just... I, I mean, I, I'm a big fan of hypotheses.
Yeah.
Right?
Like at the end of the day, this is a hypothesis.
We're making a bet, and so we're trying to measure the risk of taking that bet.
Right.
And there's lots of ways to de-risk bets, right?
Again, how can I learn a little more to sort of make a better
(23:36):
decision?
Um, but if you treat it as that, this is an exploration to get new information, then it's a much more comfortable conversation, first of all, right?
Like, what did we learn?
Oh, we learned that that was the incorrect hypothesis.
Cool, like no one's upset about that.
No one's emotions are tied to it, right?
That was our plan, was to learn something, right?
I just finished this book, Modern Software Engineering, and
(23:57):
I think, uh, it's Dave Farley's book.
He does a great job of talking about applying the scientific method, if you will, to building software, and all of science, like everything that we know, came from guessing and then testing.
I don't know, maybe it's this.
Oh, can we get empirical evidence that that's true?
Oh, actually it's been proven wrong because this other thing came up.
No one's like, I failed as a scientist.
(24:19):
They're like, I did my job.
I learned something really interesting about how the universe works or whatever.
And if you think in that way, then all you're trying to do is get better information all the time.
Instead of, again, tying everything to, this was my idea and now it's a bad idea sort of thing.
That does not allow you to move forward.
Conor Bronsdon (24:34):
It does bring up another risk, though, where some folks get so obsessed with this idea of being right the first time, or like really nailing it, that they spend too much time on that.
Like, let me figure out my hypothesis.
Let me strategize.
Let me try to project the future instead of saying, okay, let me get my hands dirty.
Make one hypothesis, try it out, fail.
Learn from it, have more context and information, then go apply that.
Rob Zuber (24:55):
We have tried to instill, and this is hard, I'll be honest, that even doing things that you know are the wrong direction, if you will, can be useful if they will give you new information.
So, in an incident handling situation, right?
So, let's say we're having a production incident.
We would want to restore service to our customers first and foremost.
That's the most important thing, right?
(25:16):
Totally.
And saying, what if we turn this thing off?
Well, we're not 100 percent sure.
Or what if we roll this back?
Let's take a really simple example.
What if we roll that thing back?
Well, we're not 100 percent sure that that's the problem.
But if we roll it back, and it doesn't fix it, now we know it's not the problem, we don't have to debate it anymore, right?
Instead of feeling like, oh, but, you know, we need to have the right answer in order to make a decision right here.
(25:37):
Like, even doing things, that's not necessarily the wrong direction, but doing things that we aren't confident in, if they will tell us something new and we can gather new information about the situation, is a completely reasonable approach.
And then, if it does fix it and we don't understand why it fixed it, let's discuss it in the retro instead of while customers are not able to do their work.
(25:58):
A, that's a little bit on the side about having clear priorities and what you value, but B, saying if it gives us new information, then we're willing to try it.
And, you know, we apply that across a lot of things where we say, oh, you know, we're not really sure if this is important or if this is important, like let's get rid of it and see what happens, sort of thing.
Right.
Like, we're particularly in this, you know, year that we've all
(26:19):
been operating in from a macroeconomic perspective, there's a lot of, you know, do we need all these tools?
Do we, like, can we be more efficient, sort of thing?
And you sort of, you know, you look around and see, well, someone's using this and someone's using this and whatever.
Like, could we consolidate on one?
And, and, you know, well, that would slow us down in this spot, but would it speed us up overall?
Right?
And sort of like trying to zoom out a little bit and look at
(26:39):
what's the big picture and will this drive us towards our goals?
And that might feel a little backwards over here, but we can make that decision to sort of drive all of us forward.
Conor Bronsdon (26:48):
I think this aligns back to one of the first themes you mentioned around how to create great technical strategy and make sure you carry it forward, which is getting business context.
Mm-Hmm.
Because what you're talking about is two things.
One, it's getting technical context, you know, on an incident, on something else; like this is something we already do or should do as a best practice.
And so then if you apply that out to saying, okay, we also have to be business leaders at a certain level.
(27:09):
Yeah.
Great.
Then it's logical that you should try to get the business context from these communications.
Yeah.
Hypotheses, et cetera.
And the other piece, I would say, is where you're thinking about efficiency of tools, thinking it through.
In some ways, that's also business context.
We need to cut our burn rate, or we need to reduce our tool spend.
(27:29):
Or simply, we need to manage fewer tools, because the overhead of that is challenging for our platform team, or it's challenging for just organizational throughput, because it creates these communication barriers, and it creates information barriers.
And so I'm hearing this theme from you where you deeply value learning and context, and I think it applies directly to
(27:49):
your talk.
And so I'd love to zoom back out a bit and say, like, what are other things that you view as key to maintaining this crucial technical strategy direction?
Rob Zuber (28:01):
It depends on your level a little bit, but I would say for me personally, I even think of technical strategy as secondary, because what it allows me to do is focus more of my energy on my first team, which is the executive team of the company, right?
When you're a CTO, your peers are not mostly technical leaders, right?
(28:21):
They're the CFO and the CEO and the CRO and whatever.
I don't necessarily think of it as just collecting context and putting that into technical implementation.
Like, my job is to solve the problems of the business, and if technology is helpful, I will apply technology.
If my team or my organization or the engineering organization is
(28:42):
helpful, or how they're behaving needs to change, or what, you know, they're focused on needs to change, then absolutely, like, I will use that knowledge to drive that. But at the end of the day, I'm trying to create more alignment, whatever, so that folks are more effective and I don't need to be as involved, so I can go focus on the things that really matter for the
(29:04):
business.
Right.
Yeah.
I mean, I am a salesperson for the company.
I am a, you know, customer-focused leader.
I concern myself with the issues of legal and finance and everything.
Right.
And we, as a team, look at all those problems and work to solve them together, more than everyone focusing on their own department and then coming and reporting status.
(29:26):
Like, we don't deliver effectively as an organization until we, as an executive team, are working on those problems together.
And yes, we have specific skills, we have specific organizations.
But as a CTO, if I don't understand, in pretty decent intricacy, how our financial model works, or what our legal concerns are... all of those pieces matter.
(29:50):
And I can help the legal team make good decisions about how we should be structured globally, right?
I can help the finance team make good decisions about what we should be spending in different areas, or look at where their concerns are and say, oh, I think I know a way to solve that problem, that may not even be coming from a technical perspective, but just from my understanding of the business
(30:11):
and the market.
Conor Bronsdon (30:12):
That brings up two things.
So one, I, I hear you talking about helping provide context, right?
Like CFOs need context into how technical leaders are thinking too.
Yeah.
Like you have a role in providing them that, and Mm-Hmm.
and helping share, oh, this is why we're thinking about the product this way.
Yeah.
This is why we're, you know, engineering it this way.
So I think that's something that I would love to talk a bit about and
(30:34):
understand how you approach that.
Because yes, absolutely, what you're saying, we need to get context from the rest of the business.
But you're also in a role where now you've started providing it.
And I think a lot of folks who are maybe at that director level for the first time, and are starting to have to do more of that, would love to understand how you approach best practices.
How do you then bring that context that you gather back to
(30:56):
your team effectively?
Rob Zuber (30:58):
I think it's a little bit of collaborative problem-solving, right?
Like where I see these things come together.
And you know, there was an interesting talk on AWS cost in the last couple days.
And I'm sure cloud costs are exciting for everybody.
There's a lot of situations where folks in engineering, for example, feel like finance provides them a set of, you
(31:20):
know, a set of budgets.
You can spend money on AWS, you can spend money on people, you can spend money on whatever.
Let's say R&D, right?
So I'm thinking, like, in accounting lines from a public statement, but R&D is going to spend a bucket of cash and try to drive the maximum outcome for the business.
And I, as a technical leader, can say, actually, if we spent
(31:42):
less on people over here, and more on third-party software, or actually reduce this part of third-party software and put that into our cloud spend, or whatever, we would have greater impact for the business.
Right?
So, that's not necessarily something that someone in a finance role is going to see.
Like, I need to help them see that, help them, and learn from
(32:04):
them.
Like, what is the difference if I hire a contractor versus a full-time person?
Like, what does that mean for the company?
Do I want to buy a three-year commitment to get an additional discount right now, or is that too much risk for us as a business, right?
Like all of those things are places where I have useful information and my CFO has useful information, and the best
(32:24):
outcome is not going to be them telling me something or me telling them something, but us sitting down together and saying, here are our options.
Let's figure this out. Let's understand the trade-offs. Yeah, exactly. And it's going to take both of us. And I think that's not something that has to happen only at the top; I think at a, again, at a director level, like every director has a finance partner or something like that, like someone in the finance org who's
(32:45):
working with them, and they should be having the same conversation. Not, here's your cell in this spreadsheet, try to stay under this number, but, this is what I'm trying to achieve, here's what I think it would cost, here's the impact that I believe we can have on the business. You know, are we in a position to spend that cash right now if it's going to return in 12 months?
Or do I need to be spending cash that's going to return in 6 or
(33:05):
in 3?
How can I think about structuring that?
Like, that's a, that's a conversation that I would want directors to be having.
And if you're not having those conversations, you're not preparing yourself to then be a VP, where you're definitely having them, or a CTO, or something else.
Conor Bronsdon (33:17):
If I'm a director, or maybe even just, like, a senior engineering manager, Mm-Hmm, who wants to take a step to be a business leader, how should they be thinking about that level-up process and how to, to grow?
Rob Zuber (33:28):
Yeah, it's, it's a fantastic question, and I always struggle a little bit in these conversations because I've been a CTO for, like, 16 years by starting companies.
And so that's one option.
If you go run your own business, you're going to learn all this stuff very, very fast.
You're going to be the CFO at some point because you won't have one yet.
And it gives you a very, very good perspective of what matters, particularly when you're... It's a very good
(33:55):
perspective.
You don't have to do that.
It's not for everyone.
Uh, it, I mean, people are stressed out right now.
It's like a very high-stress thing.
The other option is to participate in those conversations.
I mean, whatever level you're at, there's someone above you who's having a bigger conversation about that topic.
And I would say, I don't, I don't have a lot of people coming to me saying... some of my folks, I mean, to be clear, I
(34:17):
actually don't run the engineering organization.
I have a sort of small group of very technical IC folks that work for me.
But some of those folks are interested, like, help me understand, you know, what the CFO is thinking about right now.
Yeah.
And I'm happy to have that conversation, 'cause I have enough context about what the CFO is thinking about, 'cause we are very close, to be able to share that with someone.
And then if they said, I'd like to understand more, tell me
(34:38):
about this.
And I say, I don't have an answer to that question, but she would love to talk to you.
Right.
Because that's helpful to our CFO as well.
I mean, I'm not gonna send 200 engineers to talk to the CFO.
Right.
But people who are keenly interested in trying to get that growth, or, you know, let's have a session maybe, if all of you wanna understand, like, if I see enough of that interest, all of you wanna understand, let's have a session and talk through the things that we're seeing and the things that, you know, our finance team is seeing
(35:01):
and what really matters, so that we can each get a little bit more context about what matters.
I don't know if it's everyone, but I'd say folks are generally reluctant to just ask around: hey, you know, this, this is out of my wheelhouse, therefore I shouldn't talk about it.
But those things out of your wheelhouse are the things that are going to help you be a good business leader, and ultimately growing as an engineering leader is about growing as a business
(35:23):
leader, because the more value and impact that you can have around the business, the more you're going to be able to move up through those roles.
Like at some point you're not about technology anymore.
You know everything you need to know, other than the ability to learn new tech or understand the impact.
And what you need to know now is how the business functions and how to be impactful.
Conor Bronsdon (35:43):
That alignment piece, and bringing that alignment back to your team, as you brought up, is also a crucial skill set.
What were some of your best practices that you had mentioned around helping share all the context you gathered, or encouraging others to gather context themselves?
Rob Zuber (35:56):
I, um, I had this moment recently, which was, I'll just describe it, where I had received some questions about some other stuff related to, you know, how we were structuring teams and some sort of low-level stuff, but the signal that I got from that set of questions was that people didn't understand, you know, within parts of engineering, basically how CircleCI makes money, and I don't mean we charge customers
(36:19):
and therefore we get paid and we make money, but literally, like, how we really go to market, right?
What our growth and expansion model is inside of our customers, et cetera.
Because they think about the user, and they think about the problems that user has, but they maybe don't see, you know, our go-to-market motion and the split between what happens with self-serve customers versus what happens with, you know, customers where we have a sales engagement.
(36:41):
Those sorts of things.
And so, I basically kind of drew a bunch of diagrams and did a talk at one of our R&D call hands. We call it whatever ridiculous name, but anyway, an all-hands that's on a call.
And just walked through, step by step: this is how people join.
This is, you know, this is how, what it looks like when they're free.
This is what they, how they pay us.
This is what revenue recognition is.
They're, like, very sort of accounting-pointed terms, but
(37:05):
that matters in terms of, you know, how we build features, honestly, like, what it is, what kind of capabilities we're trying to give to our customers so they'll be successful on the platform.
You know, there might be big opportunities like that to say, okay, like, this is how our business functions from a purely, sort of, someone else's perspective, right?
A finance perspective, a go-to-market perspective.
(37:25):
And understanding that picture, I mean, maybe doesn't change everything that they do every day, but might change the way that they think about a very specific problem.
And, you know, one thing that I often say is, the number of decisions that your developers are making every day, you can't keep track.
Every line of code, in some fashion, is a decision, right?
I'm making a trade-off decision every time I hit the keyboard,
(37:48):
and you're not going to be involved in 99 percent of those decisions.
And so, trying to give people that bigger-picture understanding, which again, you can do at very large scale.
Also, at smaller scale, you have a whole management structure, right?
So you want your directors to be really clued in so they can help the managers understand, and also translate that down into context for that sort of
(38:11):
more localized pursuit.
Just as a CTO, telling everyone everything about what's happening in the business is probably a little overwhelming.
It would take a really long time.
And it often ends up being in a place like, okay, I think I understand that, but I don't see how that applies to what I'm doing.
And so using director, manager, senior manager, maybe, in there,
(38:34):
localize those, those problems and add the context of the challenges of a particular team, right?
So, you can imagine, while we have certain teams where, like, how money flows through the system is really critical.
Right.
And then we have other teams who maybe aren't thinking about it, but that might impact: oh, if we, you know, if we built this feature in this way, that would help us grow, you know, free users, which is helpful to us in the long term.
(38:55):
Right.
So how can we do that?
Conor Bronsdon (38:57):
Yeah, I think this scaling thing you're talking about is really important too.
Like we think a lot about, like, as leaders, how do we scale ourselves and scale our organizations?
And one of the things that really underlies a lot of the approaches you're taking here is, like, you clearly trust that your team will get context, be provided context by you and others,
Mm-Hmm.
and then make good decisions.
And there's a lot of research that shows that when engineers
(39:20):
or any team members feel that trust, they perform better too.
They, they understand what they need to do.
They keep doing that learning process and they feel enabled to do their best work.
What's your approach to building that trust in your organization and making sure you have the right people in place?
Rob Zuber (39:38):
Yeah, it's a great question.
I mean, that's an ongoing pursuit, obviously.
And I mean, I'll always hedge: like, we are not perfect, right?
Like, nobody is. Nobody.
Yeah.
There are places where I think we do this well and there are places where I think we struggle, and that can depend on people.
You know, it can depend on people involved, it can depend on the particular context.
Like I think if you're a CircleCI engineer, um.
(40:03):
We talked about money; that's not the core of what we do.
Absolutely we need to charge people and we need to make money and all that, but that's not how most of our developers, for example, think about the product.
They use the product every day, unsurprisingly, to deliver their own software.
If you're working on those features, then it's really easy
(40:24):
to have context.
You have this, like, oh, I know, this would be easier if I did this, right?
And you can go talk to your PM and you can sort of share that locally.
You know what we don't necessarily see, I won't go all the way into, like, payment systems, but what we don't necessarily see is really large enterprises, right?
The, the challenges of really large enterprises are not as close to our challenges as individual developers,
(40:45):
'cause we're a smaller company.
Things like SOX compliance, right?
Public companies, and sort of separation of duties and things like that, that we, you know, a lot of us earlier in our organization thought, nobody needs that, right?
That's not a good way to deliver software.
Versus, okay, so now we have a bigger picture.
So yes, how do we take that and help folks see it, right?
(41:08):
And a good part, we talked about talking to customers.
Obviously we have PMs, but, um, who gather that at a higher level.
But really encouraging the discussion, right?
Oh, that's interesting.
Why are we doing that?
Like, help us understand, give us some examples of customers, whatever that might be, to understand: this is why they're asking for it, this is how this works. And that works well in
(41:30):
some cases and doesn't in others, but trying to get everyone to understand the customer and that particular customer problem in a way that they can come up with creative solutions.
You know, it's one of the things that I've always loved about CI and CD, agile, whatever; like we talked earlier about, we're probably gonna be wrong.
Right?
And we're all about getting faster feedback.
(41:50):
Yeah.
And one of the, the key elements of that fast feedback for me is that as an engineer, I am putting code in front of customers constantly.
And so they might not be calling me and saying, hey Rob, I don't like this feature, but I can see the metrics moving.
Oh, we put this thing out and, yeah, everyone loves it.
Or we put this thing out and no one's touched it.
Like, what did we do wrong?
(42:11):
How can we learn more?
We do lots of user research, which is kind of in the middle, and we have access to the recordings and, and sort of backroom access and stuff like that, so that, like, individual engineers can go really hear how customers are thinking about the problem.
And I think all of those things to get you that fast feedback directly from customers really connects you as an engineer with
(42:33):
the problem, right?
I mean, I grew up through waterfall and 12-month delivery of products that nobody cared about and nobody wanted by the time we got them in the market, and the first time I did continuous deployment, which was like 2011.
I was terrified, right?
Deployment used to be seven gallons of coffee and four all-nighters in a row to try to get the thing to work.
(42:55):
And I was like, we're going to do that every day?
That sounds awful, right?
But then we tried it and I was like, this is amazing.
I will never not do this, right?
I will never stop delivering value to customers as quickly as I think about it.
Here's my idea.
Now it's in the hands of customers.
Like there's a little typing involved, but that's going away, apparently. And so, uh, so then, like, why would I stop doing
(43:15):
that?
Right?
And the thing that I love about it is that I understand how the customer is responding, and that direct connection and direct feedback allows me, to your point, to be more engaged in the problem.
More so, to say, okay, I understand how to do something for my customers, and I feel that.
I don't know, call it a dopamine hit if you want, but, like, I feel that positive energy towards delivery.
(43:36):
And when it's negative feedback, great, I know how to fix it, right?
Versus, okay, let's sit and talk about it for a while, and then, like, this person's going to talk to that person who's going to talk to that person.
So, I think everything we've done around software delivery in the last 20-plus years has really got us to this place where we're connected to the customer in a way that's really engaging.
Conor Bronsdon (43:54):
Yeah.
This is a, a generalism, but something we think a lot about is, like, merging devs are happy devs.
Mm-Hmm.
You wanna deploy code.
Yeah.
You want that dopamine hit, you wanna see how things worked, and then learn and grow off it.
We like problem solving.
Yeah.
And I mean, frankly, every time we put out a podcast, I get excited too.
Yeah.
I'm like, oh, great.
Like we're, we're, we're putting something out there.
Hopefully it has value.
I think a lot of what you're saying is great advice for
(44:14):
technical leaders at every level.
Yeah.
Get business context.
Understand the customer, um, you know, talk to your peers.
Learn, learn, learn, learn, iterate, test hypotheses. And I would love to understand a little more in depth what advice you would have for people who want to go be technical founders, because I know a lot of our audience is like, okay, I'm an engineering manager, that's... I wanna be a director,
(44:35):
or I'm a director, I wanna be a VP.
And I think you've given some great advice for them, but I know you also have this unique perspective as a multiple-time founder who's had success doing it.
Rob Zuber (44:43):
It's gonna be a ride.
It's hard work.
Uh, that's not really helpful.
I mean, I think a lot of my approach as a leader comes from that.
Right?
Like, a lot of what I'm saying is how I would approach starting a business.
Even more than in anything you're doing now, the faster you can get feedback about your idea, the more chances you're
(45:06):
going to have to get it right.
And to go to the, the sort of simplest extreme of that, like, when I want to launch a new product, I'm going to start with a landing page, right?
I'm thinking about this idea.
If I put up five, 10 landing pages, which means just, hey, coming soon, put in your email if you want to be notified.
(45:26):
Does anyone respond, right?
Does anyone click on the ads that I put up?
Does anyone put in their email?
If no one cares about this thing, how much money am I going to invest in it, right?
Founders who have this amazingidea that no one believes in,
but they have this tenacity andthey go after it for years and
(45:47):
then they succeed.
That is an outlier.
That is an extreme outlier.
What most people do is pivot,pivot, pivot, pivot, pivot, like
try to figure out there'ssomething here.
I know there's something here,but I need to get it into a
shape that people are going torespond.
If I phrase it in this wayversus, okay, people are
interested in this idea, but ifI explain it this way versus
(46:08):
this way, if I call it thisversus this, does that drive
differences?
Right?
These are like 50 experiments,right?
I talked to people who arethinking of founding companies
and they're like, I'm going tomortgage my house and hire a
team and do, I'm like, pleasestop.
Like, until you know that youhave product market fit or
something, some signal thatyou're on the right track, the
(46:30):
cost of experimentation today isalmost non existent, right?
Like, I can spin up a singlehost in a cloud, I can run a
Lambda job to respond torequests, I can just pay for a
subscription on, I don't know ifUnbound still exists, but like a
landing page site.
The cost of experimentationshould be, it shouldn't even
show up on your personal monthlybudget.
(46:50):
It's what I would say, right?
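As a hedged illustration of how cheap that kind of experiment can be, here is a rough sketch of the sort of thing Rob mentions: a single AWS Lambda handler behind a landing-page form that just records interested emails. The bucket and field names are hypothetical.

```typescript
// Minimal "does anyone care?" experiment: a Lambda fronting a landing-page
// form that stores each submitted email as an object in S3.
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import type { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

const s3 = new S3Client({});
const BUCKET = "my-idea-signups"; // hypothetical bucket name

export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  const email = JSON.parse(event.body ?? "{}").email;
  if (!email || !email.includes("@")) {
    return { statusCode: 400, body: "please provide an email" };
  }

  // One object per signup; counting objects later tells you whether
  // the idea got any traction before you invest real money in it.
  await s3.send(
    new PutObjectCommand({
      Bucket: BUCKET,
      Key: `signups/${Date.now()}-${encodeURIComponent(email)}`,
      Body: JSON.stringify({ email, at: new Date().toISOString() }),
    })
  );

  return { statusCode: 200, body: "thanks, we'll be in touch" };
};
```

If the signup count stays at zero after a week of ads, that is the feedback, and nothing here should cost more than a few cents to run.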
And so there's so many opportunities to, to learn and figure out where you have an opportunity.
And then from there, don't lose that discipline, is all I would say.
Like, doesn't matter how big you get, it doesn't matter how fast you're going, like, stick with doing the things that you have some evidence are going to drive value, and if you don't have
(47:11):
evidence, find evidence, right?
That same, like, how can I test the hypothesis that tells me if I'm going the right direction or not?
Because the faster you adjust, the more time, right?
Let's say you're funded at this point, right?
You've got cash in the bank, and you need to put that cash to maximum use.
Like, are you going to spend it all building the first version of the product that no one likes?
(47:32):
Or are you going to build 18 versions in that time?
Right?
Because if you build 18, you've got way more chances to get it right.
And so, that discipline, I think, is super critical.
Conor Bronsdon (47:41):
Keep making hypotheses, or, or bets, and, and getting more information and learning from it.
Yeah.
And I, I mean, you brought this up or alluded to it earlier that, I mean, things change really fast.
Mm-Hmm.
Both in technology.
I mean, yes, ChatGPT's been technically out for a while, but it went viral really fast and now it's this normalized thing.
And LLMs are everywhere and we're all talking about, how can we apply an LLM within our technology?
(48:04):
How should we be managing our data?
Two years ago, not everyone was doing that.
Yeah.
Uh, or look at just what's happening in the business world broadly.
I mean, a couple years back, like, the, the balance of how people were hiring was very different.
Mm-Hmm.
Uh, there weren't massive layoffs that were happening.
Uh, maybe you could fully grow a business off of, oh, we're selling to small startups and scale-ups.
(48:25):
And it's a lot harder now because money is a lot more expensive, just looking at interest rates.
Yeah.
And things changed fast.
We went from boom times to not-so-boom times.
If you lose that learning that you keep on bringing up, it's so easy to be caught flat-footed.
Rob Zuber (48:44):
Absolutely.
I think things are always going to change, right?
Change is the job, right?
Like, if you think about it, even if you go to the very, very micro, right?
I actually did not study computer science.
I have an engineering background.
Almost every other engineering discipline has way less flexibility, right?
I design a bridge, I build the bridge.
Conor Bronsdon (49:05):
It's going to be
around for a while.
Rob Zuber (49:06):
Yeah, exactly.
I mean, unfortunately, the new Oakland Bay Bridge, they had to
retrofit, I think, two or three times right after building it.
But, for the most part, like, you're committed to a design by
the time you start implementing it, right?
So, the process, the process of designing, of, like, multiple
engineers signing off, and, you know, lots of research up front,
et cetera, that's how those things have to get done.
(49:27):
But software is like, it's so pliable, right?
So you can start from, you know, everyone builds their
hello world, right?
And that's it.
Initial commit, git init, whatever.
And then everything after that is a change.
I change it, I change it, I change it, I change it, right?
And so orienting even the software that you build at the
smallest level to be prepared to change is the job, right?
(49:52):
Because all you're doing after that is changing it, and so you
want to make it easy to change.
And so, if you take that discipline and apply it to how
you think about business, you're going to put yourself in a
really good spot, right?
Like, we want to be well positioned to change.
Doesn't mean we should change all the time, right?
Confusing our customers, launching different products,
whatever, like, that's not a great place to be.
(50:12):
But there's always going to be shifts in the market.
There's always going to be dynamics.
Like, we're going to go from cash is easy to come by to cash is
hard to come by, right?
So we're changing the model of how we operate our business.
We're going to go from AI/ML is nowhere to, if you don't have it
in your product, no one wants to talk to you, right?
So how fast can I make that change?
Right?
And if I am 100 percent committed, all of my weight
(50:36):
going in one direction, and now I have to turn, that's way
harder.
Yeah.
And it's gonna leave me exposed when someone else is going to
grab that idea and, and run with it in another direction.
Conor Bronsdon (50:46):
Actually, you
brought up, I think, an
important piece of advice to founders, which is just buy a .ai
domain.
That's the first thing you should do.
After that, you know, it all gets a little easier.
It does bring up something I want to ask you, which is, what
are the changes that you see coming next?
Are there things that you have your mind on and you're like,
you know, I'm anticipating this, whether, you know, broader
market conditions or technological things that you
see coming down the line that people should be thinking about?
Rob Zuber (51:11):
We're definitely in a
time of uncertainty. Ask anyone
about economic decisions, you know, future economic
conditions, you're going to get a broad spectrum of answers.
So, I mean, there you go.
That's like, your set of possible futures is quite wide.
I'm not going to say this is the one I'm betting on.
What I'm betting on is a high variability in there, right?
So, so then your questions are, how do I hedge?
(51:32):
Right.
Which I think is, I've been asking that question a lot more
lately than I have in the past.
And I think it's a really interesting question.
I don't know if people are familiar with, like, financial
hedging, and I'm not gonna talk about that model.
Conor Bronsdon (51:41):
I'll, I'll just
say that's a mistake I've made
before.
Right?
Like, thinking I have the potential future in mind.
Yeah.
And going all in and jumping on it.
And when it works, it's great.
Yeah.
But when you're wrong, the risk is high.
Rob Zuber (51:51):
And I would say in
those kinds of conditions,
another thing that I would highlight, which I think is good
for founders and for many folks, is, I can only think of it
metaphorically, but knowing what your stop-loss is: what is the
point at which, if this is not working and we can't prove it,
we are going to call into question this plan.
I love this expression, a Ulysses contract, and I'm not
going to explain all the details, but it's basically like,
(52:14):
before, when I am of sound mind, I make the commitment to certain
behaviors, such that when we are in the moment, right, it's so
easy to say, what if we just keep going, what if we just keep
going. And, like, my favorite examples of this are all from,
like, mountaineering disasters, and I'm sorry for bringing this
up, but, like, a turnaround time where you say, if we are not in
(52:34):
the final stretch by 2 p.m., we are going back down the
mountain, because then we're gonna live to, like, do it another day.
And every mountaineering disaster story you read has
people going past the time.
Like, oh, we'll get the summit.
Yeah, yeah, no, we're just, but we're so close, right?
We're so close.
Conor Bronsdon (52:46):
Maybe we're
oxygen deprived, and now we're,
like, not thinking straight.
Rob Zuber (52:49):
Right, because you're
making terrible decisions.
Yeah.
Right?
And we may not be at, you know, 8,000 meters or whatever
in our day jobs, but under stress, right?
Oh my gosh, we've, like, burned a bunch of cash, and now we're
feeling stressed that we have to make this work, versus we've
proven it doesn't work, how much time do we have to pivot?
Or, like, what is our hedge, what's our other option?
You know, all those things I think are good to frame early.
(53:13):
There's this belief that founders are people who commit
to this one idea, and they just keep, you know, charging
forward.
Don't let that be your thinking.
Absolutely, there are points where people will question, and
you have a vision and you need to proceed, but you need to
decide, like, where that is.
Because if you just think the first thing you thought of is
gonna be amazing, you know, you can burn tons of cash, tons of
(53:34):
trust, tons of future opportunities, by pursuing
something sort of without checking in, I guess.
Conor Bronsdon (53:40):
Yeah.
That check-in is crucial.
Rob, I've really enjoyed this conversation.
Do you have any closing thoughts you wanna share with the
audience?
What's coming, advice, or anything else you want to
mention?
Rob Zuber (53:50):
Prepare yourself for
change.
I think the thing that I would want to land out of all of that
is that that approach makes things easier.
I think we often end up in this spot, and we're all under a lot
of pressure right now.
We're all sort of exhausted from just a few years of what's been
a grind.
But it generally pays off to put yourself in the situation where
(54:15):
you are more adaptable.
So I would take that extra time, like, focus less on what's the
thing I have to do right now and a little bit more on how do I
set myself up, particularly as leaders,
for a wide variety of outcomes, uh, in the next, in the next
couple years.
I mean, we at CircleCI are, unsurprisingly, trying to adapt
and, and focusing on adapting to, you know, this whole shift
(54:38):
in AI/ML, how do we put capabilities in our product?
Like, are we prepared for that change?
How do we support other people trying to build those things?
Like, what does software development look like over the
next couple years?
That's super fascinating.
I'll just say, if anybody wants to talk to me about that, come
find me.
There's a broad spectrum of possible futures, probably more
than we're used to at the moment.
And so take the time to step back and look at how are we
(54:59):
going to act in each of these situations and how do we set
ourselves up to be successful across that broad spectrum.
Conor Bronsdon (55:04):
Yeah, de-risk a
bit and understand what could be
coming, because you'll be more prepared for it.
I think that's a great insight.
Do you want to talk about what CircleCI is doing in AI/ML?
I'm happy to dive in there a bit more.
Rob Zuber (55:15):
I would say that
there's two things that we're
very interested in.
One, again, is, is how do we help people deliver software faster,
of any kind.
Yeah.
Right.
Yeah.
We, we launched this error summarizer, so you have a stack
trace, you know, you can click a button and get an English sort
of explanation of what the problem is and how to fix it.
We're gonna continue to iterate on tools like that.
We've, uh, in the lab at least, demoed a self-healing pipeline.
(55:37):
So if you're, if your build breaks, we'll just fix it for
you,
'cause we have all the tests.
We know what it looks like to be right, so we can generate code
that is right, according to your definition, even if your code is
wrong.
Right.
That's a pretty cool spot to me.
Yeah.
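As a rough illustration of that loop, here is a minimal sketch: the existing test suite defines what "right" means, a model proposes a patch, and the patch only survives if the tests go green. This is not CircleCI's implementation; propose_fix() is a hypothetical stand-in for whatever code-generating model you would plug in, and it assumes pytest and git are available on the machine.

```python
# A minimal sketch of a test-guided "self-healing" loop. The test suite is
# the source of truth for "correct"; propose_fix() is a hypothetical stand-in
# for a code-generation model. Assumes pytest and git are on the PATH.
import subprocess

def run_tests() -> subprocess.CompletedProcess:
    # Run the project's test suite and capture the output for the model.
    return subprocess.run(["pytest", "-q"], capture_output=True, text=True)

def propose_fix(failure_output: str) -> str:
    # Hypothetical: ask a code-generation model for a unified diff.
    # Returning an empty string means "no fix proposed".
    return ""

def self_heal(max_attempts: int = 3) -> bool:
    for _ in range(max_attempts):
        result = run_tests()
        if result.returncode == 0:
            return True  # build is green, nothing to heal
        patch = propose_fix(result.stdout)
        if not patch:
            break  # model had no suggestion; stop rather than loop forever
        # Apply the candidate patch; the next iteration re-runs the tests.
        subprocess.run(["git", "apply", "-"], input=patch, text=True, check=True)
    return run_tests().returncode == 0

if __name__ == "__main__":
    print("healed" if self_heal() else "still broken")
```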
Um, and then on the other side, many of our customers are in the
same situation, right?
They're building AI-enabled capabilities into their
products, and they're unfamiliar with how to test those.
(55:59):
Right.
We're, as engineers, we're all used to determinism, like, two
plus two has been four for a really long time, and now it's
like somewhere between 3.1 and 4.9.
It's probably okay.
Right?
Yeah.
And so helping folks,
A, just learn and understand how to do that effectively in their
standard software practices.
Like, we don't need to reinvent all of software delivery to
(56:20):
deliver software that happens to be AI-enabled.
And so including some additional capabilities around
non-determinism, stuff like that.
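For a concrete sense of what testing non-deterministic behavior can look like, here is a minimal sketch assuming pytest; model_estimate() is a hypothetical stand-in for a model call, and the point is asserting a tolerance band rather than exact equality.

```python
# A minimal sketch of testing non-deterministic output, assuming pytest.
# model_estimate() is a hypothetical stand-in for a model call; the tests
# assert a tolerance band instead of an exact value.
import pytest

def model_estimate(a: int, b: int) -> float:
    """Stand-in for a model call that returns an approximate answer."""
    return 4.2  # imagine this came from an LLM or a regression model

def test_estimate_within_band():
    result = model_estimate(2, 2)
    # Accept a band the product can live with, not an exact value.
    assert 3.1 <= result <= 4.9

def test_estimate_close_to_truth():
    # pytest.approx expresses "close enough" with an explicit tolerance.
    assert model_estimate(2, 2) == pytest.approx(4.0, abs=0.9)
```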
And then, and then we're looking at all the other tools that we
have and helping folks apply them.
Like, I'm putting out a new, I'm testing a new endpoint in a
product, right?
Does that look much different from, uh, I'm just rolling out a
new version of something?
(56:40):
Like, canaries, blue-green deployments, feature flags, we've built all the
tools for these things over time, and software engineering
has solved a lot of these problems.
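As a small sketch of that idea, gating a new AI-backed code path behind a feature flag with a percentage rollout might look like the following; the flag name and rollout fraction are illustrative, and a real setup would typically use a flag service or config store rather than a hand-rolled lookup.

```python
# A minimal sketch of gating a new AI-backed code path behind a feature flag
# with a percentage rollout. Flag name and rollout fraction are illustrative.
import hashlib

ROLLOUT_PERCENT = {"ai-summary-endpoint": 10}  # canary to 10% of users

def flag_enabled(flag: str, user_id: str) -> bool:
    # Deterministic bucketing so the same user always gets the same experience.
    bucket = int(hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest(), 16) % 100
    return bucket < ROLLOUT_PERCENT.get(flag, 0)

def handle_request(user_id: str, payload: str) -> str:
    if flag_enabled("ai-summary-endpoint", user_id):
        return f"[new AI path] would summarize: {payload[:40]}"
    return f"[existing path] echoing: {payload[:40]}"

if __name__ == "__main__":
    for uid in ("user-1", "user-2", "user-3"):
        print(uid, handle_request(uid, "some long build log output ..."))
```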
But we're at this interesting intersection of kind of two
communities, folks who have been building AI/ML in the lab, and
folks who have been delivering software that they expect to
behave exactly the same every single time.
(57:01):
And so we're bringing those pieces together to help those
software engineers deliver with the tools that, like, honestly,
the AI/ML folks have built all the tools.
It's just that the software engineers don't necessarily know
that they exist or how to use them.
So putting that into your delivery pipeline so you can
continue to deliver effectively.
And I always tell this joke or whatever, I guess it's, like,
self-deprecating, but when the CTO shows up and says, I don't
really know what AI is, but we need it in the product by next
(57:23):
week.
Right.
And, and as a developer, you're like,
What do I do?
Yeah.
And the reality of this situation is so much of this
stuff has been done for you and you have all the pieces, and so
just helping people sort of see that path and continue to
deliver effectively.
So that's what we're, we're focused on at the moment.
I mean, it's, it's an awesome opportunity, I think, for
everybody.
People are building some very, very cool products.
(57:45):
People are building tons of products that are not cool, but
it's experimentation, right?
It's like, absolutely be hypothesis-driven, because this
is brand new.
We see tons of possibilities.
Not all of those possibilities are going to play out, but some of
them are.
Send a request to a foundation model and put something in your
(58:08):
product and see if it works, right?
Yeah.
And then when you know that it works, you can worry about how
do I scale it, how do I optimize it, how do I deal with cost, et
cetera.
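As a minimal sketch of that first "see if it works" experiment, assuming the OpenAI Python SDK (v1+) with an OPENAI_API_KEY in the environment; the model name and prompt are illustrative, and any provider's API would do just as well.

```python
# A minimal sketch of "send a request to a foundation model and see if it
# works". Assumes the OpenAI Python SDK (v1+) and OPENAI_API_KEY set in the
# environment; the model name and prompt are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_failure(stack_trace: str) -> str:
    # Ask the model for a plain-English explanation of a failure.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Explain build failures briefly."},
            {"role": "user", "content": stack_trace},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarize_failure("TypeError: unsupported operand type(s) for +: 'int' and 'str'"))
```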
Conor Bronsdon (58:14):
Fantastic.
Thank you for all these great insights, Rob.
I've really enjoyed our conversation.
If you want insights from more engineering leaders, just like
Rob, make sure you tune in every week, uh, whether you're on
YouTube or Spotify, Apple Podcasts, we're on all your
podcast players.
And, uh, you can also check us out on Substack at
devinterrupted.substack.com.
We put articles out there every week along with, uh, information
on the podcast.
And, uh, thanks for tuning in.
(58:35):
Hope you enjoyed the conversation.
Thanks for coming on, Rob.
Rob Zuber (58:37):
Thanks for having me.
It's great to be here.