Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Hey, maybe your company used to merge 20 changes a day, and now
it's merging 50 changes a day, even with bots and all this
stuff going on, maybe even 100. And then you have merge conflicts
and merge skew in these changes.
What's funny is, before, I had reasonably good confidence in
them. I thought, Connor, I trust you.
You're a great engineer. You wrote this code change.
Now it's like, OK, Connor, I'm not sure you wrote this.
(00:23):
Welcome back to Chain of Thought, everyone.
I am your host, Connor Bronsden, and we've all had this thought.
AI coding tools are maybe the fastest moving category in tech.
But as we flood our systems with AI generated code, what do we do
about code reviews? These can be a bit of a critical
bottleneck, especially if we're seeing folks like the CEO of
(00:46):
Coinbase saying that 40% of their code is now AI generated.
My guest today is tackling this problem head on.
Greg Foster is the founder of Graphite.
You may have seen them over at graphite.dev.
You may be using them. And after seeing the power of
Airbnb's internal tooling, Greg set out to build a solution for
(01:07):
everyone else, accelerating development for teams like
Cursor, Replit, and Snowflake. Greg, it's great to see you.
Thanks for joining us on the show.
Yeah, thanks for having me, Connor.
I love this topic. I love talking about dev tools,
and I love geeking out on how it's all evolving with AI right
now. I saw that Coinbase tweet this
morning. I was enjoying that one.
Yeah, it's really interesting to see the responses
(01:28):
on that. And I guess folks who are
listening to this later can now place when we recorded this in
time. So we're giving the game
away here. But before we get started, I do
have one quick note I need to share with our listeners.
We have just launched our Chain of Thought newsletter on
LinkedIn, and we would love to have you join us every week as
we bring you the conversations from Chain of Thought's research
(01:51):
to help you ship reliable AI and everything that is crucially
happening around observability, evaluations, and key
infrastructure for AI. We're going to have a lot more
conversations with dev tools leaders like Greg, and you can
find it by searching for the Chain of Thought newsletter on
LinkedIn to subscribe, or check out the Galileo AI LinkedIn
(02:11):
page. You can find us pretty easily,
or find me, and then, you know, it's easy to find us from
there. But let's get into the
conversation here. Greg, you and I have both been,
I mean, involved in the dev tool space for years at this
point. Some listeners of the show may
remember me from when I was at LinearB and hosting the Dev
Interrupted show for years, where we talked with a ton of engineering
(02:32):
leaders, and most of the conversations happening in AI
today are about generating code faster, solving, you know, this
challenge of, like, writing something better.
But, you know, as Galileo is obviously focusing on
observability and evaluation of that, whether it's an AI agent,
(02:54):
whether that's writing, whether that's code generation, it's
crucial. And for particular workflows,
say code, which I think we all agree is one of the obvious
opportunities to solve, given the massive open source codebases
and internal proprietary codebases that can be applied to
the problem. If you want a deeper
conversation on why code is one of the top areas for LLMs to
(03:15):
solve for knowledge work, we have our whole episode with the
Poolside founders that was from earlier this year.
Go check that out. Great deep dive.
But Greg's really focused on what happens next, after the
generation. And I think all of us who've
done dev work know reviews can be hit or miss.
It can really depend on who's reviewing the code, how
(03:37):
motivated they are to actually review your code, and how much
they're paying attention. You may just get an LGTM:
looks good to me. And with so much new code
generation happening, and frankly, with LLMs not always
being the best at being brief with their code
generation, reviews are clearly a critical and often overlooked
(04:00):
part of the puzzle. So I'm really interested to hear
your perspective on this, Greg. And I think we're both excited
to see the AI engineering landscape evolve from that
vantage point. So let's set the stage for our
listeners, Greg.
How is the nature of AI engineering fundamentally
changing the software development life cycle?
How is the nature of AI changing the software development life
(04:23):
cycle? So it's definitely everywhere.
You can't go on Twitter, you can't go on
LinkedIn; any source of
media or information or hype articles right now is constantly
talking about AI changing engineering.
I mean, Hacker News, it's flooded with this.
(04:43):
There's constantly a debate of, like, do junior engineers still
have a job? Should people even study CS
anymore, or is it overblown? And you can see the flame wars
happening in the comments. But if you, like, Ctrl+F
for AI on HN, it just lights up the page.
So it's definitely doing something.
It's definitely really interesting, and it's been pretty
fast too. I've been working on dev tools
(05:05):
for a while. I used to be an engineer at
Airbnb, working on the dev tools team.
That was in the late 2010s. I started working on Graphite
with my amazing co-founders around 2020.
AI was not really mentioned. I remember at one point I was
talking to a GitHub exec, and this must have been
2022 or 2023. And they were so excited about
(05:26):
the advent of some of this AI overhang.
And they're like, yeah, you should use it to summarize
commit titles, to fill out commit titles for what y'all are doing.
And we sat there like, yeah,
I don't know if that's physically possible.
Like, maybe that just can't be done with
current technology.
And here you are a year or two later, and it's like, oh, that's
the most trivial thing I could possibly do with AI.
(05:46):
So it's really seemingly come out of
nowhere and just taken over the field.
If I think, you know, personally, as I've
watched, as I think about how it's kind of emerged, it feels
like the first major use case was GitHub Copilot.
I feel like they really cracked this open, and it was a
cool synergy, because you take Microsoft, who owns
(06:09):
GitHub, and they also own VS Code. And then they have that major
investment in OpenAI. And so they have all the
puzzle pieces: they have an IDE, they have that training data set
from GitHub, and they have the AI talent, and they apply it and
they build what feels like the first really successful product
here, which was tab complete.
As I'm typing, I already had tab complete through their abstract
(06:32):
syntax trees and stuff happening in my IDE.
But now I can use AI, and it could predict a little bit
better, and it could fill out the rest of the line or, heck,
even in exciting cases, a couple of lines in front of me.
And this got really popular, and people were willing to pay for
this. And you had a couple companies
spin off from this; Supermaven was one of them.
People started really getting excited and working in the space,
(06:53):
and then it feels like the next thing that came along was Cursor, and
the Cursor team looked at that and took amazing
inspiration. I love the Cursor folks.
I knew them. I think they were also getting
started around 2020. Such a smart group of folks.
They looked at that and said, hey, this is promising,
but man, you could do so much more.
In fact, don't just be, like, a plug-in into VS
(07:14):
Code. What if you own the whole IDE?
You had more custom UIs; you were a lot more ambitious with
this. They extended tab to not just
fill out your line, but also jump ahead and fill out multiple
lines, and you can kind of keep slapping tab and just watch your
whole screen get filled out. But we're still in auto
complete land. And I think everyone looked at
this and they said, OK, fantastic,
what a nice dev tool and minor productivity level up this is.
(07:36):
But I don't think anyone was like, OK, we software engineers
are out of a job. Nobody was thinking about a second wave yet.
Then we had kind of another level
up. And I think this was kind of
enabled by more intelligent backend systems.
As Sonnet 3.5 comes out, Sonnet 3.7, variants of GPT start
evolving, you get this sidebar that, again,
(07:58):
I think Cursor really popularized, where you can get a
sidebar and I can ask it, hey, just go do me a thing.
Go add a button, go take this, go refactor this file, add a
unit test here. And it doesn't have infinite
context, but it can go on small missions, and it can actually pop
out rather than tab completing. It can just spawn and generate a
reasonably good amount of code or a reasonably good amount of
edits. And now I have this director
mode where I can go and I can have a conversation and I can
(08:20):
just suggest changes to the codebase.
I'm still sitting at an IDE. It's not too dissimilar.
I'm still working on a code change.
And heck, it's making enough mistakes where I'm also
immediately just moving my cursor into the window and
then, like, fixing a couple of typos, linter errors, you name
it. That really feels like the
second wave. And that felt distinctly better
and stronger and more interesting than just tab
complete. Tab complete is still useful,
(08:41):
but this felt like a big level up there.
And I think people started scratching their heads and
really starting to think, OK, man, this is getting
special. And as this has extended, and the
duration of missions that you can send those bots on
has increased, you have the advent of vibe coding getting coined.
You have, you know, the surface evolving, and now
(09:01):
you have Claude Code doing it in a terminal.
But it's still that general ergonomic pattern of: you type an
English sentence, it does a little work,
gets back to you, and maybe you're checking in 10 or 20 times on
a pull request. Now, the last level of this, the one that
I'm kind of interested in, seems to be the most emergent
right now, and that is this concept of
background agents. I don't know if it has a fully
(09:24):
official term yet, but that's one term that's been
coined. I view it as headless.
You know, I say, go at-message something on Slack, go
ping something on GitHub, wait a couple hours, and then have a
pull request come back to you. It's like the full autonomy self
driving car. It's not highway autopilot.
It just goes, does a delivery, and comes back.
(09:44):
And it's really interesting. I don't have
to have my hands on the wheel. I don't have to be paying
attention. I can do it off my phone
while I'm going to lunch. The scope of missions and asks
that I can make of that is quite small right now.
It's often small code changes. If you ask anything too
ambitious, it likely goes off the rails.
It also takes a good amount of time.
Usually what's happening is a sandbox is being spun up in the
(10:05):
background, kind of like CI; some code's being cloned, it's
building, it's iterating, it's giving you back a PR.
But it really, to me, is showing this final
evolution, or so-far-final evolution, of what these AIs
can do in software development, which is kind of headlessly
acting like a teammate and spawning a whole pull request,
allowing you to interact with it the way you'd interact with a
(10:25):
teammate. And with a teammate, I'm not sitting with
them in an IDE, four hands on the keyboard, NCIS
style, trying to collaborate.
We're talking on Slack and we're talking on GitHub, and that's how
we're interacting on those pull requests.
And we're starting to get to that phase.
And maybe it can only do 10% of PRs today, but heck, you know,
give me GPT-5, GPT-6, you know, maybe it can do 20%, maybe it starts
inching up that ladder. And where we find ourselves
(10:48):
today, we have all three: people use tab complete, people use
these IDE or Claude Code style chatbots, and they also use a little
bit of headless background agents.
And so AI is definitely shaking up the game board, and we're
doing everything a little bit differently.
I see these three major patterns emerging.
There's so much to unpack there.
I think you have a lot of great insights, and the idea of these
(11:09):
three development waves is really spot on.
And we could obviously talk more about things like context
engineering and how that's becoming a much larger frame for
a lot of these approaches, and how people are thinking through
the infrastructure piece of it. But I want to drill down on
something you said early on, which is commit titles.
You know, a couple years ago we were like, title generation,
(11:32):
how are we going to figure that out?
And now I don't think about commit titles anymore.
Like, come on, that is so far down the list of problems
that we're trying to solve here. And I think it's a great example
of how the infrastructure and AI has moved forward.
And yet the public perception, I think, is that maybe AI is a
(11:54):
little stuck. Like, oh, there hasn't been that much
improvement. But once you actually dig into the
details and you start talking to people who are building with it
every day, who are involved in the dev tool side of it, you see
that there is so much infrastructure work that's happened.
And I think there is this perception of, like, AI should be
a magic bullet and it should just solve your problem.
You know, it's going to kill the werewolf right off the bat, no
problem. And instead we're finding that
(12:16):
it does take a lot of infrastructure work to get
it right. And you have to do a lot of
context engineering. You have to provide a lot of
context. You have to make sure
you're setting your agents up for success.
But when you do, that's where the magic happens.
And I think that's what's really exciting about the efforts of
folks like yourself, but also about how we're seeing this
transformation. Because, I mean, you talked
(12:38):
about this first wave of, like, hey, I'm in my IDE,
I'm basically talking to this AI, you know, it's
completing stuff for me, we're kind of going back and
forth on debugging. I think that's still a fantastic
paradigm for folks who are more junior, or bad devs like myself,
frankly, to go back and forth, to learn, to iterate, and to
really be pair programming with an AI.
(13:00):
And then you have this movement up the stack to, oh, now I can
start to be a team lead, and I can leverage agents and
really assign tasks out. And now I have a team of
agents working for me. And it's so obvious we're
heading in that direction of just, like, these multi-agent
systems that function semi-autonomously and get inputs from
humans as architectural leaders and sometimes debuggers.
(13:25):
But that doesn't solve all the infrastructure problems.
You know, even if we're shifting left in some areas, there
are these blockers that are in the software development
life cycle, and review has always been one, but there are others
that I think are maybe emerging.
And I wonder if you are seeing particular changes to segments
(13:47):
of the software development life cycle, where now the SDLC for AI
has new blockers that have emerged, or new challenges
that you're kind of thinking about, beyond the core
problem that you're looking to solve?
No, 100%. I mean, there's so much that
goes into software engineering that is not just typing keys on
(14:09):
a keyboard and writing code. I mean, that's what, 10 to 20%
of the work? Yeah, exactly, exactly.
There's so much that goes into it.
It's not just typing. You have pre-work, you have the alignment
that goes on. Yes, stakeholders create a
design doc, you loop in product and design, depending on what
you're building. You're thinking carefully about
what you're actually going to go create.
(14:29):
Then you go and type it out, you test it, you build it, make sure
it feels good. That's your local experience.
That's where a lot of this action is happening,
that code generation happening on your local machine.
Oftentimes then you would create a pull request.
And as we all know, you don't just create a pull request and
then you're like, oh, great, done, feature complete, we're
solved. Like, call it launched, call it
(14:51):
launched, I'm going on vacation.
No, not at all. Right.
I think there's a saying that code complete is not
the same as feature complete.
After that, you've got to go get a code review.
We think a lot about that. But, you know, even just breaking
down code review, what's actually happening there?
OK, you're looping in other teammates to get context
from them. They also share context back to
(15:11):
you. So people have some idea of
what's going on with the codebase, how it's changing, how it's
evolving. Maybe they know something you
don't know and say, hey, that's actually not the pattern that we
do here. Or, you know what, that's
actually so reasonable, except we just had a meeting yesterday
and we're actually going in a different direction,
and let me just let you know that we should make a
change here before you go merge that in.
Maybe there's a security issue, and this is a great chance for
(15:32):
folks to apply that acumen. Maybe there's code owners or
expert subject matter expertise where they can weigh in on
certain areas. All this is going on in the
human code review. We're not even talking about
bugs. Of course, we're looking
for bugs in code review, but we're also looking
for them in CI. And when that PR is put up,
we're now trying to pass a bunch of automated tests, everything
(15:54):
from linters to unit tests to end-to-end tests.
Make sure this is safe and hopefully not about to crash
production. OK, let's say we pass CI and we
get a code review. Now we push a merge button.
If you're at a small company, it'll just merge onto trunk.
If you're at a large company, it may go into a merge queue.
And there's so many people merging at the same time
that now we need to sequence these and retest CI, in case someone
broke something in between you getting review and passing CI.
(16:17):
OK, so we've got to queue it up. We've got to retest it.
Great. That could take another hour.
There are some complex systems there. Then we merge it.
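(A note for readers: here's a minimal sketch of the merge-queue behavior described above, in plain git. Each queued change is rebased onto the current tip and retested before it merges, so a change that merged ahead of you can't silently break yours. The repo, branch names, and the one-line stand-in for the CI suite are hypothetical.)

```shell
# Hypothetical sketch of a merge queue: rebase each queued change onto the
# current tip of main and rerun the checks before fast-forward merging.
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q repo && cd repo
git symbolic-ref HEAD refs/heads/main
git config user.email demo@example.com
git config user.name demo
echo 1 > counter.txt
git add counter.txt && git commit -qm "init"

# Two changes, each reviewed and passing CI in isolation against old main.
git checkout -q -b feature-a
echo 2 > counter.txt && git commit -qam "feature-a: bump counter"
git checkout -q main
git checkout -q -b feature-b
echo note > notes.txt && git add notes.txt && git commit -qm "feature-b: add notes"
git checkout -q main

# The queue sequences them: rebase, retest, then merge one at a time.
for branch in feature-a feature-b; do
  git checkout -q "$branch"
  git rebase -q main                 # replay on top of whatever merged before us
  test -f counter.txt                # stand-in for rerunning the full CI suite
  git checkout -q main
  git merge -q --ff-only "$branch"   # only lands if the retest passed
done
```

In a real queue the retest step is the full CI run, and a failure pops the change out of the queue instead of merging it; that retest is exactly why a busy queue can add the extra hour mentioned above.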
But now, once you merge it, it's
still not user facing; it's still not launched.
We're going to have to rebuild a
new artifact. We're going to have to
gracefully roll that out onto production servers, checking all
of our metrics and making sure we're not regressing something,
spiking errors, or causing a problem. It hits production; still
(16:38):
not done. Now it's on production,
users are seeing it. Hopefully we're good engineers.
We're not at lunch yet. We're still kind of
monitoring our systems, maybe checking Datadog or Grafana. But
maybe we're working on something else
already. Maybe we're already paying
context switching costs because I've transitioned to the next
feature I'm working on. Or maybe I went to lunch and I'm
like, oh God, what did I write earlier?
Or maybe it's two days later because I didn't get a human
(16:58):
review in time, and that was a blocker in my system where I
needed a second review for some reason from an SME.
There are so many ways
this can go wrong. And I think, you know, we can go on; every part of
this is being challenged, rethought, or bottlenecked by AI.
To connect it back to AI: AI, I think, is really just obsessing
right now about that code generation.
(17:19):
Is it touching the design docs? A little bit.
We're using deep research to help us with those design docs.
For code review, there's AI code reviewers; we
work on one, and there's a few out in the market. For tests,
it's helping generate them and running them.
That's more deterministic execution of compute,
so it's not changing the run of them.
I do think you could do some fun stuff around ordering those
tests and trying to be as efficient as possible, but
(17:39):
mostly it's just helping us on the generation side. Merging,
it's actually kind of the same problem it was before, but now
we have a higher volume of code changes.
This is where it kind of gets
interesting; it breaks down. I'm hearing reports, and I think
people see this anecdotally, but you also see it in the metrics, that
companies are just generating a much higher volume of code
changes. I think Facebook was quoted as,
you know, over the last year doubling the internal code
changes per engineer that they're seeing, and they expect
(18:01):
to double again really soon. We see it in our metrics, because
Graphite is used by so many of these companies.
And of course we sync and we manage all the code changes.
So we see the volume of code changes, and it is strictly up
and to the right for engineers. It's putting strain on these
systems. You as an IC, you've got to review more stuff; your
human time is still your human time. The merge system?
(18:21):
Hey, maybe your company used to merge 20 changes a day, and now
it's merging 50 changes a day, or maybe, with bots and all
this stuff going on, maybe even 100. And then you have merge
conflicts and merge skew in these changes.
What's funny is, before, I had reasonably good confidence in
them. I thought, Connor, I trust you.
You're a great engineer. You wrote this code change.
I'm double checking it. But heck, you know,
(18:42):
we're already in a reasonably good place, and you wrote this
thing. Now it's like, OK, Connor, I'm not sure you wrote this.
You might have written it. You might have just, like, held
tab down while, like, staring at your iPhone.
I would never do that, OK?
Greg, you're casting aspersions
on my honor here. Exactly.
You would never. But someone else might, or heck, maybe
no human has ever read this code before.
(19:03):
There's, like, a bot that just
generated it, and now I'm the
first person ever reading it. So with this code review, there's
more of these code changes, and it's also higher stakes.
I trust it a little bit less. Maybe the original
human who wrote it didn't pay as much attention.
There's also a risk that, you know, machines are gullible, and
we hear more and more about security
incidents happening with these code changes.
(19:25):
That code review really starts mattering, because I can't even
just trust that the team creating it had good
intentions. And the context also matters,
too, because we take for granted that when we write code changes
ourselves, we're absorbing the context of how the codebase is
evolving, and then code review shares that out a
little bit. But if we're not writing those
code changes with a really active mind, we have less and
less context. So we'd better be absorbing that
(19:45):
in code review. Because if we're not absorbing it
in code review, and we didn't absorb it when we wrote it, the
entire engineering team might not know what's actually
happening in the codebase. That becomes a really dangerous
problem over time. There's more weight on this code
review system. There's more weight on the merging system, the
deployment system. That whole outer loop of
software development is getting challenged and bottlenecked.
There's been tweets and so on from the
Anthropic team and others about how it's not the codegen, the
(20:09):
code creation, the code writing side that's the issue anymore.
Now it's the DevOps, the management, the good engineering
practices around shipping and releasing to production.
That's kind of the new bottleneck, and we can solve it.
I'm, you know, I'm optimistic for engineers.
We can solve it, but that's a major issue and
challenge now. Yeah, I think it's really
interesting to look at where effort is going now, because
initially there was so much being poured into the codegen
(20:30):
challenge, and, not to say, obviously there are massive
billion-dollar-plus companies that are built on this.
I mean, Cursor, which I know you work with, is like the
obvious success story, but there are frankly so many VS Code
forks that are being successful here,
like Windsurf and others. And now I think we're all
realizing, oh, that only gets you so far.
As I think anyone who's spent time thinking about
(20:56):
the leadership side of software engineering for a long time
realizes, the coding's only part of it.
It is only a small portion of the work that has to
happen, this inner loop versus outer loop that you mentioned.
And I think something else you
said is really poignant, and that's the idea of what
(21:17):
level of context is being
applied to a PR. Now, we may have an expectation
of a human co-worker, but we may not know the context being
applied by our agent co-worker now.
And really, that's what they're becoming.
I've been calling them junior async digital employees, where,
you know, they're typically not
senior level yet. They lack the context and
(21:39):
the architectural thinking. But God, you can unleash a horde
of junior employees at a problem.
And if you give them enough context and you enable them and
you manage them, they can do fantastic work.
But it feels like there's still a big gap in how different
companies are providing context to their AI agents to actually
(22:03):
get them to be successful within the codebase.
What best practices are you seeing around actually enabling
these agents in this third distinct wave of this software
engineering revolution? It's
a good question. How are people getting the
context into the agents so they do a good job?
(22:24):
I think it starts with the person at the helm being a very
good software engineer. You still got to hug the agent
on both sides. You got to hug it on the input,
and you got to hug it on the output.
It's agent in the middle, human on the
in and the out. And we need the software
engineers to be really good. The people, you know, the people
I think are the strongest and most effective at working with
(22:46):
AI agents are the senior people, the staff people,
the really experienced folks. Yes, junior engineers are
finding great success, and I love it when I'm learning something
new. I love using, you know, AI
agents, but man, the really strong engineers, the folks
who've been doing it forever, they are lethal when mixed
with these AI technologies. Why is that?
So I think on the input side, they're applying,
(23:06):
as you say, the context. They're
maybe collaborating on a design doc with Claude Code.
They're creating a markdown file to start off with.
They know what a good design doc is.
They know what kind of questions to ask, and they're actually
filling that out. They're maybe even having, like,
iterative debates with the AI to create really good design doc
style guides before they even actually execute the code
(23:27):
change. Maybe they're running deep
research queries. They're scanning docs, they're double
checking the APIs that they're building on.
They're thinking about all of the classic software engineering
principles that have always mattered:
clean code, clean architecture, dependency inversion, good
abstractions, narrow interfaces. All this stuff matters.
If it mattered to the human,
it kind of still matters to the AI, and they're putting that
down in the pre-work. And you asked how they get the context
(23:49):
of the machines. Again, it's simple, but a good
pattern I've seen is people just creating really thoughtful
markdown file design docs, and actually going into Claude Code or
Cursor and saying, hey, read this thing; now let's start
working on that code change. Another thing that still matters
a lot, and we obsess over this, is making sure the code changes are
small and modular. Just as with design docs, you know, it
(24:09):
helps humans, and it helps the AI as well.
You don't want Claude Code or Cursor or any of these tools
to go and create 2,000-line, 3,000-line PRs.
They can do it pretty fast, but you don't want them to create
that, put it up as a pull request, and hope it passes
review and CI. There's a lot of famous studies,
I think Google had one, where actually the longer the code
changes, and we've seen this in our data too, the longer the
(24:30):
code changes, the fewer comments per line
they get. The human reviewers, their eyes glaze
over. You're way more likely to get
an LGTM. You put up a 10-line PR, you're going to get like five
comments on it. You put one up, they're going to
tear it apart. People are like, I don't know,
what do you want to do with this?
Same thing if your CI fails or if there's a bug;
you're like, OK, now let's look through this
(24:51):
2,000-line code change and, like, see if we can find the issue.
I'd much rather have small, modular code changes, incrementally built,
rolled out. If one fails, you know exactly
where the issue was. And I think that this helps AIs a
lot too. So this pattern I'm describing
is small diffs, stacked diffs. Stacking is a great
technique that Facebook invented to allow you to create
(25:14):
a small code change, and then create a small code change on
top of that, and a small code change on top of that.
The origins of this concept trace all the way back to old
school git, where people used to actually work commit by
commit. Nowadays, I know we all work on
pull requests, but no matter how you want to describe or call it,
it's many independent small code changes, one at a time.
It works very nicely. And you know who likes it most?
AI really likes it. It's actually, when you
(25:35):
actually start getting these coding agents to stack their
code changes, to create many small ones, they start applying
chain of thought to those stacks.
It's really funny. You can see them breaking it out:
OK, great, you know, we'll
create the function first, then we'll create the unit test next,
then we'll create the endpoint on top of that.
And they're trying to modularize it, test each one of them, and
roll that out. And the AI even builds it better
than if you ask it to kind of one-shot one large code
(25:56):
change. Then, as I said, the human on the
in, the human on the out: the human reviewer can look at
that stack of changes. They can look at many small code
changes and they can apply a better human review.
Heck, maybe they can review the first one while the
AIs work on the second one. You can tag in different subject
matter experts from different areas, and CI is being run on a
more granular level. So it works really well
(26:16):
together. And then, yeah, you'd better be a
good engineer on the review side, because that still matters,
and you still have a lot of work after creating that code change.
I'm curious, I'd love to unpack this concept of stacking, because
I think a lot of us are just really used to using pull
requests, and that's just the norm.
We're, you know, maybe too used to looking at our green squares
(26:38):
on our GitHub profile and going, like, oh yeah, count the PRs, they
have a lot of commits here. And stacking, while it's been
popularized by certain companies, is less of an
established approach within the industry.
I guess, one, where are you seeing the growth?
Are you seeing a lot of people adopt this, or is this still
(26:58):
pretty nascent so far? And two, how do you think it
can be further leveraged to enhance these agent-first or
agent-centric coding approaches? Yeah.
I will say we are seeing it get more and more popular across the industry and across the top tech companies that I really do think
(27:22):
set the trends in dev tools. All these companies, from Datadog to Vercel to Notion to Figma, a lot of the engineers are starting to embrace stacking more and more.
I love seeing it. The concept has been around for a long time. I think the biggest manifestation of it has been within Facebook and Meta's engineering culture. I think they had world-class tooling around creating stacks of small code changes.
(27:42):
And it's interesting because Meta is also one of the few companies who doesn't use GitHub.
Google's one of the other ones, and they also have similar patterns here. Why is that?
It's a, you know, fun historical story, but in part because GitHub was very nascent at the time that Facebook and Google were coming to scale and choosing their systems.
So they built in house. And when they built in house,
they thought about things from first principles.
(28:03):
They needed small code changes. They thought you could create stacks of them. It worked really nicely.
So these patterns emerged. They're becoming adopted more and more throughout the industry for a couple of reasons. One, I think the tooling is getting there in order to support them better. We build tooling at Graphite to help stack changes. Also, there's some great open source tools and CLIs that make it easier to manage your stacked
(28:24):
changes. Vanilla Git never did a great job of helping you manage these. And two, I think again, there's
more and more of this emphasis on keeping developers unblocked
and moving quickly. It's always mattered and we're
just watching that matter more and more, and that's creating
more appetite to embrace solutions like stacking.
To touch on exactly why this isn't mainstream, why this hasn't always been mainstream.
(28:45):
It's such a fascinating historical artifact to me.
I think the origins of Git were exploring this idea. They had two constructs: commits and branches. And what is a branch but a chain of commits, a chain of interdependent code changes that, you know, stack on top of one another? And so they were kind of on this from the get-go: your code, you're checkpointing it, you're proceeding along. What was different here was
(29:09):
the pattern that engineers collaborate on these code changes, because in open source Git land, you would go to a maintainer and say, hey, I've been working on a branch for a little while, a feature branch, a fork. I've gone on my own journey here. Would you pull in my changes, stranger online? Do you trust me? Would you like to take a review of this and bring in my branch? I'm requesting you to pull it
(29:30):
in. That's where the "pull" comes from. And then that maintainer,
you know, sits for a couple of days.
They give some thought. Maybe they do, maybe they don't. Depends how they...
Won't be able to prove it out, crucially. Yeah, yeah.
And a lot of people just squash those things. Sometimes they won't. It depends. You know, especially when you come to Linux and a lot of these open source projects, they're very picky about this stuff. So that was the origin.
(29:52):
But they, how are they collaborating?
They're collaborating on GitHub.And the unit of change was the
pull request, but the pull request was a bunch of commits.
It was a single branch, but a bunch of commits.
So what ended up accidentally happening is that the unit of change became the branch and not the commit. Git started with the unit of change being the commit, and it kind of became the branch. And then all these companies
adopted GitHub as the code change collaboration tooling of
(30:13):
the day. It was just the main system
lying around. If you didn't want to build it
in house and no one did, they would just buy GitHub.
So you start slinging PRs back and forth, and people care so little about commits. You can't review the commit, you can't run CI on the commit; you review and test the branch. And most all companies these days, when you merge, just enable squash and merge, just compress that branch down to a single commit on main to
(30:34):
keep it really clean. And some people just rebase a bunch, some people stack on commits. But overall we've really just centralized on that branch workflow.
That's why I think there's been an under-adoption of stacking. It's that vanilla Git and vanilla GitHub are kind of hard to adapt and support, if you've ever tried to change.
We're not used to it either.
(30:55):
Yeah, they're not. Yeah.
Yeah, you try and change chained branches locally. It's really tough. You make a single change, you've got to recursively rebase up the stack. It's super painful. Maybe someone's branched off a branch, maybe done it three times over. But you very quickly learn a lesson of, like, oh, that's tough, to incorporate down-stack feedback. Maybe I won't do that again. You want to create a stack of 10 changes by hand? Good luck. You're doing three-point rebases all day.
(31:16):
What you need is a better tool. You need it on the GitHub side, you need it on the local CLI side. And again, like, we build stuff here, but there's a variety of open source CLIs that really help you manage those pointers, manage those stacks, and recursively execute those rebases.
I want to talk about your approach at Graphite a bit, but
before we get there, I I have a more philosophical question.
You know, GitHub has become kind of the center of the coding
(31:39):
universe in a lot of ways. Now, it's not everything, but as
you pointed out, like it's it's the norm for most companies and
it's where open source work really happens.
Do you think GitHub needs to be reimagined for this new era of
agents? Do they need to rethink their
entire approach? It's a good question.
(32:00):
It depends. You know, GitHub is a fantastic company, and I don't want to tell them how to run their show.
But yeah, exactly.
But what do I think? I'm deeply empathetic to GitHub because they have a tough challenge. They've got to serve two user communities.
They have to serve all the open source developers in the world
who have the longest tail of workflows and use cases, weird
(32:22):
side projects and 20 year old Linux, you know, foundation
versions as well as some bleeding edge new stuff.
You know, all the new projects are also being hosted on GitHub.
So they have the really wide user base over on open source.
That's user group one. User group two: they have all the closed source companies that are running on GitHub. They're running on GitHub because it's kind of de facto, it's kind of the main tooling around.
(32:42):
Microsoft does a good job of supporting it and maybe they
used to be on Bitbucket or explore GitLab, but by and large
a lot of the modern companies are just are just using GitHub.
You know, another reason is they often have a little bit of open source. Even if you're a closed source company, you might have, like, 10% open source libraries or something. It's nice to keep that all in the same GitHub organization: make some public, make the rest private.
(33:04):
So they centralized on GitHub. Now, can GitHub reimagine this? It's tough.
Take the concept of AI code reviews, take the concept of stacking, take merge queues, take all these different systems that we're seeing work together to help handle the lower-trust, higher-volume code changes being created.
It's tough, because what works in one environment might not work in the other. The stuff that is best for
(33:26):
closed source development, personally, I think is a lot of the things that optimize for monorepos and trunk-based development. You've got 900 developers shipping every day.
You have a really high-trust environment, because the average code review is a team of five people handing each other pull requests and saying, not "please, stranger, pull in my changes," but "hey, stamp this please, and I'm going to merge it in," right? There's a different ownership
(33:47):
model going on. You probably don't really have
(33:47):
forking. You probably don't really have
long-lived feature branches. Open source? Invert every one of those constraints. They're more strangers there. The PRs are open longer.
You don't want a concept like stacking because you're saying
like, hey, don't incrementally merge in your changes because
what if you disappear halfway through and then my code base is
screwed up. I don't trust you. You don't sit next to me at work.
(34:08):
Give me the full complete working solution or I'm not
going to merge this thing. Now we're back to long-lived feature branches, slower review cycles because there's less trust, AI code review, and so on. Maybe, but who's footing the bill?
The same, you know, you run into the same issues with CI in open source. It's a little bit less developed than the really heavy, rigorous, intense testing in closed source, partly because private companies are willing to
(34:28):
spend a lot of money on that validation.
So there's that divergent pattern. Can GitHub do it? I mean, of course they can do a lot of stuff. They're a really, really strong engineering team. But I empathize with the tough challenge they have in front of them.
And I do think that the more they choose to lean into some of
the AI power tools around software engineering, the more
they may offend their long-time open source communities, because
(34:51):
I do think it's being adopted faster and more heavily in closed source. And if you lean into those patterns, you're going to, you know, turn off some other workflows.
Yeah, I would equate it to the challenge Google has faced, where they had such a lead in so much AI research, but faced this really classic innovator's
dilemma where they didn't want to kill their search business.
And I think they've kind of figured out how to approach it
(35:12):
and it seems to be working out well for them now.
But there were a couple of years there... Well, frankly, they were stumbling a bit while OpenAI was fast out of the gate and others were, you know, diving forward, and then Anthropic was growing, and, you know, people were kind of going, hey, where is Google in it now?
And I think Gemini is doing a lot of fantastic stuff.
They've integrated extremely well in Google products.
And, and frankly, they've had that integration internally for
(35:33):
a long time with a variety of LLMs that helped fuel internal ad
delivery, a lot of other things internally, but it wasn't really
seeing the public light. And it's taken them a while to
figure out, OK, how do we approach this cash cow we have in search and not kill the golden calf while we are
(35:54):
trying to grow the next era. And GitHub's not at the same level, because it's not like the ads business, which is obviously a very special, unique one. But it still has this, I think, challenge of how do we keep what we have here, and, to your point, these bifurcated user group
concerns while still trying to set up this next era.
(36:14):
And you see them focusing a lot on codegen. And I think that's reasonable, because code creation is an area where, among these two user groups, it's actually pretty similar. Whether it's an open source project or a closed source project, you're still going to VS Code, you're still typing for a little while. Maybe you're using a v0-style project starter or creator. I think they have GitHub Spark, which is kind of their
(36:35):
equivalent. So on the codegen side, it's a little bit easier. I think it's really when
you get back to that outer loop of software development, how
people are collaborating and integrating, you see immensely
different workflows, and it's a little bit tougher for them to
optimize that.
And speaking of the outer loop, I think the work you're doing at Graphite is really interesting. The entire outer loop of software engineering is changing,
(36:57):
as we've kind of been talking about this whole conversation, and Graphite has taken, I think, a really unique approach that is seeing a lot of success, obviously, with some of the brands you're working with. I believe Anthropic's backing you; you're using Claude in some novel ways.
(37:18):
Without giving away the secret sauce, can you talk a bit about
the unique challenges of applying LLMs to the task of code
review as opposed to just code generation?
Absolutely, yeah. You know, for folks who haven't heard of Graphite, I can describe it a lot of different ways, but one way I can describe it in simple terms is a client on top of GitHub, helping folks execute
(37:42):
code review and execute those outer loop steps. We help you, once you have a pull request: process it, integrate, test, validate, and merge it as smoothly and quickly as possible. You might use an advanced email client on top of your old Gmail or Yahoo account. You might have a calendar client on your iPhone, or a client on top of GitHub, in many respects.
And we try to use a lot of these AI features.
(38:03):
How are we using them, and how is anyone, honestly, using them in this phase of the process? There's a couple of really small no-brainers you can do. You can help generate the PR title and the description, and you can give some advanced search features. That's very simple stuff.
You can help run an auto-linter of sorts, an AI code review:
(38:24):
take that diff, feed it through the LLM, and try and call out mistakes, issues, vulnerabilities in the code, inline, in many ways. I mean, you can brand it differently. You can brand it as, you know, AI code review. It's kind of exciting.
You can brand it in a boring way too, which I kind of enjoy as an engineer, which is: hey, this is just another variant of CI.
What is CI? You know, CI is: I'm going to
(38:45):
take the code change, run some deterministic unit tests and deterministic abstract-syntax-tree linter things on it. I'm going to do all that so that I can give you a check mark or an X, and I can leave some inline comments. What is the LLM?
It's kind of like a fuzzy version of that.
You don't have to configure it much. It's not very brittle; it's very flexible regarding language. But it's just giving you some
(39:06):
validation. And you don't fully accept the code change just because it passed CI, but it definitely sure as heck helps. I think AI code review fits that bucket. So we build some of that, and I'm not going to lie, we don't do too much fancy stuff. Of course we work hard on this, but a lot of it is just taking really smart models. We don't build our own; they're from Anthropic or Gemini or OpenAI,
(39:28):
any of these companies, and there's a mix of them. We'll feed that diff through,
we'll take custom rules and style guides from the users. We'll take previously upvoted and downvoted comments, and we'll try to bundle it all together and try and leave some good feedback on the pull request. We personally really optimize on leaving actual inline comments, and trying to be quiet if we don't have high confidence in the issue.
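That "stay quiet below a confidence bar" idea can be sketched in a few lines. The model call here is a stand-in stub so the flow is runnable offline; it is not Graphite's actual pipeline or any real LLM API, and all names are illustrative:

```python
# Sketch of AI code review as "fuzzy CI": run a model over a diff,
# keep only high-confidence findings, and emit them as inline comments.
from dataclasses import dataclass

@dataclass
class Finding:
    file: str
    line: int
    message: str
    confidence: float  # model's self-reported certainty, 0..1

def review_diff(diff: str, review_model, min_confidence: float = 0.8):
    """Run the model over the diff; suppress low-confidence noise."""
    return [f for f in review_model(diff) if f.confidence >= min_confidence]

def stub_model(diff: str):
    """Toy stand-in for an LLM: flags a bare `except:` and stray TODOs."""
    findings = []
    for i, line in enumerate(diff.splitlines(), start=1):
        if line.lstrip("+ ").startswith("except:"):
            findings.append(Finding("app.py", i, "Bare except swallows errors", 0.95))
        if "TODO" in line:
            findings.append(Finding("app.py", i, "Unresolved TODO", 0.4))
    return findings

diff = """+def load():
+    try:
+        return open("cfg").read()
+    except:
+        pass  # TODO handle
"""
comments = review_diff(diff, stub_model)
for c in comments:
    print(f"{c.file}:{c.line}: {c.message}")
```

The low-confidence TODO finding is filtered out; only the 0.95-confidence issue surfaces as a comment, which is the "be quiet unless you're sure" trade-off Greg describes.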
Because man, I really think trust matters a lot with all
(39:52):
these emergent AI tools. And when I see information, if
it's right, that's great. If it's wrong, I get really
annoyed really fast. So, if you're going to distract me, you'd better have some really good
confidence. We try to build that into the
system. Other things we do: we take inspiration again from some of the innovations that the Cursor team created. The Cursor team added this great sidebar in the IDE that lets you ask questions and make
(40:15):
modifications. And, you know, it's not just us. Look at any productivity product in the world right now, from Google Docs to wherever, and they all know: oh, you're going to have a copilot with you that's going to, you know, work through it.
And I think everyone got chat, yeah.
Exactly, exactly.
But, you know, so we've brought chat into the product, and it's actually worked. It's been really nice. You know, code review is
(40:35):
actually a case where you want to ask questions, or maybe run quick research queries on a bunch of old PRs. You're really pair programming in a lot of ways. It really helps.
Also, inline modifications. Sometimes you're like, hey, this pull request looks pretty good, just rename a couple of these variables. Now either I can check out that PR, pull it locally, modify those in my IDE, commit it, save it, push it back... or just ask chat: hey, just fix those
(40:57):
quick changes right here. And then we're good to go. And so we've also enabled it to make really fast, early modifications.
These are starting to be the advents of how to use AI to make the review process better. It's conversational chat to research and modify the PR. It's AI automatically scanning, looking for issues.
(41:19):
It's helping fill out the busywork of the process.
We love doing this stuff because we already have a surface.
We've already spent so much timebuilding a really powerful pull
request UI for looking at that code change and reviewing it
manually. So it's really nice and easy to
kind of layer on these, these nice features on top of it.
I think it's a little bit tougher if you don't start with
(41:41):
a UI surface area, because you're trying to do everything headlessly. Again, it's why I respect that the Cursor team actually said, hey, we're not just going to build a VS Code extension. We're going to take the entire IDE, and it's going to allow us to have a better UX and go a little bit deeper on integration here.
Now, they were able to fork VS Code. We weren't able to fork anything. So here we had to kind of recreate it from the ground up.
Well, in many ways we're still just a client on top of GitHub
(42:01):
and modifying the underlying data.
This focus on the outer loop brings up something you said earlier, which is that seniors are really the ones who are seeing a lot of gains from these AI tools. They are deeply enabled by them.
They have the personal context and experience to get the most
(42:24):
out of them. And I'll be frank, I'm worried.
I'm worried juniors are being left behind.
Like, sure, maybe they can learnmore easily by using AI as a
pair programmer and leveraging its knowledge.
Maybe that's easier than going through Stack Overflow page
after Stack Overflow page. But the outer loop is this area where you need
(42:45):
a lot of context, where seniors thrive, and we're seeing fewer junior software engineers get hired.
We're seeing a lot of grads comeout of top schools with
engineering degrees seemingly atleast anecdotally, struggling to
get jobs. We're seeing the percentages of junior hiring for most of the major companies, the Mag 7 and
(43:06):
others, go down. Is there a hiring gap that we
need to be thinking about as an industry?
I have... let me see. So we're going to get into, like, pontification land. So yeah, we're guessing at the future here. So let's see where these, you know, predictions take us. But I have a couple of
(43:27):
predictions going on. And one is a little bit pessimistic and one is a little bit optimistic.
So, you know, on the pessimistic side: I wouldn't be surprised if the software engineering field
starts getting a little bit closer to what I've already seen
happen in finance and in consulting and law, where you
(43:47):
also get a lot of new grads froma lot of top universities and
programs work really hard and really grindy and really humbly
to desperately try and get a small number of opportunities at
a big three consulting firm or top law firm.
Or they go to Goldman and get destroyed in i-banking. Partly because there's not a ton of value... a lot of people want to do it, and there's
(44:08):
a little bit less value for an entry-level person.
But if they can clear that entrylevel, then they start getting
leveraged, they can start doing really powerful stuff in these
fields. Engineering has been an
interesting anomaly for the last 20 or so years, where entry-level people can be really useful and productive in an organization. There's a shortage of them.
And so people graduate college and they get, you know, high
(44:28):
$100,000, sometimes $200,000-plus total comp, job offers.
It's pretty incredible. And they're being swept up and
go to companies with incredible perks and have an amazing time.
I love that I benefited from that.
A lot of people do. But, you know, if you see that regress to the mean a little bit, and it looks a little bit more like these other fields, I wouldn't be too shocked. It's a bummer. It's going to be tougher on
(44:49):
folks graduating, but I just feel like we already have precedent for this in other fields, and those fields are holistically OK. There's room for improvement inside i-banking, but society seems generally OK with that. Perhaps we're watching an anomaly come to an end. I mean, that's my pessimistic take.
My optimistic take is that if you're an
(45:10):
individual and you have high initiative, and I think it really takes initiative, but if you have high initiative, man, the getting has never been so good. I really think so. In high school I had to get a job, and I didn't want to pack groceries at the local grocery store. Instead, I was, like, determined I was going to make iOS apps, and it was like iOS 3 or 4. I was trying to figure out how to make iOS apps, and I'm reading tutorials online.
(45:32):
I got Coding for Dummies. I found, like, an old low-quality iTunes U course on iOS development, and I'm fighting my way through Xcode on a virtual machine on a family Intel computer, and you make it work.
I think the bar is lower now. I think if I was in high school trying to make an iOS app to make money on the App Store, I have, you know, incredible tutoring.
(45:53):
Heck, I have an oracle that'll debug every problem for me, and that oracle has a learning mode built in that'll help me. I have coding agents now. I've got to take initiative, but I can do a lot. I can get a lot of reps in. I can create a lot very, very quickly. And, you know, maybe I can kind of progress through that junior phase of software development, and I can take on barriers that would have existed before.
(46:16):
You're right, maybe it's hard toget hired at a company, but I
do, from what I can tell, a lot of these companies are still
running relatively normal interviews.
And if you can crush a coding interview and you can crush an
architecture interview and you can show an amazing project,
people will still look at you. You're just going to have to show some initiative. You have to work very hard.
But I'm optimistic for folks who have that energy. It's actually
(46:37):
gotten a little bit nicer, a little bit easier, and a little bit less gatekept in many ways. So competition is higher, but there's incredible opportunity. And I just really think over the next 10 years we're going to see so much incredible technology get built out.
I'm just such an optimist. I think it's the best time ever to be an engineer, you know? Not the worst time.
I suspect
We're going to see a melding of your two cases here where, yes,
(47:00):
software engineering broadly is going to regress to the mean of
it's a little harder to get thatincredible entry level role, but
where people with high agency are enabled with tools and with capabilities so far beyond what we've seen in the past. And I think we're already seeing
that with some people building companies, building iOS apps, as
(47:22):
you mentioned. And while we're on the topic,
let's just keep going into the future.
I'd love to keep unraveling from you what you're seeing. What's Graphite's strategy as you look at the next, God, it's hard to say this, but let's call it five years of software engineering, which is a lot of time in AI land? Maybe it's really two years. Like, how are you thinking about
(47:45):
these next stages of development? Do you see a new wave coming you're trying to address? Is it really all about, like, nailing the current level of agents? Curious on your perspective.
Absolutely. So I think about this. I have to, I'm the founder, it's my job. Step one: what are we trying to do as a company?
We're trying to build the best tooling possible to help
(48:05):
engineers collaborate on code changes.
We think fundamentally code collaboration, it exists now.
It's going to continue to exist.Maybe some of that collaboration
is with AI bots, but we're stillgoing to be working together.
I don't think the whole world moves to like individual
companies run by one engineer. No, no, we've still got engineers working together, collaborating on these code changes, and we are in the business of helping them work on that.
So now I can extrapolate that and you know, a lot of that goes
(48:29):
into just building classic product fundamentals.
Let's build a great, great UX. Let's build great workflows.
Let's take good ideas from thesebig tech companies that were
bottled up and let's bring them out to everyone.
These classic dev tool principles still apply.
Now I also have to play oracle with AI. And when I play oracle with AI, I think about a couple of different outcomes. One outcome is we stall wherever we
(48:51):
currently are. There's always a chance you're like, oh, I guess this was the peak, you know, this is as good and as smart as it gets. It's unlikely, but let's jump there.
Even if it does stall at that, there's still so many gains to be had over the next few years around the infra, around providing context. Like, even if the actual level of reasoning stalls for a year or two, which I think is possible, or at least slows down,
(49:14):
I think, we're maybe saying, there's just so many gains around the edge of the outer loop that you're already working on. That'd be my take, at least.
I completely agree. I think not all the UX patterns have been invented here. Not all the smart ideas have come up, and I want to help find those, understand them, incorporate them into the product, and keep optimizing that mission of helping engineers work on code changes.
(49:35):
Now, that's... I agree, if this is as smart as AI gets, we've still got our work cut out for us.
There's another question. OK, what if AI gets so, so smart
that it really starts changing how software engineering gets
done? Maybe you really are mostly collaborating with bots. We're still in a world where you're mostly collaborating with humans, and people are using AI to create code, but there's still, like, another human avatar
(49:57):
user on the other end, mostly in code review in these systems. What if that really starts shifting? What if we really start getting to where, like, a lot of these code changes are AI? OK.
How do we think about the world? Maybe it looks, as you mentioned, a little more like tech leading, or everyone's acting like an engineering manager. You're now wrangling various
agents and you're chatting with them, giving them feedback, and trying to incorporate their code changes.
(50:17):
And I think there's, like, this third level. I imagine if AI gets really, really good and I don't even need much of the review side anymore. It's so good that it listens in on all my meetings and all my feedback, and it can do review better than I can.
It writes my tests and executes them and coordinates my database infrastructure. What's left for me then?
(50:40):
OK, now we really are, like, wondering if we're out of a job. But I think that starts looking like an agency model.
I actually have a paradigm on this.
So if, like, one step up of intelligence is you're a manager or tech lead, the next step up is you're, like, someone hiring an agency.
And if you've ever really contracted an agency to build
your website, what actually happens is you talk to them,
they say, fantastic, thanks for calling us to build a website.
(51:00):
Step one, tell us what you want.And then they're like, OK, yeah,
yeah, but you're really bad at articulating your vision.
So I'm actually going to like interview and like pull out what
you really want for 30 minutes. And they're like, I'm going to
come back to you in a week and I'm going to show you 3 variants
because I still think that you don't know what you want.
And then we're going to pick that one.
And then I'm going to keep developing.
I'm actually going to hand you a website that we worked on, that I, like, did for you, and you were annoying to work with, and you
(51:21):
have no idea how it works. You've never seen the code, but here's your website. That's, like, the agency model of
AI. That to me is, like, the upper bound of how far we approach this. That's what happens if you stop reviewing the code a little bit and these AIs get so smart that they're just pulling the context out of you, like treating you like a little child. And so I think we end up on a spectrum somewhere in that world. I don't know, this
(51:42):
is a very vanilla take, but I think we'll end up kind of in the middle, where 20% of changes are headless.
Engineers are still running some of them. We're still heavily involved in code review. At the very least, I think the security and the context are just such a blocker to getting out of the code review side of things. Even if these things are amazingly smart, I've yet to see people solve the gullibility problem of AI. You just look at these hacks
(52:05):
and, like, you just insert a little prompt injection and it goes off the rails. I really hope the Tesla Autopilot team is still reading the code changes going into theirs.
Got me too. Really hoping, you know, there's high-stakes engineering happening in the world.
So I think it'll be middle-ish, but I do pontificate about, like, what happens if we just don't even review, don't even read the code, ever.
Oh man, Greg. I feel like we could have
(52:26):
another hour of conversation, but we do have to wrap it up.
Thank you so much for the lovely chat. It's been such a fun time. Where can our listeners go to find you and to find Graphite if they want to learn more?
Absolutely. If you want to learn more about Graphite, you just go to graphite.dev. We have a lovely splash page, you can sign up for a free trial, you can play with it. There's no lock-in. If you want to learn more about me,
(52:47):
you can check me out on Twitter slash X or LinkedIn, where I'm reasonably active. Fantastic.
So much fun chatting with you today.
Really, really appreciate the time and thanks for all the
insights and, I think, mild debate, but a lot of interesting stuff here. I think there's a lot of takeaways and, I mean, threads we can keep pulling on here, because
(53:10):
there is such a transformation happening in software
development. And it's clear that we need to
rethink how work happens, not just for today, but for the next
couple of years as we look at these potential scenarios we're
talking about. Thank you all for listening.
That's all for this episode of Chain of Thought. Don't forget to subscribe to our new Chain of Thought newsletter on LinkedIn for more insights on building with AI.
(53:32):
And make sure you're subscribed to the podcast while you're at it, whether you're on YouTube, Apple Podcasts, Spotify, your favorite podcasting app of choice; we don't want you to miss a conversation. And gosh, if you're on Apple Podcasts, if you're on Spotify, if you're on YouTube, you know what means a lot to us? A comment.
A like, a review. That sort of engagement really
is an incredible signal to both your human counterparts that
(53:53):
maybe they check out the show. And you know what, it's great
for the digital counterparts whoare deciding whether or not to
index us, whether to promote us in the feed, etcetera.
So, you know, provide the context that everyone needs to enjoy Chain of Thought. And speaking of great conversations, we're always looking for AI builders and leaders to feature on the show. If you know someone who would be a perfect fit, please reach out to us on socials, or leave a comment with your review of the show. Thanks for listening.
(54:15):
We'll see you next week. Greg, thank you so much for
joining us. Tons of fun.
Thanks for having me, and we'll get you all peddling reviews.
I have to. It's not just me. Yeah, awesome. Thanks, guys.
Thanks.