Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
How'd you like to listen to dot net rocks with
no ads?
Speaker 2 (00:04):
Easy?
Speaker 1 (00:05):
Become a patron for just five dollars a month. You
get access to a private RSS feed where all the
shows have no ads. Twenty dollars a month will get you that and a special .NET Rocks patron mug. Sign up now at patreon dot dotnetrocks dot com. Hey,
(00:34):
get down, rock and roll. Build, we're here! It's .NET Rocks.
Speaker 2 (00:37):
I'm Carl Franklin, and Richard Campbell.
Speaker 1 (00:39):
And the end of the second day of recording for us.
So you know, it's been a good two days so far.
Speaker 2 (00:46):
I got another one to go today. I've been knocking out RunAs Radio shows, having a good time doing it. Awesome.
Speaker 1 (00:51):
I've already done a couple of Blazor Puzzles, and Jeff Fritz is chock full. I saw our friend Beth outside just now. Yeah, she used to be Massi, and now I can never remember what her new last name is.
Speaker 2 (01:05):
She didn't change it? Oh, okay, okay. You were at the wedding?
Speaker 1 (01:08):
Well, that's true. I was there and I don't remember. I don't even remember what I had for breakfast. That's good, because I didn't have breakfast.
Speaker 2 (01:16):
Okay, here we go.
Speaker 1 (01:18):
Before we start, let's talk about nineteen fifty five. Episode nineteen fifty five. This episode is nineteen fifty five. So we've been talking a little history about what happened. Okay, here: what do you like about nineteen fifty five? Well, I just know a couple of significant, you know, cultural, political, and social events. The first McDonald's restaurant opened then. It's definitely
(01:38):
a cultural event. Yeah, yeah, fast food, right. Disneyland debuted, and The Mickey Mouse Club premiered on TV.
Speaker 2 (01:47):
Coincidence, I don't think so. No.
Speaker 1 (01:51):
Politically, the Warsaw Pact was formed and West Germany became a sovereign state. On the social front, Rosa Parks's arrest sparked the Montgomery Bus Boycott, and the racially inspired murder of Emmett Till caused national outrage. There's a lot more, for sure. What are you thinking about, computer tech wise
(02:12):
or science wise?
Speaker 2 (02:14):
The first atomic clock, which measured the frequency of caesium, so it was an incredibly precise clock. Wow, it was the beginning of the atomic age. Nineteen fifty five, right. But more importantly, Velcro. Now, Velcro and the Big Mac in the same year, and the first wireless television remote, the Zenith Flash-Matic. You know, they called it the
(02:35):
Flash-Matic because it used light. It used light. They put a sensor in each corner of the screen, and depending on what corner of the screen you pointed at, it either turned the channel up one or the channel down one, or the volume up one or the volume down one.
Speaker 1 (02:48):
Isn't that cool?
Speaker 2 (02:48):
It was simply cool. And before that, you had a remote, but it was wired, and people didn't like the wire. So this was the first wireless one, just with a flashing light. Yeah.
Speaker 1 (02:57):
I was the remote for my older brother. Yes: get up and change the channel, or I'll smack you, right? Were you the remote for your older siblings? Is that why you...
Speaker 3 (03:07):
I'm the oldest, but I was definitely the remote for
my dad.
Speaker 2 (03:09):
For your dad. Why else would you have children?
Speaker 1 (03:14):
I tried to train the dog to do it, but it never worked. It doesn't work out. All right, so I guess it's time now for Better Know a Framework. All right?
Speaker 2 (03:30):
Okay, dude, what do you got? All right?
Speaker 1 (03:31):
So I went looking at what's trending on GitHub again for repos, and...
Speaker 2 (03:36):
There's a lot of AI stuff trending, and here we are.
Speaker 1 (03:41):
Yeah, it's an AI world. So this one is Awesome ChatGPT Prompts. This repo includes ChatGPT prompt curation to use ChatGPT and other LLM tools better. So it's ChatGPT, but these prompts work with other AI models: Claude, Gemini, Hugging Face, Llama, Mistral,
(04:05):
and more. Thank you, thank you very much.
Speaker 2 (04:09):
Yeah, that's not going to show up on the show.
Speaker 1 (04:10):
No, probably not, but Brandon, if it does, I would
like to say thank you.
Speaker 2 (04:15):
Okay.
Speaker 1 (04:16):
So yeah, so that's it. I mean, it's kind of a long readme because there's a lot of stuff there. Oh yeah, but here's just a couple: act as an Ethereum developer, nice, act as a Linux terminal, act as an English translator and improver, act as a job interviewer, act as a JavaScript console, act as an Excel sheet. So yeah,
(04:41):
some very creative prompts there. I'm sure people are finding good uses. So that's what I found. Who's talking to us today?
Speaker 2 (04:46):
I grabbed a comment off of Show fourteen twenty five, so going back to New York a little bit. That was March of twenty seventeen. We talked to our friend Damian Brady about Brownfield DevOps. Yes, I happen to know Nicole has done a little bit of DevOps-related stuff over the years. And this particular show, this is where we're sort of talking about bringing the DevOps practice of that sort of automation
(05:07):
and controls over an existing application.
Speaker 1 (05:10):
Right, was Damian with Octopus Deploy back then?
Speaker 2 (05:12):
I think he might have been, actually. Yeah, of course he's a cloud advocate these days. And so our friend Hilton has now commented on that show. Yeah, and he said: I was listening to the Brownfield DevOps show and realized some things. One, a green field is actually when it's lush and growing. Well, I'm thinking, say, like a cornfield or something; it's actually about to bear fruit.
(05:33):
As contrast, a brown field is one that's just at the start of the process. It's sort of a File, New Project, right. So he's totally flipping the concept on its head, right. He's not wrong, yeah, you know. And then two, if one thinks of financial services or wealth management firms, there's the idea of, quote unquote, building a legacy as a tremendous thing to be proud of. It's something valuable left behind.
(05:54):
Whereas, where software development is concerned, a legacy is something old, rotten, and tarnished. It's definitely something you left behind because you changed jobs.
Speaker 1 (06:01):
Yeah, so I'm thinking of ways that a brown field could be a bad thing. You know, I'm pretty sure I would want to walk through a green field, but not...
Speaker 2 (06:11):
Through a brown field, because, yeah, you get your shoes muddy. Yeah, it's not a good thing. It's not a good thing. So I mean, it is possible, as an industry, we've actually got these terms flipped around, and you should start fresh with brownfield projects and build a legacy system to be proud of.
Speaker 1 (06:23):
It's really smart.
Speaker 2 (06:24):
Oh, Hilton, you're so clever. But you know, the...
Speaker 1 (06:27):
Planting grass, it's got to be brown before.
Speaker 2 (06:29):
It becomes green, and then the birds are gonna eat the seeds. You're gonna have to plant it again. Ask me how I know. Hilton, thank you so much for your comment, and a copy of Music to Code By is on its way to you. And if you'd like a copy of Music to Code By, write a comment on the website at dotnetrocks dot com or on the Facebooks, we publish every show there, and if you comment there and we read it on the show, we'll send you a copy of Music to Code By.
Speaker 1 (06:46):
Music to Code By, the FLAC version, is now the most popular downloaded version.
Speaker 2 (06:52):
Yeah, because FLAC plays on everything, I presume. It does. Well...
Speaker 1 (06:55):
But what most people do is they download them in FLAC and then they convert them to WAV, right, and, you know, you get the same quality, but FLAC is half the size. It's like a zipped WAV. Oh, so the FLAC is smaller.
Speaker 2 (07:06):
The WAV is big.
Speaker 1 (07:07):
But yeah, yeah, FLAC is half the size, but it's lossless. Good. Anyway, Music to Code By, at musictocodeby dot net. There are twenty-two tracks. Go get them there. They'll help you focus and they'll help you be a better programmer. Okay, let's bring back Nicole Forsgren. She's a DevOps and developer productivity expert who thrives on helping large organizations reshape their culture, processes,
(07:30):
and technology to improve their business and enhance the developer experience. She combines AI tools with developer expertise to help teams unlock productivity and innovation. And I know this isn't one of those bios where you feel you had to sneak AI in there just to be relevant.
Speaker 3 (07:48):
Because we know you really do that. I actually do.
Speaker 1 (07:51):
You're an AI person.
Speaker 2 (07:52):
Oh my goodness. Well, Nicole was on .NET Rocks back in twenty seventeen with our friend Jez Humble, back in the day, the DORA days, and a regular on RunAs Radio, you know, half a dozen times over the years, starting with that in-person interview at the Chef conference, I think it was. I think so. It was a long time ago. Good to see you again. Good to see you. Having a good Build? I mean, love it.
Speaker 1 (08:16):
Yeah, it's historic in my mind, this period right here.
Speaker 2 (08:20):
Yeah, really crazy emerging technologies. But you've moved on from DORA. It got acquired by Google, didn't it?
Speaker 3 (08:27):
We were acquired by Google.
Speaker 2 (08:28):
Congratulations, thank you?
Speaker 3 (08:30):
It was it was great. Yeah, the team that I
got to work with there was wonderful. And the Dora
team continues now.
Speaker 2 (08:37):
Sure, they still do the report every year. I read it. Yeah.
Speaker 1 (08:41):
Can you tell me, just, what is DORA? And what was it, or what is it?
Speaker 2 (08:45):
Yeah? What's the shorthand for Dora? Yeah?
Speaker 3 (08:47):
Absolutely. So, DORA was a multi-year, now at this point over a decade, research project that looked into, you know, kind of investigating the types of things that can improve software delivery performance and organizational impact and output. So, things that now everyone takes for granted. Jump in the Wayback Machine: imagine being in the late two thousands, early twenty tens, when we weren't sure.
(09:10):
We were hearing stories that things like CI/CD were important, that things like continuous integration were important, you know, in particular automated testing, cloud. So, you know, discussions would come up with leaders in organizations or managers, and they'd say, oh, well, you know, that won't work here, or, well, that doesn't
(09:30):
work like this, or we're a different organization, or even, well, what do you mean by CI? What does CI even mean? Because, you know, at that point also everyone was just redefining it constantly. And so the research program kind of broke all of these pieces down and tested statistically across the whole industry. I mean, we had thousands, tens of thousands
(09:51):
of responses, and so we could, and data points that we could really test on. So that's, so DORA was originally short for DevOps Research and Assessment, and it was the State of DevOps reports. It originally started with the Puppet team and then broke out and did it through DORA, well, and that led to the book Accelerate. So some folks know Accelerate. Accelerate is kind of a, yeah,
(10:12):
synopsis and summarization of the first several years of work.
Speaker 2 (10:16):
I always felt that you were the person who brought the real rigor to analyzing the results of these product teams, to show that when they followed these practices, the results were substantial. They were orders-of-magnitude productivity increases. Like, it seemed so anecdotal until you started writing those reports, where now it's not anecdotal, it's completely scientific: here are the numbers, here's what you do, and you too
(10:38):
can have this experience, your teams can perform like that. You know.
Speaker 3 (10:42):
The other thing that I think was great, you know, I really value the partnerships that we had with, you know, Jez Humble, Gene Kim, Puppet, Alanna Brown, Nigel Kersten. I was this cute little baby faculty member, and I walked in and I'm like, I would like to help you with some of your data. Oh, by the way, let's also really rethink the research design here so we
(11:04):
can test some of these relationships out. And so, you know, to their credit, they were like, yeah, let's go for it, and so everyone kind of jumped in. And, you know, Puppet had been getting some really interesting results before, and Alanna Brown was the first person to really see that this was kind of an emerging field and deserved some kind of study. But by having this kind of research design,
(11:26):
we were able to also do things like: what pieces of continuous integration are important for performance and predictive of performance? Right, so, for example, it's three things. It's: when you check in code, it automatically kicks off a build, it automatically kicks off tests, and you have to keep the build green.
Speaker 2 (11:45):
Right.
Speaker 3 (11:45):
And for some people, that was, you know, like they'd have two of the three, because, you know, it's the same. I think it's the same thing with AI right now, right? It's not just about the tech. It's also about the culture and what you prioritize and the decisions that the teams make and the processes around that. So the nice thing is we had kind of the statistical significance, but also the research design helped provide direction regardless of
(12:09):
whatever technology stack you were on.
Speaker 2 (12:12):
Yeah, that was the thing that was so cool for me reading all this. It's like, it doesn't matter, right? Like, if you follow these practices and grow this culture mindset, it doesn't matter what tools you use. You can get these kinds of results. You know, there are obviously things you ultimately need to build, but there's no secret sauce other than making the effort.
Speaker 1 (12:32):
Yeah, and now you don't even really need to make the effort. I mean, so many of these processes are built in, you know, GitHub Actions and things like that. I mean, you have to set them up and turn them on, but it's trivial compared to what it was.
Speaker 3 (12:43):
Yeah, the world, the world's different. But I would also say that, you know, a bunch of this is table stakes now. It's super important, and if anything, it's more important now that AI's here. Yeah, because now people are just making code and innovating, experimenting super, super fast, and, you know, it's just frustration and sadness and anger if you can do all of this incredible coding
(13:06):
and prototyping and then you're stuck behind this giant wall of waiting for builds and integration and slow feedback and flaky tests and nothing else works.
Speaker 2 (13:17):
Yeah, yeah, waiting for IT to deploy. And remember build servers back in the nineties? Build servers, yeah, yeah. Well, and the build...
Speaker 3 (13:28):
Master yeah oh yeah, what's old is new again?
Speaker 1 (13:31):
Yeah, and one person, that was the only person allowed to do the build. It's like, build? We did that on Fridays. But why we would do that on Fridays before going home for the weekend, I don't know.
Speaker 3 (13:40):
Because sometimes it took forty-eight hours to run.
Speaker 2 (13:43):
Yeah, that's true. Yeah, no question. So now, a new book coming? Yes. And I saw on LinkedIn you had a vote for the title. Yeah, and I think Frictionless is such a good title. Like, it just gives me chills.
Speaker 3 (13:59):
That's the one we landed on. So shout out to the LinkedIn folks. Thank you.
Speaker 2 (14:03):
It's a good name. What are you talking about?
Speaker 3 (14:05):
So the book is about developer experience, right, and how
it's important, why it's important, how you can make the
business case, but especially, and I think most importantly, how
you can improve developer experience yourself.
Speaker 2 (14:19):
Whether, you know, as a developer or as the leader?
Speaker 3 (14:22):
Yes. Right now, it's probably going to be easiest or most straightforward for someone who is the head of a developer productivity team or an infrastructure team or a DevEx team or, you know, pick the word that your company loves. But there are still some really cool, good tools and tips and tricks that anyone could use. You know, it's
(14:44):
almost like the old DevOps days, right? Sometimes you had exec support, sometimes it was, uh, grassroots.
Speaker 2 (14:50):
Yeah. Just, you know, I remember we told the story of this one person walking around asking questions, just sort of surfacing areas of friction, yep. And as the conversations start, it's like, hey, you know, I could build you a tool that would make that piece easier, or, you know, I could deal with this piece. You know, the folks in Ops were able to do
(15:10):
certain things that were tough for Dev, and there were things that Dev could build that made life easier for Ops. And as soon as those conversations started bearing fruit, things started moving faster.
Speaker 1 (15:18):
Yeah.
Speaker 3 (15:18):
And, you know, so I'm writing the book with Abi Noda, who is, you know, founder and CEO of DX, and he's also, you know, we're both, you know, having all of these conversations with technology leaders and developers and, you know, finding the best, you know, the best practices that we have right now around developer experience, what
(15:39):
it means in traditional systems, how to improve it, what it means for AI right now. Like I said, if anything, it's even more important.
Speaker 2 (15:46):
Yeah, yeah. I'm just thinking about how, because again, you and I have had this conversation many times, how different is DevOps today? Admittedly, as Carl brought up, the tooling is better, but the asks are different now too. Everything's going to the cloud. People expect you to iterate very quickly. There are almost no version numbers per se anymore. You're just supposed to ship at scale.
Speaker 3 (16:06):
The scale and the speed and the size are really, no, they're almost, on the one hand, I want to say they're changing things, but really they're not. I think they're amplifying things, right? The fundamentals and the principles are still there. And to be fair, I'll say that, you know, I used to say every year or two, at this point every three or six months or so, I kind of sit back and I say, has anything fundamentally changed? Is
(16:28):
something that used to be important no longer important? And so far the answer is no, right? We still need to have fast builds, we still need to have good tests, we still need to have good disaster recovery. We still want to, you know, think about security. How that's implemented might look a little bit different. And again, right, it's at scale, and now we have AI. Not only are
(16:49):
things happening faster, they're happening larger, right? Like, our diffs are much, much bigger. So how do we think about approaching that? So I think, you know, like I said, on the one hand, it hasn't changed at all because the fundamentals are there. And on the other hand, it's very much changed because we're kind of dealing with different technologies and new technologies and, again, different scale and size and reliability and security. I mean, yeah, the world has changed.
Speaker 2 (17:13):
Yeah, yeah, without a doubt. And then yeah, I just
think about you talked about the repository of prompts, Carl,
and oh, yeah, like the prompts are becoming code.
Speaker 1 (17:22):
Yeah, that's right. They're becoming sort of the lingua franca of, you know... instead of, I can imagine, like, a Substack where you're, you know, publishing prompts, yeah, or publishing...
Speaker 3 (17:34):
So a colleague of mine at MSR, Ben Zorn, has been working on a project around, you know, how prompts are code and you need to be thinking about saving them and versioning them, because also, as our models change, different prompts will be differently effective, right?
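[Editor's note: as a rough illustration of the idea Nicole describes, here is a minimal C# sketch, with hypothetical names and made-up model identifiers, of treating a prompt as a versioned artifact pinned to the model it was tuned against. This is not the MSR project's actual design, just one way to picture saving and versioning prompts.]

```csharp
// Hypothetical sketch: prompts stored like code, versioned and pinned to a model.
using System.Collections.Generic;

public record PromptVersion(
    string Name,        // logical name, e.g. "summarize-release-notes"
    int Revision,       // bumped whenever the template or target model changes
    string TargetModel, // the model this prompt was written and evaluated against
    string Template);   // the prompt text itself

public static class PromptRegistry
{
    // In practice this could live in source control next to the code that uses it,
    // so prompt changes are reviewed and diffed just like code changes.
    private static readonly Dictionary<(string Name, int Revision), PromptVersion> Prompts = new()
    {
        [("summarize-release-notes", 1)] = new(
            "summarize-release-notes", 1, "example-model-v1",
            "Summarize these release notes for an executive audience: {notes}"),
        [("summarize-release-notes", 2)] = new(
            "summarize-release-notes", 2, "example-model-v2",
            "You are a concise analyst. Summarize these release notes in five bullets: {notes}"),
    };

    public static PromptVersion Get(string name, int revision) => Prompts[(name, revision)];
}
```

The only point of the sketch is that a prompt's effectiveness is tied to a specific model, so keeping the two together, with history, lets you compare or roll back as models change, which is where the conversation goes next.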
Speaker 2 (17:49):
Yeah, yeah, they won't be. It'd actually be interesting, from a historical perspective, to take this prompt and just run it on all these different models and see how they're evolving too. People keep telling me the new models are better, and I keep asking them, how do you know? Well?
Speaker 3 (18:02):
And also, define better. Yeah, it's like define quality, exactly. So many different ways to think about it and define it.
Speaker 1 (18:08):
And everything's getting a little meta. You know, it probably won't be long, they're probably doing it now, you know, people asking their AI to help them with their prompts for other AIs.
Speaker 3 (18:20):
Oh, it's happening, Yeah, it's happening.
Speaker 1 (18:23):
Yeah, Like, what would the best prompt be for you
to have this outcome?
Speaker 3 (18:27):
And then, well, especially with agents, right, because sometimes a role is to execute something, but a role can also be to write and craft and tune the prompt, right?
Speaker 1 (18:38):
Yeah, So why didn't you give me that in the
first place? You didn't ask?
Speaker 2 (18:42):
Yeah, yeah, yeah. We're talking about this, like, rapid iteration. It's going to be the iterating of prompts as much as it's going to be the iterating of any given piece of code. Yeah, and how that stuff ultimately gets generated.
Speaker 3 (18:56):
Well, and then I think also not just iterating on the prompts, but understanding and tracking the quality of prompts, right? What does testing look like across different prompts, across different models?
Speaker 2 (19:08):
Sure, yeah. Will the software behave the way the customers expect? Yep. That is a very broad principle. There are just a lot of ways to get to that point, to come close to a yes with some degree of certainty, just like my software is secure to some degree of certainty. Is this UX coherent? I mean, we usually don't
(19:32):
directly articulate those sentiments. We work towards them. But with prompt behavior, I think we tend to say them out loud now, because we have to feed them to the machine. Yeah, to get it to start moving down that path.
Speaker 1 (19:45):
I had my first experience today with agent mode in Visual Studio.
Speaker 2 (19:50):
Yeah, you've been busy, Not really.
Speaker 1 (19:52):
I just took it for a spin, and I asked it to do things that I knew how to do, and it did them, except it got hung up on one little thing. Okay, so here's the thing. Blazor has these edit forms, and there's an InputText component that subclasses the HTML input so it can support validation rules and things like that. And I asked it to make a form
(20:16):
and it did. And then I said, okay, I want you to do the validation on every keystroke. And when you do that with an input tag, you use the oninput event, right, to handle the binding, the bind event. But you can't do that with an InputText. It doesn't support that out of the box. But the agent didn't know that, and so it tried twice, and I said, no,
(20:38):
that didn't work.
Speaker 2 (20:39):
You're going to have to subclass that control. And it did. Well...
Speaker 1 (20:43):
I was the one that knew what it should do,
and it actually did it in a way that's different
than I'd done it. So I actually learned something and
it worked, and it worked.
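[Editor's note: a minimal sketch of the kind of subclass Carl describes, using a hypothetical component name. Blazor's built-in InputText updates its bound value on the change event, so one documented approach is to derive from it and rebind on oninput, which makes validation run on every keystroke. This illustrates the technique; it is not necessarily the code the agent generated.]

```razor
@* InputTextOnInput.razor (hypothetical name): an InputText that updates its
   bound value, and therefore triggers validation, on every keystroke. *@
@inherits Microsoft.AspNetCore.Components.Forms.InputText

<input @attributes="AdditionalAttributes"
       class="@CssClass"
       @bind="CurrentValueAsString"
       @bind:event="oninput" />
```

Used inside an EditForm with a DataAnnotationsValidator, swapping InputText for this component means the model updates, and validation messages refresh, as the user types.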
Speaker 2 (20:54):
But you still had to give it the hints. You still had to tell it. That just speaks to the role of expertise in all of this, though. Absolutely, yeah, yeah, that's why I told the story. Yeah, no, I appreciate that. And at the same time, it's like, it's not enough to write the prompt; it's also evaluating the results.
Speaker 3 (21:10):
Absolutely, you know, is.
Speaker 2 (21:12):
This good enough? Should it be better? Like you'd almost
have a default that says you should improve that, just
to see if it does well.
Speaker 3 (21:19):
And, you know, it's interesting, because now when you write code, you generate code and review the code as you go. So we're sort of introducing another early layer of review, code review.
Speaker 2 (21:34):
Yeah.
Speaker 1 (21:35):
Yeah, especially when you tell your agent to handle this issue on your GitHub repo, for example, and it creates a pull request and documents it.
Speaker 2 (21:47):
Yeah, better than you. Better than you would.
Speaker 3 (21:49):
have, right? Especially the documentation.
Speaker 1 (21:52):
But you know you're doing a code review of the
agent and you have to figure it out. You're the
one that approves it ultimately.
Speaker 3 (22:01):
Yeah, it's going to give us some really good questions to think about, because as we are evaluating, you know, reviewing, validating code and pull requests that are entirely generated from someone else, something else, right, we need to have a good mental model of the code base and the architecture in our head to understand how it fits in with,
(22:23):
how it interacts with, everything else like that. And moving forward, I think that's going to be a really interesting question and challenge, because now none of my code is operating in production right now, but it used to, and I know that that's how I built up my mental model and a kind of feel for the system: by doing that coding and writing that code and, you know, running those tests. And it'll be really interesting to see how
(22:46):
we can help support the building of mental models, the evolution of mental models of how the back-end code base works when agents are doing more of the coding. And then agents will probably also, you know, evolve to be able to handle more of that. But at the end of the day, you know, developers are still deciding and approving and orchestrating all of
(23:08):
these systems.
Speaker 2 (23:08):
Yeah. Yeah, absolutely so.
Speaker 3 (23:10):
Sorry that was real meta.
Speaker 1 (23:11):
Yeah, but we can also end that with "for now," right? For now.
Speaker 3 (23:17):
Wow, someone tried telling me something was just impossible, and I'm like, hmm, is it, though? Things are moving way too fast right now to say anything definitively. Yeah.
Speaker 2 (23:27):
Yeah. And then, at the same time, it's like, I don't think the innovation is particularly wildly new. It's not exponential, but we're very much in an engineering cycle where we're finding ways to slip these new tools in and starting to think about how they fit together and what they could do for us. And it does feel like we're really changing the development workflow, even though the principles are
(23:48):
still the same. Yep. We're still trying to deliver value to customers and trying to iterate quickly, get those features out as soon as possible, take feedback from their use, and shape the next versions. Versioning, I guess, is an area I haven't explored much yet.
Speaker 3 (24:02):
Well, and versioning code. You know, before, it was versioning code and understanding tests and, you know, building out the build graph. Well, now, when LLMs are at the heart of a lot of work, we need to be thinking about data and data versioning, and models and model versioning, and so, you know...
Speaker 2 (24:18):
How do you know they're getting better when you make the new version?
Speaker 3 (24:22):
Yep, exactly. And then how would you roll back or shift to a model or a model version or to a different data set, right? And so, you know, that's another thing that I think comes to mind. Some folks are like, well, you know, we won't need developers anymore, there are no junior devs. And I'm like, no, we're creating an entirely new class of development. That's important.
(24:42):
If we think about the earliest days of, you know, before it was DevOps, we didn't really have infrastructure teams the way we do now, and we didn't, the whole industry didn't, understand how important it was from a leverage standpoint. Just like right now, right, for ML developers out there, you know, it's been kind of a challenge. And now that skill set is super important,
(25:06):
and that skill set, you know, plus plus plus.
Speaker 2 (25:10):
Sure. Well, and there aren't, I don't feel like there are, consistent patterns for making those things right now. It's very Wild West-y.
Speaker 3 (25:15):
Oh, because we're all still figuring it out.
Speaker 2 (25:17):
Yeah, right. And so we don't. I really haven't seen a set of practices that make me feel like, that's a way that's going to be successful. There are new tools every day, and they're, you know, moving the goalposts all the time, so it's hard to try and come up with a set of practices that are really going to be consistent.
Speaker 3 (25:33):
This is a fun time to be in tech.
Speaker 2 (25:35):
Oh yeah. Well, I'm kind of excited for a junior, to imagine a junior developer just coming up now, where you're always going to have a prompt. Like, how much of our time is now spent scraping off the cruft of our old coding practices because we have all these new possibilities in front of us, right? Well...
Speaker 3 (25:50):
And how much time can you save by not chasing down a problem that's just kind of silly and nonsense? I was, right, well, you know, TBD on this term. You know, some people love it, some hate it. It's vibe coding. The other day I was writing up, I know, right, I don't know if you've heard of this vibe coding thing, or,
(26:11):
you know, I was just YOLOing it.
Speaker 1 (26:13):
Is vibe coding a pejorative to you?
Speaker 3 (26:15):
I think it is to some people. I joke when I chat with my team, I tell them I'm just YOLOing it, right? Yeah, I mean, I still check the code sometimes. Every once in a while, I've done something just to see how far it can go. And I've been like, I've set up a condition where I'm not allowed to change the code. If I were someone who didn't understand any of this and I only had the prompt, what would I do?
Speaker 2 (26:34):
Anyway?
Speaker 3 (26:35):
I was, yeah, I was working on this thing and it kept breaking, and I was like, okay, well, I'll, you know, relax that constraint. I'll go figure it out. Listen, I spent a couple hours, like, going through every forum and googling everything I could, and finally I was like, this is dumb. What does this error message mean? Please fix it for me. And I think, probably because
(26:56):
I've done this before, I will ask it first, because it, like, loads into context what it needs to know, and then fix that. Listen, that was fixed in thirty seconds, and it was an obscure config error that, like, I wouldn't have found for another couple hours. And as a junior dev, those were the types of things that I hit, especially my first six months or a year.
Speaker 2 (27:16):
Yeah, right, that were just showstoppers.
Speaker 3 (27:18):
And to understand this, yeah, it would take hours, because I didn't want to ask someone, or I would, and then, like, then I'm interrupting someone. I think this is one of the great unlocks where junior developers are going to be so productive and super impactful. And honestly, I was a professor for years, you know, Richard. Sometimes
(27:38):
junior folks have the best ideas because they aren't, they're not constrained, they're not hardened by all of the broken...
Speaker 2 (27:44):
dev stuff that we've been doing.
Speaker 3 (27:46):
They don't have the scars, and so now they can
actually surface these ideas in like very visible productive ways.
Speaker 2 (27:55):
This is what I'm thinking: juniors are going to come up whose reflex is to prompt again. Yeah, that reflex we don't have yet. And to think in those terms: how do I deal with this from a prompt perspective, rather than our visceral reaction, which would be to get under the hood and look at the code I've changed?
Speaker 3 (28:12):
Well, both, and both, right, because sometimes we need to tell it. You have to break a problem down a couple of times.
Speaker 2 (28:17):
So I think, just like you said, you need to
subclass that.
Speaker 1 (28:20):
Yeah, well, yeah, I think in prompts now all the time. If I'm spending more than, you know, thirty seconds scratching my head about something, it goes right into an agent or ChatGPT or something. Take a screenshot. What does this mean, if I don't know?
Speaker 2 (28:37):
Yeah, you know?
Speaker 1 (28:38):
Yeah. Well, it's time to take a break. We'll be right back after these very, very important messages. Don't go away. Did you know you can easily migrate ASP.NET web apps to Windows containers on AWS? Use the App2Container tool to containerize your IIS websites and deploy to AWS managed container services, with or without Kubernetes. Find
(29:03):
out more about App2Container at AWS dot Amazon dot com, slash dot net, slash modernize. And we're back. It's .NET Rocks. I'm Carl Franklin, that's Richard Campbell, and we are talking about AI with Nicole, and we're talking about DevOps and AI. We started on the
(29:25):
DevOps thing and got sidetracked into AI. But how do AI tools make their way into the DevOps architecture and infrastructure?
Speaker 3 (29:33):
I love this question, right, in part because I think it's something we're still discovering, like we were kind of, you know, chatting about, and also I think it's important because it's something that is not on everyone's radar right now. Right now, everyone is super excited about Copilot and, you know, all of these tools that you can use to write code.
Speaker 2 (29:52):
Yeah, this is on.
Speaker 3 (29:53):
Right, and, you know, we all know this because we're developers, or, you know, were developers in my case. There's a lot more to software engineering and building great products and building great systems than just writing code. There's review, there's PR, there's integration, there's release, there are so
(30:14):
many things, and so I think there's just a world of possibility for AI and AI agents to be able to help out. Now, that also comes with, you know, but wait, right? What do we think about hallucinations or correctness or all of these other things? Which also introduces, sorry, I'm gonna harp on this because I have heard too many people say that the junior dev is dead. We've
(30:36):
now created and introduced entirely new skill sets, right? How do we validate this? What does testing look like across
(30:49):
all these different stages for non-deterministic systems?
Speaker 1 (30:49):
You almost have to test a junior developer's creativity and their imagination, because, let's face it, that's what's holding the old guys back.
Speaker 3 (30:58):
I woul gonna say, I think juniors are some of
the best equipped here because now seniors will have some
great experience in architectural design limitations and trade offs involved,
but also we're just so ingrained in the way things
work and the way things look and how to build
a thing and not okay, what if we flipped the
(31:18):
entire thing on its side differently exactly?
Speaker 2 (31:20):
Well, and then we're also at a time where nobody has a decade of experience with LLMs, right? That's right, we're all juniors, effectively. Yeah, this is a question of how much baggage did you bring to the table. But at the same time, you know, hopefully you've gone through a few iterations like this over the years, and you have a bunch of sort of standards that you go by about quality and approaches to things and so forth that are
(31:43):
beyond any given tool. Yeah, that helps. I want to jump back onto the Frictionless side of things, because I think we've focused too much on the tools and not enough on the culture.
Speaker 3 (31:54):
Absolutely right. So as, you know, Abi and I have been going through the book, there's a not insignificant part of the book that isn't about tools at all, frankly, right? It's about understanding what DevEx is and why we need to remove friction, and communicating that to others in the company,
(32:15):
whether that's, you know, making the business case so you can kick something off, or whether it's once you're starting to deploy a technical solution or something. Right, that frankly needs a comms plan, because internally we need to communicate to several audiences. We need to identify stakeholders. We need to tell developers why it's valuable and why they should use it. We need to tell engineering managers why it's valuable and
(32:37):
why they should care about it. We need to be telling executives and leaders across various disciplines, you know, our CIOs and our CTOs, but also HR and finance, why this is important and what they should care about, and all of those messages are going to be different. And how do we remove blockers and barriers, and how do we reduce, and where we can remove, fear? Because a lot is changing, right?
(33:01):
And so I think, you know, the cultural and the people and the communication part is huge. And also process, right? We say...
Speaker 2 (33:08):
That's the old DevOps: people, process, tools.
Speaker 3 (33:10):
Absolutely, you know, yeah, exactly. Well, and we were chatting with some of our earliest, you know, reviewers and interviewers, and, you know, the point kept coming up that sometimes the best DevEx change you can make is to a process. Right? Sometimes you can just clean up a process and it has a huge, significant impact across an entire organization. Tools weren't involved,
(33:32):
or maybe, like, tools were minimally involved.
Speaker 2 (33:34):
They weren't the deciding factor.
Speaker 3 (33:35):
Yep, exactly.
Speaker 2 (33:37):
Yeah.
Speaker 3 (33:37):
And then we also go through, you know, how to collect early data, how to more rigorously collect data so that you can get good signal into what's happening, to make, you know, decisions for maybe some of the obvious problems and then communicate those across. But then also how to prioritize and choose next projects as you continue on this...
Speaker 2 (33:58):
journey, yeah, the feedback side. Yeah, exactly, the cycle. And I wonder if generative AI is going to play more of a role in that, doing a better job of trying to pull the signal out of the noise of telemetry, to let us see people are struggling with the app this way, or, you know, these features aren't being utilized. Like, all of that data is
(34:18):
in the telemetry, but sometimes it's really hard to see.
Speaker 1 (34:22):
Yeah.
Speaker 3 (34:22):
Well, and I will say frankly, there are a lot
of times where the data doesn't exist, right, and so
I have Yeah, and so I think this, you know,
this DEVS journey or the journey to improve developer experience
right includes both identifying friction points and improving them and
also identifying where you have gaps in the data what
you can do in the interim until you have something
(34:43):
instrumenttic because by the way, don't start or don't wait
to start until you've instrumented your entire stack, because that's
never happening. Part of it is not just using AI
to kind of evaluate the telemetry, but it's also understanding
what data and what telemetry we need. Right, many times
we have gaps in our data, and so it's part
improving tools and process and communicating and culture, and sometimes
(35:06):
it's prioritizing investments in instrumenting systems. Right, so what does
that look like? I will say one area that I
think AI can make a huge impact it already is
is you know what I just talked about. We need
to communicate to different stakeholders. Right now, we can pull
together a bunch of data. I can have a very
rough list of bullet points and findings, and I can say,
(35:27):
help me rewrite that.
Speaker 2 (35:29):
Can you tune this for the C suite?
Speaker 3 (35:31):
Tune this for the C suite?
Speaker 2 (35:32):
Right?
Speaker 3 (35:32):
Also, you know, we know to ask it politely. This is what I want you to do. Can you please help me write it this way?
Speaker 1 (35:37):
Okay?
Speaker 3 (35:37):
Now write it for engineering managers. What am I missing if I'm writing this for HR? Can you help me generate a list of, you know, starter questions for conversation in a workshop I'm going to run? That has gotten so much faster.
Speaker 2 (35:51):
That's really interesting. You know, I love it. I did very well as a product manager going through the logs and writing up a post-facto report, and a version like two weeks after, going through to see if we were about to tip over, and when we weren't, just saying, hey, here's how a bunch of new features are being used, and things like that. Like, it made a lot of people happy. What I didn't have was time to write it for all of those audiences. Yep, that's
(36:11):
something an LLM is genuinely good at, at...
Speaker 3 (36:14):
Least for a first draft, right, because then you can
go through and you can check a couple of things.
But I don't know about you. I am faster at
editing than I am writing totally, and so if I
can give it a bunch of information, a few cues on,
you know, this leader really cares about these one or
two things. How can we frame it this way? It
you know, I'll change a couple of sentences, I'll add
(36:34):
a couple data points. But it's so much faster.
Speaker 2 (36:37):
Yeah.
Speaker 1 (36:37):
Do you see there's a place in the DevOps tool stack for agents, for AI agents, to actually, you know, do things of their own volition with your various resources, even if it's notifying you or sending communications to stakeholders,
(37:01):
or, you know, do you see that there's a place in the tool chain for that?
Speaker 3 (37:06):
I think there will be a place. I think it's still, you know, so early in, you know, agentic development and use that, like, we're still kind of seeing where to best deploy them, how to best use them. What does autonomous mean?
Speaker 1 (37:21):
Right?
Speaker 2 (37:22):
How autonomous is autonomous exactly?
Speaker 1 (37:24):
Well, you know, you develop a prompt for your agent to say, you know... I'm not, I don't even know, I'm just thinking. Well, so, for example, you can tell it how to behave when a certain something arises, and it does.
Speaker 3 (37:39):
Sure. But as an example, and I'm just going to use the reporting one because, sure, I think it's one that's going to be obvious to everyone. I can have a whole bunch of data come in, I can have it generate a draft of a report for everyone. The report could be perfect. I personally would never have an agent automatically send the executive report to the CEO or the board.
(38:00):
That's not a risk I'd take right now.
Speaker 1 (38:02):
Sort of our GitHub pull request model, right? You know, it sends it to you. You can send it on after an edit or two or whatever.
Speaker 3 (38:09):
But I also might have it go ahead and send it automatically to all of the developers, because they may be expecting it on a monthly or a quarterly cadence. Now, executives might be expecting it on a six-month cadence, but if I know they're at an off-site or something just blew up, I'm not sending it that day. I will go ahead and wait forty-eight hours, and I want it to come from me, right? I mean, yes,
(38:29):
I know we can configure things so that it looks like it comes from someone, but that's, you know... I do think there are cases where we're going to start feeling out what that looks like.
Speaker 2 (38:39):
Right. I'm pretty comfortable with the idea that we're going to want identities for each of these bits of software, so it's very clear that this was generated by an LLM's analysis. And, oh...
Speaker 3 (38:49):
My gosh, yes, please. You know, there's also a big
open question around how do we evaluate and think about
the downstream outcomes of code, because if we think about it,
very few systems currently have let's say key logger level
tracking or per character tracking, and so you know, right now,
(39:12):
the question is often did you use AI or did
you not use AI? That is such a yeah exactly
grammar check or qualifying right, or like did I use
AI and it like helped kickstart my brain and then
I deleted everything it suggested except for like some set
up comment something.
Speaker 2 (39:31):
Right stimulated an I and you ran with the idea.
Speaker 3 (39:34):
But then, like, downstream: which portions of the code perform well? Which portions of the code are more readable, and so they do well in code review? Which portions of the code are challenging in build or integration? Which portions of the code have long-term viability and success in production?
(39:55):
And right now, we just, I'll say, we generally just don't have that level of granularity. Again, it introduces new opportunities.
Speaker 2 (40:03):
It brings up memories for me of what code reviews well versus what performs well, because I've done a lot of web performance optimization over the years that always reviewed poorly, because it's unobvious code, and looking at it and going, why are you doing this? It's like, because in a multi-reentrant, high-velocity environment, this code is safe even though it's obtuse.
Speaker 3 (40:25):
Right, exactly. Well, and then there are even, I think, really interesting open questions. Sorry, this is, like, researcher hat. We know that many times reviewers will react differently to code written by different people. Sure, it's a junior reviewer, it's someone on another team, it's someone who probably isn't familiar with your codebase, or you think is not familiar with your codebase. How are people
(40:47):
going to react differently to code that is entirely written by AI? What conventions can we use so that it's more readable, so it's more understandable, so it's more trustable? And in which cases, you know, like in your example, does it just need to be hard to understand, but it's okay because we know that that's what's happening?
Speaker 2 (41:06):
Yeah, and I used to write those comment blocks: does this code look stupid to you? Don't touch it. It is a hard-won, fought-over piece of code for a very difficult problem. Yes, I wish it could be simpler, but it cannot be.
Speaker 1 (41:20):
Just to get back to the term AI and what does that mean. We go through these cycles where the public gloms onto words that they think mean something, and they mean something else. Like, your clock radio now has artificial intelligence.
Speaker 3 (41:35):
Right. It's like, I mean, if you put that sticker on the box, it's going to sell for more money.
Speaker 2 (41:40):
Yeah, of course, but I'm sorry clock radio, clock radio.
Speaker 1 (41:44):
Algorithm sort of thing, right, Yeah, oh that was an
algorithm that did that.
Speaker 2 (41:48):
What does that mean?
Speaker 1 (41:49):
Or let's go to the world of food. Enzymes is a great one. An enzyme is a protein, but it sounds like it's doing something, you know, enzyme. Well, there is an enzymatic act. Okay, yeah, okay. But, you know, yeah, people just throw these terms around.
Speaker 2 (42:05):
Well, I've certainly been big on the fact that artificial intelligence is a term coined for raising money. It was a marketing strategy that then got hijacked by science fiction, and so of course everybody has a bad perception of it. We've had sixty years of abusing that term.
Speaker 3 (42:22):
And it very legitimately means several different things.
Speaker 2 (42:26):
Yeah.
Speaker 3 (42:26):
Sure. Given the context, given the discipline, given the background, given the underlying technology, they're all AI. And also, depending on what your assumption is, it's definitely not AI.
Speaker 2 (42:37):
Well, I've always said you still call it AI because it doesn't work. As soon as it does work, it gets a new name. It becomes image recognition or, you know, sentiment analysis or large language model. But if you can only call it AI, it's because you haven't figured it out yet.
Speaker 3 (42:52):
Yeah, I think it's also, yeah, I mean, sometimes it's also the underlying technology, right? Like, we've had AI for years and it's one thing. Now, LLMs use a different type of math.
Speaker 2 (43:04):
Yeah, there's always been that schism between the decision-tree systems and the neural-net systems, the neats and the scruffies, yep, going back to Minsky terms, which is really old school, so fun. Even when you actually are talking about the technological side of this, there have been two totally different sets of philosophies. Yeah, but then throw HAL and Ultron on
(43:25):
top of that, and no wonder people are confused. Always. Yeah, I appreciate it, though. But I also appreciate that we're thinking, in terms of all of these technologies, about how the development experience could be better from this. Not only could...
Speaker 3 (43:39):
be, but I think it has to be now. I've always said it needs to be. But now, with a lot of teams and organizations that I talk to, we're really seeing that with a subpar, I won't even say bad, a subpar developer experience, the wheels are falling off. Yeah, things are just breaking, because now we're trying to move so fast and generate so much, and so what does that mean
(44:01):
for everything downstream? We have to figure out how to review it, we have to figure out how to test it, we have to figure out how to do build and integration. We have to figure out how to release these huge amounts of code in compressed timeframes. And so, you know, Richard, you know, we've been saying for a while that, you know, DevOps, or whatever you want to call it, yeah, is table stakes. Oh, listen, that's the bar, it is.
Speaker 2 (44:22):
There's a whole set of conversations you don't get to have if you haven't figured this part out yet.
Speaker 3 (44:26):
And now, for teams, for organizations, who have been winging it well enough, they are no longer. Yeah, we're going to see a lot of sprinting and burnout, and, or hopefully, right, improved systems that improve DevEx as well.
Speaker 2 (44:42):
And Andreessen convinced us that software was eating the world. And now, you know, your companies are more and more dependent on it. If you haven't solved these problems, you're not going to be able to build the software.
Speaker 1 (44:51):
Or you will.
Speaker 3 (44:52):
But now that things are moving so incredibly fast, you
won't be able to keep up, right. I mean, it's
always been a challenge of competitive advantage and who can
build faster, And before it was maybe a difference of
like six months and nine months or one year and
one year and a half, which like isn't great but
kind of fine. Well, now it's like weeks and if
you're pulling in at a year plus for you know,
(45:16):
a solid feature set or new product or something, and
all of your competitors and brand new competitors because the
industry is just yeah, it's like a bunch of weeds.
Everyone's popping up everywhere. If they're coming out with something
in six to eight weeks with a really solid tech
preview or beta.
Speaker 1 (45:34):
Yeah, you're toast.
Speaker 2 (45:35):
Yeah, you have troubles. I remember in Accelerate, you were talking about the fact that version over version you should be getting faster, but you found a lot of organizations that, version over version, were actually getting slower. Like, you get mired in the cruft and can't ship...
Speaker 3 (45:49):
the next version, exactly. Or, you know, sometimes the reflex is: we shipped and we had some outages, so now what we're going to do is we're going to slow down and we're going to do a more thorough review, which then means you have even more code to try to push and integrate into prod, which, you know, now you've just got a huge blast radius. Yeah, so you have
(46:10):
more outages, and then sometimes the reflex is then to have a code freeze, or wait longer and go even slower.
Speaker 2 (46:16):
Yeah. Those are anti-patterns. Yeah, the opposite of frictionless. It's very high-friction.
Speaker 3 (46:21):
Yes, and it's fairly predictable. We see it very, very often. Now, for any, you know, haters who are listening, we are not saying to just go as fast as possible, throw everything at the wall and run. No, that's also not good, right? We're definitely saying that we need the automated processes and tools in place to ensure that your code is safe, secure, reliable, performant, correct.
(46:42):
But without those things, without the automation in particular, yeah, it's real tough.
Speaker 2 (46:49):
But the automation can't come first. You're still in this place of, can we evaluate well enough to know this is where the bottleneck is, and then try and solve the bottleneck, exactly? Yeah, and then maybe a tool makes sense there, but only because you want to solve the problem. Yeah.
Speaker 3 (47:03):
And, you know, something I learned kind of early on in my career, or someone said this to me, I wish I remembered who: we can't automate something until we understand it. Sure, there really is value in doing something manually a handful of times. Yeah, so we understand it, and then we can automate it. Because if we automate too fast, all we've done is sped up something that is likely wrong or incomplete.
Speaker 2 (47:22):
Right, But now it goes quickly.
Speaker 3 (47:24):
Yeah, but now it goes... exactly.
Speaker 2 (47:25):
You amplified stupidity instead of intelligence. Now it's faster.
Speaker 3 (47:28):
Well, it can. Congratulations.
Speaker 1 (47:30):
It could have been too slow, but the fact that
you're automating it, you won't notice that because hey, I'm
not doing it, so it's faster.
Speaker 2 (47:37):
Than me doing it. Yeah, doing dumb faster.
Speaker 3 (47:39):
Yeah, but the solution is not to continue doing it
manually forever.
Speaker 2 (47:43):
No, right, yeah, just because you do it dumb slow doesn't mean it's better, right? But this is the consulting effect, right? I never got hired to re-engineer a business process. I got hired to automate. And my process of scrutinizing the workflow to make the automation re-engineered the process. Like, the two kind of go together, in that you have
(48:03):
to do what's acceptable to people as well, to say, I'm here to, you know, make things better, make things better, and automation is safe, they're comfortable with that. But then, in order to automate, you have to work through the manual process and understand it. And because you're the outsider, you also get a point to go...
Speaker 1 (48:19):
I mean, you're like Columbo. Oh yeah, just one more question about this process.
Speaker 2 (48:25):
It's kind of inefficient.
Speaker 1 (48:27):
Do you mind if I just take a look at that?
Speaker 2 (48:30):
That's a good Columbo, thank you. Good Peter Falk there. But yeah, when do we scrutinize? I mean, it doesn't have to be an external, like, I've definitely seen folks who've embarked on that sort of DevOps mindset where they just start talking to more people, and they ask a lot of whys, and often run into, it's the way we've always done it, right? So, yeah, you know, we
(48:52):
believed that that was necessary.
Speaker 3 (48:54):
Yeah, change. And we're seeing a lot of that now. Yeah, because everything's changing, and we need to rethink, or at least revisit, how our systems are working.
Speaker 2 (49:05):
Yeah, you'd think we would do that routinely, but we don't, because there are so many other things to do. Anyway, that idea of just taking a step back and looking through the processes again and saying, is this the right process?
Speaker 3 (49:15):
Well, and it's really hard to articulate the importance or the value of cleaning up systems and clearing up tech debt.
Speaker 2 (49:22):
Sure, it's better if you.
Speaker 3 (49:24):
can talk about and tie it to outcomes. You know, we need to make sure that we can improve the consistency and the predictability and the reliability of the software, and so, yeah, we carve out a portion of our time to ensure that is happening, right? And over the past period of N months and years, we can
(49:46):
show that these investments are paying off. And the times when we haven't, because of whatever reason, you know, a race to push something, we tend to see these, you know, negative outcomes. So, like, that can help, but it's tough to try to explain, or, you know, calculate, an ROI for tech debt.
Speaker 2 (50:06):
That's not... no, I mean, I think a lot of us just included it as, like, a ten percent rate or something like that. You do want to knock it down. Oh, absolutely. My argument has always been, if we don't do this, eventually we can't ship anything.
Speaker 3 (50:17):
Exactly.
Speaker 2 (50:17):
We have to do an all-tech-debt sprint, or multiple. Whereas, if we just stay on top of it so that we're diminishing it, there's less of it than there was the last time, we're better off.
Speaker 3 (50:27):
Now, keeping that, you know, goal of ten percent or so, that's great. It's really challenging when I chat with teams where they're booked up, their capacity is one hundred percent, they're fully utilized, and then things start breaking. Yeah, and it's because, and it's not that the developers ever, you know,
(50:48):
don't want to, I mean, tech debt isn't super fun, but they understand, you know, when a codebase is, when it's really hard to make a change, that's not fun, right? But it's hard to explain why, finger quotes, the way we've been working for the last N years isn't going to work anymore, because we were basically sprinting, and now we have to do things to harden our
(51:09):
systems or make them more scalable or more reliable or easier to change and understand.
Speaker 2 (51:15):
Yeah, it's a different cycle, yep. The difference between getting that initial version out of the door, proving initial value, exactly, and what is sustainable value?
Speaker 3 (51:23):
Yeah. And, you know, I'm going to pull us back to that AI thing. I think this is also where we're going to have some big questions come up, because, you know, your point, when it's just about getting the next version out of the door, okay. Right now, some companies are releasing new products or new significant features every month, right? So how can we be thinking about code hygiene and
(51:47):
system automation and hygiene and improvement when we are constantly moving this fast?
Speaker 2 (51:53):
Yeah? Yeah. Well, and I would always look at the delta trajectories there. It's like, are we still able to move every month? Or are we shipping a little less each month? Like, are we decaying or are we strengthening? Yeah, I picked that number, ten percent, a little bit arbitrarily. Like, what I'm really looking for is, is the total amount of tech debt less this sprint than last sprint? So if we're accumulating more from the new things we're shipping, we
(52:15):
actually have to fix more too, yep. Otherwise we're just delaying the inevitable.
Speaker 3 (52:19):
Now, I love that you pointed that out right, if
we look at the things we're accumulating from shipping. One
reason I'm super excited about how quickly we can develop
things now is we can do a lot of experiments.
So hopefully we are not in a world where every
single thing we build gets shipped.
Speaker 2 (52:36):
Yeah, hopefully we should be way more comfortable throwing away code too.
Speaker 3 (52:40):
At least fifty percent of the things that we try, sure, should fail, and it should be done so quickly that, like, the business doesn't really... it's almost of no consequence, exactly. Yeah, because, you know, definitely to your point, code that doesn't do much at least doesn't hurt the system.
Speaker 2 (52:56):
Oh you have.
Speaker 3 (52:57):
Added to tech debt.
Speaker 2 (52:57):
Yeah, yeah, you've loaded it down. But I now feel like we're going to make more branches to do experimental coding with these new tools, and then make an assessment and go, okay, we've made a learning. Now dump the branch. Let's try again with what we've now learned, to see what we're going to make in the end. Yep. And in relatively short periods of time we could spit a lot more code out. Now, the real question is, is it good? Yeah?
Speaker 3 (53:20):
Or, you know, even if it's not the best-designed code, is that the functionality, is that the user flow, is that the...
Speaker 2 (53:27):
Did we get it into the thing that we want, some kind of value? I appreciate that.
Speaker 1 (53:30):
Yeah, I'm just looking forward to things that I was never looking forward to, right? Merge conflicts. I think those will get easier to deal with, having, you know, an AI in Visual Studio, for example. You know, just some of those things that are cringeworthy, like, oh,
(53:51):
I don't want to go down that rabbit hole, but then, you know, what have you got to lose? It's all about the prompt, baby. Oh.
Speaker 3 (53:57):
I'll say, in some ways, I think they'll technically get easier, but I think personally it will be challenging, because right now a merge conflict is a person making a call. Now, many times the system suggests the right answer. But what happens when we have many, many more merge conflicts to evaluate? And
(54:19):
now, you know, the things I was talking about before: did it come from an agent? Did it come from a person?
Speaker 2 (54:26):
I know, you have that issue list, clicking fire it off to Copilot, and each one of them spins up a branch to build that thing and sends you a PR, and ten of them come home to roost. I hope you've got some time. Yeah.
Speaker 3 (54:38):
Yeah. Now, I will say, I keep identifying, like, open questions and hard problems. I hope people who maybe are less familiar with me and my work know that this means there's a lot of excitement. There are a lot of really cool things to figure out, and there are a lot of cool things that don't have established best practice. So for anyone who's in the field now, whether you're a junior or a senior, there is so much to do.
(55:01):
There's so much to do, and AI can't fix that, and it can't solve it, because it's pretty good at responding to a prompt; it's not creative.
Speaker 1 (55:09):
But also these sort of meta prompts. You know, here's
a big problem I want to solve. What are the steps?
You don't have to do them for me, but what
are the steps I need to go through in order
to achieve this goal? You know, organizing your thoughts and
breaking big problems down into smaller problems. It's really good
for that. Oh, it's so good.
Speaker 2 (55:28):
Yeah.
Speaker 1 (55:28):
Yeah. And whether you use it all the way through, that's another thing. But at least getting started, things that seem insurmountable can be made easier. So what's next for you? What's in your inbox?
Speaker 2 (55:44):
When are we going to see this book?
Speaker 3 (55:46):
End of year?
Speaker 2 (55:46):
Hopefully. So, right now?
Speaker 3 (55:49):
We're working on it. So, continuing to work on the book, doing some really exciting work at Microsoft around AI and systems and how we can improve, and, you know, what that looks like. I still occasionally find some time to talk to folks across the industry, and I love to learn, you know, from them and their experiences. So yeah, still,
(56:11):
you know, it feels like the earliest DevOps days, where, like, we're kind of identifying and finding patterns and sharing best practices and, you know, connecting dots and connecting teams. So it's super fun.
Speaker 2 (56:22):
Yeah, amazing times. So good to talk to you again at Build. Thanks, good to talk to you.
Speaker 3 (56:26):
Thanks for having me.
Speaker 1 (56:27):
Yeah, thanks for coming back, and we'll talk to you next time on .NET Rocks. .NET Rocks is
(56:54):
brought to you by Franklin's Net and produced by PWOP Studios, a full-service audio, video, and post-production facility located physically in New London, Connecticut, and of course in the cloud, online at pwop dot com. Visit our website at D O T N E T R O C K S dot com for RSS feeds, downloads, mobile apps, comments,
(57:17):
and access to the full archives going back to show number one, recorded in September two thousand and two. And make sure you check out our sponsors. They keep us in business. Now go write some code. See you next time.