All Episodes

September 24, 2025 65 mins
How has AI changed coding with Visual Studio Code? Carl and Richard talk to James Montemagno about his experiences using the various LLM models available today with Visual Studio Code to build applications. James talks about the differences in approaches between Visual Studio and Visual Studio Code when it comes to AI tooling, and how those tools continue to evolve. The conversation also digs into how different people use AI tools to answer questions about errors, generate code, and manage projects. There's no one right way - you can experiment for yourself to get more done in less time!
Mark as Played
Transcript

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Hey Richard, Hey Carl, what do you know?

Speaker 2 (00:03):
Well, I know that our friend Michelle Rubusta Monte is
with us to tell us about something that's going on
adjacent to DEV Intersection.

Speaker 1 (00:11):
What is it? It's cybersecurity Intersection. Let's let Michelle tell
that story.

Speaker 3 (00:16):
Hey Michelle, Hey Carl, Hey Richard, how are you.

Speaker 2 (00:21):
Tell us about cybersecurity Intersection?

Speaker 3 (00:23):
Well, so, Richard and I are partnering with the group
that does DEV Intersection and next Gen AI, and we
are putting on a new conference dedicated to one hundred
percent security focused topics. And I mean, honestly, the lineup
of speakers is incredible. We have Paula A. Jenis, who's

(00:43):
here from Poland and does keynotes all over the world
and is one of the top rated RSA speakers and
black hat speaker. We're so lucky to have her. But
she's not only keynoting, she's got a workshop teaches you
about protecting your environments against hackers and shows you about
how to you know, do attacks so that you can

(01:03):
prevent them. It's pretty cool and sessions like that as well.
But we also have speakers from Microsoft. We have we
have speakers that specialize in you know secure coding practices,
Azure security, zero trust architectures on Azure uh and people
who do decision maker tracks, so things around governance policy
and you know how to how to manage and your

(01:26):
production operations keep them secure. So it's an amazing group
of speakers, really excited about it.

Speaker 2 (01:31):
And I think I can count myself among the group
of speakers there.

Speaker 3 (01:35):
Well, yes you can. That is great.

Speaker 2 (01:37):
Yeah, I'm doing a securing Blazer Server applications talk and
also I think we're doing a Security this Week live
show there somewhere that is correct.

Speaker 3 (01:48):
Yeah, we'll be recording Security this Week Live. We're going
to have a great panel with some folks. The interesting
thing here is we don't really have a Microsoft and
dot net and Azure focused toecurity conference yet, so that's
the reason we're putting this on as well. You know
there are other security conferences, but they have a spread
of topics that maybe don't focus on the things you

(02:10):
do day to day. And you know this overlaps with
again our community of folks that specialize in again dot net,
Azure and yeah, they need to keep it secure too.
So with tons of.

Speaker 1 (02:22):
Talks, cyber Intersection is part of a trio of conferences
we're doing. They have Intersection alongside the Next Gen AI
Conference all in Orlando the week of October fifth through tenth.
That's workshops and the main conference. And you can get
a special registration code if you sign up through Cybersecurity
Intersection dot com.

Speaker 3 (02:42):
Yeah, so if you sign up at Cybersecurity Intersection dot com,
then you put in this code, so Alliance cyber three
hundred and you'll get three hundred off the entry price.
So that's a special code that only works at cybersecurity
dot com. And then you have access to all the conferences.

Speaker 2 (03:04):
Like Richard said, Wow, that's cool. Thanks Michelle. I'm looking
forward to it and I'll see you there. Hey, guess what,
it's dot net rocks all over again.

Speaker 1 (03:27):
I'm Carl Franklin, an amateur Campbell.

Speaker 2 (03:29):
We're at episode nineteen hundred and sixty nine.

Speaker 1 (03:33):
The first time I've looked at history and thought we
probably should do a geek out all by itself because
this is all by itself on this year. It's the
craziest year, just out of it.

Speaker 2 (03:43):
Yeah, completely madness, cultural shift, end of the sixties. It's
a big deal, no kidding. It was a pivotabal moment. Yeah,
well we might as well do that.

Speaker 1 (03:51):
Now you want to go right into it.

Speaker 2 (03:52):
Yeah, what happened in nineteen sixty nine. I'm going to
let you talk about the space stuff because it's pretty significant.
Why don't you start with that?

Speaker 1 (04:00):
Yeah? Sure, I mean we're talking about the Moon landing.
So Apollo nine, which tested the lunar module in lower orbit,
Apollo ten, which flew all the way to the Moon,
practiced the landing, got within fifteen kilometers of the surface,
and then aborted to test the abort systems. And then
in July of nineteen sixty nine, the Apollo landing A

(04:20):
follow eleven and Neil Armstrong and that other guy Buzz
still good old Buzz Buzz Buzz, who was also like
the guy He's the one who said magnificent desolation describing
the Moon, and they pulled off this remarkable mission again
at ridiculously high risk. Yes, that vehicle, the lunar lawn

(04:45):
module was the limiting factor. It could support two people
for three days, but it took more than three days
to stage your rescue, So anything had failed anywhere in
that vehicle, it was not survivable. Neil happened to bump
the a breaker on his way out of the lunar
module and broke that breaker. That breaker happened to be

(05:05):
the power connection for the ascent engine. Ouch turned out
that the shape of the breaker cap that would have
pinned it back down was exactly the same shape as
a felt pen cap, which Buzz happened to have found
jam did in there, and that's the only reason they
were able to get back. That's so cool. I'm sure
they would have come up with another solution.

Speaker 2 (05:25):
Well, and Apollo of their team would come later. And
that was even so much innovation in order to get
those guys on so much of emergency to find it,
find a way to survive. Not to ignore the Soviets,
but they were behind at that point. They did their
first in orbit rendezvous that same year, but the real

(05:47):
accomnchihment with the first successful landing on Venus with the
Venera six mission that made it to the surface of Venus,
sent back footage for about twenty minutes and then milting it.
Other aerospace news again to ninety sixty ninety Insane seven
forty seven's first test flight and first commercial flight in

(06:09):
the same year, go Boeing also Concord's first test flight.

Speaker 1 (06:13):
Nineteen sixty nine. It's crazy, but for all of us
being computer people and what you are doing right now.
This was the year that arpanet was turned on for
the very first time. So this was a packet switching
network precursor to TCPIP was all about decentralization, no central help,
multiple routing routes. Although the first messages attempt a first

(06:34):
message attempted to be transmit across the network got as
far as L and O in login before crashing. Yeah,
still work needed to be done. But all of that
in nineteen.

Speaker 2 (06:46):
And of course, in terms of culture, the Beatles' last
public performance on the roof of Apple Records on January thirtieth,
And did you watch Get Back? Yes, the remake of
it so a great movie. Great movie and the way
that they cleaned up the footage and everything. It was

(07:06):
so much better than the quote unquote Let It Be movie,
which was just horrible quality. Woodstock, the Woodstock Festival in
August in New York, Upstate New York was a big deal.

Speaker 1 (07:22):
Yep. Yeah.

Speaker 2 (07:23):
In politics, Vietnam War escalated, significant anti war protests occurring
across the US. The Libyan coup September one, Marmarga Daffi
ousted King Idris the first ho Chi Minh died September second,

(07:43):
at the age of seventy nine. And there was a
few other things. But wow, what a year. Yeah, crazy years,
extraordinary year. And we were two, so we were just
becoming conscious of everything around us.

Speaker 1 (07:57):
Not really, yeah, barely. Oh they lift off of Apollo
eleven was July sixteenth, my birthday. Yeah. Oh so I
turned two as the rocket was taking.

Speaker 2 (08:05):
That is so cool. Random yeah, random, but very cool.
All right, so let's do better? No framework roll the music?

Speaker 1 (08:13):
Awesome? All right, man, what do you got?

Speaker 2 (08:22):
I found this really cool trending repo on GitHub. Web
gooat to web gooat a deliberately insecure web application.

Speaker 1 (08:32):
Oh nice.

Speaker 2 (08:33):
It's maintained by OASP designed to teach web application security lessons.
Big disclaimers while running.

Speaker 1 (08:41):
Do not deploy this.

Speaker 2 (08:43):
Don't even be on the internet when you're running it.
Oh wow, it's a demonstration of common service side application flaws.

Speaker 1 (08:50):
Right.

Speaker 2 (08:51):
The exercises are intended to be used by people to
learn about application security and pen testing techniques, and so
warning one is, while running this program, your machine will
be extremely vulnerable to attack. You should disconnect from the
Internet while using this program. Webgoats default configuration binds to
local hosts to minimize exposure. And of course this program

(09:13):
is for educational purposes only.

Speaker 1 (09:15):
Right.

Speaker 2 (09:16):
If you attempt these techniques without authorization, you are very
likely to get caught. If you are caught engaging in
unauthorized hacking, most companies will fire you, claiming that you
were doing security research. Will not work? Is that as
the first thing that all hackers claim? How about that?

Speaker 1 (09:34):
Yep, don't do it. I mean, you know we're big
on Troy hunts. You know, pen test yourself, you know,
hack yourself, but do it with permission. Let people know
what you're doing. You know, your intent should be good,
be very careful.

Speaker 2 (09:47):
And you know, like they say, don't be on the internet,
don't expose yourself. Yeah, you can do all this without
added risk.

Speaker 1 (09:54):
There are tools sweeping ips all the time looking for vulnerabilities.
You will not be it will not be long. Yeah,
and that's it. Who's talking to us today? Richard Grabbing
comment off a show with nineteen fifty four, the one
we did a build with our friend w O'Brian talking
about how AI has come to playwright with the playwright
MCP which I think you and I both really enjoyed. Yeah,

(10:16):
this is a funny comment. This is from Karthik VK
who said in the podcast was mentioned that Microsoft should
be leading the agent space, but I argue they already were,
just without getting recognitional rewards. Microsoft has consistently been in
first areas but rarely reaps the benefits. I don't know
if I agree with you on this, Karthik, but let's
see your argument. Take Copilot studio. It's a solid platform
for building agents with real finesse. Semantic Kernel is another

(10:38):
underrated gem, true enough, not easy to work with, but
pretty powerful, letting developers convert existing applications into LLLM powered
ones just by adding attributes using function calling in a
well architected way. This is new for Microsoft. They were
first with a touch based OS, but never got credit.
That's definitely not true, you know, doubt They did a
lot of experiments and built tablets early on and back

(11:01):
in the XPE and so forth, but there were touch
based interfaces. Heck, we talked about it on the show
here going back to the sixties. So yeah, they've been
around for a while. The much criticized Vista layout is
now being embraced by Apple as the foundation for AR
glasses of spatial interface. Yeah. I don't think Apple would
coin it that way. Yeah. Wow. Yeah, the basic idea
of bigger icons that give space for those kinds of interfaces.

(11:23):
I don't know that you can copyright any of that. Yeah.
Microsoft often builds foundational tech that shapes and ecosystem, but
not always ways and build bring them glory. I think
like all companies, Microsoft does allow experiments to happen and
sometimes put them in the field, and sometimes they're too
ahead of the market. You know, Apple may have built
the iPhone, but they also built the Newton ten years before.

Speaker 2 (11:44):
Yeah, you can't say always this and never that. I mean,
it's just not the way it works that. Microsoft has
made some great contributions to tech over the years, and
also some flops. So and so is Apple. It's just
not a.

Speaker 1 (11:57):
I'm really disappointed to Courier tablet never shipped. You know,
they they got to final prototype on that one before
they pulled the plug on, which is two bad because
it's it looked like an interesting machine. I not think
that it would have succeeded, but I would have bought
one I would have taken one up for a spin
for sure. But yeah, Karthik, I think there's more research
to be done if you want to see these different things.
But I agree there's many technologies that get put out

(12:18):
there but are put in the in front of people
in a way that they necessarily embrace. If Microsoft says
sinning for anything, it's not advocating for their own stuff
as well as they possibly could. Often they're just building
things and it gets out there, and whether or not
people can see what it can do is another question entirely. Heck,
half our shows are based on Hey, did you know
what could do this? Yeah? Right? But that being said,

(12:42):
thank you so much for your comment, and a coffee
of music Cobuy is on its way to you. And
if you'd like a copy of music Code, I read
a comment on the website at dot NetRocks dot com
or on the facebooks. We publish every show there, and
if you comment there and we're reading the show, we'll
send you a copy of music Code.

Speaker 2 (12:54):
By Before we get started with James here, I want
to let everybody know that Jeff Fritz and I have
a new YouTube show that we're doing in addition to
Blazer puzzle, and we're probably going to alternate weeks, but
it's called code It with AI. As you probably know
and James certainly knows because he's his boss. There's a
big mandate to do AI content for Microsoft Evangelism, you know,

(13:18):
to because there's a lot of new stuff and there's
a lot of things to understand, and so we wanted
to take some of the stuff that he did in
Copilot that John, which is his website of all these
little tips and tricks for using Copilot and other things,
and do videos exposing some of the things that we
can do as dot net developers, not only to help

(13:39):
us write code and publish software, but to incorporate AI
into our applications. And so we started It's interesting you started.
You talked about Playwright. We started with the playwright MC
was it MCP? Yeah, yeah, we started with the Playwright
MCP to create documentation for Copilot dot com and we

(14:02):
did an individual studio code with the Sonnet four to
oh and it was amazing. It basically was a very
small prompt and it just created a user manual for
Copilot dot John using playwright. So that's coded with AI
dot com. If you want to check it out and

(14:24):
we will do more and let us know.

Speaker 1 (14:27):
That's it.

Speaker 2 (14:27):
So let's bring James on. James Montemagne is an old
friend of ours, a developer community lead at Microsoft focused
on building community around and helping developers learn and adopt
the latest frameworks, languages, and agentic developer tools. Hey man,
what's up.

Speaker 1 (14:47):
Yeah?

Speaker 4 (14:47):
I think that new one at the very end is
the first time we got to add that on sence
the last time I was on the pod.

Speaker 1 (14:52):
So yeah, it's been a last time we were on.
It was like zamorin Land, it was just a while ago,
like entirely too long ago, to be clear.

Speaker 4 (15:00):
I've missed you both, and I'm I'm very pleased and
honored and humbled to be back on the pod. So
it's really good to be back.

Speaker 1 (15:06):
You have you on, man.

Speaker 2 (15:07):
Let's give a little a little history here.

Speaker 1 (15:09):
Oh my god.

Speaker 2 (15:10):
Back in the days of Zamorin, we met James and Hardy, right, yeah, yeah,
in Boston at the beginning of a dot net Rocks tour, right,
and you guys were you know, zamorin and at that point,
and we were talking about zamorin forms and all that stuff.
You were talking about zamorin forms on the road trip

(15:32):
with us, and we were just talking about dot net
in general. I think, if my memory is correct, I.

Speaker 1 (15:38):
Always had the sense that he was chucked under the
bus in that sense, it's like, hey, get on this
bus with these strange men.

Speaker 4 (15:43):
It was it was my I believe it was my
second month on the job, and I'm pretty sure that
they said, Hey, you're going on Toro with these two dudes,
or got this RV and just driving around the US
and go ah.

Speaker 2 (15:58):
And I don't know if you and Chris did it
the same time, but you guys had built apps and
got noticed by Microsoft. And I guess that's you know,
how you worked your way into the organization, isn't it?
The mobile app?

Speaker 5 (16:11):
Yeah?

Speaker 4 (16:11):
So I had been a professional mobile I worked at
Cannon early on, writing printer software for them out of college,
and then I went to PDC, which was right before Build.
I was on big tent on campus and I got
a Windows phone. I was a c sharp developer, fell
in love with mobile development, got an Android device an iPhone,
started building apps, and I got a job in Seattle.

(16:32):
Moved my life up there as a mobile developer, found
zamorin to do cross platform mobile devon dot net. Never
looked back and that was it. I wrote an app
that got featured for the company I worked at, Seaton.
I got featured in Gadget and I was writing blog
posts and doing kind of advocacy off to the side,
and yeah, they randomly emailed me. I thought it was
actually for like their MVP program. I didn't actually realize

(16:53):
that it was to come in and interview like with
Natt Friedman at the time, so I like win and
course by then I knew I was interviewing. And yeah,
it was three and a half years had xamred before
the acquisition into Microsoft, which is which is awesome. So
you know, I still building and publishing apps. I just
published a brand new Blazer hybrid MAUI app to the

(17:14):
app store last week, you know ndred percent Vibe Code,
which was awesome.

Speaker 2 (17:18):
So it was rad I remember Chris Hardy talking about
his favorite app that he wrote was how many days
until Christmas?

Speaker 1 (17:27):
He's a big house fan.

Speaker 4 (17:29):
Yeah, and I took his it was ioas only and
I ported it to Android. Those are the very one
of the very first things that I did, and I
put that on the app Store with Chris, which is hilarious.

Speaker 2 (17:37):
So yeah, yeah, funny, funny stuff, long long time ago.
But we boy, that's where we met and we have
been friends ever since.

Speaker 4 (17:44):
Absolutely, yeah. And it's like it's so interesting the year stuff. Right,
you think you were, you know, around in nineteen sixty nine,
Like what a time to be alive. And that's kind
of like now. I kind of like think that now,
and just like how everything is. Everything's always be moving fast, right,
but it feels like things are super accelerated. But if
you've latched on, I feel like it's in a really

(18:07):
really fun and interesting place. I'm excited to dive in
with kind of some new topics with y'all.

Speaker 2 (18:12):
Well yeah, I mean we don't. We haven't really we
don't really talk about visual studio code that much because
I don't think Richard and I use it all that much.
I don't know about you, Richard, but.

Speaker 1 (18:21):
I'm in and out of it all the time. But
you know we're we're studio people or ide people, right,
that's where we came from. I don't know that when
I want to develop, that's where I go. When I
want to edit a zambal doc. I mean I mean
studio code, but.

Speaker 2 (18:36):
I've found that the agents work better in studio code
than they do in studio probably because of the threading
model or something like that. I don't know what's going on,
but I really really enjoy it. You know, every time
that Fritz and I do something in studio code, I'm like, hmm,
maybe maybe I'll you know, although I have customers that
are in studio and I have to use that, so.

Speaker 4 (18:57):
Yeah, I think I think, you know, for me, it
was kind of in maybe January or February this year
or I kind of made this leap and jump, and
I think a lot of developers are early on, like
in their sort of like how much AI coding stuff
do I adopt every single day and their journey, Like
we kind of think that everybody's using it, but that's
not the case. However, many people are and adopting it

(19:20):
kind of slowly. But we have always been if you
think about just intelli Sense and intelecode, and then we
have the extensions that have been giving us and helping
us write code faster.

Speaker 1 (19:29):
So really, in.

Speaker 4 (19:30):
January February, I kind of did dive all in when
agent mode dropped inside of VS code and I really
dove in deeper because the c sharp Defocate was getting better,
the Maui extensions were getting better, the Plazer integrations were
getting better, and it's really just dove all in. And
I like to say I like gave in. I gave
myself to agent mode, and you know, I've gone back

(19:52):
and forth kind of talking about the IDEs, like the
vs team has been doing great, like you know, adding
more and more features to the Relief twenty twenty six
is coming out soon, and like there's more and more integration.
So it feels like they have some unique features that
are like the profile aer and some of the debugging
stuff for ID specific things. But yeah, the agent mode
and the rapid pace. Like I'm on Insider for VS

(20:16):
code and I'm just getting updates every single night, just
like rapid, right, and that's how I live. I don't stable,
I on the insider and just go.

Speaker 1 (20:24):
I feel like VS code can move faster into this
new paradigm than the studio can, Like studio customers tend
to be supporting large projects, like we tend to not
emphasize the studio responsibilities to project management as much as
it is to coding space as well, Like the show
we did with Mads a few weeks ago talking about

(20:44):
Studio twenty twenty six. He's very clear this is the
AI version of Studio and it's still coming where you know,
Studio Code had this in the spring to some of
the rate. Obviously there's more to be done, but it's like, hey,
would you think's going to happen with this behemoth that
is an eye like it only goes so quickly. They've
I've really done the plugins and so forth, but the

(21:04):
integration is not the same. It'll be interesting to see
where they get to. But this is far from a
played out store.

Speaker 2 (21:10):
We were talking to Dustin Campbell and I was sort
of complaining a little bit about the Razor Editor. He says, oh, yeah,
I got that, man, I'm working on that, and apparently
he has. I haven't seen twenty twenty six yet, but
Fritz says he has, and the Razor Editor is like
night and day of what it is and you know,

(21:32):
currently in twenty twenty two.

Speaker 1 (21:34):
Yeah, I think so, I can't wait.

Speaker 4 (21:35):
The teams are really pushing super hard, and I think
it's a good point. Like I also think that it's
great that these you know, two paradigms exist. A lightweight
code editor that's an AI first open source everything editor,
and then visual Studio, which is this ide with all
these big workloads sets up everything for you, right, And
I think with the Visual Studio, right, it's also not

(21:57):
just I'm going to open a project, it's that they're
come and is that And individuals that have huge, crazy projects, right,
huge c plus plus games like game studios are using
these things with you know, hundreds of millions of lines
of code, right, and they're like legacy projects too, and
you have to think about how do you support the
really really old stuff and the new stuff and then
make all of that AI stuff work seamlessly across all

(22:19):
of that. That's a big chat challenge to represent compared to, Hey,
I'm on modern stack, right, Like I built this at
feedback Flow, which you know, I've one hundred percent vibe
coded AI and I went back and forth between VS
and VS code. But that's all modern, right. It's it's
done at nine, it's Blazer, it's Azure Functions, it's done
at APIs, it's modern MCP stack. So I'm in the

(22:41):
modern world of doing stuff inside of there. And that
was great, you know, that I could go and I
could also open that in VS if I need to
do deep debugging or do like some advanced profiling or
things like that. But I can also code anywhere. Right
right now, I'm on my my Mac Mini, I got
my surface loptop, I got all my devices. So that's
where that sort of experience goes. I think it's kind
of a It's always a great time to be alive

(23:03):
as a developer because things are evolving. But just that
choice and flexibility I think is important. And we say
that with AI as well. There's lots of choices out there.

Speaker 1 (23:11):
I'm seeing that lots of teams, especially those they also
have younger generation developers. They are mixing the two like
studio and studio code, you know, especially once you get
dev kid in the equation, they work and play well together.
And a lot of web devs and again I'm going
to say skew younger, they're not interested in the ide
they learned on studio code. That's how they want to develop.

(23:32):
They have a plan on how they want to do that,
but they need to work within those larger projects that
let's face it, more senior folks are living, you know,
we're originally built in the IDE and a lot of
the dependency on that, so it's not like these two
are mutually exclusive to each other now.

Speaker 4 (23:46):
And I think the team's done a pretty good job,
especially in this last year, especially as Visual Studio has
actually adopted a faster iteration cycle instead of every quarter
every month, and you're actually seeing a lot more parody
jumping between the two, right actually, as far as model selection,
how the actual like agent mode and chat modes work,

(24:07):
and how different integrations like now with coding agents are
being integrated between the two. So VS Code because it
ships crazy fast, you know, is going to have things
super fast first, but also VS will soon follow up. Right,
it's going through different sort of you know, rolling out
as just sort of people adopt things at a different pace,

(24:27):
but also adding unique features, like I said, specific for
that type of development.

Speaker 1 (24:32):
Being done, and you would hope one informs the other two,
like what they learn from those modules running in studio
code then is reflected in studio. You know, can can
build a better version or more it may make more
sense for that customer.

Speaker 4 (24:44):
Base, absolutely, you know, And that's why I like the naming.
The names are the same, right, ask an agent they're
the same. Right, It'd be really weird if you open
the same project in both VS Code and Visual Studio
and then like everything is one hundred percent different, right,
So even icons and placement and things thought about. I
think at that factor, however, like inherently they're different. They're

(25:05):
different editors, they're different spaces inside of there. And for
me at least, I've really enjoyed kind of being on
this like super breakneck fast, you know, on the things
that I'm building. I like to say the year of
twenty twenty five was the year that I shipped and
wrote more code in my entire life, at multiple levels

(25:25):
small like little prototype levels, to large production applications that
are infusing you know, AI elements of foundry into them,
to functions to databases, and nearly none of the code
I wrote by hand at all. Right, I'm actually like, really,
like I said, dove into this prompt first type of
development between both VS Code and VS and it's really

(25:47):
fascinating to watch the editors evolve and also the deep
understanding that like VS and VS Code and Microsoft itself
in the developer space, you know, developers, right, So we're
building tools for developers and we're dog footing, right.

Speaker 1 (26:00):
The vs Code.

Speaker 4 (26:00):
Team builds vs code with vs code and agent mode
and these things, and same with the Visual Studio team.
So it's like deep dog fooding and understand that we've
been doing this for twenty years, thirty years, whatever it
is now that deep understanding of how software is built.

Speaker 2 (26:14):
Can we talk about the models a little bit. It's
my understanding that Claude's on it for is like the
best right now for coding C sharp, Blazer, CSS, JavaScript.
What's your take on that?

Speaker 4 (26:30):
Well, Carl, back in my day in February weeks ago,
you know it's now fun because you know all the
web devs like with JavaScript lies. Oh it's a new day,
it's a new like. Now it's a new model.

Speaker 1 (26:45):
Right.

Speaker 4 (26:45):
So when I always when I super dove in like
Clauds on it, three five Droma and four is using there,
I think there's a few things models or models will
be new models they have all the time. I think
for me, it's it's when I think of this and
if I was to encouraged developers listening, it's a new
tool in your toolbox. Every model is a new tool.
Inside of that toolbox is ask, which is agent mode,

(27:08):
which is coding agents that are working autonomous in the background.
For me, it's a great question. It comes down to
how do you want to work and how do you
want to work with your model? So let me break
it down into two categories.

Speaker 1 (27:20):
You kind of have.

Speaker 4 (27:22):
You have the GPT models and I usually am in
GPT five Mini or GBT five and a lot of Sonnet.
I go back and forth, and I'll tell you why
I go back and forth. Okay, that these models inherently
work and think different and they're different people. It is
if I have two different co workers sitting side by
side of me, working.

Speaker 1 (27:42):
With different kinds of brain damage, basically.

Speaker 4 (27:45):
All sorts of different thinking and logic and type of
code that they write. So I think with the GPT
models they like to kind of be told what to do,
like what files, what do you want to work on?
You know, how do you want to work on it?
And go off and do it. They are very much
give me a ticket, describe the specs. I gotcha, right.

(28:08):
They're very very good at that, and they're very very
fast at it, right to be more pointed at it,
and that's good in a lot of scenarios like bug fixing,
like examining, just looking and doing ask and like kind
of getting detailed information quickly, because that's.

Speaker 2 (28:22):
That's what the GitHub agent uses, right, getub agent, the
on GitHub, the coding agent, the coding Yeah, the coding agent.

Speaker 1 (28:29):
I think it bops between a few models, does it? Yeah?
I think it does fast.

Speaker 2 (28:33):
I checked it GPT only, but the son it was coming,
I guess yeah.

Speaker 4 (28:37):
For a while it was just son it for I
think so. Really, yeah, I think so.

Speaker 1 (28:42):
But son it.

Speaker 4 (28:43):
Models are fascinating. They are super curious, and they are ambitious,
and they take time to understand a lot of the context,
explore the code base. And when you tell it to
do something and you ask it and you kind of
give you know, smaller, medium, or even large prompts or

(29:05):
assign an issue, they like to get in there. They
like to explore, right, They like to just just figure
out all the little nooks and crannies and what qutblem
break it down. And what claud will do though, is
it will do things you don't necessarily maybe even want
it to do. But then you're like, maybe I did
want it to do that.

Speaker 1 (29:22):
I don't know.

Speaker 4 (29:23):
It'll start updating things. It's like, oh, I have to
dis method. Oh, I should update the docs or I
should do this right, And as it sort of context grows,
it will start to like really explore the COVID, which
is good and bad because it's good and that it
may you know, get things that you missed, but also
at the same time, it takes longer. Right, you just
could be letting it churn and kind of letting it

(29:44):
do stuff which background coding agents, like background tasks, background agents,
things like that that run autonomously out there. That's great
because they can take a lot of time. They can
be very verbose to get run tests. But I've seen
with Claude, for example, it's like, hey, let me write
this tone Okay, let me run this test. I'm gonna
write test, I'm gonna write the docs. I'm gonna run this,
and I'm gonna run this, and you're gonna run this.
You're like, Wow, you just did an entire test suite

(30:05):
and all.

Speaker 1 (30:05):
I can do is an overachiever. Uh employee.

Speaker 4 (30:10):
It's ambitious. Yeah, they're really ambitious, and you want that
sometimes and sometimes you don't.

Speaker 1 (30:14):
I've always got the sense that claud it's like they're
pre the prompt you write to chet GPT is the
prompt that arrives at chet GPG. When I write a
prompt to Claude, it's like somebody added a bunch of
that stuff, a bunch of stuff to the prompt to
do more.

Speaker 4 (30:28):
So there's a few things, you know, I think of
best practices here of how do we get these models
to generate code as if we were writing it. One
is like the team, the VS and VS code team.
Like when you send a prompt, there is a system
prompt that also gets sent, right because there's tools, there's mcps, results,
other stuff. Each model has its own prompt, right because
each model is different. So the team is working directly

(30:48):
with Anthropic and open AI and these other model vendors
to make sure their models work great based on how
they built a model. But then there's stuff that you
can do right. So for example, agents dot MD and
copilot and diductions, which are instruction files that get sent
with every single request that you put. So think of
it as your team's best practices. How do you want
your code generated? Do you want your CSS and eraser

(31:11):
dot CSS or do you want to interact dot CSS.
Do you want things to be light theme and dark theme?
Do you want specific you know, M underscore, underscore, S underscore, CamelCase,
Pascal case, how do you code? The model can infer,
but the model also wants to please, and it wants
to please quickly. Right, So like if you think of GPT,

(31:31):
especially for one or five five mini that aren't necessarily
deep thinking models, they want to respond to you as
fast as humanly possible.

Speaker 2 (31:40):
Right, So, if you're right, they want to be Promorphization
is killing Richard, I can just well, no, don't.

Speaker 4 (31:45):
Worry, it's coming. You're absolutely correct. You're absolutely correct. I
see the mistake now, you're right, and they want to
make you they want they want to make you happy inherently, right.
So that's why they have this like verbiage that is like,
oh no, you're absolutely right, you're totally you're totally good. Yep,
I see the problem up good? Yeah, Oh I fixed it.

Speaker 1 (32:06):
Did you?

Speaker 5 (32:07):
I don't know what I're looking for is obsequious, but
you know when you think about it, you know, if
you had another engineer sitting side by side, do you
you know you'd be looking at the code.

Speaker 4 (32:20):
Oh yeah, I do see the problem there, it is, right,
let's fix it.

Speaker 1 (32:23):
When you actually pair a program. You are pretty kind
to each other because we've all sat in the seat, right, Yeah,
we've all sat beside the seat.

Speaker 4 (32:31):
If you look at it that way, I think that's
the way to achieve it. And then also not giving
up on it. When I built feedback Flow, like I said,
it's it's hundreds of thousands of lines of code and
nice architecture and fully open source. And I went into
it saying I don't want to write any code. I
want to really dive deep into understanding how every model works,

(32:54):
how the agent works, how I can customize my instructions,
how I can get this working. And I'm at the
point now I don't even run the app on on
my local machine. I just push it to a branch,
do a PR, it goes into staging, have the I
don't even run it. I don't even need to because
it's gotten to the point that I've massaged the infrastructure
around it so much that the thing is building it,

(33:15):
it's running the test, that's doing all the stuff before
I push the code, sure that it's either going to
look or not right, and just wasting time running it
and testing it's not going to pass the tests if
it is.

Speaker 2 (33:24):
When you say massaging the infrastructure, do you mean like
setting up a context so it kind of knows your
style and it and you said it infers it, but
does it remember it?

Speaker 1 (33:34):
Like is there?

Speaker 2 (33:35):
Do the agents have enough context to learn what I
like and keep doing it that way?

Speaker 4 (33:40):
Right now you're in the mode of telling it kind
of how you wanted to do. So that agent's dot
mdfile or the copilot instructions filed. They're the most important
files in any project. System prompts, yeah, they're well, they're
they are not necessarily system prompts. Think of them as
a set of you know, guidance that you send with

(34:03):
every prompt. So for example, it'll tell it like what
the projects are, what frameworks they are, how you like
your CSS, how you like your c sharp, how you
like these things, how you like different things constructed in
your application. Maybe for example, like you prefer using XI
nate over MS tests, and the tests are run here
and this is how you run them, or using aspires,

(34:23):
or here's how you run a spire, Here's how you
want your CSS versus not. And with every request that
gets sent off, so it gets attached to the system
prompt So give it the guidance of the context. Now
that being said, there's not like there's memories today. I
mean you can inherently create memories. So I often have
a docs folder or an ideas folder or you know,

(34:45):
kind of something in my repo, kind of like spectruve
in development, like here's my specification, so it could go
look at how I want things created. But when you're
in the agent chat, there that entire, entire context is
being sent back and forth. Right, So the memories, if
you're working on a big feature, you inherently are like, okay,
I want to start new chats and I'm working on
something different. It's actually better to keep that thing around

(35:06):
until you've implemented or fixed that bug and then change context.
It's almost like opening and closing a ticket because that
context is there now. Ideally over time and I think
we'll get there and probably not the far future. Is
that these bits of memory, right Like Carl, for example,
you're like no, no, no, I really want my CSS
this way, blah blah blah. It should remember that, and

(35:27):
it should remember that just like you would write that
in a documentation. So right now today, what I do
is whenever I see something wrong that it generated and
I asked.

Speaker 1 (35:35):
It to fix it.

Speaker 4 (35:36):
I say, oh, and can you go write a note
in the copilot instructions so you don't get this mistake again.
So I'm telling it to go inherently keeping it memory
out there.

Speaker 2 (35:45):
I've found with the GitHub code agent coding agent that
even though I did that, it's still insisted every time
I asked it to do some laser coding to take
my existing dot net nine application and downgrade it to
dot net eight for set, no matter how many times

(36:06):
I say, don't do this, keep storing it.

Speaker 4 (36:09):
Okay, okay, okay, So here's what's happening with coding agents.

Speaker 1 (36:12):
Bluish pluck here.

Speaker 4 (36:15):
Okay, so I just I just watched this as black
like whole labubou thing, which is hilarious ID YouTube. Okay,
so here's a pro tip for dot net developers. Okay,
So think of coding agents as so agent mode chat.
You know, code completions inside of VS, inside of VS
code model context switch. Coding agents. These are that, but

(36:36):
now they're working on some other machine doing work asynchronously
for you. It's like having a whole plethora of coworkers
assigning and doing work all at the same time, multiple branches,
multiple things, things like that, and to get up Copilot
coding agent is one of those same thing right now.
Inherently what happens here, think of it is it needs

(36:56):
to spin up architecture, needs to spin up a machine
to write the code and run your code and test
the code on it. So just like you would you
write a GitHub action, or you would say, hey, I
want this to be on a Windows VM, I wanted
to have dot Net nine, I want to insult these workloads,
blah blah blah blah blah, you gotta do the same
thing for the coding agent. So you have to create

(37:18):
a workflow, and you need to create it with a
specific name, which is the Copilot setup instructions. And I
had this happen. It was so upsetting everything that you're
talking about in general. I'm opposed to link there. This
is to mine. And what I do is I say, hey,
you're going to run this on an am boontu Latest,
give it read permission, check out the code, set up

(37:39):
dot net nine, and then insull dependencies and insult the ASPIRECLI.
And when you do that, what ends up happening is
it writes the code and then it builds your project,
so you can think of it like this. What happened
to Carl is it tried to run the code and
it's like I only got done at eight. I don't
have dot at nine. So instead of it, does it

(38:00):
necessarily know how to install stuff, you know on your
behalf on that machine, you need to tell it.

Speaker 1 (38:05):
It would be cool if it did.

Speaker 4 (38:06):
To be honest with you, I mean it can you
get restore, it can run those commands. So it's like, hey,
I know, I know how to fix this, Like, oh,
I see the problem. I see the problem.

Speaker 1 (38:14):
I'll just eight.

Speaker 4 (38:16):
I'm on a machine that's done at eight, but you
want down nine, so I'm just going to downgrade it automatically.
So that's how you get around that. And the same
thing for for Maui, for example, if you're doing Mali
work you install the Maui workload for for Android and
I just run it on a boon joke because you
don't need to inherently run it on iOS and all.

Speaker 1 (38:33):
These other things.

Speaker 4 (38:33):
So that's how that's how you set it up. It
seems silly, but once on at ten's here, then the
default right, the default machine, Well, then get upgraded. So
That is a pro tip because I ran to that
same thing. I use coding agents all the time. I
have an idea at midnight, I open up the GitHub
app on my phone, you know, I create an issue.

Speaker 1 (38:53):
Boom done. Yeah.

Speaker 4 (38:54):
And now if I'm on a branch, I just you
just go in and there's an agent's panel to say, yeah,
this app. Go. I wake up in the morning, you're
like five pool requests ready for your review, you know.
And it's like this crazy thing. Once you go all
in that you can really build in hip code if
you get it into that state where I think you're
in the flow kind of like when we talk about
code flow, right, you're in the addit area blah blah

(39:15):
blah blah.

Speaker 1 (39:15):
Right.

Speaker 4 (39:16):
If you can get in in the flow with these agents,
and I think it can be super super productive.

Speaker 2 (39:22):
So I know, we got to take a break here,
so let's do that. When we come back, I have
a message about Azure deevop, so we'll be right back
after these very important messages. Did you know there's a
dot net on AWS community. Follow the social media blogs,
YouTube influencers and open source projects and add your own voice.
Get plugged into the dot net on Aws Community at

(39:43):
aws dot Amazon dot com, slash dot net, and we're back.
It's dot net rocks. I'm Carl Franklin, that's Richard Campbell. Hey,
that's James Montemagno. And we're talking about AI and visual
studio code in other places. So, yeah, I have a
customer where we started out in GitHub and then they

(40:07):
said we had to go over to Azure DevOps because
there are other stuff, their legacy stuff is over there whatever.
And then the GitHub copilot agent coding agent comes out
and I'm like, geez, I wish I had this over there,
but I don't. And so now I'm trying to get
him to come back over to GitHub right and they

(40:28):
won't do that, and I'm just like, what.

Speaker 1 (40:33):
Can we do about that?

Speaker 4 (40:34):
That's a great question. So there are a few integrations
with get up copilot for azur DevOps. One, there's been
GitHub Advanced Security for a while, which is pretty cool,
so you can turn that onto scanning. There's also now
Azure boards integration for get hub copilot. This is a
private preview you can kind of see where this is going.

(40:56):
But basically how this will work is that if you
have using Azure DevOps and if your code is hosted
on GitHub. Because Azure DevOps can connect to multiple Git repositories,
you can then assign work through Azure DevOps to getthub
code pilot and will perform that work for you on
your back right. So it is a hybrid flow today.

(41:19):
But that condition infrastructure.

Speaker 2 (41:22):
That condition means that the code repositories have to be
on GitHub and that's not where they are on TFS,
and that's whe're going to stay there.

Speaker 1 (41:29):
Which really what I thought was why how as your
DevOps are supposed to work? That was the TFS approach,
where GitHub actions was the GitHub appro.

Speaker 4 (41:36):
Yeah, you know, I think that there's been a lot
of listening and learning to Obviously, you know, the companies
using Azure DevOps, we use a lot of Azure DevOps
internally as well, so this is a problem for us
to inherently we have tons of code on get up,
but also tons of code not on GitHub. And there's
a lot of teams that are like exactly in carl
Spot that want to do this. So there's a lot
of work being done there. I don't have necessarily like

(41:59):
insight into it, but you know, the teams are listening
and you're starting to see some of that listening back
into product right away. Right inherently, there are two very
different products that work very very differently. And the thing
is do you build the thing twice?

Speaker 1 (42:14):
Right?

Speaker 4 (42:14):
You know what I mean?

Speaker 1 (42:15):
Or that's what they suggested.

Speaker 2 (42:17):
Actually they well they suggested I clone the repo in
the kidthub and then use all the coding agent stuff
and then move it back into the TFS repunt. I'm like, God,
I want to do that that.

Speaker 4 (42:29):
There's a few things though, I will say this is
is that that's a little bit tricky in general on
that ideal. Obviously for all the stuff that you're doing
in VS and VS code, obviously all works just fine,
just that coding agent part that is there for asynchronous work.
And the thing is also like remote indexing. I don't

(42:51):
think we have remote inducing for Azure DevOps, but if
your code is on GitHub, when you're working inside of
VS code or Visual Studio, we actually remote index your repo.
And what that means is it can do semantic search
a whole lot faster. And that means that all of
your agent mode requests are like way faster, especially on
huge codebases. And that's also super important. So I'm not

(43:11):
sure that's going to come to Azure devlops for what
that will look like, that would be pretty cool. I'm
sure there's a future requests out there, but that is
something to think about. But yeah, you know, I think
with this world of AI, there's a lot of implications
of where the tools are, what tools are you using,
how do you blend these together? And also the team.
But one thing we'll point out is, you know, I
work with a lot of companies that they're not only

(43:32):
using dot net spoiler alert. You know, they're not only shocking,
they're not only.

Speaker 1 (43:36):
Using vs code.

Speaker 4 (43:37):
Right, they're in Exco, they're in Intelligent, they're in other ideas.
You have a team that's maybe building mobile apps natively, well,
they're in exco, they're inside of Android Studio. Maybe they're
back ends in dot net. So do you buy five
different AI coding projects?

Speaker 1 (43:49):
Yeah, well, I'm seeing that in a lot of organizations
where they're quite fragmented. There's guys running Windsurfing, guys running Cursor,
and yep, they're all working against the same codebase. But
some weird things happen sometimes, and the models right inherently
are the same models across all of them right in general,
just different user experience. But I will point out this
get up Copilot does have extensions for excode for Intelligent

(44:12):
for Eclipse. Right, that's out there, so you can actually
use you know, Claude so on it, you know, powered
by get up Copilot inside of xcode to write swift
code like that exists today. That's out there. And I
will say, if you're being enterprise, get up Copilot and
the enterprise and business skeus give.

Speaker 4 (44:29):
You a lot of.

Speaker 1 (44:32):
Control.

Speaker 4 (44:32):
For example, uh, my wife, we were just doing some
coding and I was showing her all these models and
this and this that, and then she's like, oky, oh
I got we're doing like a little She goan to
go do this thing. She's like, I only see GPT
four oh and four one. She's like, I want sown it.
Where's my son it? Where's my GPT five? And I
was like, hell, your IT department needs to turn it
on because it's like for her company. Right, So it's like, yeah,
there's that control. So I think that's one thing too,

(44:54):
is what do you want?

Speaker 1 (44:55):
Right, I've asked. I've also seen you.

Speaker 4 (44:57):
Unfortunately some folks that just are like contractors. Right, they
work for a bunch of different companies, and then they
get their get up account onboarded by the company temporarily,
and then their get up copilot settings get overruled by
the company processes. Right, So how do you manage that?
It's a very interesting thing about, you know, different policies

(45:18):
that are out there, so good or bad, right, how
things are working? But yeah, it's not probably about there's CLIs,
there's coding agents, there's integrations, there's all sorts of things
and many tools. Obviously I use our stuff that we build.

Speaker 1 (45:31):
I'm a little.

Speaker 4 (45:31):
Biased, obviously, but I think our stuff's really great, and
I've been really building and shipping a lot of code.
But kind of to Carl's point earlier, it's also hard
to really keep up with all this stuff, right, You.

Speaker 1 (45:43):
Can't keep hopping between tools all the time. You got
to get some work done too, exactly right. And so
one thing that has been really nice is some into
the newer standardization when it comes to things like MCP servers,
model contacts and protocol servers that want to work everywhere.
It isn't the embracing of thems CP just to proof
that the industry is desperate for some standards, maybe a

(46:04):
little bit because MCP is not great, but at least
it's something people could agree. Yeah, it's a way to
provide that additional context to data to these models, right,
And you know there's tons of folks across the industry
that are on the board, including folks from Microsoft and
GitHub and the registry as well. So standardization of how
does this work? How does it work in businesses? But

(46:24):
also agents dot MD. You go to agent dot MD
that is a kind of sid in the show link
already because it's well worth a look and everybody seems
to be conceding that too. Yeah.

Speaker 4 (46:33):
You know, here's the problem as well, is like how
do you make sure that you have unique features versus
everyone else? So it takes time to create those standards,
but creating time means like this thing was created like
a few months ago. Now it's standard, which are kind
of crazy to think about in this modern day. So
Agents at MD is basically a way of open format
for coding agents that works across all those things that

(46:55):
you just mentioned, including vs code. The coding agent will
come to Visual Studios soon obviously, and CLIs and it
is exactly like copilot instructions in a.

Speaker 1 (47:05):
Way, it's like a universal language.

Speaker 4 (47:07):
It's a universal language. So what we see is folks
will mix and match these agents and copilot instructions have
really specific things what they're using a get up copilot,
and then more generic things for the agent's ot MD
that anything can work on. But yeah, so it's great
to see so many names there, and I think you'll
see more of this sort of standardization as it goes.
But also tools will do unique things and they'll stand

(47:29):
out as well.

Speaker 1 (47:30):
So yeah, yeah, I got to push back on the
vibe coding term. It kills me. I mean because what
Caparthy was talking about and it was only earlier this year,
which is crazy to think about that. It really was
like senior developers should experiment, but you know, you don't
deploy those experiments where I feel like you're using it more.

(47:52):
You're using your tools as in a PM role, And
you really say vibe coding and we say, I'm not
writing the code, but I am supervising the process.

Speaker 2 (48:00):
It's like a pejorative that got reclaimed by another culture, right.
You know, so my kids used to call me bougie.
You know, Oh that's so that's so boogie dad. You know,
when I'm like cooking a steak that costs sixty bucks
or something like that, Oh, it's so bougie. And then
you know, I'm like, I'm going to use that term

(48:21):
in a positive way. I'm gonna, Oh, this is gonna
be the boogiest dinner you've ever had, right, And they're like, no, Dad, No,
you can't do that.

Speaker 4 (48:31):
I think if I'm going the same thing, it's hard.
When I think of vibe coding is really just coding
with AI. Let's just be honest with you and you're
in a flow, just like you can be coding anytime.
To me, honestly, that's all vibe coding is. I went
on hansom In's podcast and he had the same exact thing,
I hate you know, blah blah blah. I was like, listen,
I think the term is silly, but it also describes

(48:54):
what is happening in my mind. I was up with
David Fowler working on the feedback flow app because him
and I were going back and forth on it, and
it's like this is a real production now, like this
is a thing that actually ships, like a real money
for my personal subscription that has It's one of the
biggest things that I've ever shipped, you know, in the
last fourteen years since I did advocacy. It's it's a
real product. And I also vibe code tons of tiny

(49:15):
projects as well. But it is just really coding. But
I am in a vibe like I'd be up at
two in the morning going back and forth and David's like,
what if you had this new feature and literally five
minutes later, I'd have it pushed to production. It was
like the vibes, the flow. It's a coding flow, just
with AI. So the vibe coding part.

Speaker 1 (49:32):
Is it's all vibes.

Speaker 4 (49:34):
I say, it's vibes all the way down. I turn
on some music. I'm just going. And I told my wife,
I said, when I was really deep in this project,
really in the beginning, I said, I'm sorry, I'm just
I haven't had this much fun coding in like a decade,
and I can't help myself. And I know I'm gonna
wake up refresh and I'm going I managing a project.

(49:55):
I'm not not coding. I am supervising in real time.
I don't know the code is mine. I'm prompt, I'm
doing things. I got into this flow where literally as
it was churning and it was describing what's doing and it's
doing I'm actively reviewing the code so fast. In my mind,
that flow, that flow state was just there. I'm not
say it's going to be there for everyone when you're

(50:16):
working on very pointed updates, but it happened.

Speaker 1 (50:18):
No, No, And I totally and I agree with what
you're saying too, Like, I think this is a great point.
It's like, if your passion is typing code, you're not
going to love these tools. Now if your passion is
delivering solutions to customers, boy, this thing cranks out solutions fast.
And you're running in there that PM role of supervising
the overall flow, but you're also responsible for the architecture.

(50:39):
You know, you're kind of putting on the QA hat every
so often, like, are we really going down the right path?
Like, you're pressing on a lot of things. And
that can be really fun. I have a friend who said, dude,
I've learned how to fly, and I'm really enjoying flying.

Speaker 4 (50:53):
Okay, so not all AI coding is vibes, right.
So I worked with Copilot, and I implemented
authentication into my feedback flow app. It took one month,
one pull request. We added twenty-two
thousand lines of code, removed five thousand lines of code.
We had two hundred files changed in a conversation of

(51:15):
two hundred and forty comments back and forth. We're having
a conversation, reviewing code back and forth in real time, right?
And this happened pushing back and forth to get it right.

Speaker 1 (51:24):
And to be clear, you're conversing with software.

Speaker 4 (51:26):
I am conversing with software as if it is
Carl sitting next to me, and I'm reviewing
code with Carl. That's what I'm

Speaker 2 (51:34):
Doing, James.

Speaker 1 (51:36):
But I also have a great appreciation for putting all
that into issues and pull request commentary. It gives you
clear documentation of what the intent was and what
the tools did for you.

Speaker 4 (51:47):
Yeah, one hundred percent, one hundred percent. And
the guidance is just a tool, whether
it's there or not, just like IntelliCode or IntelliSense. Honestly,
it's exactly the same.

Speaker 1 (51:59):
And I've resisted the personalization because people are overdoing it, right?
They're taking it too far. But do I talk
to my car when my car misbehaves? Damn right, I do.
There you go.

Speaker 4 (52:10):
I will say this. I do talk to agents and
AI much differently than I would talk to Carl, a
hundred percent, for sure.

Speaker 1 (52:16):
I know a lot of folks, it's like, if
you conversed with a person the way you're talking to
this tool, HR would be calling you. Right, this is
abusive language.

Speaker 2 (52:26):
You know how I talk to
the Amazon thing, she who starts with an A, right?
We can't really say her name because she'd respond.
So when she's rambling on about something
I want to stop, I will usually say, you know,
hmm, stop. My wife says shut up,
and it works, but I feel bad.

Speaker 1 (52:47):
Yeah, you know, negative language, negative language, and it works.
It goes both ways.

Speaker 4 (52:52):
Yeah, I think, just looking at it, whether
I'm writing the code or I'm writing the prompt, it's
like, I was going to write this code eventually, right?
But I'm doing the code reviews. Like you said,
I'm in charge of the architecture.

Speaker 1 (53:03):
You are responsible.

Speaker 4 (53:05):
Yeah. And I would say this even if you're like,
I don't necessarily want it to write my code. I was
sitting down with a developer the other day on this
really complex, big-scale application, and they were hitting
this endpoint. It was making a call off to EF
and hitting their database, and it was just returning a
500 error, no exception. It was just a 500
coming back in Swagger. And I said, okay, here's
what I recommend. And they were like, I've been trying

(53:26):
to figure out what the heck is going on for
the last forty-five minutes. I said, take the entire exception,
go into ask mode and put on a thinking model, right,
a thinking model that can do deep research. Say, hey,
I have this method in this thing. When I call
this, this is happening. I know the breakpoint gets here,
it doesn't hit the next line.

Speaker 1 (53:43):
What's the problem.

Speaker 4 (53:44):
Within thirty seconds, it's like, oh, what actually happened is
you'd set up a new model in the DTO, and
Entity Framework is expecting it as a foreign key, but
it expects it with a different name by convention, so
you need to add a ForeignKey attribute.

Speaker 1 (53:58):
Right.

Speaker 4 (53:59):
Could they have figured that out?

Speaker 1 (54:00):
Yes. How long would it have taken?

Speaker 4 (54:01):
Who knows?

Speaker 1 (54:02):
But it's smart. It could have been the whole day, and
your keyboard imprinted on your forehead, right, like being there thrashing.

Speaker 4 (54:08):
So I think it is, do you
need to go all in on this agent mode and
its coding agent? Like, start slow, right? Get some suggestions,
ask it, get some insight into the application. I was
working with Copilot just yesterday with my MCP
server I have for this app, and I was like, yeah,
you know, how do I architect it? What does this

(54:28):
look like? What do I need to update in my application?
It came together with a plan. It's like, here's the plan.
Let's review the plan, and then let's implement the plan. Right?
So I'm in control, right, human in the loop. I'm
always in control of everything that it's doing. I'm prompting,
I'm creating the plan, I'm creating the issues. I'm reviewing
that code. I'm the one pressing the merge button, right?

(54:48):
I'm merging the code, at the end of the
day, that I've reviewed. And the hope
is that, with the guidance of these instructions and these
AGENTS.md files, the code that it generates
is very similar to the code that I would generate at
the end of the day. So I have to review it,
and the compiler always gets a say, and the requirements
doc is sitting there to compare against.

Speaker 1 (55:10):
Like I do think we're in a unique place in
our role as creators of software to take these generative
AI tools to places that are harder for other industries
to do, just because we're used to vetting this and
we're used to building goal posts and evaluating against them.
Like we have a lot of tools and behaviors long

(55:30):
before the LLMs showed up that support this development practice.

Speaker 4 (55:34):
It's a mind shift. It's just a mind shift, just
like any tool or extension that you've ever installed in
Visual Studio or VS Code. It is, how do I learn
this tool? How do I become an expert at this
tool to help me be more productive? As little or
as much as you want to use it, you know, it depends.

Speaker 1 (55:51):
You know.

Speaker 4 (55:51):
It's just a slider bar, that's all it is. And
to me, it's, am I saving a minute, an hour,
a day? All of those are important. If
I can shave a few minutes, that's great.

Speaker 2 (56:03):
It's another choice that you make before you sit down
and write a line of code. It's like, all right,
here's what I have to do. Hmm, can AI
help me out here?

Speaker 4 (56:12):
Yeah?

Speaker 1 (56:13):
Can you know is.

Speaker 2 (56:14):
It going to be faster for me to do it
myself because I know, or is it going to be
faster for me to create a prompt and blah blah
blah and explain things and have the AI do it.

Speaker 4 (56:24):
For example, if I want to rename files or move
things around or actually change a namespace, Visual Studio
is very good at that, and it's very proficient. Right?
If you move a folder from Models to Models.People
or whatever, you need to update the namespaces;
it'll just do it for you. You could have prompted the LLM.

(56:45):
Now you're just burning tokens for no reason. Let the tools,
let the tools be awesome at what the tools are good at,
right? VS Code and Visual Studio are awesome at so
many things without AI, right, just
inherently, the static analysis. Let them be awesome at that,
and then use the model for the harder problems to
solve as well.

Speaker 1 (57:03):
Or just nuisance things.

Speaker 2 (57:05):
Right. So I had, I don't know, a list of
twenty properties that I had to boolean-check, and
if the properties were true, then I had to
express some HTML, right? And I had to do that
for like twenty properties, but they weren't in alphabetical order,
so you'd end up with a list with, you know,
names of things all over the place. And I just,
you know, told the, I think it was ChatGPT,

(57:26):
just said, can you alphabetize this for me?
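
As a rough illustration of the pattern Carl is describing, here is a minimal C# sketch, with hypothetical property names, that checks each boolean property and emits a line of HTML for the ones that are true, alphabetized by name:

using System.Linq;
using System.Reflection;
using System.Text;

public class FeatureFlags
{
    // Hypothetical flags standing in for the "twenty properties" in the story.
    public bool ShowsHistory { get; set; }
    public bool AllowsComments { get; set; }
    public bool SupportsExport { get; set; }
}

public static class FlagRenderer
{
    // Emit a <li> for every bool property that is true, in alphabetical order.
    public static string Render(FeatureFlags flags)
    {
        var sb = new StringBuilder();
        var boolProps = typeof(FeatureFlags)
            .GetProperties(BindingFlags.Public | BindingFlags.Instance)
            .Where(p => p.PropertyType == typeof(bool))
            .OrderBy(p => p.Name);   // the "alphabetize this for me" step

        foreach (var prop in boolProps)
        {
            if (prop.GetValue(flags) is true)
                sb.AppendLine($"<li>{prop.Name}</li>");
        }
        return sb.ToString();
    }
}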

Speaker 1 (57:28):
And yeah, no problem.

Speaker 4 (57:29):
Copy, paste. On the VS Code website, we have these Dev
Days going on, these community events. All the data is
in like a CSV file. So I go into agent
mode and I say, hey, I need to add, and
then inside the VS Code website there's a bunch of, like,
you know, different blobs, different data, different things, and I say, hey,
I need to add these five events. Just go update
the site with these five events, and go update in

(57:50):
these places, and it'll take it. It's like, this is CSV data.
It knows how to transform that and just put it in.
And then we also have locations, latitude and longitude.
So I'm like, also go figure out the latitude
and longitude. It knows the latitude and longitude of, you know, Portland, Oregon,
for example. Right, it can figure it out, and of
course I review it to make sure it's correct. It's
not one hundred percent every time. And that's another part,

(58:12):
an important part, right: to be clear, it doesn't know
it, it looked it up. It looked it up, and it
may have looked it up incorrectly.

Speaker 1 (58:17):
That's it. Yeah, I like it.

Speaker 4 (58:19):
It does it exactly exactly. It's all the context that
it has.

Speaker 2 (58:24):
They all have brain damage. You got to pick your
brain damage basically.

Speaker 1 (58:27):
Well, let's start with, they don't have brains. Oh, come on, Richard,
lighten up. It's just software. It is just software.

Speaker 2 (58:34):
But it's easier for humans to engage with something when
they treat it like another human. That's what we do
with our dogs and our pets and our cars and everything.

Speaker 4 (58:44):
I think it's a, it's a big database. I think
it's a big database that it's querying, right. But
what it does is that it can query that database,
kind of like you're saying, Richard. It's just a
big database of stuff that it's pulling from.

Speaker 1 (58:56):
But what it does is it.

Speaker 4 (58:58):
Summarizes things in a way that I can parse. Like,
I was just looking at an example, an example out of the blue.
Our dog has diabetes and we have to give
her shots every day. The vet accidentally gave us
U-100 needles; we're supposed to get U-40 needles.
I actually had no idea what that means. Are they
the same? The units look okay, but they're different. Are they
the same? So I just went into Copilot on

(59:18):
my, the Microsoft Copilot, and I said, just like you're
going to ChatGPT, I said, what's the difference between
U-40 and U-100 needles? And give me an
analysis of what this means and if there are conversions
or anything. And it gave me tables, it gave me graphs,
and it gave me warnings, like, these are not the same.
You should not be using these interchangeably. No, it's two
and a half times more. Now, I could do all
of that research, all of that research. All that data

(59:39):
exists on the internet, and if I went to Google
or went to Bing, right, they'd give me that. But
it gave me this nice thing that I could then share,
I could keep, I could go back to, I now understand it.
That's all these things are: big
databases of information, and it knows how to nicely summarize
things for us in human form, in a nice way
that I can understand, or I can tell it
to display it in a different way.

Speaker 2 (59:59):
Right, here's a, here's an issue, though, right. A
lot of people are using these AI things to
gather facts, and then they're satisfied with the fact
that the facts look okay. And so, you know, we're
not going to double-check, because that's why we
asked it in the first place, because we didn't
want to go to Google and go to different resource

(01:00:21):
places and look things up. So I think that people
are doing that; they're just, you know, asking for facts,
taking the facts as they are, not checking them, and
using them. I don't think that's right.

Speaker 4 (01:00:33):
No, I agree. I think it's the
same thing with coding, right. Like, the code that's in
the flow, I'm reviewing, I'm understanding, I'm doing that.
When I say I'm, you know, just in the flow,
I'm not just accepting the stuff.
I'm reviewing the stuff, I'm reviewing what it's generating, right? I'm
not having it auto-commit.

Speaker 1 (01:00:50):
Right.

Speaker 2 (01:00:50):
But in this case, you didn't know the difference between
U-40 and U-100, right, so you couldn't know.

Speaker 1 (01:00:56):
Could not know whether it was accurate or not.

Speaker 4 (01:00:58):
Now, that being said, like you said, it's, funnily enough,
just like going to Google or Bing and then opening
a bunch of links. How much
do you trust those sources as well?

Speaker 1 (01:01:09):
Right?

Speaker 4 (01:01:10):
So, the one thing that I do like about, and
how I use, AI. Actually, surprisingly, I use
very little of the AI chatbots. I don't use ChatGPT. I
use Copilot a little bit for some research. It's not
really my thing; I haven't gotten into that flow of
how I want to query the internet for this data.
But I will say that the one thing that I

(01:01:31):
do, and how I use it, is it gives me
the links of where it got the research.

Speaker 1 (01:01:34):
I always double-check. But I agree with you.

Speaker 4 (01:01:36):
I don't think people are doing that. And I think,
how do you bubble that up so people know, at
least at a high level, where did this come from? You know,
CNN.com? Did this come from the

Speaker 2 (01:01:45):
Me, I want at least three sources.

Speaker 1 (01:01:47):
Right?

Speaker 2 (01:01:47):
And if any one of those three sources is different,
now we have a problem.

Speaker 4 (01:01:52):
Yeah, and you can always ask it to double-check itself.
I had it come up with some numbers. We're celebrating
some number, like fifty million Visual Studio users. It's like,
give me some fun stats about fifty million. It just
came up with something, and I was like, yeah, but
is that really true? And it's like, yeah, some of
them weren't. I was like, okay, you know, ask it,
like, are you sure that that is right? Is that correct?

(01:02:13):
It's like, well, I was just trying to be fun.

Speaker 1 (01:02:14):
Like okay, well, I was asking the crack type.

Speaker 4 (01:02:16):
I was like, I'm being serious here, like, let's go in,
you know, on this. So I think it is, like
with anything, yeah, any new tool, you know, making sure
that you are reviewing all of the input and
output that you have. Right, you can give it
bad input and get bad output. You can give it
good input and get bad output. You can give it good
input and get good output. But always having that human

(01:02:38):
in the loop is super important, and reviewing everything.
Right, but it's also the same, right: if I
walk down the road and I talk to five people
and ask them about something, I might get five different answers, right?
Which one's correct? I don't know. What are their sources?
So that's how you look at it.

Speaker 2 (01:02:53):
It makes a good case for us as developers, because
we are the ones with the knowledge, and we can
see whether the code that it generates is right
or wrong, or efficient or inefficient. So it really, really,
really works. AI really works for us. It's just in
that whole fact-based milieu that I think problems arise,

(01:03:15):
and then it'll get better.

Speaker 1 (01:03:16):
Well, and you know, the code it's generating is,
in theory, a set of facts about executing. It's just
that there's an easy way to validate it. Yeah. But
I think this behavior of, you know, building a validation
strategy for every output is just going to be
part of the reflex, the same way.

Speaker 2 (01:03:33):
Yeah, And I just don't think many people are going
to do it.

Speaker 1 (01:03:35):
Well, they will eventually. We all got better at searching
the internet too. Sure, right, like in the early days it
was worse. And one would argue the peak has come
and gone and it's getting worse again. You know,
we're back to this mechanism of having to validate more
often and more completely, and from multiple sources. You know,
the behaviors will change over time.

Speaker 2 (01:03:57):
Well, this has been a fascinating dive into more AI,
especially with Visual Studio Code and Visual Studio. James,
thank you very much. It's always a pleasure to talk
to you, my friend.

Speaker 1 (01:04:06):
Thanks for having me. Entirely too long since the last time. Yeah, anytime.

Speaker 4 (01:04:10):
I'm sure if you have me back in two months,
everything will be completely.

Speaker 1 (01:04:12):
Everything will be different, all right.

Speaker 2 (01:04:16):
And check out coded with AI dot com and we'll
talk to you next time on.

Speaker 1 (01:04:20):
Dot net rocks.

Speaker 2 (01:04:41):
Dot net Rocks is brought to you by Franklins.Net
and produced by PWOP Studios, a full-service audio, video,
and post-production facility located physically in New London, Connecticut,
and of course in the cloud, online at pwop dot com.
Visit our website at dotnetrocks.com for RSS feeds, downloads,

(01:05:04):
mobile apps, comments, and access to the full archives going
back to show number one, recorded in September two.

Speaker 1 (01:05:11):
Thousand and two.

Speaker 2 (01:05:12):
And make sure you check out our sponsors. They keep
us in business. Now go write some code, see you
next time.
