
July 10, 2025 • 61 mins
How can AI tech help you write better code? Carl and Richard talk to Mark Miller about the latest AI features coming in CodeRush. Mark talks about focusing on a fast and cost-effective AI assistant driven by voice, so you don't have to switch to a different window and type. The conversation delves into the rapid evolution of software development, utilizing AI technologies to accomplish more in less time.

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
How'd you like to listen to .NET Rocks with
no ads?

Speaker 2 (00:04):
Easy?

Speaker 1 (00:05):
Become a patron! For just five dollars a month you
get access to a private RSS feed where all the
shows have no ads. Twenty dollars a month will get
you that and a special .NET Rocks patron mug.
Sign up now at patreon.dotnetrocks.com.

(00:34):
Hey, guess what? It's .NET Rocks, in insane heat.

Speaker 2 (00:37):
Version in.

Speaker 1 (00:40):
A little warm where you are, brother? One hundred and
two degrees Fahrenheit today.

Speaker 2 (00:43):
That's not funny, that's dangerous. You be careful. It
is dangerous. Thank God for air conditioning. Yeah, no kidding. Yeah.

Speaker 1 (00:50):
I've told neighbors who don't have it: feel free
to come over and cool off.

Speaker 2 (00:54):
Yeah no, that's smart. Just get an hour cooling down? Right? Yeah?

Speaker 1 (00:58):
Hey, Mark Miller's here. He's got some really good
stuff to talk to us about. But you know, we
all seem to be on the agentic AI bandwagon, or
at least that's the topic today, because that's where my
Better Know a Framework is going.

Speaker 2 (01:15):
Roll the crazy music. All right, man, tell me all
about your agentic AI coding experience.

Speaker 1 (01:29):
Yeah, all right, all right. Well, I haven't done a
Blazor Train in a while, and I figured this was
the best time. So, you know, at Build we saw all
sorts of stuff about the new features for developers that
Microsoft is bringing out, right?

Speaker 2 (01:44):
Hunter seth w O'Brian, you know, was don't stop.

Speaker 1 (01:51):
And even the keynote kind of whet our appetite a
little bit. But you know, our friends talk to us
in terms of developers, which is great, and we all agree.
I think that AI is a very real and helpful
and safe thing for developers to use. You know, we
have to be careful about safety, but for the most part,

(02:14):
the tools that I've been using have been pretty good.
So I came home from Build and I immediately started
messing around with this stuff. And I started working with
the agent mode in Visual Studio, and that didn't work
out so well, only because it kind of got hung
up on files, couldn't access files that it had written

(02:34):
and all that kind of stuff. But it was
able to generate some good stuff anyway. Then I'm
at DevSum in Stockholm, at the rooftop bar of the
hotel, and Hunter is there and I'm just kind of
picking his brain, and he's like, oh no, you've got
to use the GitHub Copilot coding agent in GitHub, right,

(02:59):
that's where it's all at. I just did this demo.

Speaker 2 (03:01):
That's where the action is. Yeah. Right.

Speaker 1 (03:03):
So I was like, okay, here we go. I went
home and I started playing with that, and I agree
with Scott, this is really, really good. And
so the whole idea is that Copilot is
a partner in the repos that you give
it access to, and you can assign issues to it

(03:25):
and it will go out and figure out those issues,
and you can view the logs and view the session
and see what it's doing. And then first it creates
a sort of draft pull request, and then it
creates commits, and you can view those commits, you know,
in Visual Studio, and test them out, and if it
didn't do something you liked, you can just add a

(03:47):
comment and it'll go off and fix it. And
then of course, when you're done, you can merge
the pull request. So I thought this was just absolutely brilliant.
So I did a Blazor Train about it, episode one
oh seven, GitHub Copilot Coding Agent, and we'll have
a link to that. And also, the first thing I
did was I documented my experiences using it to create

(04:12):
a whole Blazor Server forms-over-data application using the
Northwind database. Cool. And I used SQLite, because
you know, not everybody uses Windows. Yeah. And in
the video, the one I did, I
kind of got stuck in the code at the end,
but if you look at the results that I have

(04:33):
in the actual repo, which we'll have a link to also,
it was much better and didn't take, you know, many
iterations to fix the bugs and stuff.

Speaker 2 (04:41):
But I do like those fighting-through moments. Those, I
think, are really useful teaching moments too. It's like,
when the software is not behaving the way you want,
how do you push on it to get results?

Speaker 1 (04:52):
Absolutely. And along the way, I learned about the system
prompt MD file that you can put in your repo,
where you tell it sort of all the general stuff,
how you want things to be organized,
the coding standards and.

Speaker 2 (05:07):
All that stuff.

Speaker 1 (05:08):
Yeah, overall it was really good. And so there's a third part:
I've created a four-hour webinar called A Survey of
AI Tools for .NET Development, which goes beyond just
the Microsoft tools and looks at the best-of-breed
AI tools across all platforms and services. So that's

(05:30):
like a survey of these things. It'll include GitHub and
all of that stuff. And guess what, it's even going
to include CodeRush, which I learned about today from Mark.
He's going to be talking about that. But that link
is in the show notes as well, and also linked
in the repo, which is in the show notes. So

(05:51):
that's all the stuff that I have. Richard, who's talking

Speaker 2 (05:53):
To us today, grabbed a common top of Show nineteen
fifty four, the one we did I had to build
with w O'Brien where we talked about the play right MCP,
which definitely one of those moments where like, oh man,
we can get up to stuff with this. Yeah. Yeah,
and John MacArthur had this great comedy said, listening to
this and talk about copilot, I love using it for
my get comments. It does such a good job of them,

(06:14):
and it's forcing me to get better. Oh no, upon
help me, help me no better by checking in more often,
because I don't want lots of unrelated changes grouped together.
See yeah, Because the tool actually remembers everything you did
and spits it all back to you, so you have
to be more organized and unlike when my handwritten comment
was fixed stuff and things. Yeah, yeah, seriously, the generated

(06:40):
comments are brilliant. Yeah, I gotta say, John, like, there's
more that can be done with these tools, Like I
appreciate your sentiment and certainly I like those parts too.
But keep going down the path and we're just getting started. Yeah,
it's early days, isn't it. Yeah, and John, thank you
so much. Your comment and copy of music code By
is on its way to you. And if you'd like
a copy of music codbe I write a comment on
the website at dot netrock com or on the facebooks

(07:01):
we publish every show there. Any of you comment there
and I read it on the show, we'll send you
copy music. O.

Speaker 1 (07:05):
I just sold another FLAC collection of Music to Code By.
We're up to twenty-two tracks now, and you can
get them in MP3, FLAC, and WAV at
musictocodeby.net. All right, before we bring on
Mister Miller, let's talk about nineteen fifty-eight, because that's
our show number.

Speaker 2 (07:19):
Nineteen fifty-eight. A very good year in some respects.
I mean, you know.

Speaker 1 (07:23):
Oh yeah. So where do you want to go with it? Well,
let's talk about international developments. The European Economic Community was formed,
which is the precursor to the EU. The
West Indies Federation was formed, the Soviet-American Exchange Agreement.
Can you imagine? Yeah. Brazil won the World Cup, defeating

(07:44):
Sweden five to two to win their first World Cup title.
The first undersea voyage to the North Pole by the
USS Nautilus. Hey, I know about that submarine.

Speaker 2 (07:54):
Yeah, that's that's your part of the world.

Speaker 3 (07:56):
Yeah.

Speaker 1 (07:56):
It was built in my hometown. Yeah, and I'm sure
my father probably had something to do with that because
he was in planning, probably in the United States.

Speaker 2 (08:05):
NASA was established on July twenty-ninth. Yeah, the International
Geophysical Year. Yeah, established with NASA, the IGY, the bunch of satellites.
IGY is also the first track on Donald Fagen's excellent
The Nightfly, which is all about fifties fantasies about the future.
Sure. Explorer 1 was launched on January thirty-first and detected

(08:28):
the Van Allen belts. You know all these things. It's
so awesome. I do. The Federal Aviation Act. If you want
to just forget school, just go have a cup of
tea or a glass of whiskey with Richard. You don't need school.

Speaker 3 (08:40):
Or if you've been in school and you want to
feel like you've never been in school, do the.

Speaker 2 (08:45):
Same, right, listen to a geek out.

Speaker 1 (08:50):
The Federal Aviation Act was passed, which later created the
FAA, the Federal Aviation Agency, giving it authority over all aspects of
aviation in the US. Elvis was inducted into the Army
on March twenty-fourth. The B-47 nuclear weapon
loss incident: a B-47 bomber accidentally dropped an

(09:13):
atomic bomb on Mars Bluff, South Carolina.

Speaker 2 (09:15):
Oops. Well, that's the one that went off, but it
had no nuclear core. That same year they lost one
out of a B-47 and have never found it.
Oh my god. Somewhere.

Speaker 1 (09:27):
Some other things: the first radio broadcast from space. President
Dwight D. Eisenhower sent a Christmas message from space.

Speaker 2 (09:33):
Yeah, first communication satellite too.

Speaker 1 (09:35):
Yeah. Anything that you want to add in terms of
computers or technology.

Speaker 2 (09:41):
Well, let me do one non-computer one, and I'll
do a couple computer ones. The non-computer one, and
this is for my friend Mark, is the first regular
transatlantic service of the Boeing 707, really the
beginning of the jet age. The prototype flew at
the end of fifty-seven, but it was already in
service in fifty-eight with Pan Am, the beginning

(10:01):
of what they called the jet age. It wasn't the
first jet airliner, there were ones before that, but
the 707, and that form factor, still
flies today. That four-engine rig, that's the KC-135.
Like, that plane's never going away, it seems,
which is a good one.

Speaker 1 (10:16):
It was also mentioned in the song Jet Airliner by Steve.

Speaker 2 (10:19):
Miller Band. That's right. They get on the 707,
get on the 707. There are lots of variants
of it, too. You know, the original form of that
aircraft was quite narrow, a two-by-two seating,
and then they widened it to a two-by-three
seating, and then they widened it again into a
three-by-three seating. Programming-wise, the remarkable, extraordinary John

(10:40):
McCarthy at MIT developed the LISP programming language in nineteen
fifty-eight. It wouldn't get deployed for another couple of
years, because we barely had computers at that point. LISP is
short for list processing. But yeah, he had defined
the language already at that point, when you barely had a
functional computer. Interestingly.

Speaker 1 (10:58):
The functional language that was used a lot in was
it used in AI at the time or what they
called AI.

Speaker 2 (11:07):
Yeah, the term AI had already been coined at that point.
McCarthy or Minsky coined the term.

Speaker 1 (11:14):
But I don't know what they meant by that back then.
Was it machine learning kind of thing or.

Speaker 2 (11:18):
No, it was just, that was more decision-tree modeling.
What Minsky got financing from the military for was
to do resource allocation and management for the US military,
which, by the way, worked perfectly.

Speaker 1 (11:31):
So expert systems that kind of thing.

Speaker 2 (11:33):
No, just resource management, okay, like decision-tree resource management.
How do you move a battalion? What
do you got to move first? What those pieces
look like based on the aircraft you've got, the ships
you're using, that kind of thing. Okay. One more computer
one: the RCA 501, which was one of
the very first computers, in nineteen fifty-eight, to use

(11:54):
transistors instead of vacuum tubes. No ICs, it's all
discrete electronics at this point, right? We haven't got ICs yet,
but we are beginning down the transistorization of computing. Wow.

Speaker 1 (12:04):
You know, it just occurs to me, when we have
a show number that's the same year as the year
we recorded in, the world's gonna explode.

Speaker 2 (12:12):
I know, I'm with you. It's I think we're doing
two thousand and two is going to be a problem.

Speaker 1 (12:16):
Yeah, all right. Well, let's introduce Mark Miller. He's been
on the show many times. He is fondly remembered in
.NET Rocks history, of course, through many hilarious bits,
and also just in early .NET Rocks he's been
an innovator in many, many ways. He's a multi-year

(12:39):
C# Microsoft MVP and a leading expert on
user interface design. He's the chief architect of the IDE
Tools division at Developer Express, and he streams live C#
coding and design on twitch.tv/coderushed.
Mark has been creating tools for software developers for over

(13:01):
four decades. He was also the guest
on the first show that I was a co-host on,
show one oh one.

Speaker 3 (13:09):
Wow.

Speaker 2 (13:09):
Really that's cool.

Speaker 3 (13:11):
Yeah, I did not know that. I feel like I
must have. I feel like after the show, Carl must
have got a call from Richard. Richard's like, there's no way
I'm doing this, never again.

Speaker 2 (13:23):
I think I got off that show and
said, only fifty shows. Yeah, that's right. I mean, how
hard could it be? Fifty shows.

Speaker 1 (13:30):
Here we are, the longest-running .NET podcast ever,
and probably one of the longest-running podcasts ever that's
still going.

Speaker 2 (13:39):
Yeah. Probably, Yeah, it's hard to know.

Speaker 3 (13:41):
That's interesting. Yeah, that's right.

Speaker 1 (13:43):
We are the OG podcast, for better or worse.

Speaker 3 (13:45):
Congratulations, guys. Why don't we switch it up?
I'll interview both of you and we'll talk about that.

Speaker 2 (13:51):
Hey, all right, I don't think dang.

Speaker 1 (13:54):
Okay. So this morning, Mark, morning my time, you gave
me a demo of your new CodeRush tools that
use AI and speech recognition, and, yeah, wow, I was
just blown away. So maybe you can tell everybody about
this. Give us the elevator pitch.

Speaker 3 (14:15):
Okay, sure. It's essentially, it's kind of as if
AI was a suit of armor for
you as a developer. It is unlike the agent mode
of Copilot, where it kind of goes off in space.
You kind of give it some boundaries and say, here,
this is the issue I want you to go after.

(14:35):
The tools that we have in CodeRush are really
much closer to you as a developer. It's
kind of like, you know, leveling up in a game
or something along those lines. Your power and speed increase.
The essence of the tooling is making changes to code,

(14:57):
creating new code, and also doing essentially semantic searches through code.
Those are the areas, so that you can
use the power of AI to essentially offload a lot
of cognitive effort, right? And so anytime that I've got
something that is tedious, or would otherwise be tedious to do,

(15:19):
or if I have something that maybe I'm not
sure how to do, right, both of these are great,
great places for AI to come step in. And
what makes it feel like a suit of
armor, I think, is the ease of

(15:40):
use, the user interface, right? By going after this
in my usual approach to creating any kind
of tooling, you know, looking for any kind of frictional force,
any kind of bump, and trying to smooth out every bump,
we've gotten to this place where you can essentially do

(16:05):
what they're calling, you know, vibe coding, to some degree, right?
But you can do it as I need it,
right when I need it. I can say, okay, you know,
as an example, I think when I showed Carl this morning,
I said something like, let's create a new class called
Customer with a first name, last name, an email address,
a street address, and a birth date. And I can

(16:27):
just do that. I don't even have to say the
names of the types. I can just say, here's what
I want to do. And if I want to specify things,
I can, I can go into more detail. But if
I don't, and I often don't, and this is what
really lowers the cognitive load, right, as I'm using the tooling,
I say, hey, can we fix this? Or hey, can
you add the method that's missing? Right? And

(16:48):
I love talking in clues to AI, because it often
figures it out and puts in the thing that I need,
whatever it is, and then I can just go from there.
I can start writing the code or doing what I
need to do.
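A spoken request like the one Mark describes might produce something like the following; this is a hypothetical sketch of the generated class (the actual output depends on the model and the project context), but it shows how the types can be inferred from the property names alone:

```csharp
using System;

// Hypothetical sketch of the class a request like "create a new class
// called Customer with a first name, last name, an email address, a
// street address, and a birth date" might produce. Note the types were
// never spoken; DateTime for BirthDate is inferred from the name.
public class Customer
{
    public string FirstName { get; set; } = "";
    public string LastName { get; set; } = "";
    public string EmailAddress { get; set; } = "";
    public string StreetAddress { get; set; } = "";
    public DateTime BirthDate { get; set; }
}
```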

Speaker 2 (17:01):
You know, almost exactly one hundred shows ago, eighteen fifty,
we were talking to you about OpenAI. This is
in twenty twenty-three, and you hinted at this whole
idea of, hey, I talk about my code when I
code anyway. Yeah. Like, why wouldn't I be saying it?
Wouldn't it be nice if it was listening? This
sounds like you just built what you were talking about.

Speaker 1 (17:21):
Yeah, this is when we were upstairs at the guard,
you and me, Mark, sitting across from each other in
the attic there where we had the booths, back in
twenty twenty-three. But I wanted to
say that one of the first impressions that I
got was how out of the way the speech thing
was. Like, I'm always nervous that I'll hit the

(17:41):
wrong key and then it starts listening, and it's going
to create a bunch of garbage. But you've got it
so that, with the right Control key by default, you tap
it once and then you hold it down and speak, right?
It's not like you can just hold down Control, because,
you know what, sometimes you need it as a Control key.
You want to hold down Control and C or
V. So I really appreciate the fact that you

(18:03):
came up with an action that's deliberate, that you
would not do otherwise. Yeah. As opposed to, you know,
the agent that, when you type a few characters,
says, hey, how about this, and a page of grayed-out
code?

Speaker 3 (18:19):
You know, yeah, no, I think we have solved a
lot of problems that would otherwise be blocking forces to
making this vision come true. Right. And, you know,
what you just said about the keyboard:
this feature has
been in development, and I've been using it actively for probably
at least seven months, I want to say. Yeah. And

(18:41):
so I've forgotten how kind of unique that solution is. Yeah.
And it wasn't until you just mentioned that, I was like, oh, right, yeah,
we've made this. There's a couple of things that happen, right?
But basically we said, look, on programmer keyboards there's
no push-to-talk button, right? Yeah, weird. So we
have to come up with something, and we can't make it be

(19:02):
something where you have to reach for the mouse, and we
can't make it something where you have to hold
down Control and yet another key. We can't. It's
got to be effortless to get in.

Speaker 1 (19:11):
I thought it was a brilliant solution.

Speaker 3 (19:13):
Well, thanks. Yeah. And so that's part
of what we're doing: we're making things effortless, right?
And there are actually about four different kind of cross-section
criteria for success that I think we've done really well
in, where we've kind of leapfrogged other existing technology,
and one of them is ease of use. On the
ease of use side of things, it is effortless, and

(19:37):
it is also fast. Yes. Right, when you
say, I want to do this, and then you release
the Control key because you're done talking, on average, our
voice-to-text conversion time is less than
half a second. So within half a second of
that release, we know what you're asking for, and at

(19:58):
that point we throw it up to OpenAI, right?
And we also include with that a lot of context
and a lot of detail that OpenAI can use
to figure out what you mean.

Speaker 1 (20:11):
Let's talk about context. Okay, sure. Yeah. So one of
the things I like about ChatGPT, and I pay
for the premium version, is that it remembers
a lot about, you know, it knows that I primarily
work in Blazor. So therefore, when I say
I've got this new application and I've got this file here,
it knows it's a Blazor app, unless I tell it otherwise. Yeah.

(20:33):
So it has a history, and a lot of these agents,
and I know the Microsoft Copilot agent, do not have
this memory, and I think you said that CodeRush
doesn't either. But it does understand the context of your application, right?

Speaker 3 (20:50):
Yeah, yes, it does. So, just briefly, no history yet,
it's stateless right now, right? But we're leaning in a direction,
we're considering a move towards essentially a summarized history,
in case you want to reference something you've done before.

Speaker 1 (21:08):
But that takes more tokens and costs more money and
blah blah, so it would have to be an option, right?

Speaker 3 (21:13):
Well, yeah, well, that's why I'm saying summarized. In other words,
really a brief history, but so that we can
get context. So I just want to say
that this version is coming out, effectively, I think, tomorrow
from our recording date, so it should already be out by
the time you hear this. This version of CodeRush

(21:33):
is what I call one point zero, but there is
already a strongly visualized two point zero version of this
that we are going to commence work on essentially tomorrow,
that is going to take things to a whole new
level of, you know, if you
want to call it the mind-blowing scale, we're going
to go from this version, I call this a five,

(21:54):
and we're going to go to a ten. Yeah, you know,
we're going to get to this killer level and we're
going to solve this. But so, briefly, the history: right now
it's stateless. Yeah, okay. But with regards to context, the
context is rich. We send, for example, a summary of
what's going on with the project, what your NuGet

(22:14):
packages are, that sort of thing. We send the current file,
we send the code-behind or the designer if you're in
one of those two files, right? You can also do
things like, say, if you mention the word clipboard, there's
something called triggered prompts, and it detects that
word, clipboard, and then adds an additional prompt that

(22:34):
sets out and says, the contents of my clipboard are,
and shows what they are.

Speaker 1 (22:37):
And I like that because it's contextual. Like, a system prompt
even in GitHub Copilot is copilot-instructions.md,
and I think you put it in a .github folder,
if my memory serves. And it's general, it gets sent
with every single, you know, user prompt. Yeah. But I like
the way that you have broken these out into contextual prompts

(22:59):
based on keywords and things.
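For reference, the file Carl is describing is plain Markdown at `.github/copilot-instructions.md` in the repo root, and its whole contents are sent along with every request. A minimal hypothetical example (the app type, folder names, and rules here are made up for illustration):

```markdown
# Copilot instructions (hypothetical example)

- This is a Blazor Server app targeting .NET 8.
- Follow the existing folder layout: models in /Models, services in /Services.
- Use async/await for all data access; never block on .Result or .Wait().
- Match the C# naming and formatting conventions already in the repo.
```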

Speaker 3 (23:00):
Yeah. And, by the way, we did
work on a feature that allowed me to, like, hold
down the Alt key and say the name of a
class and then release it. But we are not documenting that,
we're essentially not going forward with that feature.
We're instead going forward with: you can talk about
any class you want and we'll figure it out. That's

(23:23):
not now, that'll be in version two. Nice. So
in version two, AI will
be able to effortlessly infer what you mean, what you're talking about,
which will impact the context that we submit to AI.
So we're going to solve all this in the
two point zero. I guess I shouldn't be talking about

(23:45):
this so much. But what we do now is
a relatively rich context. So if I want to work
with another file, and I want to work with it
in this file, like, say, for example, I'm in a XAML
designer and I want to create a data grid, and I
want to do data binding on a particular control class,
I can copy that class to the clipboard and then say, hey,
I want to create a data grid here, and I

(24:07):
want it to have instances of this Customer class, and I
want, in the code-behind, for you to create fifty
sample records that have realistic-looking sample data, so
I can try this out and work on the UI.
And then that's my prompt.
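The code-behind such a request might generate could look roughly like this hand-written sketch (the Customer shape, names, and data here are illustrative, not CodeRush output):

```csharp
using System;
using System.Collections.Generic;

// Illustrative Customer shape the sample data binds against.
public class Customer
{
    public string FirstName { get; set; } = "";
    public string LastName { get; set; } = "";
    public string EmailAddress { get; set; } = "";
    public DateTime BirthDate { get; set; }
}

// Hypothetical sketch of generated fifty-record sample data
// for trying out a data grid UI.
public static class SampleData
{
    public static List<Customer> BuildSampleCustomers()
    {
        var firstNames = new[] { "Ava", "Liam", "Noah", "Mia", "Ethan" };
        var lastNames = new[] { "Reyes", "Kim", "Okafor", "Schmidt", "Larsen" };
        var rng = new Random(42); // fixed seed so the sample data is stable
        var list = new List<Customer>();
        for (int i = 0; i < 50; i++)
        {
            list.Add(new Customer
            {
                FirstName = firstNames[rng.Next(firstNames.Length)],
                LastName = lastNames[rng.Next(lastNames.Length)],
                EmailAddress = $"user{i}@example.com",
                BirthDate = DateTime.Today.AddYears(-20).AddDays(-rng.Next(10000)),
            });
        }
        return list;
    }
}
```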

Speaker 1 (24:23):
I saw this myself, that you did that.
And I want to talk about the speed too, because
in the demo that I did on Blazor Train, I
basically created these long prompts. I would create one
prompt to create a data manager for every model in
the Models folder, of which there were like twelve or thirteen, right?
I did it against Northwind, and I said, create

(24:45):
a data manager, and I want CRUD methods for each
one of these using ADO.NET objects, and blah
blah blah, try/catch, and here are some return class types, right? Success,
failure, the data, and then error messages. That kind of
simple thing. Yeah. And it basically took like fifteen
minutes for it to do that and come back and say, yeah,

(25:06):
here's your, you know, here's the change set
that I'm going to check in, or that you
can check out. But I like the way that CodeRush
works, because if I was going to do that,
I would create the DataManager class skeleton, or
I would tell it to, and then I would say, hey,
let's create CRUD methods for the album or the artist

(25:27):
or the customer or whatever, and just inspect those, because
that would take like no time at all for CodeRush
to do, I have a feeling. Yeah. And then
from there I would tweak that and say, okay,
now let's duplicate this for all of the other models. Sure.
Which is something I wouldn't have had a chance to
do, because I used one big prompt like that.
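Carl's "return class types" idea, success/failure, the data, and an error message, can be sketched as a small generic result envelope. This is a hypothetical hand-written example of the shape his prompt asked for, not the generated code itself:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical result envelope each generated CRUD method would return:
// a success flag, the data on success, and an error message on failure.
public class DataResult<T>
{
    public bool Success { get; init; }
    public T? Data { get; init; }
    public string? ErrorMessage { get; init; }

    public static DataResult<T> Ok(T data) =>
        new() { Success = true, Data = data };

    public static DataResult<T> Fail(string message) =>
        new() { Success = false, ErrorMessage = message };
}

// A generated method would then look roughly like:
// public DataResult<List<Customer>> GetAllCustomers()
// {
//     try { /* ADO.NET query here */ return DataResult<List<Customer>>.Ok(rows); }
//     catch (Exception ex) { return DataResult<List<Customer>>.Fail(ex.Message); }
// }
```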

Speaker 2 (25:47):
Right. Yeah.

Speaker 3 (25:48):
I actually saw an example today where I was in
one class that had implemented INotifyPropertyChanged. I
asked for a new class, just saying, here are the properties
I need, and because I started in a class that
had this style, it followed the same style
in the second class, which is interesting.
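The style Mark is describing is the standard INotifyPropertyChanged pattern; a minimal sketch of what the tool would be matching (the class and property names are illustrative):

```csharp
using System.ComponentModel;
using System.Runtime.CompilerServices;

// Minimal INotifyPropertyChanged pattern: each setter raises
// PropertyChanged so bound UI (e.g. a XAML grid) updates.
public class CustomerViewModel : INotifyPropertyChanged
{
    private string _firstName = "";

    public string FirstName
    {
        get => _firstName;
        set { _firstName = value; OnPropertyChanged(); }
    }

    public event PropertyChangedEventHandler? PropertyChanged;

    // CallerMemberName fills in the property name automatically.
    protected void OnPropertyChanged([CallerMemberName] string? name = null) =>
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(name));
}
```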

Speaker 2 (26:04):
Nice.

Speaker 3 (26:05):
But with regards to speed, we have totally solved the
speed problem, like, really, really solved it well. If you
compare the CodeRush AI gen feature against leading competing AI tools,
it can be up to ten or one hundred times
faster than the other tools, depending on what you're doing

(26:25):
and what's going on. And at the very
least it is a little bit faster, right? It's about
five percent faster, if anything.

Speaker 1 (26:33):
Else, so five to one hundred times faster, five percent
to one hundred times.

Speaker 3 (26:38):
No, it's like five to ten thousand percent faster. It
depends on what you're looking at and what you're doing.
But in some cases, and basically, I guess
the secret, the big reveal there, is that the leading tools
that are out there have not addressed performance in terms
of the integration of AI results with code. They've not.

(27:02):
It seems pretty clear to me at this point that
in the current versions that I'm looking at, that
is not a concern. That's not something they've looked at,
or not even something that they've conceived of, that they
could actually do faster. Wow. And that's, you know,
when I was talking about these different criteria, these different
cross-sections for how you evaluate something.

Speaker 1 (27:21):
Speed is one of those. Yes, yeah.

Speaker 3 (27:24):
And why? I'll give everybody the big
secret here. I'll tell you what the secret sauce is,
how we're able to do it: we make
it so that when AI comes back, it doesn't have
to be flowery. It doesn't have to give me the
whole class. It can just give me the changes. And
because it's just giving me the changes, I'm sending back

(27:44):
fewer tokens, which means it's cheaper, it's faster, and it
has a lower environmental impact.

Speaker 1 (27:52):
Because your code rush, you know exactly where the code
should go when it comes back. Yeah, so you do
that part of it.

Speaker 3 (27:58):
Yeah, we do all that part.

Speaker 1 (27:59):
So, yeah, you do everything from Visual Studio.

Speaker 3 (28:02):
Yeah. So the agent technology that's inside CodeRush is
essentially an expert at taking the response back from AI
and immediately folding it back in. But the speed
benefits are really all gained in the conversation with AI. That's
where they are. And nobody's doing this yet, as you
can tell, because they're so slow.

Speaker 2 (28:22):
Yeah, so you're not truly changing the workflow, you're
just speeding up my existing workflow.

Speaker 3 (28:27):
Yeah, I'm speeding up your existing workflow.

Speaker 2 (28:28):
Yeah, which is what you've always done.

Speaker 3 (28:30):
I'm changing it in a way that now you
can ask more effortlessly and get the answer and see
the results more quickly. And because that offer is
now available, you lean into that offer more frequently, I
think. And so there's kind of this natural change where
you're like, oh, this tool is pretty cool, let
me try this. I'm going to, you know, try
to slash away at this problem.

Speaker 2 (28:52):
Right. But this also seems like not an all-in,
do-everything-through-agentic thing; you're still doing your
own coding, with some hints from these tools.

Speaker 3 (29:01):
Yes, yeah, it's very much in that space.

Speaker 2 (29:03):
Yeah.

Speaker 3 (29:04):
Like, often I'm just writing code normally, and then
I might be in a place where I'm like, okay, wait,
I don't know how to do this. This involves some
new framework or new thing I've never touched before,
and I just ask, right? And I just get it,
and I take a look at it. And often I'm
delighted, because I'm learning at the same time when I
see it come in. Sometimes I ask for something that's

(29:26):
like a tedious thing, and I'm delighted because I see
a new way of doing it that I didn't know before,
for example, which is also kind of a cool learning experience.

Speaker 2 (29:36):
Yeah. Yeah. But I also could see, like, with the
agentic thing, there's sort of scut-work coding that you
could do, but it's easier just to hand it to
the tool and let it generate.

Speaker 1 (29:47):
Yeah. Hey, this sounds like a good place to take
a break. So we'll be right back after these very
important messages. And as a reminder, if you don't want
to hear these messages, you can opt for an ad-free
feed by becoming a patron. Go to patreon.dotnetrocks.com.
We'll be right back. You know,
.NET 6 has officially reached the end of support,

(30:08):
and now is the time to upgrade. .NET 8
is well supported on AWS. Learn more at
aws.amazon.com/dotnet. All right, we're back.
It's .NET Rocks. I'm Carl Franklin. That's Richard Campbell, hey,
and that's Mark Mellers, Mela, Mark Miller. All right, all right,

(30:30):
so I have another question. What project
types do you support now, and what project types will
you be supporting in the future?

Speaker 3 (30:40):
Okay, so that's a great question. In terms of project types, it's basically: if it can hold C#, we support that project type. So we support C#, and what we support right now is XAML and all the various flavors of that. Right, so if you're working in WPF or whatever way you

(31:04):
can work in that combination. If you're working in other UI frameworks that are not supported, but you're still working in C#, we've got you covered on the C#

Speaker 1 (31:15):
Side of it, right. But for now, you wouldn't be able to say, hey, build me a form in Blazor that calls into these methods.

Speaker 3 (31:20):
Right, no, we cannot do that now. But we are close. In the 2.0 version, a couple of things are coming in. One is Blazor support will be coming, great. And a second thing, where we have a current limitation, is that the only way we can create new files right now is by creating a new class.

Speaker 2 (31:38):
Okay.

Speaker 3 (31:39):
So in other words, there's no way for us to have a conversation with AI where AI comes back and says, here's a new .razor file and here's a new .xaml file. That's currently not happening. It's not hard to get there, but we wanted to ship, and so we're shipping what we have now. Okay, makes sense?

Speaker 1 (31:56):
Yeah, I wrote an agent myself in WPF that used the OpenAI API, and I got the response back and parsed out all the comments so that I just got the code, and then I could tell it to, you know, write code files. And it does, but I basically inject into the thing that does that,

(32:22):
compile it with Roslyn and inject in System.IO and things like that, so it can actually write files. But yeah, it takes the result that comes back, compiles it with Roslyn after you inspect it, of course, and that can also create files and things. Yeah, it was a fun exercise, but it's nothing compared to what the actual

(32:43):
real tools do.
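The parsing step Carl describes, stripping the prose out of a chat-style reply so only the code survives, might be sketched like this. The helper name and regex are illustrative, not Carl's actual code:

```csharp
using System;
using System.Linq;
using System.Text.RegularExpressions;

// Build a sample AI reply mixing prose with a fenced code block.
// (The fence string is built at runtime so it reads clearly here.)
var fence = new string('`', 3);
var reply = $"Here is the class you asked for:\n{fence}csharp\npublic class Widget {{ }}\n{fence}\nLet me know if you need more.";

Console.WriteLine(ResponseParser.ExtractCode(reply));
// prints: public class Widget { }

static class ResponseParser
{
    // Pull the contents of fenced code blocks out of a chat-style reply,
    // discarding the surrounding prose so only compilable code remains.
    public static string ExtractCode(string reply)
    {
        var fences = Regex.Matches(reply, "`{3}\\w*[^\\n]*\\n(.*?)`{3}", RegexOptions.Singleline);
        if (fences.Count == 0)
            return reply; // no fences: assume the whole reply is code
        return string.Join("\n\n", fences.Cast<Match>().Select(m => m.Groups[1].Value.Trim()));
    }
}
```

The extracted string is what would then be handed to Roslyn for compilation, as Carl describes.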

Speaker 2 (32:44):
Yeah.

Speaker 3 (32:44):
So when we're talking about that, right, the inspection, right, and this idea of user interface, right, one of the things that happens, I think, in a lot of places: a lot of tooling will give you that inspection phase beforehand, in another window. And our approach instead is, let's make the code changes, inject them everywhere, but

(33:07):
give you a really high quality tool for navigating among
all those changes, navigating through all those changes. And I
think that the latter approach is certainly better for.

Speaker 1 (33:17):
This feeling.

Speaker 3 (33:19):
Of, I'm going to go after this. I'm speaking with intent, saying what I want, so why not just give it to you faster by putting it right there in the code? Right? If you think about it, if I have to go to another window and I'm looking at code, sometimes, talking about other tools, I'm looking at code in that other window where a small change is among a giant

(33:40):
reproduction of all the code I had, right? And I have to scroll up and down, visually scan it, then copy it, maybe copy a file and go paste it in. Maybe it's in sections and I have to go copy one and paste it in, copy another, paste it in. Or maybe there's a button I can click and it puts it in. But still, I've got to click and wait, and then go to the other one, click and wait, and do that. And there's this whole thing where

(34:01):
you're not in the editor, right? And what I'm trying to do here, with all of the tooling that we're trying to create, is make it so that, look, we are not leaving the editor, right? The editor is where we stay. The editor is our world. That's the world we, you know, slash and build in. You know, we create, and we go after the tasks and the challenges there in the editor.

(34:23):
So there's just no way we're leaving that space. And as a result, it's more effortless, the cognitive load is lower, and other things.

Speaker 1 (34:32):
You have this.

Speaker 3 (34:33):
Oh, I just wanted something that was tedious, and now I have it. Well, I'm still refreshed, I'm ready to keep going.

Speaker 2 (34:41):
Yeah right, yeah.

Speaker 1 (34:42):
So one of the problems that I have with the OpenAI for code, and even the GitHub Copilot coding agent, is that it's nondeterministic. So if I gave it the same prompt twice, the second time it may come back with, you know, I

(35:03):
had this problem with it creating a data manager. It decided to break that out into three partial classes, which wouldn't be bad except that two of the partial classes had the same method in it. And, you know, I had to specifically tell it: don't use partial classes, put everything in one class, and don't have any duplicate methods. I had to tell it that.

Speaker 3 (35:24):
Yeah, yeah, you know, I was just gonna say, well, when you first said it's non-deterministic, I'm like, welcome to AI. And then as you were talking, I was thinking, well, you can kind of control it, corral it in a bit, right, you could do that. I actually find that one of the things that's useful is, if I see this kind of variance and it goes against my style, I

(35:47):
kind of put in these, what we call system prompt add-ons to the prompt, right, so that no matter what, it'll say, hey, I always like to use the latest Fluent Assertions or something like that in my test cases.

Speaker 2 (36:00):
Yes, yes, right, I guess what I'm saying is.

Speaker 3 (36:03):
I just, those are ways that you can kind of get there. You know, it's still not deterministic, but you're getting closer, right. You're getting

Speaker 1 (36:12):
Yeah, getting closer. Yeah, you're tweaking it. I found that the coding agent liked to downgrade from dot net nine to dot net eight constantly.

Speaker 2 (36:23):
Yes, yes, I've seen that before too.

Speaker 1 (36:25):
It's in my system prompt and it's in my user prompt,
and then it still doesn't and I say, do not
downgrade you know, oh okay, oh yeah, yeah.

Speaker 3 (36:34):
So this is really interesting. One of the things I've discovered is, you're working with code that's being generated off of sources that have changed over time. Right, a classic example is NUnit test cases. Right, they used to be able to say Assert dot IsTrue, and now they essentially broke that. And now you've got

(36:57):
to say, now the code, you often want it to be Assert dot That. Right, so if you have essentially breaking changes, or the way you do it has changed over time, and AI is saying, well, I'm scanning everything for the answer, right, then you're gonna get this kind of variance more often. And

(37:19):
the more of an edge case it is, in other words, the fewer examples there are, or the more that they've broken it over time, right, the more this variance occurs. Right, and you really start to need to corral things in with these very explicit pieces. But what's interesting is, like, one of the things that

(37:41):
we had to do to build this, right. Essentially, one of the things that our agent does is allow for variance to come back from AI, and it gets it back into the right place. So we've got, like, a ton of code and test cases that are all based around this idea of defense against hallucinations. Right, sure,

(38:05):
and so when a hallucination comes, as soon as we spot it, it's like gold to us, right? Yeah, we grab it. And one of the things that's really cool about our implementation is that for every interaction you do, there's a log file that's created.

Speaker 2 (38:25):
Yes, I saw these amazing.

Speaker 3 (38:27):
So a couple of things that are beneficial about this. One is that log file is beneficial if you're ever going to exploit the indemnification against copyright which is offered by AI tooling. Right, ChatGPT has something along those lines.

Speaker 1 (38:44):
Because you have a log for the prompt, you have
a log for the return, yes, and then you have
other logs, but they're individual logs yes.

Speaker 3 (38:53):
And therefore, for each request you make, there are essentially four logs that are generated, and that gives you a verifiable audit trail in terms of how I have used AI in this application and how I've integrated it. One of those four files that we create has a dot test extension,

(39:15):
and that dot test extension is a test case. If you had problems or something didn't work the way you were expecting, you could contact support, and support might ask for the test case. You could send it, and that test case would allow us to reproduce that scenario. And that's how we've grabbed onto these hallucinations so

(39:36):
fast and quickly. Right, when we see a problem, it's gold, because we grab that test case and we go put it in, and then we start looking at it. We analyze it: what are the chances that hallucination happens again, right? What defensive code can we add? And what's interesting is we have a very specific spec that

(39:58):
we deliver to OpenAI, but we have a very wide tolerance for what OpenAI can come back with. Right, what we tolerate, in normal scenarios, I think a lot of people would almost consider ridiculous in terms of how much variance from the spec you're allowing for, right, you know,

(40:21):
but we're basically allowing for OpenAI to respond back with things that are almost completely wrong, and then we fix them.

Speaker 1 (40:29):
So there is a temperature setting, right, when you issue an API command.

Speaker 3 (40:35):
Yes, we're not touching that or playing with that at all.

Speaker 2 (40:38):
You're using the default. So what is the default?

Speaker 1 (40:40):
Like a very low temperature, so low tolerance for hallucinations,
like zero.

Speaker 3 (40:45):
Yeah, my understanding is, I don't know if it's zero or not, but my understanding is it's pretty low.

Speaker 2 (40:49):
It's pretty low.

Speaker 3 (40:49):
Yeah, yeah, yeah, and it might be even lower when
you're doing code related requests.
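For reference, the knob Carl is asking about is the temperature field on a chat completions request. A sketch of setting it explicitly, where the model name and message contents are placeholders:

```csharp
using System;
using System.Text.Json;

// Serialize a chat completions request body with an explicit low temperature.
// Lower temperature means less sampling variance in the response
// (though even at 0 the API is not guaranteed to be fully deterministic).
var body = JsonSerializer.Serialize(new
{
    model = "gpt-4o-mini",   // placeholder model name
    temperature = 0.0,       // 0 = as deterministic as the API allows
    messages = new[]
    {
        new { role = "system", content = "You are a C# coding assistant." },
        new { role = "user", content = "Add a null check to this method." }
    }
});

Console.WriteLine(body);
// This JSON would be POSTed to the chat completions endpoint via HttpClient,
// with an Authorization: Bearer <API key> header.
```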

Speaker 2 (40:54):
I imagine it should be. I'm also thinking about what Carl talked about with the system MD, like, shouldn't this be stuff that's basically a part of every prompt, every time? Like, resist hallucination, make sure it's accurate. I keep reading about folks doing agentic AI where it's like, make sure you do every step, you know, you cannot proceed to the next thing without showing proof of this.

(41:17):
Building more testing into each step so that it actually
shows that the test passed.

Speaker 1 (41:22):
What I think I hear Mark saying is that we're
foregoing most of that in lieu of taking the stuff
that comes back from it and using our own smarts
to discern what is real and what's not.

Speaker 3 (41:35):
Yeah. And also we're doing things a step at a time, and each step is a developer-directed action, right? So the developer might say, I need some test cases. And then the developer might say, okay, all these test cases have got hard-coded numbers inside of them, and I want to change that to be parameters to the test case, and use the TestCase attribute.

(41:58):
So convert all these test cases to be more flexible, do that for me. And so that's the second step or whatever. And so the developers are doing these steps one at a time in our model, and the developer essentially approves after each one. Now, one of the pieces you mentioned there that I really like, that we do not have yet, but I

(42:20):
think some form of this is likely to come in the two point zero, and that is the ability to validate or verify by looking at, for example, compiler errors, like before and after a change, I mean, not a build. Right, the compiler always gets a say. Yeah, so I think that's interesting.

(42:41):
That's a really interesting piece: giving that, including that as part of context, and making it triggerable. So if I use the word error, or I say the word fix, can you fix this, something along those lines, it will say, well, what's going on.

Speaker 1 (42:56):
In that list of triggered prompts that you showed me, where you have, like, clipboard and then a prompt: is there one, can you set, just a general system prompt in there that goes with every single request?

Speaker 3 (43:08):
Yes, each one of those prompts has a regular expression that triggers it, and the regular expression is essentially searching that text, that speech-to-text translation that came back.

Speaker 1 (43:17):
So you could just wild card everything.

Speaker 3 (43:19):
Yeah, you'd do dot star for the regular expression to match, and it'll match everything. But even with that, you can specify an additional CodeRush context, which could say something like, make this happen only in C# files, yes, or make this happen only when there's a selection, or only when I'm debugging. Right, so you can set different contexts, rich control over that.
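The mechanism Mark describes, a regex run against the speech-to-text result plus an optional context gate, can be sketched roughly like this. The trigger table and EditorContext type are illustrative, not CodeRush's actual API:

```csharp
using System;
using System.Text.RegularExpressions;

var ctx = new EditorContext(FileExtension: ".cs", HasSelection: true);
var spoken = "can you fix this null reference";

// A hypothetical trigger table: each entry pairs a regex (matched against
// the speech-to-text transcription) with a context gate and a prompt add-on.
var triggers = new (string Pattern, Func<EditorContext, bool> Gate, string AddOn)[]
{
    // ".*" matches any utterance, but this entry only fires inside C# files.
    (".*", c => c.FileExtension == ".cs", "Target the latest C# language version."),
    // Fires only when "fix" is spoken and there is an active selection.
    (@"\bfix\b", c => c.HasSelection, "Explain the error before changing code."),
};

foreach (var t in triggers)
    if (Regex.IsMatch(spoken, t.Pattern, RegexOptions.IgnoreCase) && t.Gate(ctx))
        Console.WriteLine($"add-on: {t.AddOn}");
// Both add-ons fire for this utterance and context.

record EditorContext(string FileExtension, bool HasSelection);
```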

Speaker 2 (43:42):
I love that. I love both, and that all.

Speaker 3 (43:44):
Happens automatically, so you don't need to say any of
those things. Right.

Speaker 2 (43:48):
I like both.

Speaker 1 (43:48):
I like having a general system prompt, and I also
like having a context based triggered prompt. Yeah yeah, yeah,
that's very, very smart. Wow, I don't know where to go, where else can we go from here? I mean, I'm blown away, right. You know, there is one thing that became apparent when we were talking

(44:09):
about this, and that is that the way I did it in my Blazor Train thing was to create a big prompt, a big giant prompt, right. And then, you know, like creating a data manager, for example, that's got, you know, fifty, sixty methods. You know, we got CRUD methods with ADO dot net for each of these models, let's say,

(44:29):
and it takes you know, fifteen minutes, and then I
can check it out and one at a time tweak things.

Speaker 2 (44:35):
Right.

Speaker 1 (44:36):
Well, that's a different way to approach it. And if it doesn't work out, you can just blow the issue away and start again, right. Yeah, close the issue. But that's a different approach than what you're doing. What you're doing is, like you said, very close to the code, right, it's in the code. You're probably not doing big things like that.

(44:57):
You might say, you know, do some data binding over this particular model or something like that, in WPF and later in Blazor, but you would probably do them one at a time and in a more controlled way, is kind of what I'm thinking. But it's two different ways

(45:17):
to approach it.

Speaker 3 (45:18):
Well, I do, like, I kind of approach by groups. So, like, for data binding, for example, I would say, well, I want you, I've got a class on the clipboard, I want to do data binding.

Speaker 1 (45:29):
All.

Speaker 3 (45:29):
I want you to find all the controls on the form already and bind them to the appropriate properties in this class. So I'll do that. Or I'll say, hey, I want you to take all the buttons on this form that don't have a style, and I want you to add a new style to them, and I want you to set the default font for that new style to be twenty-four. So I will kind of group

(45:52):
things together.

Speaker 2 (45:53):
In digestible pieces, yes.

Speaker 3 (45:55):
I'll do those all at once and check the results. Sometimes it's a run to check those results, stepping through the code maybe, and sometimes it's visually scanning the code and seeing what's going on.

Speaker 1 (46:05):
And so what happens if it does something that you
don't like?

Speaker 3 (46:09):
I mean, well, you've got, so that's a great question. I love that you asked, because this is actually

Speaker 4 (46:14):
Another, hey, have we met? This is my freaking job, man. I don't know, you should have a podcast or something. This guy's good.

Speaker 3 (46:26):
So, yeah, this is actually great, because again, this is a great point of differentiation. If you look at other tooling, other tooling makes changes in multiple places and requires multiple undos, right, to undo that, whereas the CodeRush AiGen does everything, wraps everything up in a single undo unit, so you can undo, right,

(46:49):
go back, or go forward again. And when you go forward, that navigator I talked about, that really high-level, you know, ability to look around, to go from change to change and to see the differences, that comes back up on a redo. So if you undo and then redo, you can get that navigator back up and scan the differences inside

(47:09):
if you need.

Speaker 2 (47:10):
To do that. Yeah, that's cool, so so.
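The single-undo-unit idea can be sketched with a composite that records every file change and reverses them all in one call. This is a toy model of the concept, not the CodeRush implementation:

```csharp
using System;
using System.Collections.Generic;

// A workspace standing in for open files: path -> current text.
var workspace = new Dictionary<string, string>
{
    ["A.cs"] = "class A { }",
    ["B.cs"] = "class B { }",
};

// Record two edits in one composite unit, then revert both in one step.
var unit = new CompositeUndoUnit(workspace);
unit.Apply("A.cs", "class A { int X; }");
unit.Apply("B.cs", "class B { int Y; }");

unit.Undo(); // both files revert together
Console.WriteLine(workspace["A.cs"]); // prints: class A { }
unit.Redo(); // both changes come back together
Console.WriteLine(workspace["B.cs"]); // prints: class B { int Y; }

// Toy composite undo unit: many per-file edits grouped so a single
// Undo()/Redo() reverts or reapplies all of them at once.
class CompositeUndoUnit
{
    private readonly Dictionary<string, string> _files;
    private readonly List<(string Path, string Before, string After)> _edits = new();

    public CompositeUndoUnit(Dictionary<string, string> files) => _files = files;

    public void Apply(string path, string newText)
    {
        _edits.Add((path, _files[path], newText));
        _files[path] = newText;
    }

    public void Undo()
    {
        // Reverse in LIFO order so overlapping edits restore cleanly.
        for (int i = _edits.Count - 1; i >= 0; i--)
            _files[_edits[i].Path] = _edits[i].Before;
    }

    public void Redo()
    {
        foreach (var (path, _, after) in _edits)
            _files[path] = after;
    }
}
```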

Speaker 3 (47:13):
And it's fast, right. The main wait time is waiting for AI to give you that composition, that response back. That's your main wait; once you have that, for CodeRush to make many changes through many files is generally less than a second. It doesn't take that long.

Speaker 2 (47:28):
So I'm also thinking in terms of how much experience you need as a software developer to be able to use these tools successfully.

Speaker 3 (47:35):
Well, I think you cannot be a brand new developer.

Speaker 2 (47:39):
Yeah, I agree. Right, you can't, like, you don't know when the tool is misbehaving.

Speaker 3 (47:44):
Well, you can get, I have seen scenarios where you ask for something and there's just not a lot of content out there in terms of what you're asking for, and so what you get is something that maybe doesn't work, or maybe it's on a, you know, a framework that's been changing over time, so you get old code that doesn't work, or something like that. And if you don't know anything about that, you don't

(48:06):
know how to solve it. You're stuck.

Speaker 1 (48:08):
Here's a great example, and I brought this up in a previous show. Mark, you'll like this. Yeah, tell the AI to create a collection class that is thread-safe. Now, if you didn't know there's already a thread-safe collection in dot net, right, it's going off, and it's not

(48:28):
going to know that, and it's going off and creating, right. Right, you didn't say, hey, is there a thread-safe collection in dot net? You just said, create me a thread-safe collection object, right? A junior dev is

Speaker 3 (48:40):
It's going to do what you were saying.

Speaker 2 (48:41):
Yeah, it's going to do.

Speaker 1 (48:42):
Exactly what you say. And now you're committed to that
thing and you use that everywhere and oops, there's a
problem with it, right because it's not debugged and tested
and all that or whatever.
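For the record, the thread-safe collections Carl is alluding to live in System.Collections.Concurrent, so the better prompt is to ask for the built-in type. For example:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// The built-in thread-safe collections in System.Collections.Concurrent
// are already debugged and battle-tested; no need to hand-roll one.
var counts = new ConcurrentDictionary<string, int>();

// Many threads bumping the same keys concurrently, safely.
Parallel.For(0, 1000, i =>
{
    var key = i % 2 == 0 ? "even" : "odd";
    counts.AddOrUpdate(key, 1, (_, current) => current + 1);
});

Console.WriteLine($"even={counts["even"]}, odd={counts["odd"]}");
// prints: even=500, odd=500
```

ConcurrentBag, ConcurrentQueue, and ConcurrentStack cover the other common cases, which is exactly the knowledge a junior dev needs before accepting a hand-rolled "thread-safe collection" from the AI.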

Speaker 3 (48:52):
Yeah, yeah, I think that, you know, you have to be, if the code you're going to generate and the things you're going to build are kind of, you know, within the norm, whatever that is, of development, in other words, the things for which we have a lot of content that AI has studied,

(49:16):
then I think your user level can be maybe below, you know, intermediate. You don't have to, you can be somewhere between beginner and intermediate and you can benefit. And there are some ways where you can benefit really hugely from this as a beginner, when you say, I want a new class that does this, this, and this, and then you've got it and you're done. It's so much easier. And right now we're

(49:38):
only supporting C#, but when we start supporting other languages, the ability to now code in another language becomes transferable, because I'm still using the exact same spoken English, right, to interact with it.

Speaker 1 (49:50):
But getting back to my example of the thread-safe collection: it's probably a good idea, even for more advanced developers, to first have a consultation with it about what the best way to go about implementing such a feature would be.

Speaker 3 (50:08):
Right, yeah, no, I agree. And that's where, this feature is not really a consultation feature. Yeah, you can ask it questions and it'll come back in the navigator and give you the answer, but it's not interactive.

Speaker 1 (50:19):
But chat GPT would be good for that, right?

Speaker 3 (50:22):
Yes, I will switch over to chat GPT for that
kind of a style of dialogue.

Speaker 2 (50:26):
For that kind of consultation.

Speaker 3 (50:28):
Yeah, yes, yeah, I'll do that in chat GPT. I use Copilot for other things as well. Copilot's great, great at, like, hey, take a look at this error, or something along those lines. Sometimes Copilot's great for just these kinds of questions. And with CodeRush, you can actually talk to Copilot using voice. Yeah, so, and I didn't tell you this this morning, but if you hold

(50:49):
down the left control key, just hold it down, and start your prompt with, hey, Copilot, and then ask the question. It will open up the Copilot window, fill in the prompt with your question, and all you have to do is just, you know, hit enter, okay, and send it up.

Speaker 2 (51:04):
Wow.

Speaker 1 (51:04):
So I think we're in agreement, and I brought this up in our talk with Scott Hunter, and also offline with Hunter,

Speaker 2 (51:14):
That the.

Speaker 1 (51:17):
As a developer, there's just as much value in your creativity and imagination now as there is in your ability to write code and the things that you remember how to do, maybe even more. Like, you have to be able to imagine things in order to create them. Whereas, you know, for the last fifty years of my career,

(51:40):
it was always like, oh, I need to do this. Well, how do I do that? Oh, break it down into steps, go research those steps, figure out how to write the code. And now it's like, you

Speaker 3 (51:50):
Know, and no one to consult with on that. By
the way, you're right.

Speaker 2 (51:53):
No one to consult with.

Speaker 1 (51:54):
Yeah, and now it's big-picture thinking. All right, so start with chat GPT or whatever, and, you know, consult on the architecture. What are the ways that I can go about doing this? And open up your imagination. Like, you might as well just try it, because you can always blow it away.

Speaker 2 (52:10):
I'm almost wondering if you just need an architect now, and the architect can use the tools to get a lot of the code written.

Speaker 3 (52:16):
Yeah, it's, you know, for this journey, one of the steps along this journey was, I realized we needed a great way to show differences inside the code, right. And I checked out what tooling we had already,

(52:37):
and we didn't have what we needed, essentially, that was already built in. So I started that process, and I think I started a discussion with ChatGPT just to kind of understand how the technology works, and ultimately I ended up with

(52:57):
an engine that does essentially the same thing. But there are two variations of the engine: one variation works with lines, and the other works with characters. And so we use the line engine at the high level to see what's changed, and then, if a line has differences, we feed that to the character engine, right, to see what has changed, right.
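The two-level scheme Mark describes, a coarse line diff and then a character pass on changed line pairs, can be sketched with a simple common-prefix/suffix trim. This is an illustrative simplification, not the CodeRush engine:

```csharp
using System;

// Character level: for a changed pair of lines, trim the common prefix
// and suffix to isolate the edited span within the line.
static (int Start, string OldMid, string NewMid) CharDiff(string oldLine, string newLine)
{
    int prefix = 0;
    while (prefix < oldLine.Length && prefix < newLine.Length
           && oldLine[prefix] == newLine[prefix]) prefix++;

    int suffix = 0;
    while (suffix < oldLine.Length - prefix && suffix < newLine.Length - prefix
           && oldLine[^(suffix + 1)] == newLine[^(suffix + 1)]) suffix++;

    return (prefix,
            oldLine.Substring(prefix, oldLine.Length - prefix - suffix),
            newLine.Substring(prefix, newLine.Length - prefix - suffix));
}

// The line engine would first pair these two up as "same line, changed";
// the character pass then pinpoints what changed inside it.
var oldLine = "int count = GetTotal();";
var newLine = "int count = GetGrandTotal();";
var (start, oldMid, newMid) = CharDiff(oldLine, newLine);
Console.WriteLine($"at {start}: \"{oldMid}\" -> \"{newMid}\"");
// prints: at 15: "" -> "Grand"
```

A production diff would use an LCS-based algorithm for the line level, but the trim above is the classic cheap first pass for highlighting in-line changes.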

Speaker 1 (53:19):
What's the actual difference in this line to that line?

Speaker 3 (53:22):
Right. And I also used the tool to help build the visualizer as well. And the visualizer is super interesting, because there's no built-in difference visualizer that's out there. And so we were like, okay, or I was like, okay, let's try, I think, the RichTextBox or whatever I

(53:43):
think it's called in WPF. And I start to do that and look at that, and we start to make progress, and then we realize that, okay, well, it's great, except the ends of the lines don't look great, because the lines end at different positions, and the background highlighting goes only to that end of the line.

Speaker 1 (54:03):
Are you talking about the Git difference visualizer?

Speaker 3 (54:06):
I'm talking about the one that's built into AiGen's navigator, the CodeRush navigator, the one that we built from scratch.

Speaker 2 (54:12):
Yeah, the one that's in Visual Studio is pretty good.

Speaker 1 (54:14):
I mean, if you make a change to a file
and then you double click it, you see the original
on the right and the new one on the left.

Speaker 3 (54:21):
And yeah, okay, so anyway, I'm not saying, I have some problems with it, I don't like a number of things about it, and so I implemented something different. Yeah, that's basically what I ended up doing. And at any rate, we got the highlighting right, but the ends of the lines weren't working. And I'm like, well, how do I get this to work?

(54:43):
And that was the question I essentially asked, and the answer is, you can't do it with the RichTextBox, but you can do these adorners, where you throw them in there. So then I'm like, okay, wait, I can put adorners out on the ends here and line them up. Yeah, okay, let's do that. But now I scroll, and they don't scroll. And then, so now my new question is, how do

(55:03):
I get the adorners to scroll? Let's make them scroll and work. And this whole thing is this iterative process, where I am doing this a piece at a time, but finally I get it so that it's essentially solid. Right, I'm using the RichTextBox, I'm not doing my own custom control, I'm using the existing RichTextBox and making it behave as if it's

(55:26):
a difference view. And that's done largely through the help
of AI assisting and making changes interactively as I go through.

Speaker 2 (55:33):
It's very cool, yeah, so cool.

Speaker 3 (55:36):
Yeah. But, you know, your question about what level do you have to be: I think at least intermediate in general, and then the further off the beaten path you go, the more of an expert you kind of need to be, or the more of a debugger, you know, kind of a let's-figure-this-out kind of person, that you need to be.

Speaker 1 (55:53):
You need to know what questions to ask, right, That's
that's what it all comes down to.

Speaker 3 (55:58):
Yeah, yeah, what to ask for, you're right. Yeah, you do. Even if you're working with something you've done, you know, if you're working with a new framework, right, if I'm working with a control set, what do they call their grid control, or what do they call their toolbar control? Right, if you know what that's called, then you can ask the

Speaker 1 (56:13):
Question and hopeful you'll do research first. That's the whole idea.
The less the less familiar you are with what you're
going to end up with, the more research you have
to do up front, however, to know how to ask
the questions.

Speaker 3 (56:25):
However, Carl, today I'm in this code, and I'm like saying, hey, I'm seeing that a whole bunch of these methods are passing in, it's an INotifyPropertyChanged thing here, and I'm like, and my prompt goes like this. I say, hey, I'm noticing all of these calls to PropertyChanged are passing in

(56:47):
the name of the property. But isn't there some way to check from the call stack what that calling method is? And that's all I said. And so, in other words, you could know kind of generally about it and speak about it. And then it goes in, and it puts in, I think, the CallerMemberName attribute.

Speaker 2 (57:06):
CallerMemberName, yeah, on

Speaker 3 (57:08):
The parameter definition, and it does that, and it's great,
and I'm like, oh, but wait, you left all of
the calls with their arguments. They don't need those arguments anymore,
that's right, yeah, And then it fixes that and so
that instead of me doing any of those things manually,
it's just you know, speak, wait a few seconds, and
then speak and wait a few seconds. And that's a

(57:31):
different flow.
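The change Mark describes, letting the compiler supply the property name via [CallerMemberName] instead of passing it at every call site, looks roughly like this:

```csharp
using System;
using System.ComponentModel;
using System.Runtime.CompilerServices;

var vm = new PersonViewModel();
vm.PropertyChanged += (_, e) => Console.WriteLine($"changed: {e.PropertyName}");
vm.Name = "Mark"; // prints: changed: Name

class PersonViewModel : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler? PropertyChanged;

    private string _name = "";
    public string Name
    {
        get => _name;
        // No nameof(Name) argument needed at the call site:
        // the compiler fills in the caller's member name.
        set { _name = value; OnPropertyChanged(); }
    }

    private void OnPropertyChanged([CallerMemberName] string? propertyName = null)
        => PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
}
```

Because the attribute is resolved at compile time, removing the explicit arguments, as the AI did in Mark's second step, changes nothing about runtime behavior.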

Speaker 1 (57:32):
You also have to have some pain, right? Like, if I've had pain in the past with making base classes for models, for example, which I do not do anymore because you get into all sorts of problems with that, right, then, you know, you could easily take that in a direction where it would say, well, let me just make a base class for you that does all the

(57:53):
notify property changed and blah blah blah, and now you can just create models that inherit from that base class, right. Yeah, you know what I mean. And you wouldn't have the experience to say, no, I really don't want

Speaker 3 (58:04):
To do that, right right, No, I get that, I
get it. Yeah, But you know, you can make things
work a lot of different ways. I can have the
same set of test cases and they can all pass
regardless of which frameworks. And sometimes I don't understand the
problem with the framework until it's time to go to
version two, right, right, right, and then I realized, Okay,
that was a mistake. I got to really really make

(58:26):
this architecture great, right? And that is, that's something that you don't see with what we've done here, but we have actually made this architecture really good. We stopped, we were moving towards bringing Blazor support into this one-point-oh release, and we stopped it to focus on the architecture.
And as a result, I'm way more confident about our ability to go in and support Blazor and other

(58:48):
languages because of the changes that we made to simplify
and decouple the architecture behind this.

Speaker 2 (58:57):
It's awesome.

Speaker 3 (58:57):
So I feel good about that.

Speaker 2 (58:59):
So where can we get this?

Speaker 3 (59:00):
So, okay, go to devexpress dot com slash coderush, and you can download a copy of CodeRush for free. CodeRush is free, there's no charge for this. However, you're gonna have to supply an OpenAI API key, and

(59:20):
if you want voice, you're gonna need to supply an Azure Cognitive Services speech key.

Speaker 1 (59:26):
Which is free for.

Speaker 3 (59:29):
It's essentially, you can get a free tier that gives you, I think, up to five dollars a month, and I use maybe three dollars a month in all my months of daily use of the product, so you're probably not gonna pay anything for that. And Azure Cognitive Services Speech is excellent. I just cannot tell you how good this service is.

Speaker 2 (59:48):
It's really good.

Speaker 3 (59:50):
It is way more accurate than anything I've seen built running locally on a machine. And it's fast. Generally, from the time you stop talking, we get a response back in about four hundred milliseconds.

Speaker 1 (01:00:04):
That's amazing, Mark, congratulations. I'm really blown away. And it's always good to talk to you, I always learn things. What can I say? Thanks again.

Speaker 5 (01:00:14):
Thanks, guys, all right, and we'll talk to you later on dot net rocks. Dot Net Rocks is brought to

(01:00:40):
you by Franklins dot Net and produced by PWOP Productions, a full-service audio, video, and post-production facility located physically in New London, Connecticut, and of course in the cloud online at pwop dot com.

Speaker 1 (01:00:55):
Visit our website at d O T N E t
R O c k S dot com, RSS feeds, downloads,
mobile apps, comments, and access to the full archives going
back to show number one, recorded in September two thousand
and two. And make sure you check out our sponsors.
They keep us in business. Now, go write some code,
see you next time.
