Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
How'd you like to listen to .NET Rocks with no ads?
Speaker 2 (00:04):
Easy?
Speaker 1 (00:05):
Become a patron for just five dollars a month. You get access to a private RSS feed where all the shows have no ads. Twenty dollars a month will get you that and a special .NET Rocks patron mug. Sign up now at patreon.dotnetrocks.com. Hey
(00:35):
guess what, it's .NET Rocks all over again. I'm Carl Franklin, and I'm Richard Campbell, here for your geeking out and, you know, all that pleasure, that .NET Rocks pleasure. I hope you're having a good week.
Speaker 2 (00:49):
We are.
Speaker 1 (00:49):
We're getting ready to head to Orlando for DEVintersection, which should be going on.
Speaker 2 (00:56):
Yeah, yeah, this show will come out... this show will be on when we're in Orlando.
Speaker 1 (01:01):
Right. Right now, we're probably up on the stage giving prizes away.
Speaker 2 (01:04):
Or doing something silly, which we are prone to doing.
Speaker 1 (01:07):
Yeah, we do. Well, we've got a lot of interesting stuff to get through. So let's talk about nineteen seventy one.
Speaker 2 (01:13):
This is it. All right, you begin, my friend, because it's a big year. It's a big year. So let's see.
Speaker 1 (01:21):
I'm going to talk about some of the cultural stuff, like the rock bands formed in nineteen seventy one, which include The Eagles, Queen, Foghat, New York Dolls, and Roxy Music. Radio and TV advertisements for cigarettes were banned in the USA.
Speaker 2 (01:37):
Yeah.
Speaker 1 (01:37):
At the beginning of the year. Rolls-Royce went bankrupt again. Yeah. The first ever email
Speaker 2 (01:43):
was sent. I've got a whole piece on that. QWERTYUIOP or something like that, or test one two three; he doesn't remember.
Speaker 1 (01:50):
Yeah. The NASDAQ made its debut on Wall Street. Evel Knievel set a world record. I can't remember if he actually crashed or not, but he probably did. That was his shtick: hey, watch this, hold my beer, boom. Set a record, but I broke every bone in my body. The Ed Sullivan Show aired its last episode.
Speaker 2 (02:11):
End of an era.
Speaker 1 (02:12):
The New York Times started publishing The Pentagon Papers.
Speaker 2 (02:15):
Jim Morrison died. Yeah. Yeah, Tormented soul, Tormented soul. Yeah.
Speaker 1 (02:22):
Four countries gained independence: Bangladesh, Bahrain, Qatar, and the United Arab Emirates. China was admitted to the General Assembly of the United Nations. Hmm, you're going to talk about that. Let's see. Oh, Charles Manson and three of his followers were convicted. Starbucks opened their first store. Yeah. Idi Amin
(02:46):
became president of Uganda. Well, Disney World opened in Florida, and there are so many more, but I'm going to pass the baton over to you, my friend. What happened in science and space? Let's do space first.
Speaker 2 (03:00):
Apollo 14 and Apollo 15 are both in nineteen seventy one, and Apollo 14 was the last of the original missions. Alan Shepard was the commander on it; he was the first American into space on Mercury. And of course he was a big golfer, so he brought a golf club and whacked a ball on the Moon. That ball has not been found. It kind of went far. It
(03:23):
went somewhere. Apollo 15 was the first of the extended missions, so doubling the time on the Moon, plus that's when they introduced the rover, so they were able to travel a lot farther. On the Soviet side, having lost the race to the Moon, they began on space stations, and in nineteen seventy one lifted off with Salyut 1, only designed
(03:44):
to function for six months, so it went up in April and was deorbited in October. There were only two missions to it. Soyuz 10 soft docked but couldn't hard dock, so ultimately, after a few hours, they had to give up and re-enter, which was unfortunate. And Soyuz 11 successfully docked, and the crew spent twenty-three days on
(04:05):
the mission doing a bunch of experiments. And unfortunately, on re-entry the capsule depressurized and all three were asphyxiated, the first and only time that we actually had a crew die above the Karman line. There have been other astronauts lost at other times, but Georgi Dobrovolsky, Vladislav Volkov,
(04:26):
and Viktor Patsayev were found dead in the capsule after re-entry. But I would say that Salyut 1 is the core design for every space station going forward, with the exception of Skylab. The Chinese space station is derived from Salyut, all the Salyuts obviously, and
(04:49):
Mir, and the Zvezda module on the ISS, which is the core control module for the International Space Station, is also derived from Salyut. On the computing side, big year. This is the year that Intel ships the 4004, the first CPU. It's an eighteen-pin DIP, twenty-three hundred transistors on board. It's a
(05:10):
four-bit processor at one hundred and fifty kilohertz. It'll go on to be in a bunch of calculators, and of course they'll go on to the 8000 series and everything you're using right now. Yeah. The floppy disk is first developed at IBM by a guy named Alan Shugart. Well, admittedly, he led the project. You'd recognize that name if you remember the Shugart drive.
(05:31):
That original floppy disk is an eight-inch format. It's called the IBM 23FD and it is read-only, only eighty K of data. There's separate hardware to actually write on the disk. And these were meant for updates to the System/370, so you would get a 23FD as part of your System/370 and do updates with eight-inch disks. There'll be later versions that are rewritable.
Speaker 1 (05:51):
I remember looking through the Radio Shack catalog when I was a kid, and there were computers that took the eight-inch floppies. The Model II took the eight-inch drive. Yeah, and I thought that was like the stuff, you know. It's pretty cool. I just so wanted a computer when I was a kid, for sure. It wasn't until I was almost in high school that my father bought one.
Speaker 2 (06:11):
But they only got better with time.
Speaker 1 (06:13):
Here's another story, another story to interrupt. When I was eleven, in nineteen seventy eight, there was a huge blizzard in New England, the Blizzard of '78. We got like four or five feet of snow, and so there wasn't any school that day, and my friends and my brother and I decided to build not just a snowman, but we built a
(06:35):
space station and called it Skylab.
Speaker 2 (06:38):
Nice.
Speaker 1 (06:38):
Isn't that cool? Yeah. It was like, why Skylab? It had little chutes and stuff for snowballs running down it, and it was one of the happiest days of my life in school, probably because there was no
Speaker 2 (06:51):
school. But snow days are always good days, without a doubt. Yeah, last things in nineteen seventy one: the ARPANET stuff. It went live the year before, and now we're starting to build products on it. So Ray Tomlinson sends the very first email. This guy's from MIT, but he was working for BBN by then, which is Bolt, Beranek and Newman.
(07:12):
Most people have never heard of BBN, but they were essential to the creation of the Internet. That very first email has a lot of the stuff you'd recognize today. It uses the at notation for name and domain. It has a sender, a subject, a date, a body. There's an argument: Tomlinson doesn't have the email anymore, so they asked him, well, what was in it? He says, I don't know, test one two three, or the QWERTY
(07:32):
row, like, I don't know, something like that. It was a test email. Yeah. But over on the Berkeley side... so that's East Coast. On the West Coast side, Murray Turoff of Berkeley, who works for the Office of Emergency Preparedness, builds a system called the Emergency Management Information System and Reference Index, EMISARI, which is real-time chat. Wow.
(07:55):
And he has ten offices communicating in real time in nineteen seventy one.
Speaker 1 (07:59):
The first time I saw that was at a friend's house, who was connected over a modem to somebody. I was just amazed. It's stunning, right? It's stunning, like, that's somebody typing in real time. Yeah, on the other side of the world, potentially.
Speaker 3 (08:14):
Yeah, I have a question, Richard.
Speaker 2 (08:16):
Yes, hey, Egil, speak up.
Speaker 3 (08:18):
Egil, yeah. Sorry, I was really holding my breath there. But no, like, did they actually have the at sign back when the first email was sent out?
Speaker 2 (08:27):
Apparently, yeah. They said the at sign was included in the original email structure. So, you know, I mean, it's nineteen seventy one, so ASCII exists. It's just a question of... this is mostly mainframe computers at this point. Were they using ASCII? Were they using EBCDIC? There's a lot of IBM then. Yeah, so it could have been.
Speaker 3 (08:44):
But it probably wasn't printed on the keyboard, I guess.
Speaker 2 (08:48):
Yeah. Yeah, it's hard to get your head around what the hardware looked like back then. But this was mostly universities and military labs on ARPANET, right? It's not Internet yet, so they're all largely mainframe machines, PDPs and IBMs. Very cool.
Speaker 1 (09:06):
Yeah, so that's your list, Richard. That's my list. Well, in that case, it's time for Better Know a Framework.
Speaker 2 (09:13):
Awesome. Roll that music! Man, what have you got?
Speaker 1 (09:23):
So, as you know, it's my MO to go looking for things that are trending on GitHub, and I found this: Harden Windows Security, the new threat to malware. Harden Windows safely, securely, only with official Microsoft methods. So this is essentially training, and it's a repository of features that
(09:46):
have already been implemented by Microsoft in Windows, to fine-tune it towards the highest security and locked-down state without relying on any third-party component or dependency, using well-documented, supported, recommended, and official methods. So I think this is really cool, because one of the big problems in security, as you guys know, is the chain. You know,
(10:08):
the supply chain: what dependencies are you using, and what dependencies are they relying on? And if you draw graphs, sometimes you can see the spokes go out in millions of directions and it's impossible to wrap your mind around.
So I really like this for the fact that you know,
you're not relying on some GitHub repo where somebody could
(10:29):
come and do a pull request that isn't looked at carefully and has got some malware in it. And that's a reality today. So yeah, I'm very much in favor of doing what's in the box. You know, what can I do that's right here?
Speaker 2 (10:44):
So, and to be clear, this is not an app on GitHub that you compile or anything. This is just a list of information and some PowerShell scripts. Guidance. Yeah, just guidance to implement features that already exist in Windows, or perhaps have Microsoft-based tools in some cases, to just turn up the security level on your machine. You know,
(11:05):
Microsoft tends to default to everything works, it's easy to use, right, rather than highly secure.
Speaker 1 (11:11):
As we say on our show, security is the enemy of convenience, and vice versa. And so if it's very convenient, the fact of the matter is, it's probably less secure than you want it to be. But there are ways to, you know, get as close as you can to convenient and be secure at the same time. And I think, you know, it's guidance like this that
(11:33):
helps us with it. But you know, knowing what we know, Richard, when you come across people in your daily life who are asking you, you know, I got this text message or I got this email, what is that? And you know your answer is ignore it, and you're rolling your eyes.
It's like most people out there are so vulnerable and
(11:54):
susceptible to phishing attacks and yeah, dude, you don't know
where to start.
Speaker 2 (12:00):
It's worse than that. A friend of mine, one of my neighbors, shows me the outbox of the email on his iPhone, and it's full of spam sent through his account. And I'm like, dude, you need to change your password. He's like, oh no, I can't do that. Yeah. Like, why? I'm not changing my password. It's not hurting me. I'm like, you're sending spam mail!
Speaker 3 (12:23):
Your reputation takes a hit.
Speaker 2 (12:26):
Yeah. Nope, not changing the password. A step too far. Not a thing. Not going to do that.
Speaker 1 (12:32):
Yeah, let me introduce you to a password manager, my friend.
Speaker 2 (12:36):
Please.
Speaker 1 (12:38):
Well, anyway, that's what I got. Richard, who's talking.
Speaker 2 (12:40):
To us, grabbed a comment off of the show eighteen
sixty eight when we did with one Eagle Hints and
maybe you've heard of him talking a little bit about
the updates around b unit. This is from twenty twenty three,
and our friend Joe Finney had a comment on the
show which I thought was very appropriate. He said, I
really appreciated how Eagle made the distinction between different kinds
of open source projects. I think in addition to having
(13:00):
a license file in the repo, maybe projects should have a goals file which states what the owners of the project would like to see it become. Is it a little one-off experiment, or a complete end-user application looking to be improved but not really built upon? Is the project more of a framework supported by a business to help with integrations and so on? Like, what's the intent?
(13:23):
I love it. That would help consumers of the project know what to expect when opening issues, PRs, or looking for support. As always, thanks for the great show. It's always refreshing to hear smart people discussing hard problems.
Speaker 1 (13:35):
Yep, even if the answer is how should we know?
Speaker 2 (13:40):
Well, and we just had Joe on the show too, right? It was the previous week, so it's only coincidental that I happened to find a comment from him from Egil's last show, which I thought was very relevant. So Joe, thank you so much for your comment; a copy of Music to Code By is on its way to you. And if you'd like a copy of Music to Code By, write a comment on the website at dotnetrocks.com or on the Facebooks. We publish every show there, and if you comment there and we read it on the show,
(14:01):
we'll send you a copy of Music to Code By.
Speaker 1 (14:03):
Music to Code By, still going strong at twenty-two tracks. You can go to musictocodeby.net to purchase the collection in MP3, WAV, or FLAC format. Go for the FLAC. Okay, well, let's bring back Egil Hansen. Of course, he is the author and inventor of bUnit for testing Blazor apps and components. So let me
(14:27):
read his official bio.
Speaker 2 (14:29):
Egil is a.
Speaker 1 (14:30):
Dane chilling in Iceland. He's a developer at Delegate and a Microsoft MVP. He chats globally at conferences about nifty developer stuff and is all about that clean, maintainable code. I love that song: I'm all about that clean, maintainable code, that clean, maintainable code. He's the creator behind bUnit,
(14:51):
as I said, for Blazor tests, and AngleSharp Diffing, think spot-the-HTML-difference in C#. Though he's danced to many tech tunes, Egil's current jam is everything .NET and cloud. Check out his repos at github.com/egil. Yes, this bio was written by ChatGPT.
Speaker 3 (15:11):
And it was written, what was it, three years ago? The last time I visited you two.
Speaker 2 (15:16):
So yeah? Yeah? Nice? Uh?
Speaker 3 (15:20):
It still mostly holds today? Yeah? Well, since you mentioned bUnit, I have to give a shout-out to my co-maintainer, Steven Giesel, who's really pulling a lot of the weight around that these days. We are working heavily on a V2 of it, which will mostly just simplify APIs, remove some of the cruft, all of that, and especially remove the support for
(15:44):
the older target frameworks that we have. We still support back to the .NET Core 3.1 days, so it will be nice to clean that out a bit. But we try not to do breaking changes, so we would have to do a V2 to actually do that. So that will happen, hopefully, by the end of this year.
Speaker 2 (16:02):
Very good. Nice. We'll need to do another show, then, where we talk about the new version.
Speaker 3 (16:06):
Absolutely, yeah, absolutely, that would be good to do.
Speaker 1 (16:09):
So what are you thinking about these days?
Speaker 2 (16:11):
So?
Speaker 3 (16:12):
I well, I, as my bio probably indicates, I used
to be sort of the B unit guy and and
there are a lot also conference talks on that, and
I branched out a bit, so now I'm talking more
about testing in general and have been double clicking a
lot on what it means to write valuable tests. So
(16:33):
that's something, you know, that I'm really passionate about: making sure that...
Speaker 1 (16:39):
That implies that there are mostly not-valuable tests out there. Well, exactly, so it implies...
Speaker 3 (16:47):
Well, no, not necessarily, not necessarily. But I think I've seen, and I've certainly been guilty of it myself at certain times, a tendency to not think, to not be explicit about what kind of test you write. So some types of tests have different benefits from others, and there's no silver bullet. So,
(17:11):
you know, be open to new ways of writing tests, or to new styles of tests. So, I think the classic thing you really want from the tests is that they buy you the confidence to be able to say: well, all my tests passed, so I haven't made a mistake, and we can go straight to production, or at least through a CI/CD pipeline where the quality gate is the tests, among other things. And if all are green,
(17:33):
well, then you know it's good. And for me, it's about having tests that really give you that. There are four pillars that I think were quite elegantly described in a book called Unit Testing: Principles, Practices, and Patterns by Vladimir Khorikov, which
(17:54):
I often reference. The way of testing he describes in that book is very much aligned with how I do it. He has four pillars, sort of, that indicate whether you have a valuable test suite: if you have good protection against regressions, if you have resistance to refactoring, if you have fast feedback, and if your tests are maintainable. That's the top of the list. Exactly. Well,
(18:19):
so if we take the first three, right: protection against regressions. You get a lot of that if you write more end-to-end tests, because regressions are when something breaks that you didn't intend to break when you make a change. Well, yeah... well, unintentional change is probably
(18:39):
what I would call it, right? And if you do end-to-end testing, you're touching on your NuGet packages, you're touching on all your external dependencies as well as your business logic, right? So if you have good end-to-end tests, you can also just upgrade to the latest versions of all the external packages that you depend on and use, and if your tests still go through, well, you have probably a good way,
(19:03):
like, it's probably safe to deploy, right? It's probably okay, or great. But the downside is... so, the third pillar was fast feedback, and obviously end-to-end tests run quite a bit slower. Maintenance-wise, they're also somewhat harder to maintain, because there's usually way more going on. So, you know, depending on
(19:27):
the kind of application you're writing, you can choose to either go to integration testing or end-to-end testing, or have a mix of all of that. My default is usually to start with integration testing. And then, if I find I have a lot of business logic that is annoyingly hard to test through the public surface of whatever it is, an API I'm building, or a
(19:50):
product, an app or something like that, well, then, you know, you peel away the outer layers of your application, isolate your components or your units under test, and test them in isolation, so you get that fast feedback too. But there's no silver bullet here.
Speaker 1 (20:08):
I can think of a few situations in which refactoring can throw tests off. Say you have a method that takes, I don't know, three parameters or something like that, and so therefore it's up to the calling code to have those parameters, from wherever they come from, whether it's a user or something. And then, instead of that, you
(20:31):
decide to, you know, make some configuration, and the configuration has those name-value pairs in it now, and they're not passed in. So you're like, okay, I'll just, you know, remove these parameters from this method. We're going to assume that these two parameters are coming from configuration, and blah blah blah. So that'll break a test, because: oh well,
(20:54):
I'm still calling this with my parameters, and now I have to decide whether I should get those values out of the configuration before I pass them in, or, you know, that kind of thing.
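A minimal C# sketch of the refactoring Carl describes; the class, behavior, and configuration keys here are hypothetical, invented purely for illustration:

```csharp
using System.Collections.Generic;

public class ShippingCalculator
{
    private readonly IReadOnlyDictionary<string, string> _config;

    public ShippingCalculator(IReadOnlyDictionary<string, string> config)
        => _config = config;

    // Before the refactoring, callers supplied everything explicitly:
    // public decimal Calculate(decimal weight, string region, bool expedited)

    // After: region and expedited come from configuration, so every test
    // that called the three-parameter version breaks, even though the
    // observable behavior is unchanged.
    public decimal Calculate(decimal weight)
    {
        var region = _config["Shipping:Region"];
        var expedited = bool.Parse(_config["Shipping:Expedited"]);
        return weight * (region == "EU" ? 2.0m : 1.0m) + (expedited ? 10m : 0m);
    }
}
```

Tests that constructed the old three-parameter call now have to decide, as Carl says, whether to push those values into configuration before calling.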
Speaker 3 (21:06):
Well, so there's a whole bunch of patterns that you can lean into that will give you resistance to refactoring, which was the second pillar. And really, resistance to refactoring is something that is valuable to aim for, independent of whether you're doing end-to-end testing or unit testing. So I think the first thing is, in general, to keep implementation details
(21:28):
out of your tests if you can. And that's sort of high level. So what does that mean? Well, an implementation detail is... well, actually, okay, that's actually something I sometimes do when I give this presentation: I ask, what are implementation details? So, an example: if you're building, I don't know,
(21:49):
a consumer-facing application, so not building a library, but you're building something that users will use, right? If you have a database, would you consider that an implementation detail? Or would you say that that is, you know, something that... I think the opposite of an implementation detail is something that is externally observable. It's something that, you know, is part of the contract, and that it's okay to have in your tests. So would a database
(22:11):
be an implementation detail in your mind?
Speaker 1 (22:14):
The database itself? The structure of the database?
Speaker 3 (22:17):
Yeah, yeah, yeah. Could you have something of the database... could you have access, could you use... could you have Entity Framework in your test, for example, to put some data into the database before you run the test? Or could you query the database and see: did my changes actually get into the database?
Speaker 2 (22:32):
That smells bad to me?
Speaker 3 (22:34):
Yeah, well, you would be right. So implementation details are things like that. Databases are implementation details, in theory, in your tests. Your tests shouldn't care about what database you're using, because your tests should focus on business value, on what you're trying to test.
Speaker 1 (22:50):
You're introducing more complexity.
Speaker 3 (22:52):
Yeah. But another example could be: well, you have an app and it sends out emails whenever users do something. Well, you could actually take a dependency on the email-sending functionality in your tests, because it's externally observable. It's part of the external contract that you have with the users. So there's a whole host of things like this where you have to ask yourself: is
(23:14):
it actually something that, viewed from the outside, when they're looking at my application, or, if you're building a component library or class library, is it something that is external to the library? And if it is, then it's typically okay, because if you actually go and change the method that you talked about before, change the parameters, pass them in a different way, well, you
(23:36):
actually want your tests to break, right? Because you have changed the publicly observable behavior of whatever you were building. But if it's just an internal thing, maybe you took a method and split it out, refactored it into an internal class, or you took parts of a class that got too big and refactored them into a secondary class, well, in that case, I wouldn't necessarily
(23:59):
create a test suite for that refactored-out class. I would just test it again through the public surface I have of whatever I'm creating. So, thinking hard about how to keep implementation details out of tests... but it's also the hardest problem, honestly, in creating a good test suite. And usually, when you have... like, if you have whatever
(24:21):
you're testing, the class, for example: if it takes a bunch of parameters as input, and you have a whole set of tests that all need to do the same thing, set that system under test up and pass parameters to it, well, you can refactor that into a small helper method that sits next to the tests, and that instantiates the system under test for you. So when you actually have to
(24:44):
change things, you have one place to go. You don't
have to break all the tests.
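A sketch of the helper-method idea Egil describes, in hypothetical C#; the PriceCalculator and its parameters are invented for illustration, and only the pattern of a single creation point is the point being made:

```csharp
public class PriceCalculator
{
    private readonly decimal _taxRate;

    public PriceCalculator(decimal taxRate) => _taxRate = taxRate;

    public decimal GrossPrice(decimal net) => net * (1 + _taxRate);
}

public class PriceCalculatorTests
{
    // The single place that knows how to construct the system under test.
    // If the constructor grows a new parameter, only this helper changes,
    // not every test.
    private static PriceCalculator CreateSut(decimal taxRate = 0.25m)
        => new PriceCalculator(taxRate);

    public void Adds_tax_to_net_price()
    {
        var sut = CreateSut();
        var gross = sut.GrossPrice(100m);
        // assert on the externally observable result, e.g. gross == 125m
    }
}
```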
Speaker 2 (24:48):
Right.
Speaker 3 (24:50):
Or at least, when you do break them... I think another thing I have strong feelings about is if you are using a generic mocking framework, like Moq or NSubstitute, in your tests. So maybe you decided you have some business logic, and it's maybe talking to an API or back end, so you don't want to have that as part
(25:11):
of your tests. You don't want to do end-to-end testing in this scenario. So you have a surface, an interface, that you put that external communication behind. But if you then use Moq or NSubstitute or other mocking libraries, well, the way that the code you are writing
(25:31):
tests for is interacting with that external dependency, or out-of-process dependency, which is typically what you would use that for, that's an implementation detail more often than not, especially if it's a database or something that's internal to the application. If it's sending an email to an email server, that's a different story. But in the case where you have a database or an event hub or some other internal
(25:52):
out-of-process thing, if you use a mocking framework, you're likely going to have that setup in every test. You say: well, when this method is called on my interface, I want you to respond with this, and I want you to do X, Y, and Z. And you might also want to assert in your test, on the mock, that it received a certain number of calls with specific inputs and outputs. All
(26:14):
of that is implementation details. It's implementation details of how your system under test is using the dependency it has. So if you at some point change your logic so that you don't save to the database every time it gets a new thing in, but it keeps it around in memory a little bit, and then it's saved
(26:35):
maybe, you know, with one save. Well, if all your tests assume that it's going to save one time, every time it does a thing, now all your tests are broken, and they shouldn't be breaking, because the functionality is still right. The implementation details have changed, but not the functionality itself.
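To make that fragility concrete, here is a hypothetical Moq-style test; the IOrderRepository interface and OrderService are invented for illustration, while Mock&lt;T&gt;, Setup, Verify, and Times are real Moq APIs:

```csharp
using Moq;

public record Order(int Id);

public interface IOrderRepository
{
    void Save(Order order);
}

public class OrderService
{
    private readonly IOrderRepository _repo;

    public OrderService(IOrderRepository repo) => _repo = repo;

    // Current implementation saves once per order.
    public void Place(Order order) => _repo.Save(order);
}

public class OrderServiceTests
{
    public void Pins_an_implementation_detail()
    {
        var repo = new Mock<IOrderRepository>();
        var service = new OrderService(repo.Object);

        service.Place(new Order(1));

        // This assertion couples the test to how often Save is called.
        // If the service later batches writes, the behavior users see is
        // unchanged, but this test breaks.
        repo.Verify(r => r.Save(It.IsAny<Order>()), Times.Once());
    }
}
```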
Speaker 1 (26:51):
So let's talk about interfaces, shall we? I'm saying, you and me. Yeah, you know. An abstraction layer is critical when you're doing mocking.
Speaker 3 (27:04):
Well, yeah, right, Well you.
Speaker 1 (27:06):
Just want to swap one service in for another.
Speaker 3 (27:09):
So, yes. Although, well, in general, design-wise, I want to give a shout-out to Mark Seemann as well. He has an excellent book called Code That Fits in Your Head. I believe you actually did an interview with him on the podcast about that book. And in there, for people who want to double-click on this.
(27:29):
He talks about, you know, keeping the abstractions or interfaces at the edge of the system, and especially at the edge where you maybe have some out-of-process communication going on, where you have something that isn't easily spun up during a test, a unit test for example. So, in general, I am fine
(27:51):
with introducing interfaces at the edge, or if, you know, things start hurting. I don't introduce interfaces into my code just so I can do testing, unless that is the pain point. I actually prefer the testing style, I believe it's actually called the Chicago school of testing, or the classic school of testing, where
(28:11):
you test from the outside in, and if you touch one, two, five or more classes, it doesn't really matter. I don't want to test each class in isolation and put interfaces around everything, like every single class has an interface in front of it so I can mock it out if it's being used as a dependency somewhere else. I don't really see a lot of value in that.
(28:32):
And again, it also leads to just a lot of maintenance, and the maintainability of your tests just goes down.
Speaker 1 (28:36):
Mark Seemann is one of our favorite smart people. He is a great Dane, isn't he? He is.
Speaker 3 (28:43):
Frankly, he's quite a bit better at explaining some of these things than I feel I am. He's an inspiration, for sure.
Speaker 1 (28:50):
So I have a little story here. I was working for a company that had some data that they were pulling from... oh, let's just say it's stock data, and it was coming from a real-time place, and they wanted to have this real-time update of these tickers and stuff, right? Okay. But their secret sauce was secret,
(29:13):
and I didn't want to... I didn't want to interface with it, because that makes me liable in case of a data breach or something like that. So I said, look, right up front, we're going to make an interface at this edge, and I'm going to write a little number generator with a timer or whatever to send me my data over, and I'll pass that over to you
(29:35):
when I'm done, and then you can wire up your back-end stuff. I don't want anything to do with it. I'm a UI guy at this point, right? So I wasn't using any mocking framework or anything. We just created an interface, and it was their interface, and I said, all right, here's the data that I'm going to give you, here's the method that I'm going to call when I have new data, and Bob's
(29:56):
your uncle. So in that case, it was the business agreement that dictated that we use that method.
Speaker 3 (30:05):
Well, I think that's a completely fair use of interfaces. So, if you have something that is external to your application, put an interface in front of that. In domain-driven design, we call that an anti-corruption layer. You create a facade that sits in front of some external thing that you have to talk to, or that talks to you. And that is so
(30:29):
that if they change whatever they're doing, it doesn't affect your code. You have a facade.
Speaker 1 (30:34):
And more importantly, I don't know the implementation details on the back end, and that's a good thing.
Speaker 3 (30:39):
They're not important to you, exactly. And so, in general, for those kinds of things, even when I'm using a mocking framework, that's fine. But my go-to, actually, if I have something out of process that is also externally observable, is to just hand-code, you know, a custom mock or fake for that interface. Because instead
(31:03):
of having to repeat in every single test the setup code for whatever it needs to do, I just have a generic fake implementation of the interface, whether it's a repository or sending an email or something, and it will be using my domain language instead of the generic mocking setup language, which... well, you learn to read it,
(31:24):
but it's not the same language as the rest of your program. It's a custom API, right? So I find that that is valuable, and, more often than not, you can reuse those handwritten mocks or fakes throughout the different tests, because you tend to make them just emulate the behavior of whatever you're interfacing with. So the fake or
(31:47):
the mark itself, the handwritten one, will also be a
documentation for how you expect the the third party to behave.
So if that changes, you go and update the fake,
and then all the tests don't change because they still
have the same surface and likely on this there's actually
new behavior. Well the tests actually have to change. The
(32:09):
test will still keep running, so you have a secret
place you have to update instead of having you know,
you change them something in one interface, and then all
the tests break because all the bogs are now wrong
and you need to do update the configurations all the
tests that use the same interface or lock that interface out.
So yeah, it's about spending a little bit of extra
effort initially to create the fake versus just using the
(32:30):
generic marcking library, and it will save a lot of
time in the long run. I think in my experience.
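A handwritten fake of the kind described here might look like this. All names are hypothetical; the point is that the fake emulates the real dependency in memory and its helpers read like the domain, not like a mocking library's setup API.

```csharp
public record Podcast(Guid Id, string Title);

public interface IPodcastRepository
{
    void Save(Podcast podcast);
    Podcast? GetById(Guid id);
}

// One reusable fake shared by many tests.
public sealed class FakePodcastRepository : IPodcastRepository
{
    private readonly Dictionary<Guid, Podcast> store = new();

    public void Save(Podcast podcast) => store[podcast.Id] = podcast;

    public Podcast? GetById(Guid id) =>
        store.TryGetValue(id, out var p) ? p : null;

    // Domain-language helper for test setup, instead of
    // generic mock configuration repeated in every test.
    public void HasExistingPodcast(Podcast podcast) => Save(podcast);
}
```

If the real repository's behavior changes, this one class is updated and every test that uses it keeps the same surface, as described above.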
Speaker 1 (32:36):
That's very cool, and it feels like a good place
to take a break. So we'll be right back after these
very important messages. Stick around. Did you know you can
lift and shift your dot net framework apps to virtual
machines in the cloud? Use the Elastic Beanstalk service
to easily migrate to Amazon EC2 with little or
(32:57):
no changes. Find out more at aws dot Amazon dot
com slash elastic beanstalk.
Speaker 2 (33:09):
And we're back. It's dot NetRocks. I'm Richard Campbell. That's
Carl Franklin. Yo. Yeah, we're talking to our friend Egil
Hansen a bit about, you know, making really valuable tests too.
Recently we did a show with Debbie O'Brien talking about
the MCP for Playwright. Wonderful. And it made me start
thinking about how we make more resilient tests by not
writing the tests at all, but by writing prompts to
(33:30):
generate tests, so that we would be able to regenerate
tests pretty quickly with those kinds of tools. Like, how
important is it to handwrite this test code versus using
these generative tools for writing that kind of code? You
knew we were going to get to AI sooner or later.
Speaker 3 (33:47):
Yeah, well, we have to check that box. I think
it's an excellent question, and I am playing
around both with Claude and with all the AI tools
from GitHub as well. I think, you know, back to
my original point, a good test suite is something that
(34:09):
gives you confidence. I have tried to, was it,
vibe code, or just have an agent code, you know,
a library for me, something I needed for a production app,
something, you know, a bit more complex, not just a
simple thing. And it was in general a good experience,
like we would go through all the things, and I
(34:31):
told it to go in a very strict TDD fashion.
So we broke it down, this was Claude. I said,
you know, we're going to do TDD, and just break
down and take the most simple thing we can build
first and write a test for that, then write code,
and some back and forth. But at the end of,
you know, a long day of experimenting with that, that
(34:52):
was cool, and all the tests passed,
and the production code was there. But
honestly, I didn't trust it, because for me
the key part when I'm actually writing a test or
writing production code is that I'm thinking about
all the turns that I don't take while writing tests,
(35:14):
like thinking about, well, can I do this? No, that
doesn't work because X, Y and Z. And then I
discover something else I need to think about as well
when I sort of problem-solve my way through it. I
haven't found a way with AI agents yet to get
that same experience, where I know I've been through all
the edge cases and I've thought about them, because I've
gone through them one for one.
Speaker 2 (35:32):
But you only see those edge cases by writing the primary
case and bumping into them. So it's the thought exercise.
It is. Yeah, that happens, right. It's writing the code
that actually helps you find those edges.
Speaker 3 (35:45):
It helps. So one thing I have done is I
have tried to write the production code, the code
that needs some testing at some point, write that
by hand without tests first, and then tell, you know,
an AI agent of choice to go look at the
changes I have here and write tests that test these things. Right,
(36:08):
and if I have existing tests already, you know, look
at the existing tests, see how they're structured. Maybe I
write one or two tests myself, because I want it
to be done in a specific way, and then it
can typically iterate over it and figure out, well, these
are all the edge cases, these are all the branching
things we need to test for, and it will
build out a whole bunch of tests. But I've also
(36:29):
seen, when I looked at some of the code that
has been generated, sometimes there will be a test
that just doesn't make sense. So I
don't think it's a silver bullet. You definitely need to
do a thorough review, like you would if you
have an open source project and you get a pull request
(36:49):
in from somebody you don't know. You really have to
read it very carefully. And at that point, if it's
something very deeply technical, my experience is that the
amount of time it really takes to do a very thorough
review of the code you have received from somewhere, like,
it's almost as if you could have just written it yourself.
Like at that point, if you.
Speaker 2 (37:10):
Have the energy to do that. Reviewing takes the same
energy as writing it.
Speaker 3 (37:13):
So what I do like with LLMs is that they
are really, really good at reviewing things, though. So
using them with GitHub, assigning Copilot as
a reviewer, they usually always find something. Sometimes it's just nitpicking,
like you spelled this word wrong, but sometimes it
is actually something. So LLMs, I think, are much more valuable,
(37:37):
at least in my workflows right now, in reviewing what
I've done and giving me feedback, or having that back
and forth, where we are just iterating on ideas, sparring.
And what didn't I think about here? And there will
be something. So that adds lots.
Speaker 1 (37:52):
So we're getting into using models to generate tests, and
then using models, another model perhaps, to evaluate and validate
it and review it.
Speaker 2 (38:03):
And software arguing with software.
Speaker 1 (38:05):
Maybe maybe this is our future, you know, and we
are layers of models.
Speaker 2 (38:12):
But it speaks to, and I know you're working on
bUnit 2.0, it's like, should
there be an MCP for bUnit, just so that
those features are easily exposed to a prompting system?
Speaker 3 (38:23):
So I think it could be. I think our MCP,
what that would be is, you know, quite extensive documentation with
examples, so that you say, go and look at the documentation
and figure out how to make this test work. I
think that is a difference with Playwright
and the MCP. What they have is, because you're
actually running a browser, that is sort of an API. So
(38:46):
it's a slightly different scenario.
Speaker 2 (38:49):
I think that's a necessary abstraction, yes.
Speaker 3 (38:55):
And again, so now you can also say, well, if
you have generated a whole bunch of test cases
for whatever you're building, well, it's either another LLM
or yourself that has to maintain those going forward. So
there is a point to, you know, slimming that
down, or at least looking critically at them and saying,
(39:17):
is this actually a valuable test? Does it actually test
something that other tests aren't testing already? Right? I did try,
which is something I want to experiment more with, have
you heard about Stryker dot net? It's a mutation testing
framework. Well, it exists for different platforms.
So with mutation testing, what it does is it
takes your test suite and your production code,
(39:39):
and then it finds conditionals in your production code, and
it flips that conditional, at a very
basic level, and then it runs all your tests. If
all your tests pass with that flipped conditional, well,
then you have, they call it, a mutant that survived.
It's, you know, from the X-Men universe. And
you set it loose on your code base.
(40:02):
You need to have everything building and compiling and all
tests passing before you do that. But it will then
spend ten minutes, fifteen minutes, as long as you want it
to go, just going through all your production code finding
bits to flip. And it will do that, and it
will run all your tests, and if no test fails
for a mutant, well, then it's reported back
to you in a report you
(40:23):
get after the run that tells you, well, I flipped
these conditionals and there were no tests that broke when
I did that. So I find that to be a
much better metric for whether or not I have good
code coverage or not, because just crossing over a
line during a test doesn't mean you actually, you know,
I'm sitting here on video doing the, you know.
Speaker 2 (40:45):
Podcast That doesn't work.
Speaker 3 (40:49):
So I actually asked Claude to download Stryker,
because it claimed that it was done, it had done
all there was to do for that library, low-level library building,
and then run this against it, and then it spent
two hours more just running Stryker, looking at the report,
and adding more tests. So that is, you know, a
way to go forward and find the tests that
(41:10):
you haven't written yet.
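As a rough illustration of what a surviving mutant looks like, here is a hypothetical example in the spirit of what is described above (not Stryker's actual output):

```csharp
// Production code under test: bulk discount from 10 items.
public static class Discounts
{
    public static decimal Apply(decimal price, int quantity) =>
        quantity >= 10 ? price * 0.9m : price;
}

// Suppose the only test in the suite is:
//   Assert.Equal(90m, Discounts.Apply(100m, 20));
//
// A mutation tool rewrites '>=' to '>'. The test above still passes,
// so the mutant survives: nothing exercises the boundary at exactly
// 10 items. The report points you to the test you haven't written:
//   Assert.Equal(90m, Discounts.Apply(100m, 10));
```

Stryker.NET itself is typically installed as a dotnet tool and run with `dotnet stryker` from the test project directory, producing a report of which mutants survived.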
Speaker 2 (41:11):
Interesting. Yeah, because we're presuming all mutations are bad. You
know, that's very X-Men.
Speaker 3 (41:16):
Well, no. So yeah, if you go to the website, Stryker
dot net, if you go and look there, you
will see there will be mutations where you can see that
it's, like, unreasonable code as soon
as they flip that bit.
Speaker 2 (41:31):
You said Stryker dot net, but isn't it stryker-mutator dot io?
Speaker 3 (41:35):
Maybe they changed the URL.
Speaker 2 (41:36):
Yeah, because that's the site with JavaScript, and C
sharp, and Scala.
Speaker 3 (41:40):
Yeah, yeah, it's, uh, yeah, you're right, Stryker.
Speaker 2 (41:43):
Mutator. I'll include it in the show notes for folks who want it.
It helps you find the code that has leaked out of
your head.
Speaker 3 (41:51):
So it will also report things where, obviously, it can flip a
bit and then the code doesn't compile anymore, and that is
also counted, as, you know, that's just static typing.
Speaker 2 (42:03):
That's easy to do. I do that all the time, exactly.
But I love this idea that it makes changes
to code and the tests still pass. But still the question
is, is the code now incorrect?
Speaker 3 (42:16):
Well, the point is, it's about checking if your test
suite is giving you protection against regressions. Because if
I can go and flip the bits and none
of my tests are breaking, it might just
be nothing, but it might also be that
I've broken the behavior of my application. Absolutely. So you
want your test suite to catch those
things if they're important. So, yeah, typically when I run
(42:39):
Stryker on my code, there is quite a large list,
and some of them you just say, well, that's not
an issue.
Speaker 2 (42:46):
I'm not going to do that. Yeah. Egil, do you
hate nulls as much as I do?
Speaker 1 (42:50):
For testing? Aren't they like the worst? When you create
a test, should your framework automatically
check for nulls before you actually add more tests?
Speaker 3 (43:02):
Well, I think, well, we're talking dot Net.
We have a compiler that will do that
for us, so I will...
Speaker 1 (43:11):
Will use If they change it run time and you're
not doing any null checking, there's a runtime error that
can occur and often does.
Speaker 3 (43:20):
So I think that's the difference between what kinds of
things you're building. If you're building a component, or
a reusable library that you publish as a NuGet package,
you would want, even if you have nullable enabled in your library
when you're building it, and you use all the
right attributes, your end user, which is not in the
(43:42):
same compile space as you are, can pass in null. Yeah, they
can pass in a null, because they might have
turned nullable off in their compiler.
Speaker 2 (43:51):
Right.
Speaker 3 (43:51):
So, if you're building a library, definitely
on all your public API surfaces, I add null checks,
or in general, you know, just use the various argument-out-of-range
and all of those checks, because, well, we
do that in bUnit. You can't trust
your users to do the right thing, or they might
just have a policy of not using nullable in their code base.
(44:13):
If I'm building a line-of-business application or internal
applications, where everything gets compiled by my compiler,
however I configure it, I am much more inclined to
just trust that. You know, if we've said that this
object here cannot be null when you pass it into this method,
well, the compiler will tell me if I'm trying to
(44:33):
pass in a null. I think especially the later
versions of C sharp have gotten very good at basically
understanding how objects flow, and capture that.
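A guard on a public library API, along the lines described, might look like this. The type and method are hypothetical; `ArgumentNullException.ThrowIfNull` is available from .NET 6 onward.

```csharp
public sealed class PodcastUploadManager
{
    // Public NuGet-facing API: callers may have nullable disabled,
    // so the annotations alone don't protect us. Guard at the edge.
    public void Upload(Podcast podcast, Stream content)
    {
        ArgumentNullException.ThrowIfNull(podcast);
        ArgumentNullException.ThrowIfNull(content);

        // From here on, both arguments are known non-null, so the
        // internal code can rely purely on the nullable annotations.
    }
}
```

Internal line-of-business code compiled in the same solution, as noted above, can often skip such guards and trust the compiler instead.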
Speaker 1 (44:42):
Okay, it has, but when we turn nullable on, now
we see a lot of squiggles from the compiler in
Visual Studio. Yes. And I think of them sometimes as
noise, because it's everywhere, and I know that could be
null, but it's not, because I'm clearly looking at the
code. And I'm probably not alone in that. A
(45:04):
lot of developers tune that out, you know. I hear you.
Speaker 3 (45:09):
I would strongly recommend, to really work well with
nullable enabled, you have to enable it, and then I
also always enable treat warnings as errors, at least in
my release builds. And then you simply don't, like, if
you have, for example, if you know you're
talking to external things, like it's not part of your
(45:29):
internal business logic domain, where sometimes you can have a
null come in, or the library will tell you that
this may be null, even though you know in all
my use cases it's never going to be null. Well,
I just put a guard in at the edges, so I
know that within my code everything is not nullable.
And it's a bit annoying, sometimes you have to add
(45:51):
an additional check just because, right.
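In a csproj, the combination described, nullable enabled plus warnings as errors in release builds, might look like this. A sketch of common MSBuild settings, not taken from the show:

```xml
<PropertyGroup>
  <Nullable>enable</Nullable>
</PropertyGroup>

<!-- Only fail the build on warnings in Release, so local iteration stays fast. -->
<PropertyGroup Condition="'$(Configuration)' == 'Release'">
  <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
</PropertyGroup>
```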
Speaker 2 (45:53):
But I do that all the time.
Speaker 1 (45:55):
It's in my DNA to put those sentries in
place at the edge.
Speaker 3 (46:00):
So, if you force yourself into not accepting
any nullable warning squigglies and simply deal with them at
the edges, then you are much less
likely to have problems, at least in my experience.
Speaker 1 (46:15):
But, you know, except now you're going to spend
an extra hour getting rid of those squiggles.
Speaker 3 (46:21):
I'm sure Copilot can help you with that.
Speaker 2 (46:23):
I'm sure exactly.
Speaker 1 (46:26):
That's a good use for AI. Hey, get rid of
all them squiggles.
Speaker 3 (46:29):
They are actually quite good at figuring something like that out.
And obviously you have to do a bit of monitoring
on on things as well afterwards, reviewing the changes.
Speaker 2 (46:39):
Right.
Speaker 3 (46:40):
Actually, speaking of Mark Seemann, I wanted to call out
something else which I find quite interesting, which is the
naming of tests. Because there's no
right or wrong here, but he had a good blog
post a while back, so let me test
out the scenario on you two. So imagine we have, I
(47:05):
don't know, a podcast manager class, and we
want to write, it has a feature where you can
upload a new podcast, and it maybe has to go
live at a time in the future. So we want to write
a unit test for that feature. So just off the
top of the head, what would you name that test?
Like the test where we're just uploading a file, calling
(47:27):
the surface of the API and saying, oh, here's this
new podcast, and it has a go-live date, and, you
know, some assertions should happen as well.
What name would you give that test?
Speaker 1 (47:39):
Well, if I worked for Microsoft, I'd call it uh
foundation framework for testing podcast uploading and checking to see
if it's valid.
Speaker 2 (47:52):
Yes, something like that.
Speaker 3 (47:54):
So, something with the whatever you're using, then
maybe the action, and then some expected outcome assertion. Right, yeah.
And so Mark had a really great point. Whenever, well,
at least for my sake, I guess for Mark's as well,
whenever I read a test, when I come into a
(48:16):
test I haven't seen before, that somebody else has written,
I actually usually just skip over the name of the
test and start reading the code, because my experience is
that you can't fit all the details in
the name anyway, so you'll just end up reading the test.
And then the name is more of a mandatory comment,
(48:36):
because methods have to have a name in C sharp,
but they can also tend to go stale. Like, so
many times I've seen a unit test
where the name included, for example, the assertion
or expected outcome, but the outcome changed
because business requirements changed, right, but they forgot to update
(48:56):
the name of the test. So now you have it,
you're reading the code while you're looking at a test
name, and they don't match up, and you're going back
and forth. What is the right thing here?
Speaker 1 (49:03):
So usually they start with, if I wasn't being flippant,
I would say they start with should.
Speaker 3 (49:07):
Yeah, because that's another good word, you know,
filler words as well. Well, that's another pet peeve. So
I think my recommendation, and what I learned from
Mark as well, was to just use the business scenario
in the description. So in this case, it might be
something like uploading new podcast with future release date.
(49:30):
So that is the scenario. As long as
you have that feature, it's not going to change, so
the name of the test is not going to change,
but then you fill out the body of the test method anyway.
So again, that makes it so that you have an easily
digestible test when you get back to it half a
year later and are figuring out why it's broken, or you
(49:50):
just want to understand what's going on.
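In xUnit, that scenario-based naming style might look like this. The types and API are hypothetical, just illustrating that the name states the business scenario rather than the expected outcome:

```csharp
public class PodcastManagerTests
{
    [Fact]
    public void Uploading_new_podcast_with_future_release_date()
    {
        // The scenario lives in the name; the details live in the body.
        var sut = new PodcastManager();

        sut.Upload(title: "dotNetRocks", releaseDate: DateTime.UtcNow.AddDays(7));

        // Assert whatever the business rule currently requires. If the
        // expected outcome changes later, the body changes but the name
        // still describes the same, stable scenario.
        Assert.False(sut.IsLive("dotNetRocks"));
    }
}
```

Compare this with a name like `Upload_WithFutureDate_ReturnsNotLive`, which goes stale as soon as the expected outcome changes.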
Speaker 2 (49:52):
Very good.
Speaker 3 (49:53):
Small details like that are important.
Speaker 2 (49:54):
I think we tend to name the test after the
class that you're testing and end it with Tests, so
you can sort out the tests if you want to,
like I kind of want to. I guess I'm always
thinking from a search perspective, like, where am I going
to find this? You know? Where is this going
to live? What is it related to?
Speaker 3 (50:12):
Well, so I think in dot Net, or at least
in my code, we have all the tests sit in
a separate test project, and I
don't tend to have a test class per class
I have in my production code, but if it's
the public surface API, I
(50:33):
will have a, you know, class named, you know,
PodcastUploadManagerTests, and that is where all
the tests belonging to that sit. That's one approach. You
can also go full BDD and
Gherkin and all of that, and have just different
sets of tests. Or if you are in a different
(50:56):
coding environment where you can actually have test code sitting
right next to the code it tests, or classes just mixed
into it, or whatever terminology they have for their, you know,
test code and production code, that's also quite valuable.
It's a bit leaning into the feature folders concept, where
everything that does belong together is sitting in the
same area on disk, so you can easily find everything
(51:19):
that does something together. But I don't think
that's a good option in the dot Net ecosystem, at least.
Speaker 1 (51:25):
So what's next for you? I know you said you're
working on bUnit 2, yes, and is that taking
all your time? Or, I mean, you're doing a lot
of conference speaking and that kind of stuff.
Speaker 3 (51:38):
Well, so bUnit is taking a little bit of
my free time. No writing books for me. I've
heard from better folks than me that it's not,
you know, it's not a good idea to get
into that too much. But yeah, I have a
conference coming up. I think Richard will also be there,
in Stavanger. Hello, Stavanger in Norway. And, yeah,
(52:04):
I'm using the Norwegian pronunciation perhaps. But most
of my day-to-day job, it's a
full-time job and a little bit, so open source isn't
getting a lot of attention these days. But we are
trying to sneak in a few hours every week, me
and Steven, to just jump on a screen share or jump
on a call and get some work done. So, yeah,
(52:25):
that is, you know, going back to
the original comment that you read, Richard, at the start
of the podcast, from, I forget the name, but yeah,
open source is, you know, the work that you do when
you don't have work and you don't have family time.
(52:46):
So yeah, it's a bit on the
back burner. But I'm waiting for AI agents to come
and take my open source job from me, so
I give it instructions and it will do all the
right things. It hasn't happened yet, but we might
be lucky.
Speaker 1 (52:59):
Yeah, fantastic. Egil Hansen, it's been a pleasure talking to you
this hour, and it's always interesting when we sit down
and talk. Likewise. And we'll talk to you next time
on dot NetRocks.
Speaker 2 (53:32):
Dot Net.
Speaker 1 (53:32):
Rocks is brought to you by Franklins dot Net and produced
by PWOP Studios, a full service audio, video and post
production facility located physically in New London, Connecticut, and of
course in the cloud online at pwop dot com. Visit
our website at d O T N E t R
O c k S dot com for RSS feeds, downloads,
(53:55):
mobile apps, comments, and access to the full archives going
back to show number one, recorded in September two.
Speaker 2 (54:01):
Thousand and two.
Speaker 1 (54:03):
And make sure you check out our sponsors. They keep
us in business. Now go write some code. See you
next time.