Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:08):
Welcome to the Angular Plus Show, where app developers of
all kinds share their insights and experiences. Let's get started.
Speaker 2 (00:21):
Hello and welcome to another episode of the Angular Plus Show.
My name is Lara Newsom. I will be one of
your hosts today. With me, I have Q.
Speaker 3 (00:29):
How's it going? Great. How are you doing today?
Speaker 4 (00:32):
I'm good.
Speaker 2 (00:33):
You surviving the pollen attack that is currently plaguing the Midwest?
Speaker 3 (00:38):
Well, you know, I'm very allergic to everything, so not really.
Speaker 2 (00:46):
What the listener can't see is that Q is currently
in a plastic bubble, so I am.
Speaker 4 (00:53):
To keep him protected, if it... if...
Speaker 2 (00:54):
He sounds a little tinny, it's just the reverb off
the walls of the bubble.
Speaker 4 (01:00):
Chau is also with us today. Chau, how is it
going and how are you? I am, I am great.
I don't need a plastic bubble.
Speaker 5 (01:08):
But yeah, and my son, he's struggling with the pollen
attack right now. Yeah, yeah, his eyes were all swollen this morning.
Speaker 2 (01:17):
Oh no. I swear, it's like we've become weaker
human beings, because, like, I don't remember allergies really at
all when I was a kid.
Speaker 4 (01:25):
But maybe I was just really lucky.
Speaker 3 (01:28):
Yeah, we all have to live in the central US,
so it's really bad.
Speaker 4 (01:33):
It's true. And our guest today is Younes. Younes, how
are you? What's the pollen report where you're at?
Speaker 6 (01:42):
It's pretty okay.
Speaker 7 (01:42):
But if you have any plastic bubbles, are they soundproof?
Because I'd love to put my kids inside.
Speaker 4 (01:50):
With a little, like, airflow.
Speaker 2 (01:51):
It's kind of like one of those hamster balls that
you can put them in.
Speaker 4 (01:55):
And yeah, it's amazing how loud children can be.
You don't really appreciate it.
Speaker 2 (02:03):
I was telling you before the show started that I
had family members over and we had three kids under
seven in the house. What I forgot to add is
that then my cousin came over with her son who
he's like seven. So when we threw him into the mix,
like the volume just went up exponentially. So there's like
some sort of formula for noise and children and.
Speaker 7 (02:25):
Yeah, and it gets louder as they get older, and
I don't know when it stops.
Speaker 6 (02:30):
Maybe it doesn't stop.
Speaker 8 (02:30):
I don't know.
Speaker 2 (02:32):
The good news is that once they become teens, they
sleep more and so there's like more quiet time. But
sometimes their loud time is like midnight and they'll they'll
have to yell at the TV or you know, a
video or whatever, and that's shocking.
Speaker 4 (02:48):
So well, we didn't just come to talk.
Speaker 2 (02:52):
About pollen and children. But what we did come to
talk about is, well, we'll talk about testing, right. I
don't think we could have you on the show without
talking at least a little bit.
Speaker 6 (03:04):
About testing, right? Totally, we'll talk about it.
Speaker 2 (03:12):
The one thing that I think we could talk about
is testing. So, I guess I didn't let you
introduce yourself. I always just assume everyone knows who you are,
but for the listeners who don't know who Younes is, would
you like to introduce yourself?
Speaker 6 (03:27):
Yeah. So I'm an old software cook and.
Speaker 7 (03:34):
I'm really passionate about long-term software, you know, how
to keep things cheap to maintain, not just building things
and hacking around, even though that's fun.
Speaker 6 (03:47):
But that's what I'm focusing on, like the long, long run.
And one of the key things that.
Speaker 7 (03:53):
I'm focused on is testing in general, and mainly Angular testing.
So I have, like, Angular testing workshops running, and a video
course and stuff like that, and also, like, a free cookbook
Speaker 6 (04:06):
With some items there.
Speaker 7 (04:08):
Anyway, so I'm really trying to improve the testing experience
because testing should not be painful, and if it's painful,
then there's something wrong.
Speaker 8 (04:19):
There.
Speaker 9 (04:19):
Is it an actual cookbook? Yeah, it's online, but yeah,
like, like French toast.
Speaker 2 (04:27):
And stuff, how to test French toast? You cook it
and then if it's good, big green pipeline.
Speaker 5 (04:36):
I can test it. I can test French toast all day.
Speaker 4 (04:42):
Nice.
Speaker 2 (04:42):
So I wholeheartedly agree with you, and I love that
you brought up the testing for long term maintenance because
one of the scariest things about like updating an Angular
application is not having confidence that the update didn't break something.
Speaker 4 (05:02):
So without a robust.
Speaker 2 (05:06):
Suite of tests, you go to update your Angular app
and you're going to find out the hard way.
Speaker 4 (05:12):
That oopsie, that.
Speaker 2 (05:13):
That seemingly minor change really did break stuff. So
I love the idea of writing tests for maintenance. Also,
just refactoring like.
Speaker 4 (05:25):
Fantastic.
Speaker 7 (05:26):
That's a really important point there. Like, how do
you refactor without breaking your app, of course, but also
without having false negatives and false positives in your tests?
So that's, like, one of the big challenges: how
(05:46):
do you make sure that, uh, you don't have, like,
a false negative, where your test is saying, yeah, everything's okay,
so you're good deploying, and then everything is broken? Or a
false positive, where your test is, like,
Speaker 6 (05:58):
Oh, there's a bug there, but actually there's no bug there.
Speaker 7 (06:01):
So, and that's where testing becomes painful. Because, like, you
change a little thing, like you switch from observables
to signals, and oh, hundreds of tests are broken. Why?
Isn't that an implementation detail? And that's really the thing
I'm trying to focus on and help people with.
You know, how do you design your tests, like, beyond
tooling and everything? How do you design
(06:23):
your tests so that you're not coupled to implementation details?
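For listeners who want to see the idea in code: a minimal sketch of a behavior-focused test, using Angular Testing Library (which comes up later in the episode). The component and the rendered text are hypothetical stand-ins, not from the episode.

```ts
// Hypothetical behavior-focused test: it asserts on what the user sees,
// not on how the component produces it.
import { render, screen } from '@testing-library/angular';
import { ProductsComponent } from './products.component';

it('shows the products from the catalog', async () => {
  await render(ProductsComponent);

  // Only the rendered DOM is asserted, so switching the component's
  // internals from RxJS observables to signals should not break this test.
  expect(await screen.findByText('Blue Teapot')).toBeTruthy();
});
```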
Speaker 4 (06:28):
Absolutely, yeah, that's such a great point, because I definitely
have... you go in, you make
Speaker 2 (06:33):
a very minor change, it really shouldn't be that hard,
and then not only do you break your test, but
somehow you manage to break, like, thirty-five other unit
tests as well, and that can be... it's painful, and
that is where a lot of time and churn happens
in the software development life cycle. So...
Speaker 7 (06:52):
Exactly exactly, So that's when there are two pathways that
are extreme.
Speaker 6 (06:57):
The first one is that you don't refactor.
Speaker 7 (07:00):
Your code keeps getting, like, less and less adapted to your
current needs. Or, the other way around, it's just, like,
you stop testing, or you remove your tests.
Speaker 2 (07:11):
Yeah, or you... yeah, YOLO, just, like, it's good,
we'll see what happens. Yeah. And
it's very frustrating to come into... like, I've definitely
come into, like, legacy applications where, you know, you've got
a thousand-line component and you're like, I need to
refactor this thing, but there are very few unit tests,
(07:33):
and the unit tests that are there are not valuable in any way.
And how do I confidently start to refactor this?
Speaker 6 (07:42):
Yeah, and that's a very good point.
Speaker 7 (07:45):
Like, my technique for that, what I call it, is:
whenever, like, you start bringing tests, or valuable tests, to
Speaker 6 (07:58):
A legacy application, is what.
Speaker 7 (07:59):
I call the sandwich strategy. You start with narrow
tests for, like, super... So, I don't use unit
versus integration.
Speaker 6 (08:10):
We can talk about that later.
Speaker 7 (08:12):
But I use, like, super narrow tests that are very
focused, like this tiny calculation function, and it's very isolated
and synchronous and whatever, so it's pretty easy to test,
so that people start learning
Speaker 6 (08:22):
also, testing with that. And wider tests,
Speaker 7 (08:26):
because your legacy code, if it's not tested, it's going
to be hard to test. Yes. Because with tests, and
TDD for example, test-driven development, where you start with
the test, it enforces a design that helps testing.
Speaker 6 (08:42):
And so when you have legacy code, you have to
start with wider tests. And I don't know how wide
that is. Is it an end-to-end test, or
is it somewhere in between? I don't know. We can
talk about that. And then you start zooming in progressively
and refactoring your code progressively. And there are
Speaker 7 (08:58):
Also some uh what I call ephemeral tests, which are
tests that you just been there to fix the legacy
coal and keep it working while you're refacturing and adding
other tests and redesigning your whole thing. Then you can
you can remove them or remove them like a scaffold
around a building. You know, that's something that we don't
(09:18):
see that much.
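As an illustration of the "super narrow" end of the sandwich, here is a sketch of the kind of tiny, isolated, synchronous test described above. The function and the Vitest imports (a tool discussed later in the episode) are assumptions for the example.

```ts
import { describe, expect, it } from 'vitest';

// Hypothetical tiny calculation function: isolated, synchronous, no I/O.
export function applyDiscount(price: number, rate: number): number {
  return Math.round(price * (1 - rate) * 100) / 100;
}

// The matching narrow test: no TestBed, no async, no test doubles.
describe('applyDiscount', () => {
  it('applies the rate and rounds to cents', () => {
    expect(applyDiscount(19.99, 0.1)).toBe(17.99);
  });
});
```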
Speaker 4 (09:19):
Yeah, that makes sense.
Speaker 2 (09:20):
I actually do that kind of testing somewhat. Just,
you know, when I'm setting up a test... you know,
if I write a, you know, "component did create" test, I
don't keep those tests. They're not... like, obviously, if my
unit tests run, it created fine. But I always write
one of those to get started, to be like, did
(09:42):
I get
Speaker 4 (09:43):
My setup right?
Speaker 6 (09:44):
Yeah?
Speaker 2 (09:45):
Because I think there's a fine art to setting up
a test, especially if you've got sort of a complex
component or service that you're trying to test. So, totally. Yeah, interesting. Okay.
So, I'm sorry... oh, I was going to... just,
so you've got this idea about wide versus narrow. Why
don't you... when you're trying to deal with legacy code,
(10:07):
and you're going to refactor your legacy code, why don't you
start with narrow tests?
Speaker 6 (10:17):
It's bad for the team's morale... how do you call it?
Not the moral, the morale?
Speaker 7 (10:23):
And yeah, and so you have to also take care
of that, okay? Yeah, you have to find a strategy
that is very satisfying for the business and for the developers.
Speaker 6 (10:39):
Otherwise, by the time, like, the team gets testing right,
or, I don't know what right means, but anyway, better...
it's better if they
Speaker 7 (10:52):
don't have too many narrow tests that are all of
different kinds, and just start doing them progressively, once
they get the skill, you know, and also once the
design gets better and everything. Otherwise they might struggle a lot,
like, how do you refactor?
Speaker 6 (11:11):
Like let's say we're in.
Speaker 7 (11:13):
You have, like, legacy Angular components calling, like, thirty
different services, doing some crazy observable RxJS madness there. And
how do you test this whole thing? How do you
narrow down things? It's very complicated, so
Speaker 6 (11:30):
you want to narrow down, but the component is already
too big. So how do you, how...
Speaker 4 (11:33):
Do you mean you know, you eat that elephant?
Speaker 6 (11:38):
How do you slice the elephant?
Speaker 2 (11:40):
Yeah, that makes sense. And I like that you pointed
out team morale as well, because... I've definitely
been on projects where we were like, all right, there are no
unit tests on this. It was actually a React app;
we had to bump the versions. There
were known vulnerabilities in some of the libraries that we
(12:01):
were using, so we needed to bump the versions.
Speaker 4 (12:03):
But there were no unit tests, so it was like,
how do we what do we do?
Speaker 2 (12:08):
And somebody decided to get one hundred percent test coverage
on everything. Which... it's a fun game to play, if you're
like, can I get one hundred percent test coverage?
Like, you totally can. That's the game I
play when I just need to get an A-plus
for the day: like, I did it, I got my
(12:29):
one hundred percent. But that's when you also find out
that, like, oh gosh, so much of this code is unreachable,
and, you know, I'm so tired, I've been testing
these three components for the last three days, and I
just wish my life wasn't doing this right now.
Speaker 7 (12:48):
That's exactly it. When you start aiming for, like,
the hundred percent test coverage, or code coverage, or stuff like that,
that's when, like, the testing kicks you back in
the face, you know. And you have to
think of that differently. Because, so, the first
(13:15):
thing about code coverage is that it's a bad
indicator, because you can have, like, one hundred percent coverage
of something that is not working at all. Basic example:
imagine an Angular component where you're displaying a spinner when
something is loading, and you're displaying an error message if
there's something wrong, and you're displaying the data. And, for
some reason, everything is
Speaker 6 (13:37):
Displayed together, that's a big bug.
Speaker 7 (13:40):
It's a very major problem, because you have the spinner
above the data. But you have one hundred percent coverage,
Speaker 6 (13:45):
because the test is fine. Everything, everything is fine. And, and, and...
Speaker 7 (13:52):
Also, that brings us to, like, what are we testing?
Are we testing
Speaker 6 (13:59):
Code or behavior?
Speaker 4 (14:01):
You know?
Speaker 6 (14:04):
What I want to cover is one hundred percent of the behavior,
of the feature, not the code itself.
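A sketch of what testing that behavior, rather than the code, could look like: asserting that the spinner, the error, and the data are never displayed together, which line coverage alone cannot catch. The component and the texts are hypothetical.

```ts
import { render, screen } from '@testing-library/angular';
import { ProductsPageComponent } from './products-page.component';

it('hides the spinner and the error once the data is displayed', async () => {
  await render(ProductsPageComponent);

  // Wait until the data state is reached...
  expect(await screen.findByText('Blue Teapot')).toBeTruthy();

  // ...then assert the other states are gone. A suite can hit 100% line
  // coverage without ever checking this mutual exclusion.
  expect(screen.queryByRole('progressbar')).toBeNull();
  expect(screen.queryByText(/something went wrong/i)).toBeNull();
});
```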
Speaker 4 (14:10):
Yeah, so let's talk a little bit more about that.
Speaker 2 (14:12):
When you say testing behavior... so, if you're going to
test the behavior of an Angular application, how do you
go about doing that?
Speaker 6 (14:19):
Yeah?
Speaker 7 (14:20):
So the best approach is to stick with the business.
Speaker 10 (14:23):
You know.
Speaker 7 (14:24):
So if you have, like... it's a common problem, and
a good way to start... so, let's talk about that. Let's bring the
unit tests here. So let's start with the unit test.
Speaker 6 (14:41):
So, uh, because the common problem is, people are like,
what do I test? What should I test?
Speaker 7 (14:46):
And that's the common question in testing. Like, people are like, okay,
I want to start testing, but where do
I start? And people either talk about end-to-end
tests or unit tests. And let's start with, what's, for
Speaker 6 (14:58):
Each of you.
Speaker 7 (15:00):
Each of you should come with a definition of a
unit test. I'm really curious, like, what is your definition?
Speaker 4 (15:10):
Right?
Speaker 2 (15:11):
So, you know, if I were to explain it to
my team: the boundaries that we sort of draw with
unit tests are the functionality that exists within the code
under test. Like, if we're getting into trying to test
the functionality of a child component, that might not be
(15:31):
the right place for that test. Like...
Speaker 3 (15:34):
We should still test that, but, like, in that component's own test.
Speaker 4 (15:39):
Yeah. And, or, we might use a different tool, right? So, like, we
So like we.
Speaker 2 (15:42):
Might instead of saying this is my this is my
fiddly test that's testing all the all the different states
that this component might be in, and now I'm also
going to test how the child works, that we would say, okay,
the component test is testing all the fiddly bits that
it can do. So if you're an admin user, you
(16:02):
should see it like this. If you're a read only user,
it should look like this. And then we would have
another layer of tests, which would be like and now
you should now in this case where tests and this
is where we would do something like playwright to say
does the parent component interact with the child component?
Speaker 4 (16:22):
And so I think, and I'll let you answer.
Speaker 2 (16:26):
I try to follow this sort of idea of narrow
and wide, but then also try to layer unit and
integration on top of it. And I think it's
hard for people to define those boundaries, but
that was my idea. So we should make Chau and
Q also answer. So, Chau, what's your definition of a
(16:47):
unit test?
Speaker 5 (16:50):
I don't... I don't do unit tests. Oh my gosh. No, no, no,
I mean, like, I don't do... I don't do component
unit tests.
Speaker 2 (17:02):
I mean, right, like... what testing tool
do you use, then?
Speaker 6 (17:07):
Uh?
Speaker 5 (17:08):
For components, I just use mainly Angular Testing Library,
Speaker 10 (17:14):
too.
Speaker 5 (17:15):
So, in your scenario, I would test the interaction as well, yes,
with the child components. Now, when
it comes to Playwright end-to-end tests, I
would test only the happy path.
Speaker 3 (17:30):
Okay. So for your component tests, you
test the happy path only?
Speaker 5 (17:35):
No, no, no, no, for the end-to-end tests, for the... Yeah,
yeah, I gotcha, gotcha.
Speaker 3 (17:39):
Yeah okay, and.
Speaker 6 (17:42):
I didn't get it. What about the components? How do
you test?
Speaker 8 (17:45):
Like?
Speaker 11 (17:45):
How do you zoom in?
Speaker 4 (17:46):
Like?
Speaker 12 (17:47):
Yeah?
Speaker 5 (17:47):
I mean, like, I... so, I used to be a
big fan of shallow testing.
Speaker 6 (17:52):
Okay, so I mock everything.
Speaker 5 (17:55):
I mock the child as well. But now I don't.
But then I do get into... so, when
I test a component, I test all the interactions
that that component has. So now
I tend to get into a scenario where, especially with...
(18:16):
So, I work not in Angular nowadays; I work with, like, Remix or React Router,
where you usually have, like, a page component that
gets rendered by the route. So you tend to just
propagate all the interactions to that component, so that
that page component can talk to the route, uh, the
(18:36):
back end, whatever. Now that component, that page, can go
indefinitely deep.
Speaker 8 (18:44):
So that now.
Speaker 6 (18:45):
At what layer?
Speaker 5 (18:46):
At what level do you stop the interaction?
Speaker 4 (18:49):
Right?
Speaker 5 (18:50):
So that's the... that's the thing that I've been
fighting every single day.
Speaker 3 (18:55):
Yeah.
Speaker 6 (18:55):
Yeah, and the Remix stub.
Speaker 4 (19:02):
How about you.
Speaker 9 (19:03):
Yeah, I guess mine's different, because you guys are Playwright
guys and I'm a Cypress guy.
Speaker 3 (19:07):
But I do use Cypress for component testing, specifically.
Speaker 4 (19:12):
Cypress component testing, like that?
Speaker 9 (19:14):
Yes, yeah, I use the test --component flag. So,
unit testing for me, generally: I try to keep all
of my logic, business logic, in services, so I mostly
test services. But if there's some component-specific
functionality that I have to keep in the component
for whatever reason, that will be a unit test. But
(19:35):
I do keep it black-boxed, so any services... I
mock those units. At one time, I remember reading a comment,
right after, right after Signals came out. We had this
whole thing on the forums about newing up a component
for testing, and I was like, well, this unit guy
(19:55):
knows what he's talking about.
Speaker 3 (19:56):
Because he was agreeing with me. I was like, I
needed to do it this way.
Speaker 9 (20:00):
Yeah, the guy's a genius. And I prefer to new up a
component in the test and just test functionality that way.
If I have to do a bunch of TestBed stuff,
I feel like I'm doing something wrong.
Speaker 3 (20:10):
So yeah, I just test it.
Speaker 9 (20:13):
If I new up this component class, then I can
test the component class functionality: I pass
this in, I expect this. And that's what I want to do. So,
really old-school black-box unit testing. Component testing: I will test all
the layers, so any type of integrations it has with
any child components or services, I can actually test those,
(20:35):
test the behaviors, make sure those are working right.
And Cypress does a good job at showing, like, interaction
coverage, so you can see, like, what buttons are
pressed and what's rendered, and are you actually asserting on
those two things. So I use it for that.
And then our end-to-end tests run on Playwright, and
those are also happy path. Those are the only
(20:56):
happy-path things. Unit tests are for edge cases and errors
and states, and component tests test integration.
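A rough sketch of the "new it up" style Q describes: instantiating the component class directly and black-box testing its logic, with no TestBed. The component and its API are hypothetical, and it is assumed to have no constructor dependencies.

```ts
import { expect, it } from 'vitest';

// Hypothetical component class with no constructor dependencies, so it
// can be instantiated like any plain class.
class CartComponent {
  private items: { name: string; price: number }[] = [];

  add(item: { name: string; price: number }): void {
    this.items.push(item);
  }

  total(): number {
    return this.items.reduce((sum, item) => sum + item.price, 0);
  }
}

it('totals the line items', () => {
  const cart = new CartComponent();

  cart.add({ name: 'Teapot', price: 20 });
  cart.add({ name: 'Mug', price: 5 });

  // Black box: inputs in, outputs asserted; rendering is not tested here.
  expect(cart.total()).toBe(25);
});
```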
Speaker 4 (21:04):
All right, so judge us.
Speaker 7 (21:07):
I'm just saying, like, that's the thing with unit tests
and integration tests. Just, for all the people listening:
ask people in your team and you'll see, you
get, like, so many different definitions. And as long as,
as Lara mentioned, as long as
you're in one team and you say, okay, here a unit
test is this, an end-to-end test is that, and everybody agrees...
Speaker 6 (21:31):
As long as you have one definition and everybody agrees
with it, then it's perfect.
Speaker 7 (21:35):
And that's the reason I came up with the narrow
and wide tests: because I didn't want people to connect
this to anything they know, or any rules they know.
Speaker 4 (21:46):
Yeah, any layer of that testing.
Speaker 2 (21:48):
Like, so if you had your testing pyramid, would it be,
like, lots of narrow tests? Or do you have...
is it like a cube? What shape is your testing?
Speaker 7 (22:02):
So it's... it's a honeycomb. Okay, fair. And my
target is the widest narrow test, okay. And so, what's
the difference between the narrow test and the unit test?
Because that's the connection
Speaker 6 (22:20):
We're all trying to make.
Speaker 7 (22:22):
So, for lots of people, there are some rules that
come in when they think about unit tests. Like,
oh, it's a unit test, so it cannot use the file system,
it cannot use a database, it cannot use any external thing.
It's a unit test, so it should test only one file...
no, one class... oh no, one method. So maybe I
should mock private methods. You've probably seen that. And that's
(22:44):
what we call over-specification. And your tests get coupled
to implementation details, and that's where you're
stuck and you can't refactor. And actually, for me,
when you go so narrow, it's what I call,
like, over-narrow tests. It's like if you take screenshots
of your code, like code snapshots, and you're diffing the snapshots:
(23:06):
oh, you changed this line, don't change it.
Speaker 6 (23:08):
So that's how... I was really on a team that
Speaker 12 (23:11):
Would compare the snapshots, like we would take a snapshot
and just compare it, and then really all we ever
did was just always updated it because yeah, oh my gosh.
Speaker 6 (23:23):
Here... here, I'm going even further.
Speaker 7 (23:26):
For me, like, these tests where you're even mocking the
private methods and stuff like that...
Speaker 6 (23:30):
It's like you're taking, not
Speaker 7 (23:31):
screen snapshots of the return values or HTML or whatever;
you're taking snapshots of what you see in your IDE.
And so that's, for me, one
of the biggest misunderstandings of the testing pyramid. People
went too narrow, and the tests are too small and
(23:53):
over-specifying, and they don't bring much confidence or value.
Speaker 6 (23:57):
Yeah, they're very expensive and not very.
Speaker 7 (24:00):
profitable, you know. Too expensive to implement and too expensive
to maintain.
Speaker 2 (24:06):
Yeah, and sometimes to run in the pipeline. Because
you can get tests that aren't efficient when they run,
and, like, you can burn through a lot of cloud
spend that way.
Speaker 7 (24:16):
Oh, of course, also, yeah. And so
what happened is that some people went too far into
that path, and that created so much frustration that they
moved up to: okay, let's forget about everything, let's just
do end-to-end tests. And that comes with other problems
(24:36):
that you can't see at the beginning, because the app
is simple. And then, once you start implementing that, you're like, oh,
it's okay, we can parallelize now with Nx, with
Playwright and everything. But that's not always that easy, because
sometimes you start interacting with third-party services that
trigger a captcha or something. Or sometimes, simply, your dev server,
(25:02):
your dev environment's or testing environment's back end, can't stand
the volume of requests, so you have some rate-limiting problems.
And, well, anyway, whenever you start playing with the network,
you will have, like, a percentage of errors, and that's
when you start having tests that fail, and you need
retries, and flakiness, exactly.
Speaker 5 (25:25):
Yeah, we ran into a rate-limiting issue with Auth0,
you know, in the end-to-end tests. Yeah, I had to come
up with hacks and workarounds and flakiness.
Speaker 8 (25:38):
It's just.
Speaker 2 (25:40):
I had... I owned an API that
interacted with a third-party service, and their test data...
like, the dev environment they provided for us
was their dev environment. So it would break all
the time. It was the weirdest thing. And so this...
(26:01):
and we had a federated GraphQL endpoint. And, well,
it wasn't federated, no, it was whatever came before that.
Speaker 4 (26:10):
So it was like a REST...
Speaker 2 (26:11):
It's like an endpoint that knew how to call
all the REST endpoints. And so we would
try to test that thing end to end, and it
was such a nightmare, because ours was not the only
system like that, and it was always flaky. And so
then they're like, well, we'll just mock all the responses
we get back. And I'm like, what are we testing?
Speaker 4 (26:28):
Then we're not actually testing anything.
Speaker 2 (26:30):
Then, now, we're just testing that we know how to
write a mock that's going to pass our test.
Speaker 6 (26:35):
So exactly.
Speaker 7 (26:37):
And then... and then that's where... there are also
so many problems when your strategy is like the ice
cream cone, you know: lots of end-to-end tests,
and the ice cream is manual testing on top.
And, uh, we hate ice...
Speaker 3 (26:54):
Cream, Give me yogurt.
Speaker 6 (27:00):
Ice cream.
Speaker 3 (27:00):
Ice cream is great, fantastic.
Speaker 2 (27:03):
I think we can all agree ice cream is fantastic.
It's the manual testing...
Speaker 7 (27:09):
...that we don't want. Exactly, yeah. But, to keep the
metaphor going, or something.
Speaker 6 (27:14):
Yeah.
Speaker 7 (27:15):
So the problem with this is that, first,
there is a maintenance cost. Like, oh, I've seen teams,
like... oh, the end-to-end test is failing.
Speaker 6 (27:25):
Wait where does the problem come from?
Speaker 7 (27:27):
So they have to call, like... there's one person that
can debug that, because they've worked there for, like, ten years.
They're like, oh, I think it's the database, and I
have to change this row there, in that table, and
this is going to fix it. And how? Why? What's
that, like an LLM thing, you know? And it's really
hard to debug.
Speaker 6 (27:47):
Why is the button grayed out sometimes? Sometimes? So is it?
Is it the UI?
Speaker 7 (27:53):
Oh no, I think it's a signal problem. I'm sure
it's a signal. Oh no, no, I think it's the service. Oh,
it's the API. No, it's the network cache, I think...
Oh no, I think it's the back end. Wait,
it's always the front end. I think it's a front-end
thing. No, no, they're not good at testing, and
I think it's the back end, let's check back there.
Speaker 4 (28:08):
You know, they broke something and that's.
Speaker 7 (28:12):
Where you start like pin ponging and at the end
you realize it's just like some race condition in the
test or some flackness or whatever.
Speaker 6 (28:19):
Yeah.
Speaker 7 (28:19):
And also, the problem is that, like, tests are not
just there to make sure you're not breaking anything.
Speaker 6 (28:26):
They're also there to help you develop faster. So you
need a quick feedback loop.
Speaker 7 (28:31):
So, in my developer experience, when I'm making any changes,
I want feedback within one second. Yeah. Because I
want to change one line and see if I broke something. Yeah.
That's how you can contribute to things when you don't
even know how they work. That's how you can contribute
to Angular. That's how you can contribute to Nx. It's
(28:52):
because you have tests, and you're like, I'm going to
change this line.
Speaker 6 (28:54):
I don't understand it.
Speaker 4 (28:57):
What happens?
Speaker 6 (28:57):
What happened?
Speaker 4 (28:58):
Oh, that's what that does. Okay, exactly.
Speaker 6 (29:01):
Now imagine the other way around.
Speaker 7 (29:03):
If you didn't have like these tests that can give
you like a quick feedback and confidence, then you can't
code with.
Speaker 6 (29:09):
Music, for example, to be like super focused.
Speaker 7 (29:12):
So my take is that you have to focus during
your tests, and that's where you bring in, like, the business
people and other developers, and that's where, like,
Speaker 6 (29:19):
you pair program and whatever. And then coding should be, like,
Speaker 7 (29:24):
a GenAI thing, almost. Because it's like, you just have
to give it, like, the architecture and your tests, and
then it should just brute-force that thing. Because that's
how my brain works; it's how I develop things. I
write my tests, that's where I focus most, and then
I brute-force things, really. Like, conditions: I never think
about conditions. I just try all combinations until... I'm exaggerating
(29:45):
a bit, but I try all the combinations until, like,
the condition passes, and then the test passes, and then
I move to the next test. And then, at
some point, I have something working.
Speaker 6 (29:54):
Yeah, Then I start thinking, and then I refactor.
Speaker 13 (29:58):
Then I say, yeah, okay. This is interesting, because, okay,
so in our teams, you know, I think lots of
teams across the world now are being encouraged: use AI tools,
use AI tools.
Speaker 2 (30:10):
And honestly, the place that I've found the AI tools.
Speaker 4 (30:15):
To actually work okay for me.
Speaker 2 (30:17):
is being, like... what should
I cover in this test? Or, can you just write
this test for me? I love the... it's an interesting
idea to say, okay, here's my test suite; generate code
for me that passes all these tests.
Speaker 7 (30:33):
That's... so... yeah, yeah, this has taken an interesting twist.
So this is where I think the GenAI
hype is sometimes not going in the right direction. So
with all the vibe-coding hype, it's like, okay, just
go coding, and the thing will generate code. And once
(30:54):
you have the code, it's like, okay, please generate tests, and
please generate documentation. And actually, it's way more efficient to
Speaker 6 (31:01):
Do it the other way around.
Speaker 7 (31:03):
So this is something that I still haven't written anything about,
but I've been working with it.
Speaker 6 (31:07):
Is that I always start with a.
Speaker 7 (31:09):
design doc for whatever thing I'm working on, except maybe
a tiny change. And, uh, so: you start with
a design doc, and you're discussing the design doc. And
that's where the discussions are interesting. It's not once you
have, like, the whole PR: what do you think
of this?
Speaker 6 (31:25):
Thousands of lines of code? Yeah, they're good. But what
are you trying to do? I mean...
Speaker 4 (31:30):
Here, what is this line for?
Speaker 2 (31:32):
Yeah, a weird edge case we have to cover, that's here.
Speaker 7 (31:36):
And, okay, exactly, exactly. And that's... I don't
know if it's happened to you. It's like...
that's when you realize that someone spent, like,
two days or three days on something that was never
asked for. Yeah. And nobody throws it out, because it's there,
so let's just keep
Speaker 4 (31:55):
it. Looks perfectly good.
Speaker 7 (31:56):
Yeah, yeah, so it's better to like iterate through the
design doc first.
Speaker 4 (32:03):
That makes sense.
Speaker 2 (32:04):
Yeah, that is something. And the problem with being an
engineer is that we like to solve problems, yeah. And
sometimes we like to invent new problems that we can
also solve, that nobody asked us to solve. So...
Speaker 6 (32:16):
Exactly exactly with all of challenges. Well yeah, and.
Speaker 7 (32:22):
That's also another problem: sometimes, like... and that's
really a developer instinct that we sometimes have to fight...
you're like, oh, I don't know how this
is going to work. You know what, I'm going to
start coding it, and then things
are going to get clearer in my mind. It doesn't work
that way. So you can spike. Spiking is, like, when
(32:43):
you create, like, a spike: it's just code. It's not
a proof of concept or anything. It's just messing around
with an API and stuff, and you're like,
Speaker 6 (32:50):
oh, this is how it works, I see. Now you
throw that
Speaker 7 (32:53):
away, and you come back to your design doc. And
that's where the LLMs are really good. Because with the design...
with a proper design doc, with, like, Mermaid diagrams and everything,
Speaker 6 (33:04):
the LLM will generate the code.
Speaker 7 (33:06):
But, like, for instance, I don't ask it to implement
the code. I just ask it to create the files
with the classes and functions,
Speaker 6 (33:14):
but they all throw errors. Oh, okay. And I have
Speaker 7 (33:19):
my testing scenarios, which are also in the design doc
that I talked about with the business, but they're in plain
English: like, oh, we have to test this behavior, and
I click this and this... And that's where I
start implementing the tests, just in English first. Then, from
that English, detailed English, I switch it progressively to, like,
real tests, with my way of testing in my app,
(33:41):
or our team's way of doing that. And from there,
then, you start asking the GenAI to make that work.
And sometimes it works really well. Like, you can ask
Cursor to please make this test pass, and it's going to
keep brute-forcing, and keep trying, because that's how I
(34:02):
work as a human, until it works. And I
think that's one way which is, in my observation, uncommon,
but that should be, like, the common way of using,
like, the LLM.
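One way the "tests in English first" flow could look, using Vitest's it.todo to hold the plain-English scenarios from the design doc before they become executable. The tax rules and the function under test are invented for the example.

```ts
import { describe, expect, it } from 'vitest';

// Hypothetical implementation under test.
function computeTax(income: number, opts: { flatRate: number }): number {
  return income * opts.flatRate;
}

describe('tax simulator', () => {
  // Step 1: scenarios written in plain English, straight from the design doc.
  it.todo('shows the estimated tax when the user enters an income');
  it.todo('shows a validation error when the income is negative');

  // Step 2: the same kind of scenario, progressively turned into a real
  // test that a GenAI tool can then be asked to make pass.
  it('applies the flat rate to the entered income', () => {
    expect(computeTax(1000, { flatRate: 0.2 })).toBe(200);
  });
});
```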
Speaker 2 (34:15):
Yeah, no, it's kind of just turning it on
its head, right? And I think, in general,
TDD is a practice that's difficult for people. It's a
backwards way of thinking about your code. I'm saying backwards,
not in a judgmental way at all, but it's...
it's kind of like, I felt the same way about writing,
(34:38):
like, an observable pipe. So if you're trying to set
up a reactive system with observables, you wouldn't start with, like, oh,
I want to select a product. The outer
observable is not your products; it's the selected ID,
because you need to know if that changes. And it's
backwards from how your brain works: I have products, and
now I need to select one. Whereas instead you need
(35:01):
to say, I have a selected product, and
Speaker 4 (35:02):
Now I need to find it.
Speaker 2 (35:03):
And so it's turning that coding process upside down.
Instead of saying, I need to write
code that can do this, yeah, instead you're saying, I
need to make sure that the code I'm going to
write will do this.
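A small sketch of the inside-out pipe Lara describes: the selected ID, not the product list, is the outer observable. The lookup function is a hypothetical stand-in for a real service call.

```ts
import { BehaviorSubject, map, of, switchMap } from 'rxjs';

// Hypothetical lookup; in a real app this would call a service.
const fetchProduct = (id: string) => of({ id, name: `Product ${id}` });

// The selected ID drives the stream...
const selectedId$ = new BehaviorSubject<string>('42');

// ...and the selected product is derived from it, re-fetched on change.
const selectedProductName$ = selectedId$.pipe(
  switchMap((id) => fetchProduct(id)),
  map((product) => product.name),
);

selectedProductName$.subscribe(console.log); // logs "Product 42"
selectedId$.next('7');                       // logs "Product 7"
```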
Speaker 6 (35:18):
Exactly.
Speaker 7 (35:19):
And that's the thing: switching to TDD, and
testing in general, is not, like, a technical problem that
can be solved with a technical tool. It's more of
a mindset transition, which can be long. And, back
to the unit test thing, that's also one of the
problems with unit tests, is that people are like, okay, I'm going to write
(35:41):
my class, and what do I want to code? What
do I want to test? Well, what's the thing I'm testing?
And they focus on the units: what is this
class doing? I want to test this class. Now, what
you want is, like, testing the behavior, and the
behavior is what you discuss, like, with your business and everything.
So at first, naturally, the business will talk about the app,
(36:03):
so naturally you will think end to end, and that's
when you start narrowing down your tests. But one test
can cover hundreds of classes and still be a narrow test,
and some tests can cover one file and still be
a wide test. Because that's where, well, that's where my
(36:23):
definition of a narrow test comes in. So a narrow
test has, like, three rules. The first one is that it
should run in less than one hundred milliseconds, around one
hundred milliseconds, because I consider that, in most cases, if
you're using Vitest or Jest or Nx, you
will be able to analyze the dependencies and only run
(36:46):
the tests that are affected by your changes, and most
changes will only affect, like, tens or hundreds of tests.
So a hundred tests multiplied by a hundred milliseconds
is ten seconds, but you don't have only one core;
you can parallelize on multiple cores, so you're back to
around one second, and that's a good
feedback speed. So that's why it's important. The speed is important.
(37:08):
Then the other rule, and that's the most important one,
is that anyone, really anyone, anyone working
on the product, like, in one year and six months,
can debug the test if it fails. Yeah. So that's
(37:29):
a very important property. And the last one is that
the test should be easy to isolate and parallelize. If,
in order to parallelize, you need to duplicate databases and
spin up servers and create multiple accounts and blah blah blah,
then it's not easy to parallelize.
Speaker 6 (37:44):
It's okay, that's a wide test. It's not a narrow
test anymore.
Speaker 7 (37:48):
And that is why, sometimes, as I told you, like,
I just go TDD and I have, like, a big component,
one component, and I have one test that tests, like, all
that behavior. I don't know, like, some tax simulator.
Speaker 6 (38:01):
Okay, yeah, a tax simulator. Is that valid English? Okay?
Speaker 7 (38:08):
You have taxes in the US?
Speaker 6 (38:11):
Oh yeah, absolutely. They were invented in France, really.
Good... good morning.
Speaker 14 (38:24):
You know that moment when your coffee hasn't kicked in yet,
but your Slack is already blowing up with, hey, did
you hear about that new framework that just dropped?
Speaker 6 (38:32):
Yeah, me too.
Speaker 14 (38:35):
That's why I created the Weekly Dev's Brew, the newsletter
that catches you up on all the web dev chaos
while you're still on your first cup. Oh look, another
Angular feature was just released. And what's this? TypeScript's doing
something again? I also look through the pull requests and changelog
drama so you don't have to. Five minutes with my
(38:57):
newsletter on Wednesday morning, and you will be the most
informed person in your standup. That's the
Weekly Dev's Brew, because your brain deserves a gentle onboarding to the week's
tech matters. Sign up at weeklybrew.dev and
get your dose of dev news with your morning caffeine.
No hype, no clickbait, just the updates that actually matter.
(39:19):
Your Wednesday morning self will thank you.
Speaker 7 (39:22):
And so... so, you have, like, a tax simulator. So
I'll have only one test for, like, maybe the whole
component, and everything is in one component. Then I will
start refactoring and moving things around, and that tax simulator
will end up being, like, I don't know, twenty components
(39:42):
and three services.
Speaker 6 (39:44):
Okay. So, is that okay? Is that still a narrow test?
Speaker 7 (39:48):
Yeah, probably. Because if all the components and services have,
like, a cyclomatic complexity of one, because, like, they just
do simple calculations and nothing more, then
Speaker 6 (40:00):
It's totally okay.
Speaker 7 (40:01):
And at some point, they become a little bit more complex.
Like, in one case I have, the calculation is
becoming really hard. That's when I'm going to narrow down my
test, and I'm going to just write a test on the
service that's making that calculation.
Speaker 6 (40:17):
And instead of having lots of test doubles, mocks,
Speaker 7 (40:22):
fakes or whatever, that you implement for each thing that
you replace with another thing, checking each interaction, which
will make your tests, your code, hard to refactor later,
I only need, like, one or two test doubles.
Speaker 8 (40:38):
Just like that.
Speaker 7 (40:39):
I'm going to have, like, a fake service that mimics the
real service that calls the back end, and I can
test, like, the whole thing, without authentication and everything, just
focusing like this. So I have, like, a good trade-off,
and that's how you get closer to the business and
to the business rules.
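A sketch of the single fake test double described here: one in-memory fake standing in for the service that calls the back end, swapped in at the dependency-injection boundary. All names and the service contract are hypothetical.

```ts
import { Injectable } from '@angular/core';
import { TestBed } from '@angular/core/testing';
import { Observable, of } from 'rxjs';

interface Product { id: string; name: string; }

// The real service calls the back end (body omitted for the sketch).
@Injectable({ providedIn: 'root' })
class ProductsService {
  getProducts(): Observable<Product[]> {
    throw new Error('real implementation calls the back end');
  }
}

// The fake mimics the contract in memory: no HTTP, no authentication.
class FakeProductsService implements Pick<ProductsService, 'getProducts'> {
  products: Product[] = [{ id: '1', name: 'Blue Teapot' }];
  getProducts(): Observable<Product[]> {
    return of(this.products);
  }
}

// One test double for the whole suite, swapped in at the DI boundary.
TestBed.configureTestingModule({
  providers: [{ provide: ProductsService, useClass: FakeProductsService }],
});
```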
Speaker 6 (40:58):
So for me.
Speaker 7 (40:59):
Like, whenever you're making a change, you never make a
change just for fun. You're making a change
because the business is asking for some behavior, and that
behavior has to be tested somewhere. Now, it's up to
you to decide whether it's in an end-to-end test.
Like, maybe it's already covered in the happy path and
that's okay, and maybe you want to double down and
(41:22):
have, like, a narrow test that also tests the behavior,
but just on the component, and that's better. Because that's
how... that's, for me, like... testing is like
the nets for the trapeze artists, you know.
And you don't leave holes in the first
net and be like, oh, it's okay, there's still the
(41:44):
other net, like, ten meters down. You don't want to
train like that. Like, if there's a hole in this net...
we're going to catch you
with the other one, but then we'll fix the hole.
So that's also one thing that people... that we don't use.
Like, whenever an end-to-end
Speaker 6 (41:59):
test fails, we should ask ourselves what was
Speaker 7 (42:02):
missing in our other tests, so that we never rely
on end-to-end tests to get, like, the confidence
to go
Speaker 6 (42:08):
to production.
Speaker 7 (42:10):
The first safety net, which is, like, the narrow tests, should
give you, like, eighty percent confidence.
Speaker 6 (42:16):
Then you have like.
Speaker 7 (42:17):
another layer, and then another one, and then another one,
as many as you need.
Speaker 2 (42:21):
I wish that, instead of coverage, we had, like, a
confidence percentage, right?
Speaker 6 (42:26):
Exactly. And that's... that's exactly...
Speaker 7 (42:30):
What's... so, that's something that I'm trying to figure out,
is, what is the
Speaker 6 (42:35):
right, uh, KPI?
Speaker 7 (42:40):
What is the indicator that you should measure? And coverage
is clearly not the right one. It's
an easy one to calculate, but it's not good.
And so you need, uh, more of,
like... for instance, one idea I
(43:04):
have, that I've been trying to apply with some
clients I've been coaching, but it didn't work out.
Speaker 6 (43:08):
But I talk about it in my cookbook, if you
want to try it out.
Speaker 7 (43:12):
It's test evaluation. Each time a test produces, like, a
false negative, meaning that it missed a problem, you add
a comment on top of it, like "false negatives:",
and you just increment a number. And each time a
(43:34):
test does, like, a false positive, so it failed
while it shouldn't have, you increment that number. And then you
have, like, a hero counter, and that's for the test that,
on the CI, or somewhere, or even locally, caught a
problem that you didn't see and would have broken production.
Speaker 6 (43:52):
And from there you.
Speaker 7 (43:53):
Could then analyze your tests and see which tests are
good and which tests are not, and that will inspire
you in terms of testing strategy and then in terms
of behavior and confidence. That's that's the thing is that
you have to test, Like, uh, that's really hard to measure,
(44:14):
like in terms of numbers, but that's the important one.
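A guess at how that comment-based scorecard could look in practice. The counters are maintained by hand whenever a test misleads the team or catches a real bug; the convention, the function, and the test are invented for illustration.

```ts
import { expect, it } from 'vitest';

// Hypothetical function under test.
function checkout(order: { items: unknown[] }): void {
  if (order.items.length === 0) throw new Error('cart is empty');
}

// Scorecard, incremented by hand when the test misbehaves or saves us:
// false-negatives: 2  (passed while the behavior was actually broken)
// false-positives: 0  (never failed for a wrong reason)
// hero: 1             (caught a regression before it reached production)
it('rejects orders with an empty cart', () => {
  expect(() => checkout({ items: [] })).toThrowError('cart is empty');
});
```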
And when I coach teams, typically the
thing that I keep asking people about is,
like, what is the thing that scares you most? And interestingly,
it often correlates with... when you do some Git ops,
like, there are some tools, one command that is
called git effort, that allow you to measure which files
(44:35):
received more commits and stuff. And, for instance, you could
start with that: which file received the most fix commits?
That's very interesting, and that's when you're like, okay, this
is what we're going to test. How
are we going to test it? That's something that the
team has to figure out, but you have to start somewhere.
(44:57):
It's totally... it's totally okay, like, you have some
legacy code that's working, not tested, and nobody touches it,
and it's there and it's just dying.
Speaker 2 (45:07):
And don't look at it, don't open the file.
Speaker 7 (45:11):
It's okay, don't waste your time there right now. You
have to choose. So you have to
Speaker 6 (45:15):
Choose your battles. You know, it's the last thing to fix,
not the first one.
Speaker 7 (45:21):
And yeah... and yeah, typically there's no easy way of,
like, measuring the confidence. That's where you
have to ask human beings how they feel.
Speaker 4 (45:34):
Yeah, yeah. It feels like test quality is
Also it's right up there with like accessibility, where there
are there are some tools you can use to kind
of help a little bit, Like coverage is helpful, and
at least it's like you didn't even call this method.
Speaker 4 (45:49):
You're like, okay, yeah, I probably.
Speaker 2 (45:51):
shouldn't do that. But, you know, you can,
like... like we've said, you can get one
hundred percent coverage and write tests that do absolutely nothing
useful. And so, yeah, that's interesting. Okay. So we
did promise we would talk a little bit about Vitest.
So maybe... let's, like, shoehorn that in here at
(46:12):
the end. But, I know, right? Yeah, well, we could
keep going. We still, you know, we have more time.
We'll just... this will be the two-hour episode.
Speaker 4 (46:23):
Where we're, like, taking water breaks and a musical interlude.
Speaker 6 (46:30):
I'm very verbose, and I talk a lot.
Speaker 7 (46:32):
I chat a lot... like, I've been banned from ChatGPT.
Speaker 9 (46:35):
Because when they when they become syntine, it's coming right
for you.
Speaker 4 (46:43):
That's right, you know. And I think it's great though, because.
Speaker 2 (46:46):
you're passionate about the topic, and I like hearing
different strategies. I think it's really easy to go, okay, I've
learned how to write unit tests, so, okay,
I'm going to new up my component,
Speaker 4 (46:58):
I'm going to test the methods.
Speaker 2 (47:00):
I got coverage, I'm done. Wipe hands, walk away.
Speaker 4 (47:03):
Oops, a bug came.
Speaker 3 (47:04):
In, like.
Speaker 2 (47:07):
You know? And so then you learn a little more,
and then, you know, somebody says, well, why aren't we
testing behavior? Oh, okay. Why aren't we using... why aren't
we doing it like this?
Speaker 4 (47:13):
Oh okay?
Speaker 2 (47:14):
So I think having these new ideas about how to
write tests can help you find a strategy that works
for you and works for your team.
Speaker 4 (47:22):
And I think I think that's really important.
Speaker 2 (47:24):
That's really, at the end of the day, what matters:
is your testing strategy working for you? Are you
releasing code you're confident in? And can you maintain your
test suite? And is anyone, like, super upset when you say,
can you put a few tests here?
Speaker 7 (47:36):
Yeah, exactly, Yeah.
Speaker 5 (47:39):
I was one hundred percent confident in my code, in
my component code, until an effect change comes in.
Speaker 4 (47:48):
Exactly.
Speaker 7 (47:50):
That's also very interesting, because sometimes people are very confident
because we think that we control the change.
We control nothing. Changes can come from, like, everywhere.
Once, I had a bug with a browser, because
I was using flexboxes really early.
Chrome made a change: all my tests were passing, but
(48:13):
the products in the app were displayed in one
Speaker 4 (48:17):
Pixel, but they were there.
Speaker 7 (48:21):
They were. This was back in Protractor times, so you
didn't have the actionability checks that you have with Cypress and Playwright,
which would detect that and say, oh...
can I click on this? Yeah. So this is
how, like, problems can come from everywhere, at any time.
So you have to be ready for anything.
Speaker 2 (48:41):
And I do sort of love when a bug
comes in and people are like, we can't figure it out.
Speaker 4 (48:45):
I'm like, give me that bug.
Speaker 3 (48:47):
I cannot this thing.
Speaker 4 (48:50):
If somebody... like, it is like catnip to me.
Speaker 2 (48:52):
When someone says, I can't figure out this unit test...
like, drop everything, I'm...
Speaker 6 (48:57):
Come on, remove the test.
Speaker 4 (49:05):
I have done that before.
Speaker 2 (49:06):
I've been like, you know what, this test isn't even useful,
it's not testing anything.
Speaker 6 (49:11):
But that's... I was joking.
Speaker 7 (49:13):
But actually, it's very important to... That's why
you have to score the tests and evaluate them, and
throw away anything that's just in the
way. Like, yeah, that sounds critical.
Speaker 6 (49:24):
So yeah, let's talk about Vitest, that whole thing.
Speaker 4 (49:30):
Okay. So, do you use Vitest?
Speaker 6 (49:34):
Yeah. Yes, you should too. Who's using Vitest?
Speaker 2 (49:41):
I need to just get... we have a spike that
I just have not had time to do, to get
it set up.
Speaker 5 (49:46):
I mean, I do use Vitest, but not with Angular.
Speaker 4 (49:50):
So yeah, do you use it with Angular?
Speaker 6 (49:54):
Yeah? Yeah?
Speaker 4 (49:56):
And yeah, I see you put a.
Speaker 2 (49:59):
link in the chat here of "Why Vitest?" on
your website, so that will be in the show notes
as well. So, but let's just... I could sit here
and read this word for word, but that's boring for people.
So why don't you tell us a summary of why
Vitest?
Speaker 6 (50:17):
So I have a better option. Okay, I brought a
guest with.
Speaker 4 (50:21):
me. Nice! To explain what Vitest is?
Speaker 6 (50:26):
Okay, so I'm gonna pass the mic.
Speaker 11 (50:29):
Okay. We're gathered today to mourn the end of
Jasmine, Jest, and this... spectacular... Oh, so where was that?
Speaker 8 (50:42):
Oh, it's Karma now. Okay, sorry for dead-naming you, Karma. Karma,
dear sweet Karma, you did your best to make tests
symmetric to production. You just wanted to launch a browser,
and died waiting for it to come back. Next: Jasmine. Oh, Jasmine,
(51:04):
so inspirational, such a free spirit, no dependencies. Wow. But
you were so impatient that you couldn't wait, and wanted
everything to be synchronous. Then came Jest, the golden child,
the savior. Jest, once Jasmine's loyal companion, for those who
(51:24):
didn't know, in their childhood they walked side by side, until
Jest took flight on its own wings. So powerful, so confident,
so limitless. Mischievous Jest, always ready for a joke, like
shoving devs off the cliff straight into snapshot testing. You
promised zero config...
Speaker 11 (51:46):
Prankster to the end. Then came the great migration. Remember
ESM? We all jumped in the boat and left you,
Jest, stranded in the CommonJS. So yes, we
are here to mourn, but also to celebrate: celebrate the
(52:06):
lesson the three of you taught us. Never get too
attached to a testing framework. Rest in peace, Karma. Sleep well, Jasmine.
Good night, sweet Jest. Thank you for everything. May the
new generation honor that.
Speaker 3 (52:28):
That guy was awesome.
Speaker 2 (52:32):
Let's pour one out for the former testing frameworks, out
of the bottle of developer tears that I keep on
my desk.
Speaker 6 (52:41):
So oh wait, I still have my.
Speaker 2 (52:49):
It was a.
Speaker 4 (52:49):
Totally different guy. I can't confirm. We saw him on video.
He came in, he looked mysterious. Nobody knows, nobody knows
who that was. Mysterious man. Now, our testing?
Speaker 6 (53:07):
So yeah, yeah.
Speaker 7 (53:08):
The problem with... so, all these tools
had some limitations. And for me, like,
the big switch... So, Karma is not
maintained. It's deprecated, so it's dying.
(53:29):
And the Angular team is switching to Web Test
Runner as an alternative, which is a pretty young thing,
but just as a,
Speaker 6 (53:40):
like, replacement, with a migration.
Speaker 7 (53:43):
That's one of the goals, I guess: not having
to rethink, like, the tests. And that's very important.
Speaker 6 (53:48):
And then there's Vitest, which is having, like, a big
Speaker 7 (53:50):
traction in the whole community, and it's supporting ESM,
and it fixes so, so many things, like...
Speaker 6 (53:59):
Oh, the first thing is due to.
Speaker 7 (54:03):
The common JASS problem, like just configuration can get really messy.
Yeah with all that transform ignore patterns, reg x.
Speaker 5 (54:15):
Yeah, and slow, when you have to transform something.
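For context, an illustrative jest.config.ts of the kind being described, assuming the common jest-preset-angular setup. The package names in the regex are hypothetical ESM-only dependencies, stand-ins for whatever a real app pulls in.

```ts
import type { Config } from 'jest';

const config: Config = {
  preset: 'jest-preset-angular',
  setupFilesAfterEnv: ['<rootDir>/setup-jest.ts'],
  // The fragile part: every ESM-only package has to be carved out of the
  // default "don't transform node_modules" rule, one regex at a time.
  transformIgnorePatterns: [
    'node_modules/(?!.*\\.mjs$|some-esm-only-lib|@scope/another-esm-package)',
  ],
};

export default config;
```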
Speaker 7 (54:18):
Yeah, yeah. And you have to think... and it's always
frustrating, because you're like... you have your app, your tests
are working, and you include some third-party library that's
Speaker 6 (54:27):
doing something weird, maybe, I don't know, importing a Markdown
Speaker 7 (54:30):
file or whatever, and you just fixed that in
the Angular builder, it works, and the Jest test is
broken. And you have to switch your brain from Vite
and Angular configuration to the Jest mindset, and think of
Speaker 6 (54:46):
how to transform that, and hack it around, and maintain,
like, both configurations.
Speaker 7 (54:51):
That's also why Vitest is interesting: Angular
is using Vite under the hood, so this means that
it will also allow the tests to be closer,
or more similar, to whatever is happening in the
Angular app. Even the Jest experimental support in the Angular CLI,
(55:13):
which is very experimental, is a bit safer than
using Jest as we're all using it today, with the
jest-preset-angular preset, because it's using the Angular builder
to build the app, and it's using Vite under the hood,
so it's not doing the transforms with Jest.
Speaker 4 (55:33):
Or is that the Jest experimental VM modules flag?
Speaker 7 (55:36):
Oh no, no, that's... that's another thing: the
Jest experimental builder, like the one that allows you to
run Jest in the Angular CLI.
Speaker 6 (55:45):
But okay, it's not... don't use that.
Speaker 2 (55:49):
Full disclosure: I haven't... we haven't successfully set up Vitest
yet, and, I guess, yeah, we haven't messed with that.
Speaker 4 (55:58):
Maybe we should. I don't know.
Speaker 7 (55:59):
Yeah, it's definitely... it's definitely worth the transition. Plus,
as Chau mentioned, there's not only the
speed problem... not only the performance problem, but
also the memory leaks problem. There's a conceptual memory
leak problem in Jest.
Speaker 2 (56:20):
Yes. Which, if you've ever actually run face-first
into that, is extremely frustrating. Because, when I started
on the team, our test suites would take, like, a
couple hours to run, and they were constantly running out
of memory. And it was because of the memory... like,
(56:41):
because of the way Jest was handling its memory.
Speaker 7 (56:46):
So yeah. And that's actually because Jest has only
one isolation mode. It's using VMs, like virtual machines, in
Node.js, hence the VM experimental flag when
you use it with ESM, okay. And the problem there
is that tests are executed in different VMs, so they
(57:09):
have different module caches, but they can
Speaker 6 (57:13):
share the globals. Some globals, okay.
Speaker 7 (57:16):
And when you have third-party libraries that start monkey-patching
things or doing weird stuff, that's where you can
get, like, something wrapping, monkey-patching a function, and then
the other test is monkey-patching again, and stacking calls
like that. And that can cause problems, and it's conceptual.
Speaker 5 (57:33):
Yeah.
Speaker 2 (57:33):
We ran into that with Lodash, because the whole
library was full of Lodash, and people don't realize
that Lodash memoizes a bunch of stuff. And
so it was
Speaker 4 (57:42):
Like memoizing memoizing, memmoizing that like it was just remembering
everything forever multiple.
Speaker 6 (57:47):
And that's one that's one big problem.
Speaker 7 (57:50):
And in Vitest, you have different isolation modes, okay?
So you can run your tests in different processes. Each
test is running in its own process, so that's the maximum isolation.
Speaker 6 (58:03):
They don't interfere, but they will be slower. Or threads.
Speaker 7 (58:08):
And the other extreme is and that's the one you
should use if you're starting up with v tests and
if it works for you, is no isolation. Okay, so
all the tests are running in one process, okay, and
you have multiple processes, you have multiple workers, but each
workers is running a couple of tests, and that super fast.
(58:30):
And for frameworks, no matter what framework you're using,
including Angular, the framework is handling isolation
perfectly for you, except if you have some tests
that are using some third-party library that's doing something
really weird, and in that case you have to
(58:50):
clean up yourself after each test, or in that case tag
these tests differently so that you run them in total
isolation, like in forks, in processes, and that's easily configurable.
In Vitest you could say, these tests, just run
them all in isolation.
Speaker 6 (59:05):
What's happening with this test? It's interfering. Put it over there
in its own process.
Speaker 7 (59:09):
If it's annoying, you isolate it, and that's really super cool. Yeah, so you
don't waste your time.
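For reference, the isolation modes described here map to a couple of Vitest config options. This is a minimal sketch using option names from recent Vitest versions; check the docs for your version before copying it.

```ts
// vitest.config.ts — a sketch of the isolation knobs discussed above.
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    // 'forks' runs workers as child processes (strongest isolation);
    // 'threads' uses worker threads and starts faster.
    pool: 'forks',
    // false reuses one environment per worker instead of a fresh one
    // per test file — the fast mode described above. Keep the default
    // (true) for tests that leak state, or split those out.
    isolate: false,
  },
});
```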
Speaker 2 (59:14):
Nice, nice. And ergonomically, I have used
Vitest with a Vue app, and I mean, my
developer experience is almost identical. Like, I don't have to
learn a whole new way to write tests to use
(59:35):
Vitest.
Speaker 4 (59:36):
Yeah, yeah, that matters.
Speaker 7 (59:40):
That's also why Vitest and also
Testing Library, Angular Testing Library or React Testing Library, are important.
This is what I call transferable knowledge.
Speaker 6 (59:53):
Yeah, and that's also one.
Speaker 7 (59:54):
Of the things that I focus on a lot in my
course and cookbook and stuff is that I don't want
people to focus on specificities. I'd rather have concepts
and also tools.
Speaker 6 (01:00:07):
That work for all frameworks.
Speaker 7 (01:00:10):
And that's interesting because you can onboard React developers
into your Angular team and the other way around, and
find inspiration and solutions from other people. And that's how frameworks
can influence each other. So it's really important that in
the Angular ecosystem we start sharing the same tools as
other frameworks, and the React and Vue communities
(01:00:33):
are big on Vitest, and Remix too, and Next too,
so that's why we should join them.
Speaker 2 (01:00:40):
Yeah, one of us. Will Angular join them? We'll join them.
Speaker 6 (01:00:50):
Just have to switch everything to classes.
Speaker 5 (01:00:55):
Classes are not bad. I love classes.
Speaker 6 (01:01:00):
What are you talking.
Speaker 4 (01:01:00):
About? Brian just drops it out of nowhere. Yeah. So,
uh, so for an Angular developer to get set up
using Vitest, what does that process look like?
Speaker 6 (01:01:17):
It's pretty, pretty easy. If you're using.
Speaker 4 (01:01:22):
Nx. I feel like that's the answer for everything: are
you using Nx? If yes, it's easy, or easier.
Speaker 7 (01:01:31):
So yeah, so Nx already has the Vitest
setup, and I made the PRs to
add Vitest as an option when you create
an Angular app or library, so it's going to create
everything for you. Otherwise, if you're using the Angular CLI,
you just have to.
Speaker 6 (01:01:48):
Go to the Analog.
Speaker 7 (01:01:51):
Documentation, and then there are instructions on how to set
up Vitest on an Angular CLI app. And
again, it's not because you're going to the Analog
documentation that you're switching to Analog.
Speaker 6 (01:02:06):
Like the Vitest plugin is totally, uh.
Speaker 7 (01:02:11):
Independent, and it all started with Brandon and Chow, with.
Speaker 6 (01:02:17):
The Analog and everything.
Speaker 7 (01:02:18):
So that's where it came from,
because there's this Vite plugin for Angular.
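For listeners who want to see the shape of that setup, here's a rough sketch of a Vitest config for an Angular CLI app via the Analog plugin, loosely following the Analog docs; the setup file path and exact options are assumptions, so follow the official instructions.

```ts
/// <reference types="vitest" />
// vite.config.ts — rough sketch of the Analog-based Vitest setup.
import angular from '@analogjs/vite-plugin-angular';
import { defineConfig } from 'vite';

export default defineConfig({
  // The plugin compiles Angular components (templates, styles, etc.)
  // so Vitest can run them through Vite's transform pipeline.
  plugins: [angular()],
  test: {
    globals: true,
    environment: 'jsdom',
    // Assumed setup file that initializes Angular's testing environment.
    setupFiles: ['src/test-setup.ts'],
    include: ['**/*.spec.ts'],
  },
});
```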
Speaker 2 (01:02:29):
Because the history of it was that when Vitest first
came out, Angular couldn't support it.
Speaker 4 (01:02:34):
And then Brandon heard that and was like.
Speaker 7 (01:02:38):
Nah, actually it worked from day one under some really
strict conditions. If you're just using inline templates and inline styles,
you could run everything in.
Speaker 6 (01:02:56):
JIT, okay, and that was working.
Speaker 7 (01:02:59):
But if you really want it to work better, you
have to transform the templateUrls and styleUrls.
But still, if you want to go further, you have
to compile. If you want to handle signal inputs, you
need to transform. And that's why you need.
Speaker 6 (01:03:18):
The builder, you need to transform that.
Speaker 7 (01:03:20):
And also, by the way, the Angular team
just announced, sort of through a PR, that they're starting
a Vitest experimental builder for Angular, so they're going
to be experimenting with Vitest too. And also
(01:03:41):
another thing which is very interesting about Vitest
is that you can enable AOT. And now, that's also.
Speaker 6 (01:03:50):
You can enable AOT for Karma and, uh, in the.
Speaker 7 (01:03:54):
Jest experimental runner in the Angular CLI, but you cannot
enable AOT in jest-preset-.
Speaker 6 (01:03:59):
Angular, as far as I know. And why is that important?
Speaker 7 (01:04:02):
It's that this will make your tests closer to what's
happening in production. And also, it adds coverage for your
templates, which nobody had before enabling AOT.
And that's the funny thing about everyone targeting code
coverage in Angular: whatever you were targeting, it wasn't working, so
(01:04:29):
there was no code coverage.
Speaker 6 (01:04:30):
So it was only covering the TS.
Speaker 4 (01:04:32):
Like it was the honor system.
Speaker 2 (01:04:34):
No, I swear I covered all the things the template
can do, like, I have a snapshot.
Speaker 4 (01:04:40):
That proves it.
Speaker 7 (01:04:43):
Exactly. So this is, uh, this
is where we're heading. So the thing is, in
my opinion, people should really start migrating to Vitest already,
because the experience is just amazing.
Speaker 6 (01:04:57):
It's just mind-blowing.
Speaker 7 (01:04:58):
Plus, there's one thing, the performance. I did some
benchmarks in my cookbook, so people can check that out.
So it's really faster, but also, and that
was harder to measure:
Speaker 6 (01:05:09):
When you're in watch mode, it's really super, super, super fast.
But that's not the only thing about Vitest. It's also
the APIs. For example, Vitest has an expect.poll
API that allows you.
Speaker 7 (01:05:21):
To call any function and make assertions on it and
to keep retrying until it passes.
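A minimal sketch of the expect.poll API just mentioned; the polled status variable here is a hypothetical stand-in for anything that settles asynchronously.

```ts
import { expect, test } from 'vitest';

// Hypothetical slowly-changing state, standing in for async work.
let status = 'pending';
setTimeout(() => { status = 'ready'; }, 100);

test('status eventually becomes ready', async () => {
  // expect.poll re-invokes the callback and re-runs the matcher
  // until it passes or the timeout elapses.
  await expect.poll(() => status, { interval: 20, timeout: 2000 }).toBe('ready');
});
```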
Speaker 6 (01:05:26):
It has fixtures like.
Speaker 7 (01:05:28):
In Playwright, for injecting configuration instead of putting everything
in beforeEach and doing a lot of things in
beforeEach. And now, just some advice for people so
that they can get ready for Vitest and start now.
So the first thing is that you can just
set up Vitest with Nx or with.
Speaker 6 (01:05:48):
The Analog plugin.
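And a small sketch of the Playwright-style fixtures mentioned a moment ago, using Vitest's test.extend; the config fixture and its contents are hypothetical.

```ts
import { test as base, expect } from 'vitest';

// A fixture injects configuration per test, Playwright-style,
// instead of wiring shared state up in beforeEach.
const test = base.extend<{ config: { apiUrl: string } }>({
  config: async ({}, use) => {
    await use({ apiUrl: 'https://example.test/api' });
  },
});

test('reads the injected config', ({ config }) => {
  expect(config.apiUrl).toContain('example.test');
});
```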
Speaker 7 (01:05:51):
And then, I put this into my cookbook
too. There is a.
Speaker 6 (01:05:58):
Codemod that transforms.
Speaker 7 (01:06:01):
Jest code to Vitest, so jest.fn will
be replaced by vi.fn and stuff
like that, so that you don't have to transform all your
tests manually.
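As an illustration of the kind of mechanical rewrite such a codemod performs (exact coverage depends on the codemod itself):

```ts
// Before (Jest):
// const spy = jest.fn().mockReturnValue(42);
// jest.useFakeTimers();

// After (Vitest) — same shape, different namespace, explicit import:
import { vi } from 'vitest';

const spy = vi.fn().mockReturnValue(42);
vi.useFakeTimers();
```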
Speaker 2 (01:06:15):
Yeah, that's nice, because that's the stuff, that's
the load that starts to make people feel like, uh,
is it worth it if I have to go update seven...
Speaker 5 (01:06:27):
Yeah?
Speaker 7 (01:06:27):
And plus, it's exactly the same as when I was
helping people migrate from Karma to Jest:
you can use both.
Speaker 4 (01:06:36):
You can have old tests and the.
Speaker 7 (01:06:38):
New ones in Vitest, and the APIs are compatible. Because
Jest was using Jasmine, then it mimicked Jasmine and
added the APIs, and Vitest is mimicking Jest, so
that you get the same API plus additional
really useful tooling and.
Speaker 4 (01:06:57):
And it works.
Speaker 2 (01:06:58):
It also works with Angular Testing Library, which is fantastic.
I'm a fan of Angular Testing Library because of the.
Speaker 4 (01:07:09):
Gosh, it's so much easier than TestBed.
Speaker 6 (01:07:13):
It's true, and yeah, it helps.
Speaker 7 (01:07:20):
It's important to know about TestBed, because sometimes
you have to configure things and stuff.
Speaker 6 (01:07:25):
But yeah, it helps a lot. There are some
little drawbacks that can be easily worked around, but it
really works great.
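For readers who haven't seen it, an Angular Testing Library test looks roughly like this; the component is hypothetical, with an inline template and a signal input, which ties back to the transform discussion above.

```ts
import { Component, input } from '@angular/core';
import { render, screen } from '@testing-library/angular';

// Hypothetical component: inline template plus a signal input.
@Component({
  selector: 'app-greeting',
  standalone: true,
  template: `<p>Hello, {{ name() }}!</p>`,
})
class GreetingComponent {
  name = input('world');
}

// Assumes test/expect globals (Jest, or Vitest with globals: true).
test('shows the greeting', async () => {
  await render(GreetingComponent);
  expect(screen.getByText('Hello, world!')).toBeTruthy();
});
```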
Speaker 7 (01:07:36):
And that's why people should get ready. There are some
things that are better to avoid right now, so that
it makes the migration easier. For example, if you
have some done callbacks, you should stop using
them and switch to promises. And, for instance, that's
(01:08:02):
another topic that I can come back and we can
talk about: if you're reusing a lot of
mocks, which are actually stubs and spies, like jest.fn,
that's very annoying to switch to Vitest,
and it's also very annoying to maintain, and.
Speaker 6 (01:08:15):
That causes false positives and false negatives.
Speaker 7 (01:08:17):
So it's better to progressively switch to fakes, like fake.
Speaker 6 (01:08:20):
Implementations of services, which is just TypeScript, which.
Speaker 7 (01:08:23):
Are not coupled to any framework, because right now we're
talking about Vitest, but who knows what we'll be using
in one year or two.
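A sketch of that fake idea: a tiny in-memory implementation of a service, plain TypeScript, coupled to no test framework; all names here are hypothetical.

```ts
// The contract the app depends on (an abstract class doubles
// as an Angular DI token, unlike an interface).
export abstract class WeatherGateway {
  abstract getTemperature(city: string): Promise<number>;
}

// The fake: a small, working in-memory implementation.
export class FakeWeatherGateway implements WeatherGateway {
  private readonly temps = new Map<string, number>();

  // Test-only helper to arrange state.
  setTemperature(city: string, value: number): void {
    this.temps.set(city, value);
  }

  async getTemperature(city: string): Promise<number> {
    const t = this.temps.get(city);
    if (t === undefined) throw new Error(`No temperature recorded for ${city}`);
    return t;
  }
}

// In an Angular test, provide it where the real gateway would go:
// TestBed.configureTestingModule({
//   providers: [{ provide: WeatherGateway, useValue: new FakeWeatherGateway() }],
// });
```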
Speaker 4 (01:08:29):
Yeah, and I feel like I read it.
Speaker 2 (01:08:32):
Did I read an article, or did I watch a
video of you, where you go through talking about fakes
versus mocks?
Speaker 7 (01:08:40):
Yeah, so yeah, I wrote an article and I'm making
a video.
Speaker 2 (01:08:43):
Okay, yes, I'm like, I know I read this, because
now I remember. Because, uh,
I can't remember how it came about.
Speaker 4 (01:08:50):
I think I was writing.
Speaker 2 (01:08:51):
I was writing a talk, and so I'm like, well,
what does he have to say about it?
Speaker 4 (01:08:55):
What is a mock? What is a fake? Are they
the same thing? They're not the same thing.
Speaker 2 (01:08:59):
So we can put that article in the
show notes as well, so, you know. But yeah, it's.
Speaker 4 (01:09:05):
That's good to know. And we stopped
using the done callback a long time ago because it
causes problems.
Speaker 6 (01:09:11):
Mm hmm.
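To make the done-callback advice concrete, here's a sketch of the switch to promises, using a hypothetical observable-based service.

```ts
import { it, expect } from 'vitest'; // or rely on globals under Jest
import { firstValueFrom, of } from 'rxjs';

// Hypothetical service returning an observable.
const service = { load: () => of([1, 2, 3]) };

// Old style, harder to migrate and easy to get wrong:
// it('loads data', (done) => {
//   service.load().subscribe((data) => {
//     expect(data.length).toBe(3);
//     done();
//   });
// });

// Promise style, portable between Jest and Vitest:
it('loads data', async () => {
  const data = await firstValueFrom(service.load());
  expect(data.length).toBe(3);
});
```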
Speaker 2 (01:09:13):
So yeah, that's great advice anyway, even if you're not
planning the switch.
Speaker 7 (01:09:18):
So even if you're not switching, sure. And I mean.
Speaker 5 (01:09:22):
I don't know, I don't know. I don't know how
anyone could write anything without dependency injection these days.
Speaker 6 (01:09:27):
Honestly, I have to.
Speaker 5 (01:09:29):
I'm adding dependency injection to my Remix
app, because there is no way I can
test, uh, that one function with all of these, like,
global loggers.
Speaker 6 (01:09:41):
Global MongoDB, global.
Speaker 5 (01:09:44):
Uh, audit logger, all of this stuff. Like, please, I
cannot test it.
Speaker 6 (01:09:50):
Yeah, that's a big problem.
Speaker 7 (01:09:54):
There are workarounds. For instance, depending on the teams,
whenever I'm coaching teams, whether it's in React or wherever,
the problem is that the provider in React is not enough, because of the
JSX, correct? So you have to access some things differently.
But sometimes, depending on the team, I just show them
(01:10:16):
different ways. But one way is also a singleton.
Like, we have a singleton factory, and they're using
a singleton that has an override method that
allows you to override.
Speaker 6 (01:10:27):
The thing in your test.
Speaker 7 (01:10:27):
So no matter how you manage that, as long
as you have a point where you can switch things,
it's good. But I think it's good to have it
under control in your app, and not use, again,
jest.mock and vi.mock, because that is exactly
the kind of thing that's going to couple you to
the testing framework.
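A rough sketch of that singleton-with-override idea, with hypothetical names; the point is one switch point the app itself controls, instead of reaching for jest.mock or vi.mock.

```ts
// Hypothetical port the app core talks to.
interface Logger {
  log(message: string): void;
}

let instance: Logger | null = null;

export const loggerSingleton = {
  get(): Logger {
    // Lazily create the default implementation on first use.
    return (instance ??= { log: (message) => console.log(message) });
  },
  // The app-controlled switch point: tests swap in a fake here.
  override(fake: Logger): void {
    instance = fake;
  },
  reset(): void {
    instance = null;
  },
};

// In a test:
// const calls: string[] = [];
// loggerSingleton.override({ log: (m) => calls.push(m) });
```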
Speaker 4 (01:10:44):
Yeah, yeah, yeah, I think that.
Speaker 2 (01:10:47):
Like, the takeaway is always: if you're using something, never couple to it.
If you think to yourself, wow, I sure am
coupled, like, this sure is everywhere in the application,
then someday somebody will say
let's switch, and it's probably gonna be you, and then
you're gonna be spending the rest of your life switching
(01:11:09):
out that thing.
Speaker 7 (01:11:10):
Yeah, and even beyond that, because sometimes people are like,
uh, so, this is the key thing in
hexagonal architecture and stuff: you have to protect
the core of your app from the interference of
things you don't control. And lots of times people are like,
(01:11:31):
oh yeah, but I'm not gonna need an abstraction for this.
This API will never change, or we're not gonna switch
the data, and we're not gonna switch.
Speaker 6 (01:11:38):
The database or whatever. First, it happens. But even.
Speaker 7 (01:11:41):
If like let's say it's not gonna happen because there's
some legal reason or whatever. The thing is that if
you have like an abstraction on top, you control it,
and it means that you don't have to know like
when you're using like an That's a big problem with
the HDP mocking is that how are you, like, you're
using some API, how are you you're not the one
(01:12:01):
developing it, or maybe you are, I don't know, but how are
you gonna mock it? You don't know how it really works.
Are you gonna mock all the properties? If you don't
mock all the properties, then what's gonna happen later? And
do you really need all of these? And what if
the API changes? Do you have contract tests or whatever?
How do you follow the changes? And what if it's
not HTTP anymore? What if it's WebSockets? What if
(01:12:23):
it's another thing? And that's just an implementation detail that changes,
and you're stuck and you have to refactor that. That's why,
whenever you replace something with fakes or whatever other
test doubles or mocks, you have to only replace things
that you control, not browser APIs, not things from
third parties, not libraries or APIs.
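As a sketch of that "abstraction on top" point in Angular terms (a hexagonal-style port and adapter, all names hypothetical): the app and its tests depend on the port, and whether the adapter speaks HTTP or WebSockets stays an implementation detail.

```ts
import { inject, Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs';

// The port: what the core of the app actually needs.
export abstract class OrderRepository {
  abstract getOrders(): Observable<string[]>;
}

// One adapter happens to use HTTP today; a WebSocket adapter could
// replace it later without touching the core or its tests.
@Injectable()
export class HttpOrderRepository extends OrderRepository {
  private http = inject(HttpClient);

  getOrders(): Observable<string[]> {
    return this.http.get<string[]>('/api/orders'); // hypothetical endpoint
  }
}
```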
Speaker 2 (01:12:48):
Yeah. I ran into this yesterday. Our
component library uses something.
Speaker 4 (01:12:55):
Gosh, I can't remember what it was.
Speaker 2 (01:12:57):
It was an external dependency, and the unit
tests were failing because they couldn't find the.
Speaker 4 (01:13:03):
Module to pass the test.
Speaker 2 (01:13:06):
And so we were like, oh, we had a mock,
and like, nope, nope, nope, nope, no, we need to
just override it, like, this is where we fake it. Like,
we shouldn't have to know what method the
component library is internally calling from this third-party library.
All we need to know is just, here's
the thing. And, you know, yeah, the minute you
(01:13:28):
start having to dig into your dependencies to figure out
what you need to mock from them, that's when you
should say, ding ding ding, okay, something here isn't.
Speaker 4 (01:13:37):
Quite right. So, exactly, yeah, or child components.
Speaker 2 (01:13:42):
You know, if you're having to really dig in and
write a very specific HTML selector to select an
element on your child component, like, maybe that's not the.
Speaker 4 (01:13:51):
Right test right there, exactly.
Speaker 8 (01:13:54):
So yeah, all.
Speaker 2 (01:13:56):
Right. Well, I would love to talk all day about testing.
You love testing, and we love having you on,
but eventually my boss will make me go back to work.
So, okay, so where's your next, where's your
(01:14:16):
next appearance?
Speaker 4 (01:14:17):
Where will you be meeting the public again?
Speaker 7 (01:14:20):
Oh, I'll be at NG Baguette by the end of May,
like the new Angular conference in France. And, uh,
because I'm not traveling this year, because I just had
the babies.
Speaker 3 (01:14:41):
I mean, you can still travel, but.
Speaker 4 (01:14:45):
Expect yourself to be out on the lawn when you
get home.
Speaker 6 (01:14:51):
But I could only be traveling afterwards.
Speaker 5 (01:14:56):
Become a nomad.
Speaker 2 (01:14:59):
We heard about NG Baguette a couple of weeks ago,
or I heard about it, because I interviewed.
Speaker 4 (01:15:10):
Soumaya.
Speaker 6 (01:15:11):
She's going to be there.
Speaker 4 (01:15:12):
So yeah, so that sounds.
Speaker 2 (01:15:14):
That sounds really great. I always love hearing about new
Angular conferences. Of course, then it's like, oh, now I
have something to be sort of jealous of.
Speaker 8 (01:15:23):
Again.
Speaker 6 (01:15:25):
It's like the FOMO on all the conferences, I know.
Speaker 2 (01:15:28):
It turns out that, like, I actually like the people
in the community, so I get sad when I can't go.
Speaker 4 (01:15:33):
Normally I don't want to go see anybody. There are
very few people in the world where I'm like, oh,
I can't wait to go hang out with those people.
But I actually look forward to conferences.
Speaker 7 (01:15:43):
There's a real, real special vibe in the Angular community.
Speaker 4 (01:15:48):
Yeah.
Speaker 2 (01:15:48):
And you know, it turns out if you nerd out
about Angular to just random friends and family, they don't
know what you're talking about.
Speaker 4 (01:15:58):
They're not good at.
Speaker 2 (01:15:59):
It. So, so great. And then, if somebody wanted to
reach out to you to nerd out about Angular or
testing, or, yeah?
Speaker 7 (01:16:10):
So I think the best anchor point is my
cookbook, and there you have links to my Twitter,
Bluesky, LinkedIn, and email and stuff. And also,
if people want to learn more about testing,
there's my video course, yes, and so, yeah.
Speaker 6 (01:16:37):
You can find me on Twitter and, like, social media,
as I said, and.
Speaker 7 (01:16:41):
I'm also working on some videos I'll be
publishing on my YouTube channel, so you can, and that's.
Speaker 4 (01:16:47):
And then you do.
Speaker 2 (01:16:47):
Your YouTube channel would also be linked through the cookbook.
Speaker 6 (01:16:52):
Yeah, everything is in the.
Speaker 4 (01:16:54):
Cookbook, and that will be in the show notes.
Speaker 6 (01:16:59):
So exactly, nice.
Speaker 2 (01:17:01):
Nice, excellent. Well, thank you so much for joining us today.
I think we should just go ahead and pre-book
you for next season, and we'll.
Speaker 4 (01:17:08):
Talk about fakes and mocks.
Speaker 2 (01:17:11):
Yeah, apparently Chow let us know there's another test runner
that just dropped.
Speaker 4 (01:17:15):
Yeah, we'll have you on to talk about that.
Speaker 3 (01:17:20):
Everyone will move to it by next week, I'm sure.
Speaker 6 (01:17:22):
Yeah, another migration, uh, so yeah.
Speaker 7 (01:17:28):
And also, by the way, well, yeah,
we can talk about this maybe in June:
we're making some testing announcements with Rainer.
Speaker 4 (01:17:40):
Nice, yeah, tooling.
Speaker 2 (01:17:43):
Yeah, we would love to have you on to talk
about that, because anytime somebody makes me a nice new
testing tool, it's a happy day. So, so yeah,
and the collaboration between you and Rainer, it's got to
be good, right? So we'll see. Or maybe it's terrible, who knows.
Speaker 7 (01:18:04):
Yeah, because I really think the testing experience right now
is not perfect. The tooling is not, we don't have
exactly, like, the sweet spot.
Speaker 4 (01:18:12):
And yeah, it's so much better than it was, but
there's still, like, we can still do better.
Speaker 6 (01:18:19):
Yeah.
Speaker 7 (01:18:19):
Yeah, we're in the middle of a transition, because you
have Vitest, you have, like, browser mode, and Vitest
is still at pretty early stages. You have
Playwright component testing, which is in a weird state.
Speaker 6 (01:18:30):
So that's what's very confusing for the community,
and we're trying to clarify.
Speaker 2 (01:18:36):
Nice. Well, thank you so much for your time, thank
you so much for joining us, and to the listener, thank
you for listening. If you like what you hear, be
sure to subscribe to the podcast so we can keep
having great guests on the show. If you haven't
checked out and gotten your tickets for ng-conf yet,
ng-conf will be in Maryland October seventeenth and eighteenth,
(01:19:02):
I believe. I just keep saying those dates, so that's
when it is now.
Speaker 4 (01:19:06):
But those tickets are on sale now.
Speaker 2 (01:19:08):
I believe the call for papers is still open, so
definitely submit a talk if you have a good idea.
Speaker 4 (01:19:14):
Otherwise, thank you for joining us and we'll talk to
you next time.
Speaker 10 (01:19:19):
Hey, this is Preston Lamb. I'm one of the NG Champions writers.
In our daily battle to crush out code, we run
into problems, and sometimes those problems aren't easily solved. ng-conf
broadcasts articles and tutorials from NG Champions like myself that
help make other developers' lives just a little bit easier.
To access these articles, visit medium.com/ngconf.
Speaker 1 (01:19:41):
Thank you for listening to the Angular Plus Show, an
ng-conf podcast. We'd like to thank our sponsors, the ng-conf
organizers Joe Eames and Aaron Frost, our producer Gene Bourne,
and our podcast editor and engineer Patrick Kays. You can
find him at spoonfulofmedia.com
Speaker 8 (01:20:01):
And edit it.