
September 29, 2025 66 mins
In this episode of JavaScript Jabber, I sit down with Dan Shappir and our special guest, Yoni Goldberg, to dive deep into the ever-evolving world of JavaScript testing. Yoni, a consultant who has worked with over 40 organizations to refine developer workflows, shares valuable lessons learned from helping teams design efficient and reliable tests.

We explore emerging trends in testing, including the rise of browser-based test runners, the shift from unit testing toward more integration and component testing, and how modern frameworks like Playwright, Vitest browser mode, and Storybook are changing the way developers think about confidence in their code. We also tackle the role of AI in writing and maintaining tests, the pros and cons of mocking vs. real backends, and why contract testing is becoming essential in 2025.

If you’ve ever struggled with flaky end-to-end tests, wondered how to balance speed with confidence, or wanted a clear breakdown of modern testing tools, this conversation will give you practical insights and fresh perspectives to take back to your projects.

Links & Resources

Become a supporter of this podcast: https://www.spreaker.com/podcast/javascript-jabber--6102064/support.

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
Hello, everybody. Welcome to another thrilling episode of JavaScript Jabber.
I am Steve Edwards, the host with the face for
radio and the voice for being a mime. But I'm
still your host. With me on the panel today, all
the way from Tel Aviv, Israel, is Mr. Dan Shappir.

Speaker 2 (00:19):
How are you doing, Dan?

Speaker 3 (00:21):
I'm doing well. How are you, Steve?

Speaker 2 (00:23):
Doing good? Doing good?

Speaker 1 (00:25):
It's Wednesday morning, first day back at school for kids here, so.

Speaker 2 (00:29):
That's always good.

Speaker 1 (00:31):
It's still hot and toasty there in Tel Aviv, as always.

Speaker 3 (00:35):
Yeah, when is it not? Yeah?

Speaker 1 (00:38):
Yeah, it's a little cooler here, starting to cool down
a little bit, which is always a bummer, but.

Speaker 2 (00:43):
It is what it is.

Speaker 1 (00:45):
And our guest today, coming all the way also from Israel,
Mr. Yoni Goldberg.

Speaker 2 (00:49):
How are you doing, Yoni?

Speaker 4 (00:51):
Hey, hey, great, great. I'm really happy to be
here today. It's one degree cooler here compared with Tel Aviv.
Very hot, but the good news is that after two months
of being at home, the kids are finally back at
school since yesterday or the day before, so.

Speaker 2 (01:08):
Yeah, back to work a little yes.

Speaker 1 (01:10):
And for those of you that might see Yoni's drink,
he is not drinking and podcasting; that is tea, not beer.
I clarified that before the episode. Not that
it's not okay to drink and podcast, but just clarifying.

Speaker 5 (01:24):
I literally drank beer on stage while giving certain talks.
My contention is that it only made the talks better.

Speaker 2 (01:34):
Started loosening up a little bit, is that it? Yeah.

Speaker 4 (01:38):
You know, is it also legit in the States? I know
in Europe it's okay. But if I drink
in the States during a talk, will it be
considered PC?

Speaker 1 (01:49):
I've never heard anything against it. I guess it depends
on how well you handle the alcohol, and whether
you stay on topic and have somewhat of a sense
of humor.

Speaker 4 (01:58):
Then you're saying that the first two are okay.

Speaker 5 (02:01):
In Germany, it's perfectly legit to have beer at lunch
during work days, and maybe even required.

Speaker 3 (02:10):
Yes, there is something to that, and it's really good beer.

Speaker 4 (02:15):
Uh.

Speaker 5 (02:16):
When I worked at Wix, let's just
say that we worked hard and played hard, and sometimes
played hard during working hours.

Speaker 1 (02:28):
Alrighty. So with that, we will transition into the topic
du jour, the topic of the day, which is JavaScript testing
and what's going on in testing in the JavaScript world.
So Yoni, take it away.

Speaker 2 (02:44):
Where do we want to start?

Speaker 4 (02:47):
I'm sure you want me to start by sharing a few
words about myself.

Speaker 2 (02:50):
Nah, we don't care about that. We just want
to know what you have to say.

Speaker 4 (02:56):
Now.

Speaker 1 (02:56):
I just yeah, seriously, sorry, I'm a little off track.
Tell us who you are wire famous and where people
can give you money or will say that for the end,
just tell us who are true things? Surprise, surprise.

Speaker 4 (03:08):
I'm a developer, freelancer and consultant doing full stack,
a lot of JavaScript. But more than anything else, really
my love, my passion, the thing that I'm trying to
specialize in, is the overall developer workflow and specifically testing.
I've worked with more than forty organizations,

(03:29):
some of them big names you'd probably recognize, on
improving their testing. So I've literally sat over the shoulders
of hundreds of developers as we tried to craft
tests that are both efficient and yet simple. Made a lot
of mistakes, learned a lot, tried a lot of new frameworks,
and I hope to share some of these lessons here today.

Speaker 5 (03:50):
Isn't the process today basically "Cursor, write tests for me"?

Speaker 3 (03:54):
For me.

Speaker 4 (03:57):
Exactly, exactly. This we're actually about to cover. I
would also cover some testing AI trends,
mostly the useful parts of AI. So yeah, okay, just to.

Speaker 5 (04:11):
Finish that particular point. The scary thing is afterwards, when
you make changes and then the tests break, you tell it,
"Cursor, fix the tests for me," and you're not sure
whether it's fixing your tests to hide the bugs, or
fixing your code to fix the tests, or doing both,
or whatever. It can be really interesting.

Speaker 4 (04:33):
That's the way, definitely. I mean, I think that one
of the first things one should tune in Cursor or
Claude Code or whatever you're working with is that you can
let it work in YOLO mode. But I think there
is one exception: a human confirmation should
be made anytime a test is changed. It may change
my production code, I may inspect each and every line

(04:54):
or not, I don't know, but a test change should
be supervised.

Speaker 3 (04:58):
That's a good take, I think.

Speaker 1 (05:01):
I can't think of any time I'm just going to
sit here writing some code and then commit it or do
whatever without looking it over and making sure it works.

Speaker 3 (05:09):
Well.

Speaker 5 (05:10):
Yeah, that's easy to say, and everybody starts with good intentions,
but at a certain point in time, you know, especially
when you're under the gun, things
seem to work and you build a certain level of trust.
I've known people, even engineers, almost get to

(05:30):
the point of vibe coding their way through projects.
And I think Yoni's statement stands: if you've got
good test coverage, then ensuring that the tests are properly
maintained and reviewed is a good way to add a

(05:51):
significant layer of trust and security around your vibe coding.

Speaker 4 (05:59):
Yeah, I agree with both. I think that people have
various degrees of how much they supervise AI-generated code.
I'm just saying that the least conservative one can be
is to at least guard your tests. You might do this
also for the production code, but I think that letting
AI change tests as well as code is

(06:22):
the highest amount of risk one may take.

Speaker 5 (06:27):
Yes. Okay then, so what are the latest and greatest
trends when it comes to tests, aside from having AI
do all the coding for us?

Speaker 4 (06:39):
Yeah, so I'll mention some super-trends, some
very impactful changes, and then we can
dig deeper into each one of them, at least
the ones that we find interesting. So I think there
is in testing one kind of everlasting progress, in
which we always try to get our tests to

(07:02):
be more production-like without paying the price. So
in the past two years we keep seeing this progress.
In the frontend world,
for the first time (I think we had React
Testing Library ruling this world for a long time,
tests that run in jsdom), for the

(07:23):
first time it's safe to say that tests that don't run
in the browser, that are not visual, are almost out
of scope. There are many new ways and frameworks to
run your tests in the browser and get the full
visual experience and the confidence, so it's more like production.

(07:44):
But some of them run really fast, so you're not
paying the price of a full-blown end-to-
end framework. On the backend side, it means that if you're
having a microservice, you can more easily now run
it on your computer with all the layers and almost
like-production infrastructure, and again without paying a tremendous price.

(08:05):
I think these are the bold changes that we're seeing,
and I guess we will dig deeper into the
specific tooling.

Speaker 5 (08:13):
And just to clarify, there are obviously layers to tests.
I don't know if you should refer to
them as higher or lower layers; it's just different positions
in the testing cycle. So you've got obviously stuff like
unit tests, which should be relatively small, relatively straightforward,

(08:36):
and certainly very, very fast, and they usually don't need
special wiring, although a
certain amount of mocking is often involved. Then you've got
integration tests, and you've got end-to-end tests. It seemed
to me that when you were talking about the improvements
in our ability to do effective, efficient and performant testing,

(09:02):
you were mostly talking about integration tests and end-to-
end tests and less about unit tests.

Speaker 4 (09:08):
Correct, yes, and a little no. I mean, I think
first there is a trend. I'm not advocating, by
the way, for anything; I'm just trying to cover the
trends that I observed. And I think another trend
is seeing fewer unit tests and more component and
page integration tests, the so-called testing diamond. It's

(09:32):
kind of an emerging concept where you should write
more integration tests. But with the new tooling, even if
you write unit tests (take a framework that we
will cover, it's called Vitest browser mode), even
your unit tests run in the browser. Now think about it:
not all the unit tests in the world are pure.
You might test modules that actually do some stuff

(09:53):
with the web APIs. So instead of running it in
Node.js with polyfills, trusting the similarity,
it runs your tests in a real browser, Chromium
for example. So even your unit tests gain a more
production-like experience. Does this make sense?

Speaker 3 (10:10):
Yes, it does, although I need to

Speaker 5 (10:16):
think about what I think about it. It's kind of
a recursive statement, because really, when I'm thinking about unit tests,
I'm really trying in most cases to be as stateless
as possible, and consequently not be dependent on having access

(10:37):
to a browser DOM, which is kind of persistent
by definition.

Speaker 3 (10:45):
And so I'm.

Speaker 5 (10:47):
not sure that this is where I want my unit
tests to go necessarily, but I certainly appreciate the fact
that we don't need to mock browser APIs,
DOM APIs, which is always, let's say,

(11:09):
less than ideal. But why do you think that is?
What has changed that has made this possible?

Speaker 3 (11:20):
Now? That was not available before.

Speaker 4 (11:24):
Yeah, first, worth mentioning that I fully agree. The real
value is in integration tests, in page
testing or component tests, also in end-to-end. In
unit tests it's indeed less significant. So, if
we double-click into the front

(11:46):
end world for a start, I think now there is
a key upfront decision the team should make. There
is kind of a triangle decision between three different frameworks
for testing components and pages, and one of them is
relatively new. It's from the

(12:07):
past two years, and it makes running visual tests in
the browser almost as fast as unit tests. A lot
of people were trying to avoid that kind
of visual experience because of the hassle involved. But with
some of the new frameworks like Vitest browser mode and
the new Storybook model, it becomes easier and faster, so you

(12:29):
have fewer reasons not to say, hey, let's test it
where the user lives. And maybe it's worth at
some point explaining what Vitest browser mode is and how
it stacks up against Playwright.

Speaker 3 (12:45):
Go for it.

Speaker 2 (12:45):
Is that the same as Vitest UI, this browser mode?

Speaker 4 (12:50):
Ah, not exactly, because you can also have
Vitest UI with unit tests, and browser mode is a
new sub-framework of Vitest. And to zoom out
for a second, I think that we now have three
major and good options in the frontend world, and
one must make a decision up front. The first one

(13:12):
is Playwright, the good old well-known Playwright, a very
reputable framework that is very popular, which allows you
to test your system as is. You can spin
up a whole page and you get a real page
in a browser, but you

(13:34):
get the entire application with
the router and the state and a lot of things beyond
the page level or the component that you want to test.
So the performance is a little concerning in Playwright,
even when you mock the backend.

Speaker 5 (13:50):
Correct me if I'm wrong: Playwright is essentially just a
headless Chrome with APIs to automate various tasks. That's
more or less it, right?

Speaker 4 (14:07):
Yes, and maybe a little more, because it can run
your tests in other browsers as well: write the
test once and get it running in Chrome or Safari
or anything.

Speaker 5 (14:15):
Yeah, because they basically later on added support for the
same APIs, as I recall.

Speaker 4 (14:20):
Exactly, they ported kind of a CDP protocol to
other browsers, exactly. But on top of this, they also
have a very nice and stable API to make the
experience of interacting with pages less flaky. So for example,
if you're trying to assert on something, they will take

(14:40):
care to retry and wait until that condition is
met on the page. So the overall testing experience is
really great.
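For listeners who have not seen them, here is a minimal sketch of those auto-retrying ("web-first") assertions in Playwright; the URL, labels and button text are hypothetical.

```ts
// example.spec.ts: Playwright's web-first assertions retry automatically.
import { test, expect } from '@playwright/test';

test('shows a greeting after login', async ({ page }) => {
  await page.goto('https://example.com/login'); // hypothetical URL
  await page.getByLabel('Email').fill('user@example.com');
  await page.getByRole('button', { name: 'Sign in' }).click();

  // expect(...).toBeVisible() polls until the element appears or a timeout
  // elapses, so no manual sleeps or waitFor loops are needed.
  await expect(page.getByText('Welcome back')).toBeVisible();
});
```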

Speaker 5 (14:50):
And it replaced older frameworks like Cypress and Selenium?

Speaker 4 (14:55):
Right, exactly, exactly. It did.

Speaker 5 (14:59):
Okay. So how is Vitest browser mode different from Playwright?

Speaker 4 (15:05):
Yeah, so first, worth mentioning is that Vitest
browser mode is using Playwright under the hood, so you
get a battle-tested engine that you can trust. But
what they try to address is that Playwright is a little bit
disruptive to frontend testing. Take traditional
frontend testing: people used to work with things

(15:28):
like React Testing Library, and the syntax of React Testing Library,
the concepts of React Testing Library; porting from there to
Playwright was a little disruptive. Vitest browser mode allows you
to take your existing tests, or learn nothing new, use
the same syntax, same concepts, only instead of running them

(15:48):
in Node.js like React Testing Library, you run them in a
real browser, which obviously allows you to see. You probably
know that really nasty experience when, with React
Testing Library, a test fails, and what you see on the
screen is three hundred lines of HTML
in a CLI, in which you try to understand: is

(16:10):
this my component? How does it look? What's wrong here?
I mean, nobody wants to troubleshoot tests as HTML
over a CLI. With browser mode, you can see them
in a browser and benefit from all of the Chrome
DevTools perks. So that's the premise. That's
the obvious upside. They also make it fast. You know,

(16:33):
"vite" is the French word for fast, and they stand up to
their reputation. I already use it in production.
It's amazing. I can test an entire page, some interactions,
sometimes a specific scenario, in two hundred milliseconds. It's blazing fast.
You click on a button and you don't have enough

(16:54):
chance to see what happened on the screen. So the
performance is really awesome. With that, there are some cons
that should be mentioned as well. It's very new; it's
been tagged as experimental for two years or so,

(17:14):
so it didn't yet stand the test of
time like Playwright. And there are many other decisions
that were made that might appeal to some
teams and not to others. But it's definitely a bold
opportunity, I think, now in twenty twenty five for testing.
And I can't finish without this, sorry; this is

(17:36):
kind of a mini TED talk. But there is
also Storybook, and Storybook has now changed its model: it's
using Vitest browser mode under the hood, which uses Playwright.
So you have three options. You know that Russian nesting doll,
which is called, I believe, a matryoshka?

(17:56):
So what we have now in the testing space is a matryoshka:
you have Playwright, or Vitest browser
mode that uses Playwright, or Storybook that uses
Vitest browser mode. You get the idea, and you have
to make the choice.
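For teams evaluating that nesting-doll setup, this is roughly what enabling it looks like. Option names have shifted across Vitest versions, so treat this as an illustrative sketch rather than canonical config.

```ts
// vitest.config.ts: Vitest browser mode driving a real Chromium through
// Playwright, the innermost doll of the matryoshka.
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    browser: {
      enabled: true,
      provider: 'playwright',               // Playwright is the engine underneath
      instances: [{ browser: 'chromium' }], // newer Vitest versions; older ones used `name`
    },
  },
});
```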

Speaker 5 (18:15):
So if you're starting a new front end project today,
and let's say that you're using one of the two
leading options (I don't know how familiar you are with them),
either React, which is approximately fifty percent of the market,
or Vue, which is approximately twenty to thirty percent of

(18:37):
the market, and you want
to do testing, because we started with testing for the
front end.

Speaker 3 (18:46):
What would you use?

Speaker 4 (18:51):
Yeah, first, I think that the choice
is really hard today. I had to choose for a
customer like two weeks ago, and the decision is really hard.
On one hand, I lean toward being conservative and saying, hey,
let's use Playwright. It works. It's the most simple,
it's the base layer. It's simple, it's proven, let's

(19:13):
just use it again.

Speaker 5 (19:15):
Playwright in this case is mostly for the end-to-end
tests.

Speaker 4 (19:19):
Right, or you can use it also for
component or page testing. It has great tooling
to allow you to mock your backend, and I think
that for the majority of their testing, frontend
teams want to mock the backend. But yeah, you
can also run it against the real one.

Speaker 5 (19:37):
Well, let's put it this way: if you're not mocking
your back end, then you're by definition dependent on it.
It kind of means that by definition your tests
are going to be flaky at least some of the time
if you do involve the back end.

(19:57):
So yes, if you want to go for
separation of concerns, you want to mock external services, reduce
the amount of dependencies that you have on external services,
and that includes your own back end.

Speaker 4 (20:12):
Totally agree. It's slower, it's more flaky. But my highest
concern with end-to-end is actually that it's really
hard to simulate things beyond happy paths. So think about it:
you want to test now that the user is disabled,
that it's a premium user, that you get twenty records, all of
the things that are beyond the simplest response. With a real

(20:34):
back end, it might be very hard to tune the
backend into each and every scenario that you want. However,
when you mock it, you can simply just say, hey,
I want this response, I want that response.

Speaker 3 (20:44):
Yeah.

Speaker 5 (20:45):
On the other hand, when you're mocking it, you're basically
inventing the responses, so it's not necessarily how the system behaves.
It's not really end-to-end if it's mocked.

Speaker 1 (20:56):
I mean, normally (maybe this is what you're talking
about with mocking) you basically create a test database or a
copy of your database, spin one up, seed it, you've got your data
in it, and then run your tests. You are actually using
a database. It's not your prod database obviously, but at
least you have a clone, a real database that

(21:17):
you're interacting with, because I know, speaking from personal experience, a
lot of times that's valuable testing.

Speaker 4 (21:23):
Yes, but first, before answering that,
I'd like to return to Dan's point, and I think
it's a key point here. If you're doing naive mocking (okay,
I'm just mocking some JSON), yeah, you are now at
the risk of doing things that are not exactly the
same as in production, and this is why you also need

(21:44):
contract testing. And I think this is definitely a topic
we want to double-click into later, because there is
new interesting tooling in the JavaScript space related exactly
to this problem: how do you create smart and aligned mocks?
Now to this one. Yeah, in a simple case, it's
just a CRUD application, so you set up something in the

(22:04):
database and you display it. But in many cases, creating
the server response is much harder than this. Let me give
you an example. At one of my customers, there is a
user entity and it might get disabled. How does it
get disabled? There is some cron job running every minute
and concluding that the user is disabled. So how would

(22:24):
you create a disabled user from a test, and make
the back end run a cron job and tag it?
It becomes so much harder than just returning one line: hey,
just give me a disabled-user JSON. Am I being
clear on this point?

Speaker 5 (22:42):
Yes, it's obviously true. Getting an application to
do exactly what you want can be challenging,
and configuring a certain backend application to replicate certain scenarios
is a problem. Just think about, you know, the scenario

(23:05):
where a customer tells you that they're experiencing a certain problem,
and maybe you even have traces that kind of show
the problem the way that the customer is experiencing it,
and now you're trying to replicate that scenario in the
lab in order to debug it. We all know how
challenging that can be. So I totally agree that replicating

(23:29):
weird behaviors, or certain edge behaviors that are legitimate
and you want to test for, in an actual environment
that simulates production can be challenging, and it's often much
easier to just play recordings of back end responses,

(23:51):
you know, effectively mocking the back end. It can be
much, much easier. Likewise, also mocking the front end.

Speaker 3 (23:59):
You know, if you want to

Speaker 5 (24:02):
test the back end as a system, then you might use
some sort of, I don't know, Postman or whatever,
or Playwright, rather than actually having
an actual front end where you create a
process that clicks through. It can be much more challenging
and more complicated than it might appear.

Speaker 1 (24:28):
So, you know, Yoni, with that
example you gave with the cron job, is there a
point where you have to say, okay, we can't test
everything in an automated test and maybe we have to
do something manually? Or do you always work under the
assumption that everything in prod can be handled with an

(24:49):
automated test?

Speaker 4 (24:52):
Yeah, I'd say, based on my experience, as long
as you stick to gray-box testing. When I say
gray-box testing, it's a test where you don't test
the entire system; you test one component, you test like
the user, you just put things in and expect
some outcomes, but you also run in the same process
as the application. You can simulate anything. I didn't encounter

(25:17):
anything that is hard to simulate,
as long as your test and the code
under test are within the same process. In terms of ROI, yeah,
I would definitely prioritize and focus first on the
things that the user does, the things that are
likely to happen, and where the bugs are more severe.

(25:42):
But in terms of the technical capability of simulating stuff,
I think that today, in JavaScript systems, I
rarely encounter a situation where something was not reproducible.

Speaker 5 (25:53):
And I'll add to that that manual testing is kind
of, by definition, anecdotal, and I hate to depend
on anecdotal evidence of correctness. So even
if you do a certain amount of manual testing, then

(26:13):
record that testing and turn it into an automated script;
it doesn't need to stay manual forever. So a certain
amount of manual testing, especially for, I don't know, complex systems,
might be required, but again, try to automate it going
down the line.

Speaker 4 (26:36):
Definitely agree. I also have, if someone is interested, an article
called "Testing the dark scenarios of your back end,"
in which I've shown, in a Node.js application, how in
seven-line tests (no test there was more than
seven lines) you can test really critical scenarios that
teams usually don't test. For example, test that when your

(26:56):
backend bootstrap phase fails, you get
the right monitoring, or test that when your
liveness route is not behaving correctly, you get the right thing.
Or, very common in the JavaScript world, when
there is a process uncaught exception or an unhandled
rejection happening in your application, do you do the

(27:19):
right thing? This is a critical event. I mean, your
process might just crash. Do you test this? I showed
how in five lines of code you can also
cover these advanced error-handling paths. So yeah, I think that.

Speaker 5 (27:32):
So again, to summarize what we've talked about so far:
basically what you're saying is that you're kind of
promoting more sophisticated tests, usually component-level tests or integration tests,
fewer unit tests, and that using the modern tooling that you

(27:53):
mentioned, like Playwright, like Vitest browser mode, and like
the Storybook testing, it's much easier to do than it
used to be. That's basically what you're saying, if I
understand correctly.

Speaker 4 (28:09):
Yeah, much easier than before. And I'd say, I mean,
I'm not against unit tests, but I think that for
a start, your first step should be what the user
is doing. And the user is not invoking functions or modules, right?
The user is visiting a page. So I think that
the first thing you should test is your page,
how the user interacts, and then,

(28:30):
if you realize there is some highly complex logic that you
want to isolate and test in isolation for, whatever, better performance, then yeah,
go for unit tests. But I think this is kind
of contextual and optional.
Speaker 5 (28:46):
Well, to be fair, unit tests actually were kind of
born out of TDD. So initially the concept of unit tests,
at least for people who take that approach, is basically:
we first write the tests, then we write the code.

(29:07):
Now AI writes the tests and the code, so that
approach is less relevant. Also, to be fair, I've
never been able to adopt the TDD model. Maybe I'm
too set in my ways; it just doesn't seem
to be the way that I think. But I have
gotten value from unit tests. The main advantage of unit

(29:32):
tests is that they're very lightweight, so they're almost effectively free
to run, and I have them run
constantly, because, you know, if something breaks,
it's right there. They test very specific things, so you get
an instantaneous indication of where the problem is, so you

(29:55):
don't need to think about it. With, let's say,
especially the end-to-end tests: the test fails, okay,
why did it fail? It often requires a pretty significant amount
of investigation in order to understand why an end-to-
end test failed. And like we talked about before, it

(30:16):
could also be related to
flaky tests, because you're dependent on external services. With unit tests,
you're not dependent on anything and everything. You're just testing
your own code. It runs very quickly. You're testing a
very specific function, let's say, and

Speaker 3 (30:32):
you know, I made a change to that function;
that's why it breaks.

Speaker 4 (30:37):
Couldn't agree more. You know, I once observed something very
interesting from Kent Beck. For those of you
not familiar with him, Kent is kind of the father of TDD,
a person who should really appreciate unit tests, and he
said in one of his blog posts that unit tests give
up predicting suitability to production and inspiring confidence. So it's

(30:57):
not about production confidence at all. It's about being writable,
fast and specific. Now, I think that most developers
are looking to testing for confidence, and yes, it
turns out it's not only about this.

Speaker 5 (31:11):
Yeah, there's that famous picture about the difference between unit
tests and integration tests, about, you know, a urinal mounted
in such a position that when you open the bathroom stall door,

Speaker 3 (31:23):
you can't actually get at it.

Speaker 5 (31:24):
So everything is fine at the unit level,
but when you put it all together, it
doesn't work. Oh, we seem to have lost your picture.

Speaker 3 (31:34):
You're still with us?

Speaker 4 (31:35):
Yeah, I'm working on fixing the camera. The good
news is that I'm still online. Okay. And I
think it's worth revisiting for one last minute that
crucial decision between the three frontend options. So
we've mentioned Playwright. I think that Storybook and Vitest browser

(31:57):
mode are also great options, and we didn't discuss Storybook.
You asked me what I would pick: so, if we're having a design
system and relying on a lot of small components that we
tend to document and test in isolation, Storybook gives a very
compelling package, in which we get first a great workflow

(32:20):
of visualizing our components and documenting them, but also a
great new testing experience. You write a test once
and you get accessibility testing (if you have some
accessibility issue, you are covered), visual regression testing without writing
any other test, out of the box (well, out of
the box if you are sharing

(32:41):
your credit card), and also functional testing
using Playwright under the hood. So you get.
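Here is a rough sketch of what that "write a test once" flow looks like with a Storybook play function; the Button component and its props are hypothetical.

```ts
// Button.stories.tsx: one story doubles as docs, an accessibility check,
// a visual snapshot, and a functional test via the `play` function.
import type { Meta, StoryObj } from '@storybook/react';
import { within, userEvent, expect } from '@storybook/test';
import { Button } from './Button'; // hypothetical component

const meta: Meta<typeof Button> = { component: Button };
export default meta;

export const ClickToConfirm: StoryObj<typeof Button> = {
  args: { label: 'Confirm' },
  // `play` runs after render; in Storybook's test runs it executes in a
  // real browser through Vitest browser mode / Playwright.
  play: async ({ canvasElement }) => {
    const canvas = within(canvasElement);
    await userEvent.click(canvas.getByRole('button', { name: 'Confirm' }));
    await expect(canvas.getByRole('button')).toBeEnabled();
  },
};
```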

Speaker 5 (32:51):
When you say visual testing, just to verify that I understand your terminology:
you basically mean that if I accidentally, I don't know,
change the font or something, it will break my test,
because the content, the actual pixels on the screen, are
not the same as they're supposed to be?

Speaker 4 (33:11):
Exactly. It should be smarter than this: it should
distinguish significant changes from, like, minor improvements, but
exactly this, things that functional tests can't tell you about.
So your side menu is skewed, something, I don't know,

(33:31):
something is totally broken on the screen, a big change
in the layout.

Speaker 5 (33:35):
You put a red background or something for testing purposes
and then forgot to remove it.

Speaker 4 (33:42):
That's a nice example. Yeah, so Storybook gives
you all this together; an interesting option. On the other hand,
it's the more sophisticated option. It's the Storybook
narratives and Vitest browser mode and Playwright, so you
have a lot of moving pieces to handle.

Speaker 5 (34:06):
Is it open source? Is it a paid
product or service?

Speaker 3 (34:10):
What? What is it?

Speaker 4 (34:11):
It's open source, it's a free product, except (and I think
this is how they make their revenue) the visual
regression testing. That one, which you just mentioned, is
a paid service.

Speaker 3 (34:24):
Okay, cool?

Speaker 1 (34:26):
Hey, just to clarify terms. I know we've talked about
different types of testing. We talked about unit testing, integration testing,
end-to-end testing, visual testing.

Speaker 4 (34:36):
Uh.

Speaker 1 (34:37):
The one I always get hazy on is the difference
between, I guess, what integration testing is, how you define
that, versus unit tests versus, excuse me,
end-to-end testing. So what's your
definition of integration testing? Because without understanding that, it's
hard to get Dan's toilet joke about integration testing.

Speaker 4 (34:59):
So yeah, testing terminology is totally broken. I think that
when you say end-to-end testing, or
even unit testing and integration testing, different people will understand
different things. And for this reason, my suggestion is:
in your company, use a very specific name for

(35:20):
your tests which is not "integration". To take
just one example: at one of my customers, we used
to test pages. We take a frontend page, we
isolate it, and the test scope is every time one page.
We call it page testing, and everyone understands what
page testing means. It's a test of a page. "Integration
tests" might mean three different things to two different people.

(35:44):
If we try somehow to clarify what an integration test is,
I think that the most common definition is that
we test the whole user journey in a
single tier of the system, not the entire system.
So it might be the
user visiting some frontend, but without stretching to the entire system;

(36:06):
it might be a call to
some backend API, expecting a response. These are examples
of integration tests.

Speaker 5 (36:15):
So correct me if I'm wrong, it also includes, for example,
testing components in isolation. So let's say I implemented a
date picker component, a date-range picking component, because
you don't get that out of the box
in the browser, and I want to verify that it's
working correctly, and I can host it independently

(36:41):
and basically simulate interactions with it. It's not end-to-
end because it's not done in the context of the
entire application. So, for example, if you think
about, I don't know, let's say some sort of hotel
booking application, it's not done in the context of actually
booking a stay at a hotel. It's done in isolation, but

(37:05):
it's still done at the level of actual user interface
and interactions, not just invoking functions and checking return values.

Speaker 4 (37:18):
Exactly. In other words, you are testing the user
experience in its almost-real environment. There is
a browser here, there is your code. You
have multiple layers here orchestrating together. You have the
UI kit that you're using, and
then you have your code that's wrapping it, and you

(37:41):
sometimes have your state manager, and there is the browser.
All the moving pieces that exist in production are playing
together in this test. In this regard, it's integration.

Speaker 5 (37:52):
And if you're talking about non-frontend stuff, for
example a Node system, it might be testing a particular,
specific API call, again not in the context of an
actual complete user flow, and potentially even with certain other
parts of the system being mocked, like we said before.
Let's say you're working with

(38:15):
various external services: you're not actually working with these services,
you're mocking them, and you're effectively just testing your particular,
let's say, microservice, or a particular aspect of a microservice,
in isolation.

Speaker 4 (38:30):
Yes, exactly. What I observe works great for backend teams
is testing your entire microservice: put
the infrastructure in a local Docker Compose, and test all the layers
of the microservice as you deploy it (your data
access layer, your business logic, your API), but
intercept only the calls to other services. Otherwise it becomes end-

(38:54):
to-end, with all the downsides that we have mentioned.
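A common way to get that interception in a Node.js test is the nock library; here is a minimal sketch. The service URLs and payloads are hypothetical, and note that intercepting Node's global fetch requires a recent nock version (older versions only intercept clients built on the http module).

```ts
// order-api.test.ts: test a microservice through its real HTTP API while
// intercepting only its calls to *other* services.
import { test } from 'node:test';
import assert from 'node:assert/strict';
import nock from 'nock';

test('creating an order charges the payment service', async () => {
  // Intercept the outbound call to the (hypothetical) payment service.
  const paymentScope = nock('https://payments.internal')
    .post('/charges')
    .reply(201, { chargeId: 'ch_1' });

  // Exercise the service under test through its public API
  // (assumes it is running locally, e.g. via Docker Compose).
  const res = await fetch('http://localhost:3000/orders', {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({ sku: 'abc', quantity: 1 }),
  });

  assert.equal(res.status, 201);
  paymentScope.done(); // fails the test if the intercepted call never happened
});
```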

Speaker 5 (38:58):
Yeah, I'm currently actually doing exactly that in the project
that I'm working on. Being able to use Docker
Compose to effectively recreate the entire system on your
local developer machine, and then be able

(39:20):
to dynamically deploy changes to the specific microservice that
you're working on, is effectively a godsend. It makes life
so much easier when developing in such complex environments.

Speaker 4 (39:34):
Definitely. I mean, it's surprising how fast these tests are.
I think approximately fifty tests for a
microservice, on a developer machine with a real database, last
less than twenty seconds. So yeah, it's a really efficient technique.
And maybe it's worth mentioning, if we're already discussing the back end:

(39:57):
there is now a built-in test runner in Node.js,
mostly for back end, I think, and it's really gaining
a lot of momentum.
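Since the built-in runner ships with Node itself, a minimal test needs no dev dependencies at all; a tiny sketch:

```ts
// math.test.ts: runs with `node --test` (Node 18+), no framework installed.
import { test } from 'node:test';
import assert from 'node:assert/strict';

function add(a: number, b: number): number {
  return a + b;
}

test('add sums two numbers', () => {
  assert.equal(add(2, 3), 5);
});
```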

Speaker 5 (40:06):
Yeah, it's interesting that between Node
implementing its own built-in testing capabilities on the
one hand, and Vitest on the other hand, they're
kind of cornering the testing market in a sense. And
I recall, for example, seeing a post on X, I

(40:29):
think from Kent C. Dodds, talking about how happy he is that
React Testing Library is no longer needed, because effectively
all this functionality is now built into Vitest.

Speaker 4 (40:42):
I think yes. And I think it's really interesting to
see how people are contributing libraries and ideas together. So
React Testing Library was a library, but it was also
a philosophy of how you test. Now the library is
probably fading away, or at least losing some momentum, but

(41:05):
the concepts there, the techniques, are being
fully used in Vitest browser mode and in Playwright, and
so many new testing frameworks are taking inspiration and building
on this, that it's very interesting to see this
evolving process.

Speaker 5 (41:27):
So in general, what would you say: is it still
much more difficult to test the front end than the
back end, or are we achieving some sort of parity?

Speaker 4 (41:38):
Mm, yeah, I'd easily say that
testing a back end is much, much easier than testing
a front end. I mean, first, you can also
judge it by the tooling. There is barely any new
backend tooling, except that new test runner of Node.js, and it actually brings

(42:01):
fewer features. It's funny, because the new test runner that
a lot of back end teams are now using
brings much fewer testing features than what used
to be in Jest, and people don't even need all the features.
And there is very little new tooling in the
back end, except maybe contract testing tooling, which we will mention. However,

(42:23):
you see that in the frontend world there is
a bloom and constant change, a struggle to build
new tooling that can catch up with the frontend
complexity.

Speaker 5 (42:35):
Well, the fact that we actually need visual testing is an
indication of how much more challenging frontend testing is.
It's almost like saying, let's test the
bits flowing over some network connection and basically try
to analyze the system from that, if I'm using a

(43:00):
silly analogy. So, you mentioned contract testing.
Can you elaborate on that?

Speaker 4 (43:08):
Oh, definitely. I think that this is now one of
the most interesting fields in testing overall. I would
even say that in twenty twenty five,
coding against an API without any explicit contract
shouldn't be an option on the table. First, let
me explain. We are all consuming APIs,

(43:28):
whether frontend to backend or backend to backend,
and every system is distributed now. If, as a consumer
of an API, the payloads that I'm sending and receiving
are for me just kind of unknown JSON, some kind
of JSON that I can't control or predict, then I am at

(43:49):
risk of receiving a lot of breaking changes. Say,
for example, things work, but then the API I'm
consuming changes something. My tests pass, but production will fail.
At this point, we are really good
at testing our components. But that ocean

(44:12):
between components: we have much fewer testing ways and techniques
to ensure that two different parties are aligned on the
API between the two. And now there is
a bloom of new techniques and tooling
to cover exactly this. As an API consumer, how do I know,
how do I get a guarantee, when the payloads are

(44:36):
changing, so that I can be prepared for that?

Speaker 3 (44:40):
So what are the solutions in this space?

Speaker 4 (44:44):
Yeah, so there is a spectrum of tooling and options.
The simplest one: in a monorepo, just create
Zod schemas, share them between the back end and the
front end, and you get solid protection. I mean,
whenever the back end changes the schemas, your frontend
tests (or it can also be your back end) will

(45:07):
fail, because you can now generate typings from
these schemas; you can also do runtime validation, and
you're protected. But it doesn't cover every risk.
On the other side of the spectrum, the fancier
tooling is mocking servers that will provide you

(45:27):
with a very sophisticated mechanism to
synchronize the two parties. But there is the middle. I
think that most of the solutions are located somewhere
in the middle, not too complex and
not simplistic, and they use the following:

(45:47):
the API provider generates OpenAPI (usually it comes for free)
and somehow shares that OpenAPI with the consumer.
Now all the consumers have tooling to generate
types, but also runtime validation, out of the OpenAPI.
So now, as a frontend developer, for example, whenever my back

(46:10):
end changes the OpenAPI, say a new field was added,
my code will generate types based on this OpenAPI.
And if I'm trying to access some field that doesn't exist anymore or
has the wrong type, I will get a compile-time or
testing failure right away. So the

(46:31):
back end and I are always aligned, without spinning up a complex
and fancy end-to-end environment.
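As a sketch of that middle-ground flow: generate types from the provider's OpenAPI document (for example with the openapi-typescript package) and let the compiler flag drift; the schema path and field names here are hypothetical.

```ts
// 1) Generate types from the provider's OpenAPI document, e.g.:
//      npx openapi-typescript ./backend-openapi.yaml -o ./src/api-types.ts
// 2) Code against the generated types:
import type { components } from './api-types'; // generated file (hypothetical path)

type User = components['schemas']['User'];

function renderUserBadge(user: User): string {
  // If the backend removes or renames `status` in its OpenAPI, regenerating
  // the types turns this line into a compile-time error, before production.
  return user.status === 'disabled' ? 'Disabled' : 'Active';
}
```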

Speaker 5 (46:38):
Yeah, but like you said, it almost
necessitates in many cases working in a monorepo. Otherwise...
First of all, I hesitate to call these tests. I mean,
basically, you know, one of the benefits of
using static typing is that certain bugs become

(47:02):
impossible because of the type system. But I don't really
consider it to be tests. You could call it
another step in your build that
might fail, but I hesitate to call it tests.

Speaker 3 (47:18):
Maybe it is. I don't know. That's an interesting philosophical question.

Speaker 4 (47:22):
Some of the fancy solutions are going beyond static typing.
What they do is: if you have an
OpenAPI document, it's not all about typing. Some of
the assertions there can be asserted only at run time. Say,
for example, some field's type has a regex. You can't
assert this in TypeScript typing, although

(47:43):
TypeScript is now adding some kind of regex support. So
what these tools do during testing: whenever you are
sending some payload to your mock, they verify
whether the payload is indeed conforming to
the OpenAPI definition. In that regard, they can
make the test fail if the payload

(48:06):
you're using is not what the backend API defined. Of
course, for this you need real testing. And maybe
it's worth mentioning one very cool feature. I guess
you both know MSW. It's a very popular JavaScript
library for mocking, for intercepting HTTP requests.

Speaker 2 (48:26):
No, sorry, I don't.

Speaker 4 (48:27):
Oh, okay, so let me introduce it. MSW
is a library that works, by the way, both
in Node.js and in the browser. It allows you to
intercept network requests in your environment
and say, hey, whenever I'm
approaching another API, say the backend, intercept it. In the browser,

(48:51):
it's using, by the way, service workers, so you will
see the request in the network tab in Chrome. But
instead of going outside your browser, MSW will return
the response on behalf of your API, so you get
a much faster and more predictable working environment. Now,
MSW added a new feature this year, and we're

(49:15):
discussing recent changes, where instead of you defining what
the API responses will be (and you might be wrong there,
right? You think there might be three fields in
this payload; the back end, in fact, in production will
return four), MSW generates the responses from
the backend OpenAPI. So you provide it: hey, this is

(49:36):
what the API provider gave us, the real
schema as it was generated, and it takes it and generates
responses based on this. So you
and the API that you consume are
always aligned. Does it make sense?
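For completeness, here is roughly what wiring MSW into a browser session looks like; the handlers module is the hypothetical one sketched earlier, and the OpenAPI-driven generation Yoni describes comes from a companion package, so it is only paraphrased in a comment rather than shown.

```ts
// browser.ts: MSW intercepts fetch/XHR via a Service Worker, so requests
// still show up in Chrome's Network tab but never leave the browser.
import { setupWorker } from 'msw/browser';
import { handlers } from './handlers'; // e.g. the disabled-user handler above

export const worker = setupWorker(...handlers);

// Typically started before the app boots, e.g. in the test/dev entry point:
await worker.start();

// The newer flow Yoni describes replaces hand-written handlers with ones
// derived from the provider's OpenAPI document, so mocked payloads cannot
// drift from the real contract.
```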

Speaker 3 (49:54):
Yes, as it does.

Speaker 1 (49:56):
Yeah, MSW stands for Mock Service Worker, in case that
wasn't clear already. So before we wrap up, is there
anything else you wanted to talk about real quick that
we have not yet covered in this world?

Speaker 4 (50:09):
Let me see, I've prepared a table of contents here
on the side. We could discuss a little bit more
about testing with AI... Actually, we have covered this;
nothing significant is left. We have touched the major
points and advancements in testing. All right.

Speaker 5 (50:34):
Before we wrap up, I do have a final question,
in this context. When you're
brought into a new project, before making any
changes, I assume you want to make
sure that you have good test coverage, because who wants

(50:55):
to start making changes in a project before there is
good testing coverage. What's the main thing that you're looking
for in order to say this project has good test
coverage, versus another project where you might say this project
does not have good test coverage?

Speaker 4 (51:15):
Yeah, that's the million-dollar question. I'm even struggling
to answer it. It's so deep. But I
think that I'm using two heuristics for this.

(51:37):
The first one is unfortunately not empiric; it's
not measured. If people really
have the confidence to deploy fast, deploying fast
and just feeling confident doing it, you see it,
I mean, you feel it: they trust their testing.

(51:57):
That's, I think, eventually the ultimate test. On
top of this, I'm...

Speaker 5 (52:04):
Just to pause you for a second:
it's literally that subjective feeling of trust. You can
ask people working on the project, how well do you
trust your tests? If you make a change and
it makes its way all the way to

(52:26):
being deployed to production, how secure do you feel about
it working properly and not breaking anything
or introducing any form of regression? It's essentially
that level of subjective feeling in the team.

Speaker 4 (52:42):
Yeah, that's the bottom line. I'm also using test coverage;
I'm using branch coverage. Yeah, I know that it's not
a perfect metric, but it's a very simple metric, and
the majority of teams are not
fooling themselves. If a system has very high
branch coverage, it means that the tests are indeed exercising

(53:03):
most of it, say ninety percent coverage, not all of it, but
a high percentage. At least it tells me
that most of the system areas are attempted to
be tested. On top of this, I usually add some
kind of mutation testing. Mutation testing means
a framework that plants bugs in your code. It

(53:25):
can plant a bug in each and every line, and
then it runs your tests and checks that they fail.
It's an amazingly efficient technique to gauge testing
efficiency, but on the other hand, it's really slow and
cumbersome. So what I usually do is run it

(53:48):
occasionally, just to get a sense of
the ratio between the code coverage and the mutation score. In
other words, if I see that I have code coverage
of ninety percent in a file,
and the mutation score is very low,
it tells me that this is nonsense coverage. You

(54:10):
know, the tests just visit the code, but
they're not asserting anything. On the other hand, if
I see correlation between coverage and mutation score in a file, I
know that this coverage is authentic, trustworthy. So
this is the empiric side of measuring the tests.
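In the JavaScript world, the usual framework for this is StrykerJS; here is a minimal config sketch (the test-runner choice and file globs are assumptions, not a recommendation).

```js
// stryker.conf.mjs: StrykerJS plants mutants, reruns the tests, and reports
// how many mutants survived (i.e., bugs your tests failed to catch).
/** @type {import('@stryker-mutator/api/core').PartialStrykerOptions} */
export default {
  mutate: ['src/**/*.ts', '!src/**/*.test.ts'], // what to plant bugs in
  testRunner: 'vitest',                         // assumes @stryker-mutator/vitest-runner
  reporters: ['html', 'clear-text'],
  // Compare the mutation score here against branch coverage: high coverage
  // plus a low score is the "nonsense coverage" Yoni describes.
};
```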

Speaker 3 (54:29):
Cool, Okay, thank you for that.

Speaker 5 (54:34):
Elucidation. Is that how you put it, elucidation?

Speaker 3 (54:39):
That's the word.

Speaker 1 (54:40):
I'll google that word right after. Elucidate: to explain
something clearly is basically what it means.

Speaker 2 (54:46):
So that is the word of the day: elucidate. Alrighty.

Speaker 1 (54:50):
So with that, we'll move on to the picks, that part of
the show where we get to talk about anything we
want to, within reason of course: board games, books, food, movies, tech,
whatever you want. I'll go first with my pick.
I do have a pick before I get to the
high point, the dad jokes of the week, and
it has to do with dad jokes. So at the

(55:11):
end of the podcast episode that Dan and I recorded
with Ryan Carniato and Tanner Linsley, I found out something
that I wasn't aware of: that Niall Crosby, who was
the founder of AG Grid, had died, apparently
in some type of helicopter accident.

Speaker 3 (55:27):
Well, to be fair, it's not new, right?

Speaker 1 (55:30):
I know it is the first I'd found about it,
you know. So yeah, that just tells you how out
there in the news I am every day, and that
brings back memories for me. So four years ago we
did a Jobascript Jobber episode with Nile, myself and Ag
and just a j O'Neal and Chuck and the Reason
that I was. Episode five oh four came out in

(55:52):
October of twenty twenty one, and this was one of
my favorite episodes, as I mentioned before, because when it
got to the end, Nile just jumped right in and
throughout a whole string of geek dad jokes that I
had to do with going to a no sequel bar
and there's no table, So then he went to a
sequel bar that had tables, but he couldn't join anybody,
and it was awesome.

Speaker 2 (56:13):
I was giving him a standing ovation.

Speaker 1 (56:15):
So that's my pick for the week: you
can find it on Top End Devs, episode five oh
four, and if you want to look at the transcript
or listen to it, it's one of my faves.
So with that, we'll get to the dad jokes of
the week. So, question: how come nobody laughed at the

(56:35):
joke about the faulty guillotine?

Speaker 2 (56:38):
It was poor execution.

Speaker 3 (56:43):
That's good, all right.

Speaker 1 (56:44):
My wife told me the other day, "I'm really getting
sick of you overusing contractions," and I said, it's
what it is. And then, uh, someone threw a can
of soda at me today, or I'd say pop on
the West Coast.

Speaker 2 (56:59):
But I'm all right. It was a soft drink. And
those are the dad jokes of.

Speaker 3 (57:06):
the week, and those were definitely dad jokes.

Speaker 2 (57:08):
Yes they were.

Speaker 4 (57:09):
I'm embarrassed to admit that I understood only two of the three.
I promise to rewatch that section and catch up.

Speaker 1 (57:16):
Okay, yeah, I can explain them. Yeah, it sort of
kills a joke when you have to explain it.

Speaker 4 (57:21):
That is.

Speaker 2 (57:24):
Dan. What do you got for us?

Speaker 5 (57:26):
Okay, I've got a couple of things. So first of all,
I posted it on X, but I also want to
shout out a big thanks to Theo, also known
as Theo of t3.gg, who puts out a lot of
content online. I'm working on a project that involves Stripe
integration for one of our services, and I've not worked

(57:49):
with Stripe significantly, I think, ever. So I was
learning a lot about the Stripe APIs and how they
work. And Theo's video was essentially
all the Stripe gotchas that he encountered while working on
various projects and how to overcome them, and it was very,

(58:16):
very helpful. It came along at just the right time
and undoubtedly saved me from a lot of pain down
the line. So thank you for that, Theo. Thank you
for sharing this information. This is what sharing on the
web is all about. So that's one thing I wanted
to mention. Another is that we did this kind of
(58:41):
watch-along party at work. It's something that we tried
for the first time, and got a lot of good responses.
It's great when you can have somebody at work giving
a talk on a particular topic, but it's not always possible. Obviously,
you may not have experts on everything. So

(59:01):
what we actually did is, we wanted to learn more
about MCP, and Jack Herrington, who we've had on the
show a couple of times (he was also one of
the hosts on the React Round Up podcast) created a
series of videos about MCP that are just excellent. So

(59:24):
we watched several of them in a row. They're actually
fairly short, like ten to twenty minutes each,
and he explains things so well. So we basically watched
a video and then had a brief discussion about it,
and then watched the next video, and so forth. And
it was really great, and I highly recommend this if

(59:47):
you want to educate people at work. It's a great approach,
I think. And the final thing I want to shout
out is that I recently encountered this video. This guy
does videos about old hardware, and he released a video
about a personal computer from the early eighties called

(01:00:14):
the TI-99/4A. And the reason that
this hit close to my heart is that that was
my first personal computer when I was a kid. It
was a very weird and wacky machine. And he literally
opened the screws and showed the inside of

(01:00:36):
how that machine worked, to explain why it was so
weird and wacky, why it worked the way that
it did, what was good about it, and a
lot of things that weren't so good about it. And
you know, it brought back a lot of memories. So
I'll share the link to that video. Let's just say

(01:00:57):
that it was really interesting to program for a system
that had all of four

Speaker 3 (01:01:03):
K of RAM.

Speaker 5 (01:01:04):
You know, in these days of gigabytes, you know, this
is all we had.

Speaker 3 (01:01:11):
Yeah, that's it.

Speaker 5 (01:01:12):
Four K of RAM. And within that four K of RAM,
I implemented a Donkey Kong-like game that had two screens.
And I was like thirteen at the time. So yeah.

Speaker 4 (01:01:24):
Was which language did you use?

Speaker 3 (01:01:26):
I'm curious. BASIC, because that's what it had.

Speaker 5 (01:01:36):
I joked that it simultaneously got me into programming and
kind of also scarred me for life.

Speaker 3 (01:01:45):
So my first.

Speaker 1 (01:01:46):
Job in tech was doing tech support, and we were
dealing with Win just three point one with our software,
and I was constantly having to help users with memory
issues and upper memory and lower memory and shifting things
around so that things could run. And and then when
I was ninety five came along and a lot of
that went bye bye.

Speaker 5 (01:02:02):
I have a whole story about that, but we don't
have the time. We could probably do an entire episode
on reminiscing. So those would be my picks for today.

Speaker 4 (01:02:13):
All right, Yoni, over to you. Yeah, I've recently found
myself interested in quality coffee, or
mostly in the coffee story. So, I always
thought that expensive coffee is like expensive wine:
you just pay for, you know, the brand or

(01:02:34):
that weird taste. But then, when I started
reading more and more and watching videos about tasting, I
realized that between the farm and your cup, there
are like dozens of mistakes. Most
of them are intentional, intentional just to cut

(01:02:54):
costs, and they make the coffee horrible. Take for example
any commodity farm: when they
collect the cherry picks, some of them are already
mature and some are unripe, some
are red, some are green. But you know,

(01:03:16):
they have to be efficient, so they just take the
entire tree down. You get coffee beans that are ripe and
unripe together: some are really sour, some are really sweet.
And that's just one thing that coffee
production is doing in order to bring you coffee with
lower costs and more revenue for them. Really getting

(01:03:39):
a great cup of coffee demands a lot
of decisions and optimization all
over that process. There's no point here,
just a very fascinating journey that involves biology, nature, people. Yeah.

(01:04:01):
Other than that, I realized that we are getting more
and more dependent on LLMs, right? I mean, we're about
to use them for ten percent of coding, or maybe
ninety percent; it's subjective. And I felt like
it's too much of a black box for me. I
can't understand the mechanics of them. Say, for example, I'm
providing it with fifty instructions and it's respecting thirty. Why,

(01:04:26):
if they fit inside the context window, did you ignore
twenty of them? Since I wanted to understand this box
from inside, I bought a book
called "Build a Large Language Model (From Scratch)".
It just teaches you to build all the layers of an LLM,
of course a very simplistic one, but you get an idea

(01:04:46):
of all the moving pieces. I'm now reading it, coding it,
and after processing four chapters, I
can surely recommend it.

Speaker 2 (01:04:59):
Right. To go back to your first pick,

Speaker 1 (01:05:01):
all I will say is that the term quality coffee
is an oxymoron, so there is no such thing, and
I'll leave it at that. So, all right. Well, thank you,
Yoni, for joining us today and talking to us about testing.

Speaker 2 (01:05:15):
I appreciate you coming here. Before we go:

Speaker 1 (01:05:19):
if people want to get hold of you and give
you money and do all that kind of stuff,
where is the best place to track your genius?

Speaker 4 (01:05:29):
Thanks for mentioning this. Well, my GitHub page has my
contacts. There is also goldbergyoni.com.

Speaker 5 (01:05:36):
And you said that you work as a consultant, so
I assume that you give assistance and training and help organizations
improve their testing and test coverage and quality of tests
and whatnot.

Speaker 4 (01:05:49):
Yes, workshops, hands-on work, any
kind of assistance and training.

Speaker 2 (01:06:00):
And that's it.

Speaker 1 (01:06:01):
Yoni Goldberg. Yoni, excuse me. Alrighty, and Yoni
is Y-O-N-I, just.

Speaker 4 (01:06:10):
Yeah, or github.com/goldbergyoni.

Speaker 1 (01:06:15):
Okay, alrighty. Well, that is it. Another thrilling episode is
in the books. Thank you for joining us, and we
will talk to you next time on JavaScript Jabber.