
March 8, 2024 · 56 mins
In this episode, Henrik and Leandro go over the types of tools we must learn to be great performance engineers in 2024. It is no longer just about scripting to generate load; there are many new things a performance engineer needs in the tool belt.

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:09):
Hello everybody, and welcome to another episode of Provides, where we are going to be checking and touching on many of the topics that are affecting or guiding our modern performance engineers in their endeavors. And if you remember, last episode mi amigo Henrik and I were discussing a little bit about the job market,

(00:33):
right, Henrik? Yeah, correct, and we mentioned a few tools during the episode. Yeah, in the previous episode we stopped a little bit on what tools were needed and what the requirements were, what the new tasks or functions were that these tools had to serve for us

(00:56):
when we were doing performance in these modern times, because it's not just scripting anymore, stressing the system and checking if it survives, measuring through our automations. And, being totally transparent and open here: both Henrik and I work for companies that create tools that are around performance,

(01:18):
not only for performance, but around the trade. So I wanted today to bring up these tools as categories, as things that we can use where we can. Of course, we might have to mention our employers' tools along the way, but we're trying to be as unbiased as possible, as holistic and transparent, on what

(01:42):
the types of tools are that you can use, or should be able to use, or will be required to be able to use in this modern performance market. And well, be aware: we are not biased. We are trying our best to show you the whole enchilada in terms of performance testing tools and related ones. We

(02:04):
will cover a couple of tool suggestions, of course, but again, you are probably going to use a tool that we're not aware of, so we will be very, very eager to know which tool you're using. So if you have any ideas or any recommendations for our performance community, drop a comment below and add your tool of choice. Because there are so many

(02:25):
tools out there, we can't list every tool that is available, and as well, I don't think that we should just be listing them. I want to have, how to call it, categories of tools, because I think we should follow the actual, old-fashioned process of performance engineering and provide the value of the tools and show a couple

(02:51):
of examples of tools that we may have to use in those given situations. So why don't we start from the beginning, Leandro? What would the beginning be for you? Because the beginning for me is that if I want to do some performance engineering, or modify or optimize things, I need to understand the actual situation. So there are tools out there that

(03:12):
give you insights about the actual traffic, the actual usage, and from there we can start analyzing and figure out: okay, if I want to have a realistic performance engineering approach, this is how I'll do it. So this would be, for me, understanding how the users are actually interacting with my system. This is a very good point, because this approach that you

(03:37):
are presenting, this first step, I do agree is super important, but not for new projects, when you are just creating the MVP, or for those big-bang releases that we had in the past. Now you are referring to agile projects that are already there in production. Or, if I have

(03:58):
a gig and I need to deliver a quick load test for a retailer, for example, to cover Black Friday, I need numbers. Because if I want to make a realistic test, or a realistic ramp-up or stress test, depending on the objective of your project, you need to at least have numbers, and you need to find those numbers. And

(04:19):
the golden rule that we have talked about for several years: if someone gives you a number, don't trust the number. Try to figure out if that number is actually real and realistic, and aligned with the actual usage. So how did we use to do that before these tools that we're going to be talking about? Because this has been something that we had to do for ages,

(04:43):
and because you're talking about designing the mix and the scenario, right? I mean, nowadays, I think for the new people that start their career in performance engineering, it's good news, because it's much simpler than it used to be in the past. But you may be working soon, or in the future, in a restrictive environment where you

(05:04):
don't have those tools. So you have to figure out how to take out your Swiss army knife and say, okay, I'm going to take this knife because it's aligned with the project. And to start, I would say, usually, especially if you do testing for a retailer or public websites, what I used to do is request to have access, or have someone sharing

(05:29):
the access, or someone making an export of the Google Analytics traffic. I'm taking Google Analytics as an example, but Adobe also provides similar tools, and there are other tools out there in the market. The idea is to use a marketing tool, because at the end it's very marketing-ish tooling, but it gives you at least the number of

(05:53):
user sessions that come in, and which areas they're hitting, which pages, and I think it gives you a good direction to start analyzing. But as you mentioned, today, for most of our customers, I hope, speaking from a Dynatrace perspective, everyone has observability in place; they have real user monitoring utilized, and then you don't

(06:15):
need those Google Analytics anymore, because you will do the analytics based on the user sessions discovered, and they will use the metrics, so that will be a good starting point as well. You're right, because you just reminded me of some experiences in the past where you had to play good cop and bad cop with the business and say: you're going to tell me how many invoices per month you are processing,

(06:35):
or how many whatever. Very similar to what you're suggesting: check Google Analytics, check the Adobe metrics, or many of the other platforms that would provide this information that you're looking for, right? Or even, what I was doing quite a bit, in quite a significant way, either for end-to-end testing or for

(06:56):
just component-level testing, if you just did microservices: utilize the access logs. I mean, even if you work in a Kubernetes environment, or if you're working in a more traditional environment, there's a big chance that you're hitting a proxy; the big chances are there is a front layer that will route your traffic to your component. And just by looking at the access logs,

(07:17):
even if it's one line per URL. The good thing is, if you can extend those logs, because you can add more and more information at the logging level, then you can have the IP address, the session.
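As a rough illustration of stitching such extended access logs back into user journeys, here is a sketch in Python; the log format and trailing session field are hypothetical, so adapt the pattern to whatever your proxy actually writes:

```python
import re
from collections import defaultdict

# Hypothetical combined-log-style line extended with a session id at the end;
# real formats vary, so adjust the pattern to your proxy's log configuration.
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) .* "(?:GET|POST) (?P<url>\S+) HTTP/[\d.]+" \d+ \d+ (?P<session>\S+)'
)

def rebuild_journeys(log_lines):
    """Group requests by session id and return each session's ordered URL path."""
    journeys = defaultdict(list)
    for line in log_lines:
        match = LOG_PATTERN.search(line)
        if match:
            journeys[match.group("session")].append(match.group("url"))
    return dict(journeys)

# Made-up sample lines standing in for a real access log.
sample = [
    '10.0.0.1 - - [08/Mar/2024] "GET /home HTTP/1.1" 200 512 abc123',
    '10.0.0.1 - - [08/Mar/2024] "GET /search?q=shoes HTTP/1.1" 200 2048 abc123',
    '10.0.0.2 - - [08/Mar/2024] "GET /home HTTP/1.1" 200 512 def456',
    '10.0.0.1 - - [08/Mar/2024] "POST /cart HTTP/1.1" 201 128 abc123',
]
print(rebuild_journeys(sample))
```

With an IP address instead of a session id the same grouping works, just less reliably behind shared NATs.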
And then, yeah, with magic and with a couple of lines of Python code, or Java, whatever you prefer, you can start to map and link those

(07:41):
requests together and say: oh, so the user started with this URL and then went to this URL and then to this new URL. And then you are rebuilding, at the end, in a bit of a manual-fashion way, but you're rebuilding the user interactions. And that is also a good way of saying: oh, I know that most of the users are going to search for a product, look at the product, and add it to the cart. So it's quite useful

(08:03):
if you're on a website. But you need the numbers behind the scenes. So that would give you a lot of good numbers to start building your performance testing approach. But you hit on another interesting thing with the logs, which I count as a tool for performance analysis of utilization patterns: you also needed god-like permissions to get access to those logs, and some of

(08:28):
those things were always kind of political, and at times the equivalent of bringing the blood of ten virgins and the scales of a dragon and silly things like that to get those accesses and permissions. Gladly, nowadays we have many of the platforms that Henrik has mentioned, observability platforms. Because what you were saying, gathering the logs

(08:52):
and putting them together, like, hey, where did the user go through, I have some ID here: you were just generating a kind of trace by hand. You were manually tracing and connecting the breadcrumbs of your user and trying to figure it out. Fortunately, the modern observability tools that we have nowadays will provide this information right away. We have tools like Datadog, New

(09:18):
Relic, Dynatrace, Grafana, AppDynamics, and the list is big. I think those may be the biggest players that I am aware of. Did I forget any? Lightstep, of course, Honeycomb, and many, many others; there are a lot of solutions out there doing a fantastic job. The thing is to make sure that the tools are

(09:41):
in place collecting logs, and have good user monitoring, because at the end, most of everything is having a querying language. Because at the end you may have some data that you look at on the screen, visualized, but if you have the option to query and extract and process it, that will help you to do the aggregation somehow of the average usage. What I used to

(10:05):
like to do when I was doing some performance gigs was to try to rebuild a Sankey graph. You know what a Sankey graph is? For those who don't, let us explain it. You have something like a funnel, so you see a hundred percent of the traffic goes to the homepage, and then you have thirty percent going to /cart, and then you see the actual traffic and how it's split. And if you

(10:28):
have a Sankey graph, when you do that as a performance engineer, you know exactly how you're going to distribute the traffic within the interactions on the website. It's so rich. It takes time to produce it, but once you've done it once, then you're confident. You know in which direction you're going to take it and how you're going to do it. So it's fantastic.
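The split behind such a Sankey view can be computed directly from recovered sessions; a minimal sketch, with made-up journey data, that turns per-session page hits into the percentage weights you would feed into a load-test scenario:

```python
from collections import Counter

def traffic_split(journeys):
    """Compute the share of sessions reaching each page, Sankey-style:
    start from total sessions, then see what fraction flows to each page."""
    total = len(journeys)
    pages = Counter()
    for urls in journeys.values():
        for url in set(urls):          # count each page once per session
            pages[url] += 1
    return {url: round(100 * hits / total, 1) for url, hits in pages.items()}

# Hypothetical session journeys recovered from analytics or access logs.
journeys = {
    "s1": ["/home", "/search", "/product", "/cart"],
    "s2": ["/home", "/search", "/product"],
    "s3": ["/home", "/search"],
    "s4": ["/home"],
}
print(traffic_split(journeys))
```

Here all sessions hit /home, 75% flow on to /search, and only 25% reach /cart, which is exactly the distribution you would mirror in your scenario design.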

(10:50):
No, and an advantage of this, as you show it, is that it made me think of the Marvel multiverse, where you can clearly see the branches where the users are going into different realities, or pages, and where any of the thick branches is where you should be paying more attention to create your multi-step, end-to-end automations

(11:11):
to generate these simulations. But you got it right: one key element here is a good querying capability on the information available from the system, so that you can look for what is important. And this is very useful for user flows, these step-by-step paths. But if your system is just a service, or

(11:35):
divided into microservices and all these types of things, I think we should take a different approach, not these user end-to-end, multi-step flows. The pyramid of automation commands us to do something different, right? Yeah, but even if you have a single component, just with the access logs and a good query language you can rebuild that Sankey graph. I mean, it will take a bit more time, but at the end, consider that the

(12:01):
data that you have in your IT environment is the gold, and if you want to be rich, you use the gold that you have already in place, and then you will do magical things. This is like the first step, as you were mentioning, of these modern tools. Because, yeah, I think ten years ago it was super difficult to extract this information easily, transparently, or in

(12:26):
a centralized manner without convincing admins and going through all those hoops of fire that we had to. Nowadays, if you learn how to query these tools, query this information, these logs and all this, you can very quickly have an analysis: on one hand, as Henrik says, the user paths with these graphs, or how much you should hit each service and each element on your

(12:50):
page, and how much, to generate this mix, if you want to make it modular per service or per user flow; you can have those statistics. And I've seen some very good query builders where, right away, with a good query, you can have a scenario designed pretty quickly, like: yeah, this is your main

(13:11):
page, this is your main flow of users, this is second, third place. How much do you want to go further? And there you have it. I even saw, not so long ago, one person that just put together a query that would even give you the pacing that you should give to
(13:31):
your script to keep iterating. Now, that's pretty clever. That's knowing how to query this information. I think it's the very first step.
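That pacing idea boils down to simple arithmetic: to reproduce an observed session rate with a fixed number of virtual users, each user must start an iteration at a derived interval. A sketch, with illustrative numbers:

```python
def pacing_seconds(target_sessions_per_hour, virtual_users):
    """Seconds between iteration starts for each virtual user so that the
    whole test reproduces the observed hourly session rate."""
    return virtual_users * 3600 / target_sessions_per_hour

# e.g. an analytics query showed ~1800 sessions/hour; we plan to run 50 VUs:
print(pacing_seconds(1800, 50))  # each VU starts a session every 100 seconds
```

A query builder can emit exactly this number once it knows the measured arrival rate, which is what made that trick so clever.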
And knowing how to do it in these tools. In my opinion, again, if you understand the concept behind it, it's going to be very easy. If you go from a customer that has one of the tools to another

(13:54):
customer that has another one, maybe one has LogQL, the other PromQL, the other, I don't know, the Splunk one, I can't remember; Dynatrace has its query language, and Splunk, I don't remember the name of theirs. Yeah, they also have a specific query language. They are all more or less the same. But when you understand what you're trying to do, it's just like: yeah, this new

(14:18):
tool, I just need to get the logs painted pretty, see what is happening, and get the information. There's one tool that you're probably going to need even if you don't have the data. There's a tool that I've been using so much when I was doing performance; it's the best local database software that we have. What are you going to mention? We may fight on this one,

(14:45):
go on. It's an Excel sheet. Okay, okay, yeah, yeah, yeah, I thought that was where you were going to go. The other two that I was thinking of: Tableau, which is great, and a lot of people are utilizing it for the analysis, I would say, but I mean, I think

(15:09):
it's very powerful. But if you never touched Tableau in your life, that's going to be an interesting experience, especially because, in my opinion, Tableau is a tool that is good for so many other things that you may get lost a little bit in it. I think Tableau was awesome at the moment when

(15:33):
we did not have all those observability vendor solutions supporting all those signals and supporting lower aggregation, so more data, more granularity on the data points, and then doing the analytics on top of that. At the end, we didn't have that, so what the performance engineer was

(15:54):
mainly doing was exporting their data from the load tests or from other stuff, feeding it back into Tableau, then creating dimensions and so on, and then doing the analytics from there. I think: do we really need it right now, or does our observability back end provide the same features that

(16:15):
we were looking at a few years back? Yeah, and a big advantage, or disadvantage, I would say: back then it was localized, you couldn't distribute it easily, and nowadays, if you don't have something centralized and easy to distribute, you may be crippling your capacity a little bit. Because I remember it

(16:37):
was awesome to produce reports and cool things, but nowadays you would rather have dashboards, right, that you can just share with all your team, like: hey, check this out. So I'm going to go to the other extreme; you went to the very, very beginning, but now that we're talking about Tableau and reporting: the end of your process, the last step that we used to

(16:57):
have, which begs questions. I mean, I remember that we used to do huge performance reports as Word documents, but nobody was actually reading them, maybe one or two people, and then we were doing simplified versions as PowerPoints for the management, and nobody was reading those either. You had to be on site

(17:19):
and present the numbers to the people. So I think maybe it's still required, it's still needed. But if you do a continuous performance engineering approach, do we really need to build up a document with everything? I mean, documentation is

(17:41):
really important, but maybe using Confluence to report things, or Jira to report things, or a distributed testing platform where you can report and keep track of things, will be more efficient in terms of reporting than just firing off a single document that, yeah, first will take time to create and then maybe nobody will read. I think that the biggest reason

(18:06):
to keep these reporting tools that we had in the past, because, yeah, of course, as you said, PowerPoint, Excel, and PDF were the biggest tools, the Microsoft world, blah blah blah, is that this is still useful, or required, if your test is subject to regulations, where you have to have the

(18:29):
document to, yes, be filed away in the cabinet, proving you checked your process. And as a consultant, usually you have those delivery documents that you need to deliver at the end of the project. So if you deliver a gig from a consulting perspective, then yes, you will have to do it. But when you think that we now have access to

(18:52):
all these logs and these metrics and these querying and dashboarding possibilities, I think even that can be automated now, especially in a modern or continuous performance environment where you are probably not just running load tests anymore, like those big-bang, biggest load tests, but you are continuously checking for performance. You

(19:17):
may want not just a synthetic check either, but your health check every sprint, to call it that. In a way, I think the modern set of tools is again this dashboarding capability, plus exporting it in a documented way. But even if you keep some of this information, I mean,

(19:38):
I wouldn't keep performance metrics for more than six months. But if you eventually need to just generate something, like: yeah, let me quickly bring up my metrics, bring up my stuff with the querying powers that we just mentioned earlier, there's a report. What would probably be more interesting in that report is the story behind what happened: hey, with this connector, we

(20:00):
found this problem, we found this thing and that thing, and there was this correlation; probably as a retrospective, forensic analysis, blah blah blah. And I think these days, because, I mean, I hope that most people are not doing only one test and then, two years after, another test; you test major releases, or even for components you do

(20:23):
it on a continuous basis. What you're fishing for is regressions or improvements. So even if you query that, you need to somehow get access to the previous test results as a reference test, and be able to compare them and say: okay, our performance was twenty percent

(20:45):
higher, or, I don't know, give some numbers, because at the end, the product leader is fishing for: are we better or worse? What is the status? But how often would that happen nowadays, with modern... I mean, this is a tool that I was going to mention later on, while on

(21:06):
the automation piece. Because firing your tests manually, you can do that, but at a certain state, depending on the maturity of the customer or your company, you are going to go toward automation, so triggering tests automatically and then having something that will help you to detect those big regressions. And I think storing the baselines, the reference numbers that

(21:32):
you had in the past, and being able to compare against them and then do the math to say how we are, whether we are better or worse and so on, I think is crucial. I think here you are mentioning that tool that automates the automations, one layer above; you just got me thinking of all the CI/CD platforms. Yeah, correct. Yeah,

(21:53):
yeah, you can either trigger the test directly from a traditional CI/CD, so Jenkins, GitHub Actions, or whatever you want, or you can use these scheduling task platforms; I suppose it's more like orchestration. But yeah, you can imagine it like a workflow. Yeah,

(22:17):
that's true. You can do that. But I think, by the way, there's also a small product that I personally think is very clever, which is Testkube, where you can define the execution of tests and how you're going to run them, and then, with an API, you can either schedule it from there, or from the CI/CD you can say: Testkube, start

(22:41):
that piece of testing. Of course, if you have a tool that has an API exposed, then you don't need Testkube to do that. You can basically, from a CI/CD perspective, send an API call or use a command line; usually most of the tools have a command line that will spin up and start the actual test. You just mentioned something also super important: I
think modern tools to automate automations should have a way to be API-driven, because with

(23:07):
all our continuous and chained processes, whenever you do a pull request or check in new code, or you have this scheduled process that has to trigger your performance tests, if you don't have an API to hit and trigger those in an efficient way, you're crippled in this modern environment. You won't be able to move forward. Either an API, as you said, or a command line: something

(23:32):
that will help you to say: hey, I want to trigger that test. I think that is one of the most important bricks to enable you to go to automation. Yeah, now we're getting into the automation tools.
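A CI/CD step triggering a test over such an exposed API might look like the following sketch; the endpoint, payload fields, and token handling are hypothetical stand-ins for whatever your load-testing tool actually exposes:

```python
import json
import urllib.request

def build_trigger_request(base_url, test_id, api_token):
    """Build (but do not send) an HTTP request asking a load-testing
    backend to start a test run. The endpoint and payload are hypothetical;
    substitute your tool's real API."""
    payload = json.dumps({"testId": test_id, "trigger": "ci"}).encode()
    return urllib.request.Request(
        url=f"{base_url}/api/v1/test-runs",
        data=payload,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_token}",
        },
    )

req = build_trigger_request("https://loadtest.example.com", "checkout-smoke", "TOKEN")
print(req.method, req.full_url)
# a CI step would then send it with urllib.request.urlopen(req)
```

The same call dropped into a Jenkins or GitHub Actions step is all it takes to chain a performance run onto every pull request.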
But one thing that I want to add on the analytics, the analysis, before we jump into the other tools, because we were briefly mentioning the automation:

(23:56):
we covered reporting but not analysis. Yeah, from a reporting perspective, I think there is one tool; it's not a tool at all, it's a concept, but we call it a tool, from DevOps. And this tool is usually associated with SREs: it's SLOs and SLIs. I think if you do load testing, and

(24:18):
especially if you do automated testing, automated performance engineering, then you need sets of SLOs covering your infrastructure, your network, your response times, I mean, all the various KPIs that make sense for your environment.
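Objectives like these can be evaluated automatically and rolled up into a score; here is a simplified, hypothetical sketch of that evaluation concept (all names and targets are made up for illustration):

```python
def score_slos(slos, measurements):
    """Evaluate each SLI measurement against its SLO target and return the
    percentage of objectives met, plus per-SLO pass/fail detail."""
    results = {}
    for name, (target, higher_is_better) in slos.items():
        value = measurements[name]
        results[name] = value >= target if higher_is_better else value <= target
    score = 100 * sum(results.values()) / len(results)
    return score, results

# Hypothetical objectives: response time and error rate should stay low,
# throughput should stay high.
slos = {
    "p95_response_ms": (500, False),
    "error_rate_pct": (1.0, False),
    "throughput_rps": (100, True),
}
measurements = {"p95_response_ms": 420, "error_rate_pct": 2.5, "throughput_rps": 130}
score, detail = score_slos(slos, measurements)
print(score, detail)  # 2 of 3 objectives met
```

A score like this is what an automated pipeline can gate on, instead of a human reading graphs.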
Define those SLIs, define the objectives behind the scenes, and then you can utilize different solutions that will help you calculate scoring, like Keptn is doing it, open

(24:42):
source, that will take all the results of the SLOs, whether you hit the target or not, and then, based on the successful numbers, you will get points, and based on that you get a score. Because in the past we would rely on the SLAs. We had an episode about that, by the way. I think we want to refresh that, because you're mentioning some new things that

(25:03):
we didn't have last time: the points, and how this has evolved, because nowadays it's not so much that you pass or fail the SLO and then you don't go to production. And this is a concept from SRE. Sorry, I

(25:25):
your error budget, you are okay, you're finding if this takes a little
bit longer, how many points areyou having? I think setting these service
levels, it's it's going to bea huge episode on itself because yeah,
but I think it's it's all thetool itself, and I think it's it's
it's it's a driver for your automations. And I think the don't think about

(25:49):
the traditional approach, because when people think about SLOs and SLIs and the many things available for performance, blah blah blah, then it's the golden number: two seconds for everything. And I think now you could basically utilize SLOs to cover technical aspects that you are checking manually, with your own eyes, on the graphs. Why don't we express that

(26:11):
through an indicator, and then we target that and we set an objective behind the scenes? So, like I said, I mentioned the network; you can also think about the costs of the environment, you can think about the energy. I mean, be creative, and then with those sets of SLOs you will be able to get basically feedback on the objectives. And it will also help your organization, because then you will train the SLO approach, and

(26:36):
maybe one of the SLOs that you've defined will be shipped back to the SREs, and then they will use it for production. So I think it's a great way of analyzing, and of having at least a red light or green light on where you are at different levels of the infrastructure. And you made a very, very good point: that a tool is not only a piece of software that will help

(27:00):
performance. It can also be a mental model, a concept, like the SLIs, SLOs, and SLAs, although I haven't heard much about SLAs anymore, because, yeah, those were a little bit of the past. But we will discuss that in future episodes. But yeah, in your modern performance endeavors,

(27:23):
not everything is a tool based on software; it can be a concept. I would even dare to say another tool is the concept of the automation pyramid, that you should attend to that: where are you automating things, how are you managing it? I have this concept of the 3D pyramid of

(27:44):
Performance Automations, because, and we should have an episode on that, on one hand there is where you automate, and in terms of load testing, how much you should be executing is the other side of the pyramid. And many organizations have those things upside down, where I still see so many organizations obsessed with and focusing on load tests, like trying to bring down the system,

(28:11):
like capacity testing, breakpoint testing, all those things, before anything else, and trying to automate everything with browsers, with the front end, with things that are useful only in some situations. But using the tool of this mental model, you can optimize your performance testing. Are you getting the best bang for your buck, or

(28:34):
not? Many sacrifice the low-hanging fruit, to be honest, before jumping into complex stuff that will take a lot of time to implement, instead of doing what is actually bringing value for your efforts. Yeah, and this is another tool that you can have in your mind and your culture and your organization together:

(28:56):
the service levels, how to set them, how to work with them, what to do with them; that's a big topic. And also how to get the low-hanging fruit in the best possible, most efficient way. And using that as a segue, I have another element that I think is low-hanging fruit that many organizations won't even pay attention to: web performance.

(29:23):
What do you think tool-wise? What do we have nowadays that we can use for web performance? I would say that it depends on what you test. If you test an API, I mean, you actually don't care about web performance. I think, first of all, if you know JavaScript, you can think of a plugin that will
(29:45):
measure the different Google, I don't remember the names, the Lighthouse, no, the key components that web performance users are using. Yeah, the Google standard for web performance. Yeah, I don't remember; we've come very well prepared. But I think dev tools, too. I mean, if you, before

(30:10):
hitting some load against your environment, just use dev tools, they will give you so much great information about this. What are you thinking of, Core Web Vitals? Yeah, Core Web Vitals, Google to the rescue. Now, the dev tools, from a browser perspective, from a single-user perspective, will give you a lot of interesting numbers. But I used, in the

(30:32):
past, and I was a big fan of that tool, WebPageTest. I know that they've now been acquired, so it's not an open-source project anymore; I just know that some users can still deploy their private instances. So you have one server controlling everything,

(30:52):
and then you put up small instances that will be managed by the main WebPageTest instance, and each instance can be a browser or a mobile device, and basically you'll hit the URL and then WebPageTest will take video, will take pictures, will do the waterfall loading views, and you get a lot of details.
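Parsing those JSON results and pulling out a few headline numbers might look like the sketch below; the field names follow the shape WebPageTest results have commonly used (data, then median, then firstView), but verify them against your own instance's actual JSON:

```python
def extract_metrics(wpt_result):
    """Pull a few headline metrics out of a WebPageTest result document.
    Field names assume the common data -> median -> firstView layout."""
    first_view = wpt_result["data"]["median"]["firstView"]
    return {
        "load_time_ms": first_view["loadTime"],
        "ttfb_ms": first_view["TTFB"],
        "speed_index": first_view["SpeedIndex"],
    }

sample_result = {  # trimmed, made-up example payload
    "data": {"median": {"firstView": {"loadTime": 3200, "TTFB": 450, "SpeedIndex": 2100}}}
}
print(extract_metrics(sample_result))
```

Once extracted, the same dictionary can be pushed onward to a testing or observability backend.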

(31:14):
And I remember that I was combining it a lot with load tests, where at the end you can be clever and do some scripting combined with the load tests, and you will get those figures back from WebPageTest. Because WebPageTest, as I always mention, has an API, which means you don't have to just look at WebPageTest; you can basically parse the JSON results and then send them back to either your testing solution or, if you have an observability solution, you

(31:38):
can send them back to that solution as well, and this is super useful. I think you already went a level higher than what I was thinking, because having all this, the API, triggering it and automating the solution, is a great set of results that you will get in terms of performance. And as you mentioned, you can link it with load

(32:00):
tests and get all sorts of crazy interactions, because here in performance nowadays we're getting so many elements of cross-pollination: you have the web performance, you have the API performance, you have the server-side metrics, you have the network, you have the database, and you have to put everything together in a single picture. But coming back to the low-hanging fruit

(32:22):
in terms of web performance, there are some tools that I think are, how to say it, underappreciated in the performance world: manual web performance. Yeah, just the dev tools from a browser perspective, and you will get... Like I

(32:43):
said, it doesn't make sense to jump into the heavy journey of automating, processing, and sending all the data to an external solution if you already know that it's not good for one user; it doesn't make sense to go further. I think, if you're confident enough in your solution and how the project is behaving,

(33:05):
or how the application is behaving, then you can think about going a step further by bringing automation into the picture. On one hand, as you mentioned, you bring the dev tools and have a plethora of performance metrics through those tools right there, and then you're clicking through it, like some sort of exploratory performance testing, and you find out, as James says ad nauseam:

(33:32):
the main element of performance testing is whether it scales for one person. And so often that doesn't happen, and it's pointless to go further if you can detect these things early. And even for the reporting part, it's like: yeah, you can just get to a customer, a new client, a new system, and in a matter of minutes, I would even dare to say, you

(33:55):
can provide some performance metrics and an early report, like: hey, I already identified that you have issues here, here, there, and there. And that's manually. There are some other systems that will pretty quickly give you some information about the performance of your platform, a little bit like PageSpeed Insights, platforms where you put in your website, your URL, as

(34:19):
long as it's public, right? This is another interesting option, where you can very quickly see what the performance of your application is for other users, and it will give you a pretty good analysis, in this manner of quickly gathered web metrics, of what is happening in your system, and you know what is happening or behaving

(34:46):
slowly, right there and then: super quick, super easy Core Web Vitals metrics. Lighthouse as well, from Google, will also give you another view. But again, if you have an observability stack that has a suitable real user monitoring solution,

(35:07):
usually even those Core Web Vitals metrics will be back in that solution.
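Wherever the numbers come from, the Core Web Vitals are rated against fixed thresholds; a small helper using the "good" and "needs improvement" bounds Google publishes for each vital:

```python
# Google's published "good" / "needs improvement" thresholds for the
# Core Web Vitals; values above the second bound are rated "poor".
THRESHOLDS = {
    "LCP": (2.5, 4.0),    # Largest Contentful Paint, seconds
    "INP": (200, 500),    # Interaction to Next Paint, milliseconds
    "CLS": (0.1, 0.25),   # Cumulative Layout Shift, unitless
}

def rate_vital(name, value):
    """Classify one measured vital against its published thresholds."""
    good, needs_improvement = THRESHOLDS[name]
    if value <= good:
        return "good"
    if value <= needs_improvement:
        return "needs improvement"
    return "poor"

print(rate_vital("LCP", 2.1), rate_vital("INP", 350), rate_vital("CLS", 0.4))
```

The same classification is what dev tools, PageSpeed Insights, and RUM backends apply under the hood.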
So again, check out what you have in your environment and whether the current tool sets respond to what you're looking for, and if not, you can start doing this manually, as one single user, or combine it with other

(35:29):
tools. You stepped ahead to the next level that I was going to mention, through observability platforms, all the ones that we mentioned, and I hope the audience doesn't think that these two guys who work at observability companies can't stop talking about this, but it's true. Observability is becoming a cornerstone for performance testing. Just think it through. I mean, people used to talk about monitoring. Now

(35:52):
it has become observability. But keep in mind that whether we are thinking of a human being or even a connected car, the way a smart device takes a decision to turn right or stop or whatever is based on the data that it has, and it's the same thing here: with

(36:13):
great observability, you can start thinking of moving a step further and doing automation, making automatic decisions, and so on. We used to collect monitoring data back in the old days of performance engineering because we were mainly looking at graphs. Graphs were the single starting point of the

(36:36):
analysis, and then if we had trouble, we would say, hey, give me the logs. Profiling was something we rarely got around to. I remember that we used to connect profiling tools, but the performance got so bad it was impossible to use the application with suitable traffic, so we were not looking much at the profiling aspects. But I think

(37:00):
now, as a modern performance engineering practice, if you have traces in place, metrics in place, maybe logs in place, oh, I'm so jealous, you can do so much. I mean, the journey to understand and point out the problems will be so much easier than what we used to go through to reach a conclusion of, oh,

(37:24):
this is the problem, this is the bottleneck we have to revise. Now you have a much simpler way of detecting that. I think these are really exciting times. You mentioned a very interesting element that I used to call the quantum performance effect: when you were using some of these profiling tools to observe performance, you altered the performance, because the tools were heavy and

(37:47):
had all these issues; as you mentioned, nowadays they are way more efficient.
I think the way of profiling has changed as well. We used to capture everything, and now there are algorithms that just pick the most frequent functions, so we cover the most important profiles. Because I remember that we were flooded with details and you didn't know where to start

(38:10):
or stop. Even nowadays, the risk of drowning in data is very present, and if you don't know very well what you're doing, it's something you'll hate right away, like, I have so many metrics, please, no more. But going quickly back to that element that you mentioned, the next level of manual performance testing. If you

(38:34):
have great observability, like RUM and some front end web performance monitoring, the back end is very well instrumented, even your databases, and you say, okay, all that is in place, I'm going to manually use the system and gather all these metrics from my observability platforms. I would say it's the developer tools, but on steroids, because you get an extra

(38:58):
level of insight into the performance. With the developer tools, you just know that, yes, this picture and this image and the CSS are heavy, huge, and have issues, and this API is taking, I don't know, let's say five seconds, which can be problematic depending on the SLIs and SLOs that you defined earlier. With just the developer tools, you are

(39:23):
just like, yeah, it's slow, it's a black box, that's all I can tell. But if you have observability in place, you can dig deeper and find out, hey, probably inside the code you have some problems, this database call, or, I don't know, this service is taking too long to spin up when you're requesting it. You can

(39:44):
go even deeper. And I love that you called it the low hanging fruit, because it becomes low hanging fruit right there and then. You haven't automated anything, you haven't load tested anything, and you are already identifying all these problems. So I was saying that in the past we were almost like

(40:07):
being on the old Titanic, and we were looking at the metrics from a graph perspective. We were not sure, so we were combining them with the logs. We didn't have a life vest for everyone, there was a limited number of boats, and if you were not skilled enough, then you were dying in the middle of the sea. But now with the modern boats, everyone has a life vest. You have a boat, you

(40:29):
have more options to be rescued, because you rely on traces, profiling, logs, metrics. You have so many details that help you to be more efficient and survive in the complex world of performance. I love this analogy because you're touching so many elements, from the limited resources, the number of lifeboats, to the capacity

(40:52):
to contain metrics, because, yeah, we would love to have all the metrics for all the actions and everything that happens in our system, but that can very quickly overload our log storage and metric storage and whatever we have, and as you say, we can drown pretty quickly. But on the other hand, if we don't have enough observability capability to visualize, to have

(41:16):
visibility into what is happening, we may be looking at just the tip of the iceberg, and, well, then you hit the iceberg. Now modern boats have radars, they look at satellites; there's so much detail that helps them navigate safely to the right destination. Again, there are always new random events that could happen during the journey,

(41:40):
making it more difficult, but I think it's better than before. So just to wrap it up a little bit and start our ramp down: there's one tool that we didn't mention, and I think it's the actual testing tool, I mean a load testing tool. Otherwise it would be very difficult; I mean, you would require a lot of people hitting the screen to reach the

(42:06):
load. But yeah, obviously we didn't touch on it in detail. For performance you need a load testing tool behind the scenes. And now I am going to draw a line: you mentioned load testing, which requires automation, but there are many other things for which we may want or need to use test automation beyond just load testing. Our

(42:34):
modern performance automation tools, in the past, focused on load testing. They were the load-something tool for performance testing, but nowadays they don't carry this load prefix, or they are not just oriented at it. There are so many other things that modern performance testing tools will allow us to automate,

(42:54):
test, and check, rather than just generating this load. One that I can think of is synthetic testing, which is super useful. I think a synthetic test is one single user running through a protocol based testing approach. So if you do just one single user, then you're almost close to the old

(43:17):
fashioned synthetic testing we used to do. But yeah, I agree. And here is where we're going to get philosophical, because, would you consider, you just said a single user over HTTP, hitting an API or something; there's both. You need to combine both. You know, if you have something like a browser automation run. I think sometimes when something slows down, if you

(43:42):
have several ways of injecting, they give you different types of details. So if I see that the performance from a browser perspective is bad, and then I check at the protocol level and it's good, then I can say, oh, okay, I have an idea. Same thing if you do synthetic testing, for example, only from outside of the network, from different geos, in Asia and

(44:04):
whatever, Europe, US, but you never set up anything locally in your data center network. So at the end, having those different options is a way of saying, oh, everything is bad from outside, but locally it runs pretty well. Then you have a sign: okay, it's not related to the app; maybe it's something in the proxy, or DNS, or whatever, that

(44:25):
is bad. And that's a big difference from the tools that we used to have, which were using your automation to check performance over a short period of time but with a lot of load. Now you probably have the same number of hits, but spread over an incredibly long period of time, just checking the situation. And here's where I may get philosophical. If

(44:47):
you run five users, or virtual users, or threads, or whatever your tool is using, just once or twice, is it still a synthetic test or did it become a load test already? It's so small. It depends on the pacing and think time that you put on those five users, because at the end five users could be more than just five users, to be honest. So I would say

(45:13):
that for synthetics, what we're looking at is a distribution of the behavior over time. So do you actually need five concurrent users at one single point, or do you need one user, five times, every fifty minutes, through the entire week? Maybe you will have a better distribution of that

(45:35):
behavior. But this is a new use that I think our test automation tools should allow us, and we touched on it a bit when we were talking earlier: our tools should have a CLI capability that makes it easy for us to trigger these tests continuously. Because, yeah, also you

(45:57):
can have an incredibly long pacing and leave it running there, which some Kubernetes maintainers would not be happy to have running all the time, right. But otherwise, there's an amazing project called Kuberhealthy where you can design your synthetic tests from a Kuberhealthy perspective. So even if you're in a Kubernetes environment and you don't want to use the synthetic tests provided by commercial

(46:22):
solutions or whatever you can think of, you can say, let's build my own synthetic tests and automate them through Kuberhealthy, and you will get a real health or readiness check from the application perspective. And in this sense, as you mentioned, more than synthetic testing, or performance testing, or cataloguing

(46:45):
it as load testing, it is a continuous check of performance health, I would say. Because people jump right away to: synthetic? Oh, it's a single user, protocol, bam, pinging. For me, synthetic is not pinging, because I think if you go

(47:07):
in the direction of saying, I want to measure the actual health, I need to figure out a user journey that goes into my application, and you know that from that user journey you will hit the cart components, the payment components, the product catalog components, and whatever. And then

(47:29):
if you do that journey, you know that behind each individual request there are a certain number of services or components involved. Then you can say, okay, it's responding, so I know that those components are responding.
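A single-user journey check like the one described can be sketched as follows; the step names, paths, and time budgets are hypothetical, and the fetch function is injected so you can plug in a real HTTP client or browser driver:

```python
# Sketch of a single-user synthetic journey check: walk the key steps of
# a user flow, assert each responds successfully and within a time budget.
# Steps, paths, and budgets below are hypothetical examples.
import time

JOURNEY = [
    ("product catalog", "/products", 1.0),  # (step, path, budget seconds)
    ("cart",            "/cart",     1.0),
    ("payment",         "/checkout", 2.0),
]

def run_journey(fetch, journey):
    """Run each step once; return (step, status, elapsed, ok) tuples."""
    results = []
    for name, path, budget in journey:
        start = time.perf_counter()
        status = fetch(path)
        elapsed = time.perf_counter() - start
        ok = status == 200 and elapsed <= budget
        results.append((name, status, elapsed, ok))
    return results

# Stand-in backend for demonstration; replace with a real HTTP call.
def fake_fetch(path):
    time.sleep(0.01)
    return 200

for name, status, elapsed, ok in run_journey(fake_fetch, JOURNEY):
    print(f"{name}: status={status} time={elapsed:.3f}s {'OK' if ok else 'FAIL'}")
```

Scheduled from a CLI or a Kuberhealthy-style check, this is exactly the continuous performance health check being described, with no load generation at all.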
And moreover, with Tracetest, which is great, you can even make some of those assertions to check that the distributed trace generated is in line with what

(47:49):
you expect, so you can even check that the back end is actually responding. So I think you can design modern synthetic testing combined with Tracetest and other solutions, and that is exciting. The capability of mixing things now is very interesting and important in this space of automated performance testing, because, again, it's not only load. I probably

(48:14):
have a lot of people tired of me repeating that, but performance testing is not load testing, and you can mix a few things in your continuous performance efforts, with tracing, for example. I just came up with an example: someone changed the JavaScript of your front end. Your APIs and everything look awesome to your monitoring,

(48:35):
your instrumentation and everything. But if you have a script running a browser here and there, every hour, say, one browser on key user flows, with multiple steps and so on, you will notice that that JavaScript got screwed up and now it takes ten hours just to process and show you the button. Whereas if you had only the other types of automation continuously

(48:59):
checking, you would be like, yeah, my metrics look fine, I don't know why people are complaining. Mixing that continuous browser automation checking with observability, with all the other tools that we were mentioning, you get a full picture. And by mixing these tools, what we try to, or aspire to, have is a full picture of the performance,

(49:22):
so that we can provide better insights and information. And before we close, I want to hear a quick perspective from you on mobile devices. Mm hmm, yeah, I used to combine that with mobile synthetic or mobile end user experience tools. I think, to get the real

(49:50):
details, you somehow need to have something in the app itself that will give you more insights. Because triggering the old fashioned Selenium remote scripts on a device, you will get a trend, but the response times that you get from those solutions are basically not accurate, because you have the latency between you triggering an action and the actual click, which will take

(50:14):
some time, and you're measuring that too. So if you know that the response is not really the two or three seconds reported by the solution, because you're adding that latency into the picture, then you can basically just figure out: I got three seconds when everything was fine, and suddenly I get ten seconds, so I have a sign that something is bad. But I think it's required,

(50:37):
because everyone uses mobile devices, of course. But then the question is, should I do a more complex test? Because at the end, is my app doing what is expected when suddenly someone opens Facebook or Instagram or TikTok, and then suddenly my app is freezing because something is running on the

(50:57):
other side of the phone, something is freezing. Yeah, or you have a Snapchat filter running while your app is open; of course you will see some interesting performance metrics. I wanted to leave this one to the end. People are looking for ways to automate the device, which is doable and important at times,

(51:23):
but it's not the first step. As you said: hey, first of all, is my API side, the one that serves the device application, working well? That's the first step. Henrik, can we implement observability of our mobile device performance? So yeah, in the case of my employer's solution, there is an agent you can put on

(51:45):
the actual mobile device, in the app, right. Yeah, in the app. And then you also have OpenTelemetry. I mean, it's not mature enough yet, but you have client mobile instrumentation for iOS and Android. I hope that someday it will be mature enough that everyone will

(52:05):
include the OpenTelemetry stack when they are building their app, which means those metrics, or the traces themselves, will be started from the mobile device and we will get a lot of interesting details, which at the moment we are able to do, but we need to rely on vendor solutions. I'm really eager to see the OpenTelemetry community putting more effort in

(52:27):
that direction. Then we will have an agnostic way of measuring mobile performance and mobile behaviors. Worst case scenario, as you mentioned, you can manually instrument your code and put some telemetry on it. It is doable; I highly recommend it, please do it. Yeah. But then the question is: okay, I have a million mobile users, so

(52:47):
do I have to instrument all of them? What is the sampling that you need to define? And then you will also send the data to an endpoint, so how do I make sure that's safe? I mean, there's a security aspect behind the scenes as well. So I think it's a great initiative; go in that direction, but don't underestimate the journey. There are a few things that are complex behind the scenes, so make sure that you're

(53:12):
doing it in the right way. Yeah, and on mobile devices we also need to do an episode and invite, I have some friends in mind who can tell us more about how, because on one hand, automating mobile is a pain, it's not pretty, and the platforms haven't been the most open about sharing some of those metrics and things. And as you say, the situation gets interesting, and then you're like, okay, then what do I do? Well,

(53:36):
there's going to be an episode that will help you figure this out. And I left it to the end because, yeah, that's an interesting one, so we'll leave it there, this situation with mobile performance. And again, like I said in the beginning, we covered a few tools, we covered

(53:57):
a few concepts. It deserves to go deeper, and of course we can do that in more detail. Let us know in the comments if you need more details on that, or if we forgot a type of tool. Yes, I think we forgot a lot of tools, but if you have other tools, drop a comment below, because of course there are so many tools that will be required to deliver properly in your projects. Most probably we will have a part two of this conversation, because it's huge

(54:22):
and we only have about an hour. We're trying to keep these episodes at a manageable size, like our metrics, so as not to drown in our logs and metrics. We don't want to drown you either. So with that, Henrik, any closing comments? I'd say, I'm going to go back to the Titanic:

(54:43):
make sure that the water is warm enough before diving into the oceans of performance. From my end, I want to point out that, tool-wise, for performance engineers nowadays it is not only your automations. You heard us

(55:06):
talking for almost an hour about all sorts of tools. The last and smallest one was automation, and we should do another episode just on that. It's not that it's getting smaller, but the priority is shifting. And I'm going to repeat it again: as you saw, we discussed observability a lot.

(55:28):
Observability has evolved so much over the years, so utilize the magic and the beauty of those platforms; it will, yeah, simplify your journey for sure. Yeah, it makes the fruit hang lower for us. Observability will help you and bring that fruit down

(55:50):
for you, you have no idea how much. Check it out. And well, with that, Henrik, I think we're going to call it a day for this episode of PerfBytes. That's great. I want to thank everyone, muchas gracias for tuning in, and stay tuned for the next episode, because we will be analyzing and talking about all of these trends. We already promised a plethora of

(56:15):
episodes and topics that we need to dig deeper into, so stay tuned, and with that, PerfBytes out, and adios.