
October 20, 2023 49 mins
A while ago we met Lisa Waugh on a panel at a conference.
She is a rockstar who has been around for a while, knows her IT, has had lots of adventures in performance, and has several recommendations for women who are interested in the field.

Check the panel here: https://youtube.com/live/qbXO4lY8WYI

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:09):
Hello, Perfide! Welcome to another episode where we are trying to start some of these interviews, these great conversations, so that we can share the knowledge of other performers here on the Perfide show. And today I have a great guest. I'm super happy and proud to have here Lisa. How did I

(00:30):
pronounce it? How did you pronounce your last name? Lisa Waugh. Yeah, okay. Welcome, Lisa Waugh, to Perfide, whom I had the honor, the chance, the great experience to meet at STAREAST. It was STAREAST, right? And she was a presenter. She joined us at the

(00:57):
performance table that we were doing, a big discussion there about performance, answering questions. And she's a great performer. And again, Lisa, welcome. And she's a developer experience testing engineer at IBM. But one clarification, and she will repeat it after:

(01:21):
her opinions are not IBM's opinions. So, Lisa, welcome. Can you tell us a little bit about your experience? I've been in the testing space probably since the middle nineties. So my first experience was a client-server app where I

(01:41):
was responsible for resetting the test data on the DB2 side and running all of the batch tests, the batch back end, and making sure that it ran correctly. We had scripts that ran the front end; right now I don't even remember what we wrote them in. But more regression testing, as we'd call it now, and from there I've moved more

(02:07):
into performance testing. But actually now I'm responsible for the test strategy for all of AB and CIO, all types of testing except for security testing. I don't have security testing, but I have everything else. Oh wow. Yeah, I mean, security is its own element or perspective. And I'm super happy

(02:30):
to hear that. Sounds like, well, most, if not all, of your professional life has been in the testing realm, right? Not entirely, but a good chunk of it. Mm-hmm. First of all, the first question that pops into my head: what inspired you to get into the IT

(02:51):
world, testing, and all these sadly mostly male-populated professional areas? Yeah. So I first became interested in computers when I was at the beach with a bunch of my father's friends and their families, and one of the gentlemen, if I remember correctly, was a hardware engineer,

(03:15):
and he was talking about what he did. He talked about some various different problems that he had run into and what they had to do to solve them. And I thought, that sounds interesting, I think I'll do that. I don't even... I think I was early in high school when I heard that. And then as part of

(03:39):
the school, I was able to take a couple of computer science courses, not in my high school but as extracurricular stuff during the summer or at night, and I was able to actually learn to program before I got out of high school. I learned Fortran and BASIC, if I remember correctly. And this

(04:02):
is in the seventies, and I did this. So it was very unusual to even be able to be exposed to it. But at least I was, and I realized that I'm good at math. I was pretty good at doing the computer stuff, and I stuck with it. So I went to school. I have a degree in mathematics with an option in computer science, which was the closest

(04:23):
I could get to a computer science degree in North Carolina at that point in time. And here I am. I enjoy it. I enjoy solving problems. So to me, it's like a new puzzle every day. When I met you at the conference, it sounded like you were enjoying a lot of these types of problems, these types of challenges that we often find in the IT

(04:46):
world. And something from what you mentioned that super catches my attention is that you learned these programming things kind of on your own. Before, if I understood well, right, it was outside of high school, but it was in some school-sponsored stuff. So, one, let's see, if I

(05:09):
don't even remember. One of them was SPEC, and I can't remember what SPEC stood for. But that was a two- or three-week little course where we were able to go. We actually went and stayed at a local college and went to some classes. And then the other one was Governor's School, the North Carolina Governor's School. I was nominated and accepted for

(05:30):
that, and that was a six-week course where I went and stayed at Salem College in Winston-Salem for six weeks, and I was able to do programming there. And then I was able to go at night, I think it was one night a week for six or eight weeks, to Wingate College, and took a programming language there as well. So it is really impressive, because

(05:58):
nowadays, I mean, at my age, what I was used to... well, not that there was that much YouTube when I started, but today you can stream something, you can get the knowledge from so many different sources. But in those days, I admire that you chased it, that you went for knowledge, you went for what you wanted to get into.

(06:21):
Yeah, I can tell: passion, big time, for what was coming. And now, you became a very good IT professional, from what I can tell. But how did this transition into the QA world happen? Oh, let's

(06:43):
see. So I've always been good at problem determination, okay, I always have been. Somebody can explain to me what their system's doing, and I just keep asking questions until, usually, I make them think, oh, that's really wrong, just by asking them a lot of questions, right. And

(07:04):
I'm able to take pieces of information, okay, and go, okay, you're telling me that it does this, and then it's got to do this before it can do this or it won't work, and just basically ask questions until I get good answers. So, I had been working... I've worked for IBM for

(07:26):
forty-one years, okay. And during that time I've had various different jobs, but at this point in time I was in services, and I had a client that I had worked with a lot, and they knew my background, knew what my capabilities were, and they requested me to come work on this team to do the testing before

(07:50):
the application was released to production. It was a brand new application replacing a nineteen-sixties-era mainframe application, and it was client-server: the back end was still mainframe, the front end was Visual Basic. And so there were a lot of moving parts, and they requested that

(08:11):
I work on the team, and that's how I got into it.
Wow. Yeah, many of the best testers, and I'm going to get into that, performers, do not start out looking for that. They have some skills, but then they're told, no, no, you're good at this, come join us on a project. And they end up like, okay,

(08:31):
yeah, now I'm a tester. Cool. Yeah. And I'm guessing it was similar with performance; how did the transition from QA to performance happen? So one of the reasons that they got me onto the team was I was good at database stuff and tuning SQL queries. I actually had

(08:54):
taken a whole lot of classes from Bonnie Baker, who was really knowledgeable and worked in the performance area for DB2 on the mainframes, and so I had taken a lot of classes from her and had learned a whole lot about how to find badly performing SQL and what to do about it. So that was one of the reasons they brought me over. So

(09:16):
then as I transitioned off this project from the customer and moved on to other things I was working on: ITIL kinds of implementations, service management kinds of implementations. And the users are allowed to write queries for those. Well, users are lousy at writing queries, you know, SELECT * FROM,

(09:41):
bringing back the whole table. Yeah, when it's a million-plus-row table, that won't work, right. But there are also things that you do in queries that are not good for performance, like saying NOT IN, for instance. It's better to list all the values, even if it's not just one value. It's better to list all of the

(10:05):
values than to say NOT IN the one value you don't want. And that's because of how the indexing works. The index can't work for NOT IN; it can work for an IN list. So there are things like that, and if you learn what to look for, you can literally go down through SQL statements and go, nope, you need to rewrite

(10:26):
that one, you need to rewrite that one. And then you work at them and rewrite them. So it became obvious pretty quickly that I could fix people's SQL, and I got recruited to go work on a performance and testing team. And we were doing performance testing, okay, but I also

(10:50):
was rewriting our customers' queries so that they would perform better. So I was also doing level-three work for the help desk, rewriting customers' queries, and from there I have not done anything but testing since then. Wow, it's interesting, because you started more with performance optimization rather than testing. But many times that

(11:16):
goes kind of hand in hand. And it's impressive; in my developer days I have met many developers who are not like that. You just throw around so many details about query optimization, right? They go and do database inserts one record at a time, selects one record at a time,

(11:39):
or do huge data dumps, or, as you say, a full table scan for SELECT asterisk. All of those things are super interesting, and many developers do not even know them. They're like, yeah, I can just bring all the information over and have it on the client side, and then I can play with it. How many clients are going to be there? What's the size of the database? So many of those things that you seem to be used to.

(12:05):
And just when you were saying that, I was smiling. It's like, you shall not do that! And so many developers, users and even testers are like, yeah, I want to see the whole database. No, it's huge, you don't want to, I can't do that. Yeah, at times it's cute, but at other times it's scandalizing. And it's impressive

(12:30):
that as well, you understood so many of those things about the database that are not obvious, which many don't even know. Wait, is that avoiding the index? Is that making my query worse? Can I just openly join three tables just like that? No. Well, you can join three tables if

(12:52):
you do it right. You just have to have your queries well structured and avoid constructs that cause table scans or index scans. Index scans are not as bad as table scans, but they're still usually not good. No, and I bring it up because, in the past, I remember so many of the things were table joins that generated so many full

(13:16):
table scans, multiple tables, and it was just like, oh, why are you doing it this way? And it's deep, and there's some information that we need to understand as performance engineers. I think it's super cool that you know it. And it doesn't sound like you jumped right away into the

(13:37):
scripting, automating and trying to bring down the system early in your professional career, right? Generally, I didn't have to try, because they had already brought down the system, and that was how it ended up with me. You were more, it sounds to me, in the monitoring realm: checking logs, checking what the problem is, and diagnosing. Right, right. And so I wasn't looking

(14:03):
at logs so much. I was looking at logs some. I actually would look at DB2 performance statistics, and DB2 will identify, if you turn on the right parameters and everything, DB2 will identify for you how a query is performing,

(14:24):
whether it's performing good, bad, or indifferent. And we had some tools that would take that information and sort it, let us sort it different ways. We could look at the number of rows read, the number of buffer pool hits, the number of times the query was executed, all these different things,

(14:45):
and so we could sort by all these different parameters. And so the tool made it very easy for us to hone in on what the bad queries were quickly, and we would pick the top two or three. You don't, you know, you don't try to fix them all at once. You pick the top two or three that you thought were the worst and rewrite those,

(15:07):
or maybe add an index for them. If you were lucky, you could add an index and not rewrite. And in the end they would implement that, and then you would say, okay, give me another set of statistics after that, let me see how you're performing now. And then you would pick the next two or three and work on those, and you just do that over

(15:28):
and over until they get to the point that they're like, okay, we're good. No, and in essence, I think that's, well, redundantly, the essence of performance testing and optimization. Because in the industry I think there's this confusion, right? It's like, the tools, and you have to automate and bring down the system. No, no, no, we know it's bad

(15:50):
already. Why are you trying to automate? Why are you trying to do all these sorts of complex steps and details? If you already know it's bad, just try to investigate, polish it little by little, and keep updating or improving your system that way. And it really amazes me, and I

(16:12):
think it's great that you started from that investigative perspective: polish, tune, instead of just trying to automate. I think it's something that many in the performance testing industry have, that mindset where they go right away to, I have to automate and bring down the system. Well, from what you're telling me,

(16:36):
the system was bringing itself down already, not performing, or performing very badly. But yeah, we had situations where, you know, right when they first got to it, they would turn on the system and start adding users to it, and the system would quit responding to them. So basically it was down, and we would go through and try

(16:57):
to figure out why. And yeah, so that was happening at the customer locations, right. So in IBM, we needed to create performance tests to see, well, how can we get to five thousand users on the system? Or, if we only have a system half this big,

(17:18):
how far can we get? Can we get to, you know, two thousand or three thousand or whatever the number is, so that we can tell people, okay, if you're looking to support this many users, we think it's going to take a system of this size. Okay, capacity planning, right? Capacity planning. So yeah, a lot of what we

(17:40):
were doing was to put out benchmarks, so that people could say, okay, if you're doing this transaction rate, which we would equate to this many users, then get your system this big. And it depended on a lot of things, because with DB2 in particular, scanning is really efficient. If the

(18:00):
table is small, you get great performance even if you don't have, like, proper indexes. And so developers are working with, you know, systems that have a couple hundred rows probably at most, and so they do SELECT * FROM, well, not quite that bad, but they would have queries that were table scanning, and they weren't able to see that they were table scanning, because their amount of

(18:23):
data was too small. When we would performance test, we were using the size of a system that was real, based upon our experience with customers. So we had millions of work orders and tens of thousands of assets and things,

(18:45):
so that we would know if you table scanned. Seriously, I mean, you're not going to pull back a million work orders and not know it.
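Lisa's two points here, that listing the values you want (an IN list) lets the index work while NOT IN defeats it, and that table scans only become visible at realistic data volumes, can be sketched with SQLite. The optimizer differs from DB2, but the principle carries over; the work-order schema below is purely illustrative, not any real product's:

```python
import sqlite3

# Illustrative work-order table, loosely inspired by the asset-management
# example in the conversation; names and sizes are made up.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE workorder (id INTEGER PRIMARY KEY, status TEXT, site TEXT)")
con.execute("CREATE INDEX idx_status ON workorder (status)")
con.executemany(
    "INSERT INTO workorder (status, site) VALUES (?, ?)",
    [("OPEN" if i % 3 else "CLOSED", f"SITE{i % 10}") for i in range(10_000)],
)

def plan(sql: str) -> str:
    """Return the access plan SQLite chooses for a query."""
    rows = con.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " | ".join(r[-1] for r in rows)  # last column is the plan detail

# Listing the values you DO want lets the optimizer probe the index...
print(plan("SELECT * FROM workorder WHERE status IN ('OPEN')"))
# ...while NOT IN defeats it and falls back to scanning every row.
print(plan("SELECT * FROM workorder WHERE status NOT IN ('CLOSED')"))
```

With a couple hundred rows both plans feel instant, which is exactly why, as she says, developers never notice the scan until the table holds millions of rows.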
Yeah, because pre-production environments have always had that situation where, yeah, the performance is great, because you don't have anything in there.

(19:07):
I mean, sometimes even with not much in the system, the production environment was bad, and that's another level of poor performance. And it seems like, from all these movements and different perspectives in performance, because, yes, you're saying capacity planning, it's not just testing the capacity, how far the

(19:30):
system can go, but resource optimization? I'm sure that you also walked into areas where they had the most humongous machines for platforms where, well, you only use half of it in the best case scenario, right? Yeah, I've seen that too. And sometimes it's not that, especially in the cloud world.

(19:52):
I've seen situations where your performance issues were not because of the amount of memory or the number of CPUs that you had, but because you hadn't signed up for enough connections to the cloud service, and so that was your bottleneck. So what you had to do was up your subscription to that cloud service to get more throughput. That was a classic problem, even

(20:17):
with, like, bare-metal machines. I remember the number of threads or number of connections that you could have, if that's not well configured. Oh man, but my CPU is so low, my memory is fine, what's happening? What else can I do on the machine? Well, your number of connections is not enough; you're not getting enough parallel connections to your machine. Yeah, and understanding all these pieces,

(20:42):
all these components. It's not only about hardware, it's not only about automation. We need to understand, uh, the big picture, and some indications help, as you mentioned, with some of the logs from the database, from the system, from the user. I mean, it's a huge

(21:03):
area, and very entertaining. And you were mentioning cloud a moment ago, which brings me to this: you have a lot of experience here, you have been in the industry for a long time, and you have seen multiple changes happen, probably several times. Which ones are you seeing happening right now that

(21:23):
are changing the way we do things and execute things? You mentioned the cloud. What other things are you seeing? Well, okay, so you went from bare-metal dedicated hardware, right, then you moved into VMs and partitioning, LPARs, and, you know, what you called it depended upon what

(21:48):
hardware type you were running on, right? So if it was mainframe or AIX boxes, it was LPARs. If it was, you know, Intel-based stuff, it was VMs. But still, it's basically, you know, I'm still carving up a machine either way you want to do it. And then you went to VMs on the cloud, and now you're moving into containers,

(22:11):
and you're moving into services, like database-as-a-service, or, okay, we call it Cloud Object Storage, but S3, I think, is Amazon's term for it: the file system on a cloud, right, where

(22:34):
you can store files and retrieve them. So those things behave differently. So we containerized an image that would let us run JMeter and Taurus, okay, and this was what I was presenting at STAREAST. And because

(22:56):
we're running it in Kubernetes, in a container, you know, if the container goes away, so does the disk storage. So we were writing our test results to Cloud Object Storage. We had Cloud Object Storage mounted as a drive to the Linux image, and we discovered that that would limit our ability

(23:17):
to load the system, because that was slowing us down. It wasn't the number of connections to it, because it was basically one, right, okay. It wasn't that; the lag time on it was slowing us down.
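A minimal sketch of the remedy she goes on to describe, keeping results on fast local disk during the run and copying them to object storage only afterwards. The paths and the stand-in test runner here are hypothetical, not the team's actual setup:

```python
import shutil
import tempfile
import time
from pathlib import Path

def run_load_test(results_file: Path) -> None:
    # Stand-in for the real JMeter/Taurus run: stream per-sample results
    # to local disk instead of the slow mounted object-storage bucket.
    with results_file.open("w") as f:
        for i in range(1000):
            f.write(f"{time.time()},sample-{i},200\n")

def run_and_archive(bucket_mount: Path) -> Path:
    """Run on local disk (hot path), then bulk-copy to the bucket (cold path)."""
    local_dir = Path(tempfile.mkdtemp())
    local_results = local_dir / "results.jtl"
    run_load_test(local_results)             # every write hits local disk only
    bucket_mount.mkdir(parents=True, exist_ok=True)
    archived = bucket_mount / local_results.name
    shutil.copy(local_results, archived)     # one sequential copy at the end
    return archived

# The "bucket" here is just a temp directory standing in for a COS mount.
archived = run_and_archive(Path(tempfile.mkdtemp()) / "cos-bucket")
print(archived.exists())
```

The trade-off she mentions at the end of this passage applies: if the container dies mid-test, the local results are lost before they are ever copied.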
So what we did was we changed our approach: we run with the local

(23:41):
disk until the test is finished, and then we copy it to Cloud Object Storage. So there are things like that where it's a little bit of trial and error to figure out what you can and can't do, because, you know, how many threads, or concurrent threads, you can load up in JMeter is determined by your resources. So if you're slowing down because of

(24:03):
your writes to your disk, then you can't load as many threads up. You can't get to the same level of throughput that you would like to get to as if you were running natively. And so we changed it to run the disk writes locally, natively, and then copy to the cloud after the fact. And that works fine, unless the test bombs in the middle.

(24:27):
It's interesting, because you bring up so many components that have changed from how we used to do it on bare metal, having our load generators ourselves, deciding what machine we're going to use. Now we can just send it up there to the cloud and have some instance created. But you bring up a very good point that I

(24:48):
saw even on bare metal, with some teams: yeah, we can drop all the results on a shared drive on the network. Oh, and how is that going to affect your test that is generating hundreds or thousands of records per minute? And some of these things change depending on where you're

(25:08):
putting it, how you're writing the files. Are you understanding as well the performance of the tests per se? It's also another cool thing, with all the things that you're mentioning that are changing nowadays. Yeah, you don't run it on your own laptop or the load generator that you have in the basement. You instantiate, you bring Kubernetes, you bring your containers, everything ready to do

(25:33):
your test, and all that brings all sorts of important considerations around it, right, what you were saying. Yeah, well, this is the computer industry in general, right? What you knew five years ago is not going to help you now. Mm, well, yes,

(25:56):
but the actual things that you're doing are generally different. So the testing tools from five years ago are generally not the testing tools you're using today. Some of them, maybe, but in general, you know, it rolls over about every five years, it seems like now, and it's probably accelerating. It'll probably be down to three before long. But you learn

(26:22):
how to figure stuff out. That's what keeps you going. If you learn how to figure things out, then you can apply that skill forever. And if you like to learn new things... I personally like new challenges. If I'm ever bored, I'm gonna quit, because at this point I could retire, right? But I haven't, because I'm still enjoying

(26:47):
what I do. And so, I'm serious when I say, if I get bored now, I'm gonna quit. But fortunately, I think you won't be quitting ever; the IT world will always have something interesting, new and challenging that will keep you going. Yeah, yeah, yeah. I

(27:07):
mean, most of them are fun challenges. Or I tend to say that we end up with kind of a Stockholm syndrome: yeah, we love it, because that's the only way we can do it, right? Yeah. But it's a fun journey, and it's something where many in the industry are like, I gotta keep learning, I gotta keep up with this new tool, and now

(27:30):
it's this, and now it's open source, and now it's back to commercial. But that's what we're here for, right? Right. If it was easy, they wouldn't need us. Yeah. That's another one; you just made me think of my past life as a performance consultant. I used to say, why do I end up at customers that do not know at all about performance?

(27:52):
Well, that's why they are bringing you in, and that's why they need the help. And many teams and organizations are in this situation, even if you're internal, right. But I think as well, with these new sets of challenges as of today, because you mentioned a lot of cloud and the quantity of

(28:12):
new infrastructure, like, for getting data, I remember you needed a special database connector to be able to access the data and input any query. Now we have all sorts of different ways also to bring data to us, as you mentioned: services, APIs,

(28:33):
GraphQL, all these new technologies. How do you see them affecting or changing how we used to think about performance tests? So, resetting to a known quantity is getting more difficult. It used to be, with the performance tests that I

(28:56):
would do when I was working on the asset management system, we would restore the database, because the systems were ours. We would restore the database so that we knew that when we were running, we were running exactly the same, we were getting back exactly the same number of rows, we were running exactly the same queries. So it was a

(29:18):
known quantity all the time, because it was our bare metal, okay. They were our LPARs, it was our metal, okay, they were our machines. We knew what they were. We restored the databases, we would reset the WebSphere environments, et cetera. We controlled the whole thing. So with clouds you don't have that level of control. And so today I advise teams

(29:47):
to write the tests in a way that creates the data that you need to operate against. So you do your queries in order, okay. So you do your creates, then you do your modifies, then you do your deletes, okay. In there, you're doing whatever queries you're, you know,

(30:07):
whatever queries you're working against. But you're creating your own data as you go. And if you do that, then you're keeping the environment about the same, but you're only operating on the data that you created. So if you have a system that has the proper kinds of queries on the database side, for instance, you should only be pulling back what you created.
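The create-modify-delete discipline described here can be sketched like this, using SQLite as a stand-in for the shared database and a per-run tag for isolation. The table, column, and tag names are all illustrative:

```python
import sqlite3
import uuid

# Stand-in for the shared cloud database the test cannot reset.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE asset (id TEXT PRIMARY KEY, run_tag TEXT, state TEXT)")
con.execute("CREATE INDEX idx_run_tag ON asset (run_tag)")

run_tag = uuid.uuid4().hex  # unique per test run: the key to isolation

# 1. Create the data this run will operate against.
ids = [uuid.uuid4().hex for _ in range(100)]
con.executemany("INSERT INTO asset VALUES (?, ?, 'NEW')",
                [(i, run_tag) for i in ids])

# 2. Query and modify: filtering on the run tag, with a proper index,
#    pulls back only what this run created, never anyone else's rows.
con.execute("UPDATE asset SET state = 'ACTIVE' WHERE run_tag = ?", (run_tag,))
mine = con.execute("SELECT COUNT(*) FROM asset WHERE run_tag = ?",
                   (run_tag,)).fetchone()[0]

# 3. Delete, leaving the shared environment roughly as we found it.
con.execute("DELETE FROM asset WHERE run_tag = ?", (run_tag,))
leftover = con.execute("SELECT COUNT(*) FROM asset WHERE run_tag = ?",
                       (run_tag,)).fetchone()[0]
print(mine, leftover)
```

Because every run tags its own rows, repeated runs against the same shared environment pull back close to the same number of rows, without any database restore.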

(30:29):
You should not be pulling back anybody else's data that they created that happens to be in that database that you're testing. So, with the proper indexes, you should be pulling back close to the same number of rows every time. But you can't reset the image. You can't reset the database every time like we used to. It's just not doable

(30:52):
once the system's in production. And with many of these things, supposedly most of the time it's your own cloud, if you hire that, but you have no control over these changes that you mentioned. Like, someone else is on the same mainframe where your cloud is sitting, your container or your data or whatever, and so

(31:14):
on. Yeah, you've got your own set of tables, but it's a shared database. Mm-hmm. One of the things that we do now, and we did this to some degree when we were doing bare metal: if we run into a performance issue, our first reaction is just to rerun. And you mentioned it

(31:37):
very well. I don't think the focus nowadays for performance testing is so much on load testing and pushing the system to extremes, mostly because of all these changing components that you're mentioning. I mean, yeah, yesterday something may have popped an alarm, or something went a little bit slow; check it again. Okay, now maybe it was a rollback, or someone else was running a

(32:01):
big batch job in our database. So many things can be happening, and that's got to be expected, because we are now in agile times, modern times, where everything keeps evolving. Or this report that we used to think of as the performance report; no, you're going to have a heartbeat,

(32:22):
and this is your performance now. And, to be honest, you expect it to fluctuate a little bit, okay, and this is okay. What's not okay is for it to go this way, or, worse, this way. Yeah, because a huge improvement as well is something mysterious, right? It makes you

(32:43):
wonder. We have found problems, we have found performance problems with other people's services that we were using, because we do track our performance. And so we've been able to go back and go, hey, what change did y'all make on this date? All of a sudden, you're a second and a half slower than you were. And they'd be like, what? And we show them the

(33:07):
numbers and show them the call that I made, and there's where it spiked. Why did you all change? How do you know that? Because we were doing performance tests. And it's very different; it could be an automated synthetic check that you probably had checking the site, or it could be just plain monitoring, and

(33:28):
you figured out that real users were having this performance issue. It's not only about bringing down the system. And I love that you have that perspective, like, whoa, first investigate everything, let's do it from another angle, repeat, check, analyze. Then we think about whether we're gonna run the load test; it's expensive. Mm-hmm. And not

(33:53):
only in terms of hardware usage, but also in terms of time, because to do a load test you need to, you need to... well, I always test with one user, because if it doesn't perform with one user, then it's not going to perform. Exactly. So I always run one. Even if the next jump I make is one hundred, okay, or the next jump

(34:15):
I make is five hundred, I always do one first, okay. And generally the minimum amount of time that I'll do for a load test is thirty minutes. And the reason being, there's often a startup penalty, okay. So when you start a load test, the resources have to, you know, sometimes they're not

(34:37):
allocated, they have to allocate, et cetera. So that gives you the other twenty-five minutes to average out that little spike you had at the beginning. So thirty minutes is generally what I do as a minimum. And so if you start doing, you know, four or five or six or seven or however many different load levels as you climb the mountain, because you're not going

(34:59):
to go from one to ten thousand, you're going to ease up to it. You're going to run a couple that you think are good and then a couple that you think will break the system.
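One way to sketch this stepped approach, with the single-user run always first and the thirty-minute minimum per level. The power-of-ten progression is an assumption for illustration, not Lisa's exact numbers:

```python
# Always a single-user run first, then ease up through increasing load
# levels; never jump straight from one user to the target.
MIN_SOAK_MINUTES = 30  # long enough to average out the start-up spike

def build_load_plan(target_users: int) -> list[tuple[int, int]]:
    """Return (users, minutes) steps from 1 user up to the target."""
    levels = [1]  # if it doesn't perform with one user, stop here
    step = 1
    while step * 10 <= target_users:
        step *= 10
        levels.append(step)
    if levels[-1] != target_users:
        levels.append(target_users)
    return [(users, MIN_SOAK_MINUTES) for users in levels]

plan = build_load_plan(5000)
print(plan)
total = sum(minutes for _, minutes in plan)
print(f"{total} minutes of runtime before any analysis")
```

Even this modest plan shows her cost point: five levels at thirty minutes each is already two and a half hours of pure runtime, before anyone has analyzed a single result.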
And it takes time, and then you've got to analyze all of that. Yeah, and when you mentioned the time costs: of

(35:22):
course the cost of the hardware resources, but a performance engineer is not that cheap, a good one. And you have to also optimize: do we really want to load test and push this to the limit right now, if we already know, as we were saying earlier, that there are issues happening right now? You know, you're wasting money, you're wasting time, and

(35:42):
time is money, to be honest. So yeah, solve all the problems that you can find first, and then start your load tests, start to push things. Mm-hmm. And I mean, even if you don't know that there are performance issues, there are so many other

(36:07):
paths that you can take to find out, instead of just, yeah, let's push the system, let's set up a meeting. Are you sure it's a good approach to start right away? And it's something that I see in many teams, many organizations: yeah, let's dive in head first, load testing.

(36:29):
Automation is good for... I mean, automation is not just for your load testing. Although you have to have automation for your load testing, no doubt about that. But if you're running a build test, or whatever you want to call it, if you're running the test every time you push code through your DevOps pipeline, you should be capturing performance statistics. And this

(36:53):
is why I draw a difference between performance testing and load testing. You can track performance. You can track performance on a single user or a small number of users. It doesn't have to be a single user you're doing as a build test; maybe it's two, maybe it's five, but it's not many. But you're tracking that over time. Then you know if

(37:16):
it's gotten slower or better, or something weird's happening. Okay? Because all of a sudden, if it's taking twice as long to run, you know, this API or this flow in your build test, you need to go look and see why. You catch it early and solve that problem before you run

(37:40):
the thousand user load tests. Andeven that thousand and you said lotests.
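The per-build tracking described here can be sketched tool-agnostically. This is a minimal illustration, not any particular framework's API; the two-times threshold is the rule of thumb from the conversation, and the helper names are made up:

```python
import statistics
import time

def timed_call(fn, *args, **kwargs):
    """Run fn once and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

def is_regression(history, latest, factor=2.0):
    """Flag the latest timing if it exceeds factor x the historical mean.

    `history` is a list of past response times (seconds) for the same
    API or flow; an empty history can't be judged, so it passes.
    """
    if not history:
        return False
    return latest > factor * statistics.mean(history)

# A flow that used to take ~0.1 s suddenly takes 0.25 s in the build test.
past_runs = [0.10, 0.11, 0.09, 0.10]
print(is_regression(past_runs, 0.25))  # True: twice as slow, go look why
print(is_regression(past_runs, 0.12))  # False: normal jitter
```

The point is the same as in the conversation: catch the two-times slowdown in the cheap small-user build test, before spending hours on the thousand-user run.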
You're mentioning DevOps, your pipelines. You don't put those types of things in a pipeline. No, no, you do not put that in there. But you might kick it off via a pipeline, but not as part of the code deployment. Exactly. That's... it's like, do you want to hold... because you

(38:01):
say load tests, minutes... hold the pipeline for thirty minutes? Well, thirty minutes is one level. It's thirty times however many levels you're doing in your load test, because you're never doing just one. You're always doing at least two. Oh, and usually our load tests are five.
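A rough sketch of what "thirty minutes times however many levels" adds up to when a pipeline kicks off the run as a separate job. The helper and the level counts are illustrative, not from any specific load tool:

```python
def plan_load_levels(levels, minutes_per_level=30):
    """Return the schedule and total wall-clock minutes for a multi-level run.

    A load test is never a single level: each level (virtual-user count)
    runs for a fixed soak, so five levels at 30 minutes each is already
    two and a half hours of wall clock before analysis even starts.
    """
    schedule = [(users, minutes_per_level) for users in levels]
    total = minutes_per_level * len(levels)
    return schedule, total

schedule, total_minutes = plan_load_levels([100, 250, 500, 750, 1000])
print(total_minutes)  # 150 minutes: not something to block a deploy on
```

Which is exactly why this belongs in a separately triggered job, not inline in the deployment pipeline.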

(38:29):
So that's... one of our applications, every release, they run five different load levels against the system about to go to production, to validate it, to make sure that the levels that we know work fine are still working fine. And then we run two where we expect to

(38:52):
see performance not so great. Neither of them breaks the system, okay, but we just know that performance starts trailing off. But you know, that's two and a half hours. Yeah, and that's... but that's pretty much... you triggered it, but it goes to a separate process. It's not part of

(39:13):
the continuous cycle. And yeah, that's something that many are doing: putting big load tests inside of the pipeline, with the pipeline completing only when the load test finishes. It's painful when I find these situations, and many would

(39:34):
say it's common sense, but no, for others it's like, no, that's what performance engineers have told me for the last five, ten years. And I love that you mention that evolution: that was known, but that was five years ago. Why are you doing stuff like five years ago? That's ancient times in the IT world, right? Yeah. So yeah, you should have a build... I mean, you should have a build

(39:55):
test that would stop your pipeline, roll back, you know, back to the code, if the build test fails. But the build test isn't a load test, and it may not even be a full functional test, depending upon how long the full functional test takes to run. Mm. Yeah, I'm going to hit every module, but I may not

(40:19):
hit every combination, positive and negative, of every API variation in a build test. Okay, that's my full functional test, which I'm probably going to run after. So yeah... but it's still going to be automated. Just because it's automated, though, doesn't mean it runs as part of the deployment pipeline. No,

(40:42):
no. And those are some of the clarifications where many teams need guidance. And as I said, to many it's like common sense, but for others it's like, wow, that was a process, right? And I don't know... I mean, just looking at the time, I got super excited talking about this. Being respectful to the audience, I

(41:07):
think in the future, Lisa, if you want to come back and argue and discuss more about these misconceptions, I'll be happy to. But to start right now the ramp-down of the show: what recommendation would you give to people for these modern performance times that we are living in? Things to do, to take

(41:27):
into account, or lessons that they should be learning to be better performance engineers, with twenty twenty-four almost upon us? Have every test that you run track response time. Oh yeah, that would be my number one recommendation. And track it: come up with some method, if it's nothing but an Excel spreadsheet

(41:52):
or an Access database. Track it somewhere so that, over time, you can graph it, see if you're trending this way or that way, or if you've got a knee somewhere that you've got to address, and then start digging into why. And the other one would be: become best friends with

(42:13):
your SREs or your monitoring people. Mm hmm. You don't have to know how to run the monitoring, but you need to know who to go talk to to find out: hey, can you tell me what was going on with this API during this time frame? Or was there something weird going on with the system, or was the database acting up during this

(42:37):
time frame? You need to have time frames, and you need to have, you know, hard facts. And if you're tracking over time, you've got your hard facts and you've got your time frame.
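As one concrete, hedged illustration of "hard facts and a time frame": Prometheus's documented HTTP API answers range queries via `/api/v1/query_range`, taking a query plus `start`, `end`, and `step`. The helper below only builds those parameters (no network call); the metric name and handler label are hypothetical examples:

```python
from datetime import datetime, timezone

def range_query_params(promql, start, end, step="30s"):
    """Build the parameters for a Prometheus range query: a concrete
    question (the PromQL expression) plus the exact time frame.
    Prometheus accepts RFC 3339 timestamps for start/end."""
    return {
        "query": promql,
        "start": start.isoformat(),
        "end": end.isoformat(),
        "step": step,
    }

# "Can you tell me what was going on with this API during this time frame?"
start = datetime(2023, 10, 20, 14, 0, tzinfo=timezone.utc)
end = datetime(2023, 10, 20, 14, 30, tzinfo=timezone.utc)
params = range_query_params(
    'rate(http_request_duration_seconds_sum{handler="/checkout"}[5m])',
    start, end,
)
# A client would GET <prometheus>/api/v1/query_range with these params.
```

Whether you run this yourself or hand the question to your monitoring folks, the shape of the ask is the same: a specific query and a specific window.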
So those would probably be my two recommendations for performance: track everything, because there's no reason, if you're running a test

(43:01):
automated, that you can't capture the response time. Okay. Capture it, track it.
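Once the numbers live somewhere (a CSV, a spreadsheet), spotting the knee she mentions takes only a few lines. This comparison of recent runs against earlier ones is a simple illustrative heuristic; the window size and sample data are made up for the example:

```python
import statistics

def trend_ratio(samples, window=5):
    """Compare the mean of the most recent `window` samples to the
    window before it. A ratio well above 1.0 means things are slowing
    down and it's time to start digging into why."""
    if len(samples) < 2 * window:
        return None  # not enough history to judge yet
    recent = statistics.mean(samples[-window:])
    earlier = statistics.mean(samples[-2 * window:-window])
    return recent / earlier

# Ten builds of the same API call, in milliseconds: the last five
# show the knee where performance starts trailing off.
times_ms = [120, 118, 121, 119, 122, 160, 175, 190, 210, 240]
print(trend_ratio(times_ms))  # 1.625: clearly trending up, go investigate
```

Nothing fancy is needed: the value is in having captured the numbers at all, so the trend can be seen instead of guessed.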
Become really good friends with your monitoring folks. And that's SREs as well, or your monitoring, whatever name they go by, right? So

(43:21):
some would say, no, no, they are different. But even performance engineers, we become at times monitoring engineers; the lines are super blurry in this. Yeah, I must admit that when I came into the job I had before this, I was actually assigned to an application team, and I learned

(43:43):
the monitoring tool that was in place when I got there. And then they changed it, and I learned that one, and then they changed it again, and I got mad. I said, no, y'all can do the monitoring stuff and just tell me; I'm not learning another tool. The best thing about those tools, I think, is that once you learn a couple of

(44:06):
them, it's almost like programming. Like, yeah, I have these codes running in my head; it's just like one language or the other, or small differences. It is, to a certain degree, but you've got to figure out exactly how you navigate down. And this... well, the third one just broke me, is all I

(44:28):
can say. We got to the third one, and it was the third one in a year. Oh, it was just too much, and I was like, I don't have time to deal with this. Okay, why so many in a year? I mean, okay, that's another... I can't answer that question. I think

(44:52):
the first one, I was on the tail end of it, and then the second one maybe only lasted about nine months, and then we were live into the third one. And it was the third one where I just said, no, I'm not monitoring. But as you very well say, I think for some of those... yeah, you don't need to be the monitoring master, but know

(45:13):
who is, and know who can give you access or hold your hand through all the information that is happening. Because yeah, getting to the drill-downs and the traces and some of that, if it's a new platform, can get interesting. I love that recommendation, like, be good friends with these people. If you can learn how to do it, fine, but if you don't have that bandwidth,

(45:35):
because yeah, time and bandwidth are important, you know, be friends with whoever has the keys to that. And the second one, I will rephrase a little bit how you said it, because keeping the metrics, the response times and everything from your tests, is super important. I would say keep them elsewhere than in your tool, because in the past many tools tended to hog

(46:00):
them, right? And now we want to have them for posterity, for analysis, for the rest of the team or whoever may need them, even for tracking, and for us to be able to analyze them. I love those recommendations, Lisa, and all the perspectives, all the experience, and the

(46:21):
not-so-fun stories that you're sharing around all these IT endeavors and performance issues that we walk into. And before we close up: any future plans, like Star East, any conferences, any other things that you have coming on the horizon?

(46:43):
I'm planning on submitting something for Star East. As for Star West, I'm actually taking an extended RV trip, and it happens I had planned that before I knew the dates of Star West. So I'm not planning on going to Star West, but I am planning on presenting at Star East again, or

(47:04):
submitting something, or at least submitting something to present. We'll see if they accept it. I see high probabilities of meeting you again at Star East. Everyone watching or listening, stay tuned, because Lisa is going to be rocking not only the RV but some more presentations, conference time, lots of knowledge. And Lisa, if

(47:28):
anyone wants to follow up more with you, or get in touch, ask you about all the wisdom that you have, any questions: where can they find you? They can find me on LinkedIn. I think I'm out there as Lisa J. M. Waugh, so W-A-U-G-H. I'll send you the URL and you can include

(47:51):
it. We'll make sure to add Lisa's LinkedIn contact, and for anything else, if you want to follow up with her, know what she's up to, what she's presenting, or have a question... Everyone, she has a huge amount of information, experience, knowledge. Awesome. Sorry that I say it, but in my

(48:15):
personal opinion, she's one of the best performers that I have met. I was super happy to meet her. No, no, no. You're awesome, Lisa, and I'm super happy that you also accepted to come here and share your knowledge, your stories, and all the experience here with everyone in PERFTES. Thank you so much, Lisa. You're welcome. I enjoyed it. Thank you. It was awesome. All right, everyone, so thank you so

(48:38):
much for tuning in, for listening or watching, depending on where you're seeing us. We'll be trying to create a little bit more content, more information, more experiences, more shows, and keep the show rolling, and I'll see you around, and let's see what else we can come up with. And with

(48:59):
that, well, the ramp-down is over. Lisa, thank you very much again for coming here with us, and everyone, see you soon. Adios, and we're out.