Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Welcome to Testing Experts with Opinions, an inspired testing podcast aimed
at creating conversations about trends, tools, and technology in the software testing space.
Okay, here we are again. Welcome, everyone. Everyone well? Yeah,
good, Leon. Good. Very good, thanks. Good, good.
(00:20):
Every week I ask whether everyone's well. It's always so subdued.
There's a little bit more enthusiasm this week. Okay,
so topic for today is around the value of testing and how we measure or quantify the value of testing.
(00:43):
So what I mean by that is there are certain mechanisms to measure testing.
So you can measure things like how many production defects were there or how
many defects were there before production,
and sort of do a calculation around what that defect cost us.
(01:06):
But there's almost that immeasurable value that testing brings as well,
which in my mind is very difficult to quantify.
So had testing not been there at all for this release, would we have been better
off or would we have been the same?
It's very difficult. And often there are questions asked around,
(01:33):
well, if I needed to increase the
size of my team or I needed to start implementing automation, et cetera.
What's the value of that going to be?
So that's what I want us to delve into. Now, I don't know whether anyone wants
to have a first stab at this, but what are the different mechanisms to measure the value of testing?
(01:57):
So we don't necessarily need to touch on that immeasurable bit at the moment,
but just in general, how do we measure whether testing is actually adding value to an organization?
I'll let someone else answer that question, and I want to add a question to your question.
I think to determine the value of anything, we need to figure out what is the job of that thing.
(02:22):
What is the function? So what do we do as testers?
And I suppose if we can articulate why we are there, maybe we can start looking
at the value of having us there. Yeah.
I'll try and answer Johan's question, and then I'll answer Leon's.
So what is a tester doing is a question, right?
(02:45):
So I think you could argue it's to help determine the quality of something,
to prevent defects from growing large.
For me, a tester is somebody that is impartially
finding the truth about something, right?
Because I'm not as invested in the outcome of the thing that you are doing.
(03:11):
A developer is invested in the outcome because he's built the thing.
The analyst is invested in the outcome. The BA is invested in the thing because
they've written the requirement.
The architect has built the architecture, so they're interested in making sure
that it works appropriately.
I'm not a part of that process in that same regard, so I am impartially going to tell you
(03:35):
what it is like, and I think that's what a tester is
doing. Hopefully that answers the first question. Yeah, I
want to add to that, because that's my view as well; there are many things,
finding defects and so on, but I want to add that, in my view, we exist
to provide information, and you said that as well, right? So if we exist as
(03:57):
testers to provide information about the system under test,
right, then we can start thinking about what value that information brings.
And maybe that steers us in a direction to start answering, or thinking about,
the value of testing: is
it then maybe the value of the information that we bring to the table?
(04:17):
I agree. I agree with that, Johan. And I'm going to give an ISTQB answer, right?
So the reason why we test is risk mitigation.
So by identifying and fixing defects early in the process, we reduce the risk
of software failure in production.
(04:39):
So we do help avoid costly fixes once the bug is in production.
Reputational damage as well, and then obviously potential financial losses and legal
suits as well. So I would say, even where we tested and we didn't find defects,
(05:02):
at least we have provided the information to say our product is fit enough to
be deployed to production.
That information is still valuable. Without testing, you would not have had
the confidence to make that decision.
And if we find defects, that is still good, because we have provided you
(05:27):
with the information to say you cannot go live.
This is going to lead to reputational damage, potential financial losses.
And in some cases, so maybe we will look at the different case studies from
projects that failed due to lack of testing and they ended up in legal battles.
(05:50):
So I think that is the value that software testing is bringing in software development projects.
So, just to throw a spanner in the works here.
So if for the last six months, you've had a team doing testing and every release,
(06:12):
before the release, the highest severity bugs that they find are low.
So there's no critical, there's no highs.
Let's say even there's no mediums. Six months down the line,
can you still justify that testing happening?
Yes, yes. Because remember with testing, it's not just about finding defects.
(06:35):
It's also preventing them from occurring in the first place.
Because we should not just think about testing as dynamic testing.
So with that static analysis that we did, that requirements analysis,
closing the gaps, finding the what-if scenarios before we even get to development, that adds value.
(06:57):
So I would say not getting high-severity defects does not invalidate the existence of testing.
So more reason why we should be there because we are now preventing them from
occurring in the first place.
I'm trying not to play devil's advocate here, because you could argue you're not finding
(07:17):
good defects because your coverage isn't very good, right?
Let's park that devil's advocate argument to one side.
But yeah, to pick up on this, and preventative maintenance:
why would I keep those testers in that squad when we're not finding any critical problems?
Because you're not getting any critical problems in production,
right? Surely that's a good thing that you're doing that.
(07:38):
Why do I get my car serviced?
So they can tell me that something will be a problem before it becomes a problem,
or catch it before it's a problem. You've got a bit of wear on
the tyre, you probably want to change that in the next two months because
you may have a blowout. I'd rather have them change the tyre than
have a blowout on the motorway. And it sounds like that team
is doing a good job of that. Potentially they're preventing you from
(07:59):
having that blowout, that production incident or a high-level defect, by
being there. So why are they there? Because they're doing a
good job. That's what I would argue. Yeah, I think,
Leon, that touches a little bit on being very
careful what metrics you use when you measure the value
of testing. There's a term called the pesticide paradox,
saying, you know, if you keep on testing
(08:21):
a certain area, eventually you find fewer and fewer
defects there. But it does not mean that in the
next release there won't be something; it depends which area is touched
or fiddled with in the software. So definitely, if
you find fewer and fewer defects, yes, it's
a good sign, it means stability in your code. But as we
all know, software keeps on evolving, and it does not mean we can
(08:42):
just assume that next time nothing will happen. So we need to be careful which
metrics we use before we start saying, okay, testing is done, we can spend money
on something else, you know. That usually speaks to regression testing.
Regression testing should never go away; we provide that peace of mind.
Even if nothing goes wrong, it's a good thing, and you still need to test it
(09:04):
every time something new goes into production.
I would add this: we're
the other voice in the room. So the business is clapping themselves
on the back for doing a great job, because we've gone live.
Everyone's doing a good job. It's great. Let's just keep going.
The testers are often the people in the room going: have we thought about
(09:27):
that? What about this? This could happen.
I think the thing is that often a project is so focused on going
live that they won't consider some of the potential negative aspects of what they're doing.
So a tester can have that difficult job of being a bit of a naysayer.
And that's an important thing to have in a business, not just on a squad
but at all levels of the organization. You need someone who can feel empowered to question
(09:51):
an assumption or a way of thinking and go: have we thought about this as an
alternative? Have we thought about the impact of that? And testers have that mindset generally.
So that's another reason why I think they're useful to keep in an organization
at all levels, not just on the squads or teams.
So Stefan, to go back to your comment about looking at the wrong metrics or
(10:15):
measuring the wrong things.
So what are some of the things which are good to measure to show the value of testing?
Yeah, the whole team can jump in. The obvious one is defects.
It is a good one to have, but metrics can be viewed through different glasses.
I'm just saying, be careful how you interpret information.
(10:38):
I think before we jump into a few metrics, I think no metric should be looked at on its own.
I think you need a lot of metrics, and you need to look at possibly ten different
metrics and then holistically make your assessment of the health of an application and its testing.
So that would be my first comment. But some of those metrics obviously would
(10:59):
be defects, you know:
defect coverage, tracking the number of defects there have been, how often
you get defects, and what severity they are.
I think test coverage, and also requirements coverage, those kinds of things, just
to show, okay, we're actually covering 50% of all the different business requirements,
(11:21):
so that gives you some kind of sense of comfort or tells you where the risks are.
Those are maybe just two of them. Maybe the rest of the guys can maybe throw a few in.
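As an aside, the figures listed here (defect counts by severity, requirements coverage) are straightforward to roll up; a minimal Python sketch, where every number and field name is an illustrative assumption, not data from the discussion:

```python
# Hypothetical numbers: rolling raw test data up into the metrics mentioned
# here (defect counts by severity, requirements coverage).
defects = [
    {"id": "D-1", "severity": "high"},
    {"id": "D-2", "severity": "medium"},
    {"id": "D-3", "severity": "medium"},
    {"id": "D-4", "severity": "low"},
]

requirements_total = 40        # business requirements identified
requirements_with_tests = 20   # requirements that have at least one test

# Count defects per severity.
by_severity = {}
for d in defects:
    by_severity[d["severity"]] = by_severity.get(d["severity"], 0) + 1

# Requirements coverage as a percentage.
coverage_pct = 100 * requirements_with_tests / requirements_total

print(by_severity)                                     # {'high': 1, 'medium': 2, 'low': 1}
print(f"{coverage_pct:.0f}% of requirements covered")  # 50% of requirements covered
```

As the speakers note, no single one of these numbers should be read on its own; they only make sense together.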
Yeah, it's a good start. It's also a question
that we get asked very often, and not to
steal Steve's answer, he hasn't given it yet, but it depends,
right? That classic answer: it depends. But it really does. If you go back
(11:46):
to the opening statement, to say that we are there to provide information,
then obviously we provide information to people who care about the information that
we provide. So if you are a tester in a squad,
then things like I've executed this number of test cases, those kinds of things, are
probably more important than reporting to a CIO that I found 40 defects;
(12:08):
they don't really care.
So if our job is to provide information, then the metric that you need to provide
depends on the person receiving that.
And one of my favorite things is so what? So if I provide that information to
my test lead to say, this is what I've done, this is my clearance throughout
(12:29):
this process, I can say, oh, fantastic, I can see what we are doing because
I know what that information means.
If you provide that to a very senior stakeholder and say,
like I said with the defect thing, oh, we automated 40 test cases here,
they're going to ask you: so what?
So I think it's really important to understand who needs to know what.
And that's where we testers come in.
(12:53):
Experienced testers are a little bit different than anyone else in that organization
because we figured out what is important through this test lifecycle and who
should know what from a metric perspective.
So they can actually take that information and do something with it.
And the higher you go up in this communication and reporting chain, the less you need to provide,
because a C-level person will just want to know: there's
(13:15):
a big risk, we can't do credit card payments, our
business will fail. To them that's good enough, but your test
lead will need to know a lot more and then filter that information up.
So it depends. Yeah, I
hate to use my default answer of it depends, but that's where I
always go to. But I'll give you a use case example. You
can measure, you can help measure, work in progress and technical debt. As a tester,
(13:39):
you can say: okay, you're pushing through, let's say, 15 user stories a sprint,
but we've noticed that the test pack is becoming quite bloated and difficult
to maintain, so we want to advocate for some more automation here.
Because what we'll see is that we'll become a blocker
in your WIP; testing will slow you down, and
(13:59):
for us to help speed you up and maintain the same delivery cadence, we
are going to automate the majority of this pack, as an example.
So you can also measure things like that. Or, to an analyst, you
can help them to measure the quality
of the requirements that they're generating, or that I've been provided with:
I can't test this requirement, it's very ambiguous. Again, an example would be
something around usability: "it has a user-friendly UI" is pretty much an untestable
(14:23):
requirement; it's difficult to quantify what that means. So you can help specific
people in different agile teams or squads with metrics that are appropriate to them as an example.
I think with test automation, it's sometimes easier to show value because then
you can say, if we have a regression pack of 100 tests, it takes one tester X amount of hours.
(14:46):
You can turn that into monetary value. With automation, you can say it runs it in half the time.
We can run it often during night or after every release.
That's an easy sell. So in that sense, I've found that it's an easier,
clearer way to show value.
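The half-the-time comparison described here is easy to turn into a monetary figure; a back-of-the-envelope Python sketch, where the pack size, hours, hourly rate, and run frequency are all made-up assumptions:

```python
# All figures are illustrative assumptions, not real project data.
tests_in_pack = 100
manual_hours_per_run = 16                            # one tester running the pack by hand
automated_hours_per_run = manual_hours_per_run / 2   # "runs it in half the time"
hourly_rate = 50                                     # assumed cost of a tester-hour, in dollars
runs_per_month = 20                                  # nightly runs become feasible once automated

manual_cost = manual_hours_per_run * hourly_rate * runs_per_month
automated_cost = automated_hours_per_run * hourly_rate * runs_per_month

print(f"regression pack of {tests_in_pack} tests")
print(f"manual:    ${manual_cost:,.0f}/month")       # $16,000/month
print(f"automated: ${automated_cost:,.0f}/month")    # $8,000/month
print(f"saving:    ${manual_cost - automated_cost:,.0f}/month")
```

The real benefit is usually larger than this sketch suggests, since automation also enables runs (nightly, per release) that would simply not happen manually.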
Yeah, Stefan, sorry, before
(15:06):
you go. But Stefan, even that: I have
500 tests running every day, I've executed this test pack 500 times this
month. To some degree it's again a so what. What value is it actually bringing?
So yes, it's mitigating risk, maybe, if you're testing the right thing,
(15:28):
et cetera. But so what?
So what about the fact that I have it running every day and I have so many tests running, et cetera?
Does that show the value of testing?
To me, it does. I mean, if you are fiddling with a piece of code that nobody
ever wants to touch because you don't know what it's going to break, and you
can run those tests automatically,
(15:48):
versus having to wait and run it overnight, versus having
to wait five days for somebody to run it manually: yes, that
gives you a sense of comfort. It's time to market.
I think that value will never go away. We run
the risk of saying, oh, it's passing again, maybe it doesn't add value. It is
adding value: it tells you that the code is doing well. That's why
(16:10):
I say, don't think that, well, in this case, automated testing isn't
valuable because it doesn't pick up defects. It tells you that the code is actually
doing well, and to me,
the value is just as much in that sense.
Anyway, Mamatla. Yeah, I agree. I actually like this question, so what?
(16:31):
You know, we've provided you with a metric, then what?
So I'm looking at one case study here: the testing
team within Knight Capital Group.
I want to use Knight Capital Group as a case study of a software project
(16:51):
that failed and led the company to the brink of going bankrupt, right?
So when I did my research, I found that in 2012, Knight Capital Group experienced
a catastrophic software failure due to lack of testing.
It said a simple human error in deploying software led to the
(17:18):
company executing erroneous trades,
resulting in a 440 million US dollar loss within 45 minutes.
So maybe somebody in the company provided them with a defect report to say we
have this critical bug that we have identified during testing or they did not test at all.
(17:41):
The impact was: there was going to be a 440 million loss within 45 minutes.
So if you give a metric, you
also need to give that so what: if you continue going live, chances are
you're going to lose this amount of money. So I think the so-what part when
(18:05):
we provide metrics is very important, and that is the only way to show value.
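The Knight Capital figures give a concrete way to attach a so-what to a defect report; a minimal sketch, where the probability and impact numbers attached to the hypothetical bug are made-up assumptions:

```python
# Figures from the Knight Capital discussion above: ~$440M lost in ~45 minutes.
loss_usd = 440_000_000
minutes = 45
loss_per_minute = loss_usd / minutes
print(f"${loss_per_minute:,.0f} per minute")  # $9,777,778 per minute

# A defect report's "so what" can be framed as expected loss:
# likelihood the bug fires in production times its impact if it does.
p_failure = 0.10        # assumed likelihood if we go live with the known bug
impact_usd = 5_000_000  # assumed impact if it does fire
expected_loss = p_failure * impact_usd
print(f"expected loss if we go live: ${expected_loss:,.0f}")  # $500,000
```

Framing a defect as an expected monetary loss answers the stakeholder's "so what?" in their own terms.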
You've just touched on something very interesting there, which is the value
of testing can actually differ depending on what you're testing.
I.e., if you're testing a heart monitoring device, which is used as part of operations,
(18:32):
missing a small defect there maybe has a lot more impact
compared to testing something which isn't life-threatening.
So I said it, but I'm going to ask the question as well, does the value of testing
actually differ depending on what you test?
Yeah. Yes. Yeah. So I would say yes, if it's life-threatening, that is still a problem.
(19:00):
But financial loss also, remember, is loss to livelihood.
So if a company has employees and as a result of a software bug and the company
goes bankrupt, then there's loss of livelihood.
So whether it's life or livelihood, there's value in testing.
(19:23):
Just to think about it differently when it comes to adding value: a question
back to the stakeholder. When they say, well, you've done all this testing,
so what? The question back to them is: what did you get?
So then when they go, what do you mean, what did you get? You asked for an application
to be built. What did you get? Well, we got an application. Did you?
(19:46):
How does it work? How does this work? How did you find that out? We tested it. Correct.
Without testing, you're not closing the loop. So you've just put something out
there and something is made.
And without testing, you won't know what you've actually got.
Even at a very simplistic level, I've built a chair in my garage.
(20:07):
Will it function as a chair? I can only find
out if I sit on the chair and test that it works as a chair. So
the pushback to stakeholders when they go, so what am I getting? You're getting
the answer to the ultimate question of: did I get what I paid for? Did I
get what I asked to have built? Without testing, how else can you answer that?
A fair point. I want to go back to what you said at the start,
(20:30):
Steve, which was testers are the only ones that actually look at it from a,
maybe non-invested or less invested perspective.
But do you not find with testing shifting left and testing playing a much bigger
role at certain companies in requirements creation, or requirements analysis, right up front,
(20:57):
do you not feel people are becoming more and more invested?
So from a requirements stage perspective, maybe if you're pair programming with
the developer, writing unit tests together,
et cetera, are we losing a sense of that lack of investment,
for lack of a better word, with shifting left?
(21:18):
No, I just think we're redefining testing: instead of it being
a phase, an activity that you do in the life cycle of a thing,
it is a process, a methodology, that you adopt both left and right,
right? Not just fully shift left.
(21:39):
There are lots of shift-right principles that are really good as well for that redefinition.
So I used to say in one of the agile squads I worked with was quality is everyone's responsibility.
Well, those responsibilities are different.
My job as a tester is to help you understand what your responsibilities are in that.
And that's how I would explain that. So BA, your job is to try and make sure
(22:04):
that the requirements you write are testable and I'll help you identify how to do that.
And you are doing testing by doing that. But even though I'm saying tester as
a person or as a single role, testing as an activity or quality assurance as
an activity is broader across everyone within the business.
And something else testers can do in terms of adding value is get people to
think about it that way. Encouraging people to adopt testing practices is actually
(22:28):
a really good way of adding value, of ensuring the worth of testing.
And again, you've now touched on something very pertinent and important,
and that is that testing is different to a tester.
So certain organizations have
dedicated testing teams, et cetera, and testing
happens in those organizations for sure. But there are also organizations where
(22:52):
they don't necessarily have dedicated testers doing testing, but the function
of testing still happens and the value add is still there. It's probably
touching on something that we've discussed before, but
how important is it for testers to do the testing in an organization?
(23:12):
I think it's important for someone with a testing mindset to do that.
Now, I know that's not answering your question at all, but there's a different
mindset when I'm building something and I'm trying to test whether it works.
So when Steve builds the chair, he wants to think that he's going to build a solid chair,
(23:32):
but the mindset of: is this good enough for my family to sit on next to the fire,
so they don't fall in the fire when the chair breaks? That's a little bit of
a different mindset that you bring to the table.
And I was in a meeting yesterday having a similar conversation,
and I made the statement, and we can debate that in a different time,
but the only time you are actually testing is when you are using your mind to think.
(23:58):
Now, we spoke a lot in other things about testers keeping busy with busy work
in terms of test cases and test scenarios and doing all those things.
And those things are good, but that's not the testing part of it.
As Stephen said, the testing part is analyzing, interrogating,
using your brain and your testing mindset to think.
(24:19):
So I think someone that wants to call themselves a tester is not necessarily
someone that can do all the checkbox things, but it's someone with a mindset
of interrogation and understanding and critical thinking.
And that usually, if you look at any engineering discipline,
it's something that is in you, but it's also techniques that are taught how
(24:44):
to actually do those things.
And that a tester needs to bring to the table because a developer,
again, can do a lot of testing and we advocate for developers to do a lot of
testing, but it's a little bit of a different mindset that you bring to the table.
And I think I would just add, it's also a question of scale.
So if you've got a small in-house development team, like say 10 people and you're
(25:06):
producing mobile apps, will you have a bespoke testing team for those 10?
Probably not. If you have a thousand-headcount
of developers, will you need a testing function? I would
strongly advocate for that, because
the bigger you get, the more it becomes about how you ensure the quality maintains
(25:27):
the same as you scale up and out, and a group of testers can help. Because
if you read the book on how Google tests, they talk about the true-blooded testers,
outside of the development community, becoming about standards of
practice, toolsets, methodologies, frameworks,
reporting, how we tackle larger problems outside of the individual
(25:51):
squads and teams themselves, where a lot of the testing work is actually done by developers.
So I think scaling is definitely part
of that answer, I would say.
Yeah, I agree with you, Steve. Skill plays a critical factor when it comes to testing.
(26:13):
Yes, I agree. Everybody can test, right?
Actually, everybody in software development project should test.
So we have the developers doing their unit testing.
But remember the goal for each test type and then each test level,
the objective is different. So a developer is looking at a different objective when they are testing.
(26:38):
So having the testing team testing, they have their own specific goal.
And we also have business people who are doing UAT as well.
So they are also doing testing.
Now, what differentiates a tester is the skill that they have.
So they focus on software testing, which is a skill that is not available in
(27:01):
the development team or with the UAT team.
So it's like cooking. I cook, you cook, you cook, but you're not a chef, you know.
So everybody can test. Wow, you've not tasted my cooking.
But you're not a chef. Why don't you have a restaurant? So it's the same.
Everybody can cook, but then can you sell?
(27:25):
Can it be commercialized? Is it at that level of standard?
No. So it's the same with testing. To put it this way:
yeah, everybody can test, everybody tests, but not everyone is a tester.
Yeah. Stefan, I know you want to say something. I just quickly want to say something,
(27:45):
and it might completely distract from what you want to say.
But that's actually the interesting part. So if you think about everyone's capable
of doing testing, and then there are people within a testing discipline,
a testing competency doing the testing.
So there will be people that argue, well,
(28:07):
what value am I going to get by having someone with a tester's mindset, from
a testing competency, doing the testing, over using someone else in the business to do the testing?
And I think that delta there, that is: what more do you get from a person from a
testing background, with a testing mindset, from that type of competency?
(28:31):
What is that additional value that you get from that? I think that's sometimes
difficult to motivate. Right.
I think it depends on the skills of the tester, but I think just doing the testing
is one thing; there's a lot that happens in the background that's value
added by somebody that's a professional quality engineer or a tester in terms
(28:51):
of setting up test data, setting up test environments.
Those are also a lot of things that happen in the background.
It's not just somebody sitting down and now everything works and I'm going to test.
So I think there's a lot of value added in coordinating the testing effort.
I know some of these things might be a bit more like a test lead or somebody
a bit more senior, but in general, a good tester would be taking ownership
(29:13):
of a lot of those things, making sure it works.
I mean, we've also seen clients recently where they're doing very well in terms
of their own areas, in terms of their pods, doing unit testing and API-level testing.
But, to Stephen's point, when you grow as a company, now you have multiple
silos or applications; stringing environments together, getting those systems to
(29:36):
talk to each other, to build proper test environments, that's also challenging.
And testing those end-to-end scenarios could also be challenging.
Who takes ownership of that? Typically I haven't seen a lot of developers taking
the ownership of doing end-to-end testing and making sure all of that works.
Yes, unit testing is their responsibility. But if you don't employ somebody
(29:57):
that's dedicated to testing, who takes full ownership of system integration
testing in a big organization?
To me, that sits solidly in
a senior tester role. Sorry,
no, especially what you're touching on
there, because it's at the front of my mind with
the client I work with. One of the examples I've got there
(30:18):
is this: I ask different
stakeholders, say I'm talking to a solution architect and then I talk to the
lead developer, and I ask this very same kind of question. I say,
if it's all going to go wrong, where will it go wrong? And the architect will
go, I'm really worried about the integration here and here, it's going to be difficult
to do. The developer will go, actually, I'm really time-pressured, I've only got
(30:41):
six developers and I'm really thinking ten, because there are some constraints. And
the same thing with the BA, and the same thing with the PM. And you get a
picture of: oh, these are all the things that are going to prevent
this thing from working appropriately, and quality is going to be affected. No one
really asks those kinds of questions of each other, because it's all about
doing the do and getting the thing over the line. The tester is about knowing what
(31:04):
will stop it getting over the line, or getting it over the line in a poor condition.
So asking those kinds of questions is a real value add. I think people with
a tester's mindset, to use that language,
will have those kinds of conversations, but others won't.
Yeah, and also, if you ask a developer to test their code, there will be some
bias because how do you mark your own homework?
(31:27):
You know, so you won't find defects.
And then getting anyone, let's say, in the company to come and test,
I have facilitated some UAT.
And trust me, if you don't have the passion for testing, if you are not into
it, it's just a waste of time, because we'll just go there and nudge,
(31:50):
you know, people like, okay, can you please just test?
I need to get a sign-off for UAT.
So there is no commitment for testing from non-testers.
So as an organization, are you willing to run that risk to say,
it's just going to be just ticking the boxes to say, okay, testing was done,
(32:12):
even though it was not done by the right people.
So that's another thing to think about.
Yeah, I've often felt that the real value in testing is in the negative side
of testing because everyone, when they look at something, they go and test whether it works.
And let's be honest, most of the time when an application is implemented or
(32:37):
released, the positive scenarios work because that's what everyone is considering.
That's what everyone has worked towards.
But it's that negative side. It's how can I give this application an input which it's not expecting?
Or how can I do something which maybe it's not being built for?
(32:58):
I think that's where the true value lies. And I think that's what Jan was saying
to some degree in terms of critically thinking about testing and actually sitting
and thinking about testing.
It's not just a tick box activity where, oh, I'm going to write some happy path
scenarios here and I'm going to sign it off and it's going to work.
(33:21):
For me, the value is, and we know this, we've all been in this industry for long enough.
Most of your defects you're going to uncover when you're doing those negative
things, when you're doing those things which weren't in the requirements,
which weren't part of how it should work.
So this is maybe contentious, but UAT for me has always been more of a training
(33:46):
activity than it has been a testing activity.
Most people from the business just want to see how the system works.
In most cases, they're running through a couple of happy scenarios.
And probably in all likelihood, those scenarios have already been covered in
some previous cycle of testing.
So the value of UAT for me is more in people seeing the system as opposed to
(34:09):
really bringing value in terms of testing.
You're on mute, Steve. Sorry. I was going to come back on the UAT.
I've got a really good example of how testing can really add value to UAT.
So I was asked to assist with UAT of a mobile application for a client,
and it was for engineers in the field.
And one of the things the application had to do was take photographs of parts
(34:33):
of the road and then upload that image.
And he had to take three photographs. One was like the whole road,
one was close up, and then one was like a really close up view.
And he said every time he did it, only the third
photograph would work. So I'm with him, watching him do it,
and only the third photograph would work, and he said,
it's just the first two that don't work. And I was like, well, that doesn't make
any sense; why would the first two camera pictures not work?
(34:55):
And then I realized the developer had put a limit, a
size limit, on the size of the image. So when
he's doing this big panoramic shot, like across the whole road,
it's too big; the app just goes, it's too big, and
won't upload the file. And so the guy said to me, well, how
come the developer hasn't spotted that? I said, because he'll be at his
desk in his house with his mobile phone, taking a
(35:16):
picture of his desk doing it three times going uploaded i
said he's not come to your environment seeing how
you work looked at it and gone wow this is completely different to what i expected
and spotted that as an edge case so like that's an example where you attest
his mindset in you it's he can have real value to the user because i needed
him to explain to me why he's doing what he was doing and then me to rationalize
(35:37):
along on the developer wouldn't even think like that he would be doing this.
And that was an example where that defect would have gone live and prevented
the app from being used appropriately.
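The failure in that story is easy to sketch: a hard upload limit that only trips on realistic field photos. The names and the 5 MB limit below are hypothetical, purely to illustrate why desk testing passed while field use failed:

```python
# Hypothetical sketch of the upload bug described above.
# The function names and the 5 MB limit are invented for illustration.
MAX_UPLOAD_BYTES = 5 * 1024 * 1024  # hard size limit chosen by the developer

def validate_upload(image_bytes: bytes) -> bool:
    """Reject any photo larger than the hard limit."""
    return len(image_bytes) <= MAX_UPLOAD_BYTES

# The developer's desk photo is small, so the bug never shows at his desk:
desk_photo = b"\x00" * (1 * 1024 * 1024)
# A panoramic shot of a whole road is far larger, so it is silently rejected:
panoramic_road = b"\x00" * (12 * 1024 * 1024)

print(validate_upload(desk_photo))      # True
print(validate_upload(panoramic_road))  # False
```

Watching a real user in a real environment is what exposed the second path; a boundary-style test at and just over the limit would have caught it at the desk too.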
Yeah. Sometimes I wonder about people in testing, or people that stumble
into testing. I'm always wondering, what's that profile?
But in my mind, I always think the real testers out there are the people that
(35:58):
used to take things apart and figure out, how does this thing work?
And how do I put it back together? Because that's that real inquisitive mindset
that I think testers really bring to the table.
It's not necessarily other people's passions and everybody has a role to play in a company.
But I think it's that real inquisitiveness, asking people questions,
and sometimes it can become annoying because you're asking the difficult questions
(36:20):
or about the things that are ambiguous.
But I think that's the role we play: keeping people honest and covering
the things that we haven't thought of, or that they might not have thought of,
and not intentionally, but just because, like Mamata said, they're chefs in
testing, but we are the Michelin star chefs, right?
So, Leon, I forgot to mention it in the last podcast, but you asked the question,
(36:43):
what is our favorite or the thing that's stuck in terms of ISTQB?
And you said boundary value analysis. And I wanted to mention,
and maybe it dovetails with this conversation, you said earlier people test
positively, but the negative testing is sometimes the value that we bring.
And even if people think, maybe I should add one or two negative tests, the skill we bring
in how to properly test something negatively is also valuable.
(37:04):
So, in terms of boundary value analysis, I recall reading an article: in March
this year, there was a gas station in New Zealand where the whole card payment
system failed because the date fell
on a leap year day, and the card system didn't cater for it.
And I don't know how much, but it must have cost them an astronomical
amount, not making money that day.
So a skill like boundary value analysis is something very simple,
(37:27):
but if you had somebody on the team who properly tested those kinds of scenarios,
it would most likely have prevented that from happening. So there's a good value
add for you right there. Yeah, the leap year was a really good example.
That's just made me think of another one.
The requirement was, available five days of the week. And I said, Monday to Friday,
(37:47):
or Tuesday onwards? Like, which five days? Why does that matter? Because people
might be working on the weekends. So those are the kinds of questions
a tester asks, and I think that's the real
value add that you can bring.
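The leap-year outage is a textbook boundary case. As a sketch (the validator and the years are hypothetical), boundary value analysis around the end of February looks like this:

```python
import calendar
from datetime import date

def is_valid_transaction_date(year: int, month: int, day: int) -> bool:
    """Hypothetical card-system check: accept only real calendar dates."""
    try:
        date(year, month, day)
        return True
    except ValueError:
        return False

# Boundary values around the end of February:
print(is_valid_transaction_date(2024, 2, 28))  # True  (boundary)
print(is_valid_transaction_date(2024, 2, 29))  # True  (leap day exists in 2024)
print(is_valid_transaction_date(2023, 2, 29))  # False (no leap day in 2023)
print(calendar.isleap(2024), calendar.isleap(2023))  # True False
```

A system that hand-rolled its date logic and forgot the leap day would fail on exactly one of these values, which is the whole point of picking test data at the boundary rather than in the middle of the range.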
I think we're sounding like salesmen for testing today, which is fine.
(38:12):
Just I think in terms of closing remarks or closing comments.
So testing is a way of providing information.
That information is to mitigate risk or give you comfort or confidence that
something is working the way that it should based on the requirements.
(38:34):
I think depending on which application under test you're working on,
maybe the value of testing becomes either more or less.
I think there's always an element of value, but if you're working on mission-critical
type systems, you can't afford to miss anything.
Any small bug that creeps through can have a massive impact. Whereas,
(38:59):
I don't know, I'm thinking, if you were to test a website listing properties
and there was a mistake there saying it's not a three-bedroom, it's a two-bedroom,
it maybe isn't the same impact as missing a defect on a heart surgery machine.
So I think there is definitely value.
(39:20):
It's just it differs. I think what we still haven't really answered is if I'm
the head of testing and I currently have a team of three or four people,
but I actually feel I need a team of eight and I go to my CIO and I say,
I need more people in my team.
And that CIO wants to know, well, what additional value is that going to bring? Right.
(39:45):
I think that's still a difficult thing to answer. Yes, maybe we can do quicker
releases, but what does that actually mean?
How do I quantify what that's actually going to mean and the value it's going to bring?
And I don't think we have time to go into that today, but maybe in a next podcast,
we can delve into that again.
(40:06):
Because I can completely understand that, from an engineering
perspective, people will see the value in testing.
It's more the people that sign off the checks.
And let's be honest, testing is always the first thing that gets cut. Why?
Because maybe the value of testing isn't necessarily seen.
(40:29):
It's definitely not seen on the same sort of level as other roles
in engineering, and I think part of that is because it's difficult for us
as a testing competency to really showcase the value, because it's so difficult to measure.
(40:49):
I would just add: it's cut because you need something to test. So you have
to have an idea of what you want, and you have to build the thing that you want.
Those are the two things you can't cut. You can maybe
cut a little bit of what you want, but you can't cut the build.
So I understand logically why people go, well, I don't need as much testing,
(41:10):
I want to build more stuff. Okay, I can get the logic around why businesses
decide that. I don't agree with it, but I get it.
I think after this conversation, if someone were to ask me this question again, my answer would be this.
So we've said we provide information.
(41:31):
So the only way that I can articulate to someone who needs to sign a check that
I need more people is to articulate what kind of information they will bring
to the table to you as a stakeholder that we're not currently getting.
What is that information? So either we need to provide more information around
performance aspects, because that's important,
(41:52):
or security aspects, or if I have three
more people, I can provide faster information to you that you can get.
But starting to try and articulate the value of testing there in a similar way
that we do with developers, I think that's maybe where we are going wrong.
Because if all we have is information, and that stakeholder
is happy with the information they receive, then they
(42:14):
will never be convinced that they need more. But we can articulate
what extra information they'll get, or how
they will get it differently or faster, if
that is important. Because we
have constraints in the testing world, we often go to senior stakeholders and say, I
need more people. But they don't
see how they're going to get better, faster, more accurate information, and to
(42:35):
them it does not matter. So if we all
agree that it is information, we need to start articulating that information
in a way that makes sense to them. Then
you can start having the conversation about faster information, more information,
better information. Because if they're happy with the information
they get, you're never going to get more people. Yeah, I would say also, before
(42:57):
you ask for more people, first look at your processes. You know,
how can you make your processes more efficient, and then make use of technology?
You know, making use of tools, automation as well.
So before you look for people, look at processes and tools that can help you scale your testing effort.
(43:23):
So I would say, yeah, if I were to choose between people and technology,
I would maybe request more budget for technology that helps testing
to be more efficient or effective.
So that would be the approach I would follow.
(43:45):
Maybe just a last comment. Once you've defined all the metrics that you're measuring against,
it's always a good idea to make sure that whatever tool you use to capture testing,
you can pull those metrics, or the stats that support those metrics, from it.
Because I've seen, and maybe this touches a bit more on another topic of test
management, et cetera, but I've seen a lot of the time that a year or six months
(44:08):
down the line, somebody asks like, okay, we've agreed on these metrics,
but now show me the data that supports it. And this may be just a follow-up step.
It's always important to make sure that you gear the metadata that you capture
as part of your testing efforts to show or to track the metrics that you've
agreed upon to support or to prove that you've added value.
(44:31):
So, AI. I think the next topic will be AI: adding AI in the testing space,
so we get more hands to help us with testing.
All right. Thank you very much for that conversation. As always,
(44:51):
if you're listening to this or watching this and there is something specific
you want us to discuss: is there an interesting topic that you would want us to debate,
or is there a specific testing challenge that you have within your organization?
Please let us know either in
the comments on YouTube or send us a communication on our LinkedIn page.
(45:14):
So until next time, thank you very much.