
September 19, 2024 36 mins

In this episode of Testing Experts with Opinions, we discuss the final three stages of a Scrum process: sprint demos and retrospectives, deploying completed work, and project closure.
- Who should lead sprint demos?
- How can we ensure effective lessons learned sessions?
- And what role do testers play in post-deployment handovers?

We also explore the importance of creating a culture where quality is everyone's responsibility. Whether you're a tester, developer, or product owner, this episode is packed with insights to improve your Scrum practices and enhance your team's collaboration.

Don't miss out on this comprehensive discussion that ties together the critical aspects of testing in a Scrum environment. If you haven't already, be sure to check out the first two episodes for a complete understanding of a QA's role from project initiation to closure.

#AgileTesting #ScrumMethodology #SprintDemos #SoftwareTesting #QA #QualityAssurance #TeamCollaboration #ContinuousImprovement #TestingInScrum #PodcastSeries #SoftwareDevelopment


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Welcome to Testing Experts with Opinions, an Inspired Testing podcast aimed
at creating conversations about trends, tools, and technology in the software testing space.
Welcome again. Thanks for joining us. I think this is now the third podcast
we've had to dedicate to discussing a QA's role in Scrum.

(00:21):
We never anticipated it would take this long, but, like someone said before we
started the call, it's good to have an opinion. And that's definitely what
we've managed to get out of this.
So I think today we need to finish the last three swim lanes or sections:
sprint demos and retros, deploying the completed work, and then basically

(00:43):
ending the project and moving to the next sprint.
I don't know, Stefan, do you want to kick off with that first one,
the sprint demos and retros?
Yeah, sure, Leon. Thanks. I think this is obviously at the end of a sprint,
where you need to show the work that's been completed, or the user stories that have been completed.

(01:04):
And the very first one is to participate in the demo of those stories to stakeholders.
I think a lot of the time the team expects the testers to demo
the piece of work since, well, it would make sense for them to know it
by heart, since they would have tested it.
But I mean, this could be the BAs, it could even be developers,

(01:25):
it could be many different people within the team.
But from what I've seen, many times it's usually been the testers or the analysts
that they ask to demo it.
So I'm not sure what your experiences are.
Who typically does those demos to the stakeholders at the end of the sprint?
Or who do you think should do it? I just want to touch on what you said. I found it quite funny.

(01:46):
Like, the testers need to demo it because they should know the product quite
well. I would hope the developers would know it quite well too.
I think in my experience, it does often fall on the testers.
But then again, you'll get companies and teams where there is no tester and

(02:10):
where there is maybe a mix of different skills, but everyone takes responsibility for quality.
So I think something that I've seen work well in the past as well is actually just rotating.
So every person gets a chance to lead the demo.
It's not necessarily always the same person. And I think that's good for other
reasons as well. It gives people exposure to those sorts of things.

(02:34):
I mean, everything around presentation skills and being able to talk in front
of other people and putting something together in terms of a demo that actually
makes sense. It's all good experience.
So I always encourage rotating it through the whole team because then it does
also feel like people take more collective ownership and responsibility for

(02:56):
what was delivered in that sprint.
But I don't know whether the other two have different opinions.
No, I agree with that. I've had interesting conversations over the last month or so around that:
the definition of stakeholder at that stage. Right now, a lot of people argue that
if I demo to my product owner, that means it's engineering done and I can now go on

(03:19):
with my life. Yet at that stage, it's not necessarily the right audience to critically
look at it. So I'd be interested to hear
what Stefan or Steve think about that word, stakeholders, and who that typically
would be. Because it sounds like a stupid question,
but it's anything from my tech lead to the product owner, to the program manager,

(03:41):
to the business director, to the whole user group.
And the definition of that word quickly becomes extremely important when we look
at the definition of engineering done within that sprint.
I mean, there's a bit of debate going on around that. So I'd be interested to hear your thoughts.
Yeah, so I've had a pretty mixed experience with testers doing demos,

(04:03):
devs doing demos. I've also had something which you guys didn't bring up,
which was especially where defect fixes were concerned.
So the testers would demo the fix of a critical defect that the team is aware
of, to show that it's now working, or how that fix is implemented.
But to answer your point on the stakeholder side, when I've had third-party

(04:25):
developers and internal QAs, the stakeholders have been the business.
So our internal product owner would then review that demo and make sure they were happy.
But I've also seen it go up to an exec level, so
it'll come out of the team, out of the squad,
and up into a tech lead or into

(04:46):
a program lead if it's a big program of work, and they
would be part of a demo of demos, where there'd
be a number of projects presented in
front of them at different points. I didn't really see the value in
those bigger demos, maybe just because of how they were implemented, because
they were very short. He's maybe been doing ten different things,
so each product's only getting five or ten minutes in front of him. So

(05:08):
I don't know how much feedback he's really getting from that,
other than, oh look, I've got something that's finished that's going to meet my
deadlines. I think that's more of a tick box than any kind of really useful feedback.
But does that fit your definition of a demo? Can
you assume then, if I've demoed it, it was tested by those stakeholders?

(05:29):
Because that's another thing people talk about: yes, I did demo it to you, so surely it's fine.
So can you see that demo as meaning the person I've demoed to is now happy with the work that I did?
Interesting. I think normally, at an internal demo, the team has tested it.
And when I say internal, I mean to a product owner or project manager.

(05:51):
I've also seen it when it's gone to business users that weren't necessarily
involved in the UAT, weren't involved in any testing. It's the first time they've seen it.
Them being asked, are you happy with what you're seeing? I can't remember the exact
reason why they weren't involved in UAT. I don't think they were identified at the time.
But yeah, again, I don't know how useful that
is to someone: here is the thing that you could have had

(06:13):
feedback on earlier, but it's now completely finished, so
if you've got any feedback, please park it, because we can't do anything for the
next two or three weeks. So I don't know how useful that is. Interesting. So
just back to your point, Johan, from earlier: the feeling is that there's a single
representative in the form of a product owner or a business analyst,

(06:38):
that person works with the business to understand and scope the requirements,
obviously, et cetera, which then get implemented as part of a sprint or sprints.
At the end of that sprint, whatever was implemented gets
demoed to that person, to the product owner or the business analyst.

(06:59):
And in a traditional case, maybe that's then considered signed off.
But what you're saying is you can't really sign it off at that point,
not necessarily, because UAT still needs to happen.
And only once UAT has happened has the actual requirement been implemented the way
that it was supposed to be. Is that the point?

(07:21):
Well, that's the question, really. I often see that people say demo equals sign-off.
And I think the point I'm trying to make and get across is that it's not necessarily so.
It shouldn't really be the case. As you just said, there are still testing activities.
And if you want stakeholders to test, that's different from: I've demoed to you

(07:41):
and you felt that you're okay with it.
And I see a lot of people confusing the two. So it's really a question more than a point.
But in my opinion, I'm just trying to think whether I'm going to generalize
too much here, but probably what's delivered per sprint is not necessarily going to determine

(08:06):
the necessity or the need for UAT. Maybe it's several sprints that actually
contribute to a UAT cycle, where it's then worth getting someone from the business,
or multiple UAT users, to test it.
So I've always seen the sprints and the sprint demos almost just ensuring that

(08:27):
you've delivered the right thing in terms of this sliver.
And then once you've had several sprints and there's actually something,
let's say, from an end-to-end perspective or maybe something really visual to
test, then you get the UAT users to do it.
I think if you do that, the demos start acting almost as a, well,

(08:50):
this is good enough for this sprint. This is good enough for this sprint.
Not good enough, but we've delivered what we set out to do.
And then once you get UAT done, you then take all of those UAT,
I don't want to say defects, because they may
or may not be defects, but that almost goes back into
the backlog then, and we always talk

(09:11):
about having iterative development and iterative releasing, et cetera. So maybe what
you initially had and developed and demoed goes into production, and
then you decide, well, I guess you decide whether it goes into production based
on how happy the UAT users are,
but that can always go back onto the backlog and then get reworked.

(09:32):
Because I was just thinking of the flip side: if you had to get UAT users to test
after every sprint, you might just never sign off.
And now, whilst you start working on the next sprint, they're still testing.
And then you're mid-sprint in the next one and you get all of this feedback
from the UAT users, and there's a whole coordination and organization piece around

(09:57):
that, to make sure that they have
the environment, and which environments they are going to test on, et cetera.
Yeah, I guess it's not as easy as
it should be. Now, many times in a
sprint, the stories or activities that are being developed
are components, not necessarily even the UIs, maybe
APIs, and that's not something that's ever going to be user-facing. The end users

(10:19):
are never going to face that API; they'll always work via the UI. So I think
the UAT is definitely not per sprint. From what I've seen, you
need to sort of say, okay, this is a milestone where there's a bit of an end-to-end,
maybe not full end-to-end, but partially,
and maybe it's part of a release one as part of your bigger project;
then yes, it should be UAT'd.
But I think it's always good to get, I almost want to say more people into a

(10:44):
demo than fewer, because everybody brings an angle, everybody brings an interesting question to it.
The product owner, always, yes, in my eyes. But somebody from the business that's practically
going to use it, they always ask interesting questions, bringing production
incidents or things that happened to them into the mix that might not be specified
in the business requirements.
So I would almost want to say, you might not agree, but I would rather invite

(11:07):
more people to the demo than fewer, because the sooner they can bring a different
angle or interesting question to the mix, the more it might impact the way that you develop or change your story.
I don't disagree with that, but I want to go back to Steve's point.
Steve's point is that that's too late.

(11:29):
If they ask you a lot of questions there, you've done something wrong in your quality process.
It is too late. And it happens, I'm not saying that it doesn't happen, and it's not a good idea.
But if you find that a lot of questions come out here, you should relook at your
quality process and move that testing mindset
way left, to get those
questions asked before the sprint starts and not at

(11:52):
demo time. So if you can talk that same audience through it: this
is how we see it working, this is how we see it looking, and they can ask those
questions there, then you would have shifted left where you should have been.
But I agree, more people are better. I'm just trying to get to Steve's point that doing it
early is better than doing it at the end of the sprint. Yeah, what

(12:13):
I've also seen working is actually
getting them involved throughout the sprint. And what
I mean by that is, if there's a feature that I need to
test, or whatever you're having
to test, as soon as you consider that thing to be signed off in your head, in
your mind, you get the product owner to come and quickly see: okay, I've just tested

(12:34):
this, this is how it works, et cetera. Do you agree that this is what was
intended? Because you still have that opportunity at that point to say,
well, actually, we need to fix this, we need to change this.
Whereas, I agree, at the end of the sprint it's too late.
Not too late, but you've wasted potential time where it could have been resolved much earlier.

(12:59):
And you're not necessarily giving exposure to the business in terms of,
well, this is actually not exactly what we anticipated
or expected. So I think that,
what's the word, continuous feedback loop, almost, in terms of testing as well,
back to the product owner,
back to the business analyst whilst you're testing in the sprints, is also

(13:21):
something which is a good habit to have because it can potentially save you
from that situation where you're sitting in that demo and then,
oh, actually, this isn't exactly what we thought.
Yeah, exactly. I'm going to make a comment, and we don't have to spend an hour
on it, because that's a topic for a different day.
But when we talk about shifting quality left, that's it, right?

(13:44):
That is where it starts. Shifting quality left is not necessarily that I need to
now go closer to developers to write unit tests for them. Yeah, sure,
maybe we can argue that that's one part of it. But a big part of shifting quality
left, of a tester moving quality to the left of the cycle, is these things that
you're talking about now:
getting involved, getting testers to ask the questions, getting testers to elicit

(14:06):
those questions from business, and solving those things there. Where we said three
amigos, maybe it should be ten amigos, right?
But having those discussions there is shifting quality left,
not necessarily only getting closer to the code.
I agree with that. I'm going to try and link Leon's point and Johan's last point
there to why demos sometimes don't work.

(14:27):
It goes back to trying to do too much in a sprint.
There's way too much on the board,
there's too much development going on. Some of it is at different levels of
test maturity come the demo anyway, and then users aren't available,
business stakeholders have full calendars, whatever that is.
So that's, I think, part of the reason why that happens,

(14:49):
and an example of how we can shift
left even on that is to challenge the team.
This is back to the diagrams from earlier: we've brought
in 100 items here, guys, do we think that's too much? I know we're a month
in and we were hitting around 80 to 100 every time, but do we think that's too much,
based upon the fact that in our last four demos we haven't been able to

(15:12):
really get in front of everybody that we wanted and get the feedback that we really needed? And maybe that
ties in neatly to the next topic in the chart, which was lessons learned in
the sprint, which aren't necessarily always about testing.
Even though we're looking at it from a testing perspective, that's not really a testing problem.
That's a backlog refinement and grooming
problem that's created a quality problem at the back end. I agree.

(15:36):
So if we're happy to move on to that second point, which is identifying the
lessons learned during a sprint.
What are your feelings around including a BA or product owner in those
lessons learned sessions?
Because in my opinion, they're very much part of that sprint team.

(15:58):
Even though they're not technically sitting right in that sprint team,
they can also contribute to some of the initiatives and they can have suggestions
in terms of improvement.
But equally, we may say, as an example, sitting in that lessons learned session,
well, actually, we need you to be more available during the sprint so we can

(16:18):
demo these changes back to you, or whatever that may be.
But is it generally accepted that that person or those people are part of that?
Because from what I've seen, typically they aren't. It's
very much the, let's call it, developers and
testers, although it's one Scrum team. But it

(16:39):
feels to me like, again, that's a
nice low-hanging fruit, where if that person is part of that session, it could
be more fruitful. Yeah, totally agree with that. We often just do it as a Scrum
team, and that's general practice. But as you said, involving the people who are

(16:59):
actually using the product and asking for things is invaluable,
because there often lies your quality problem
again, as we said, right? So it's good.
I've had many conversations with tech leads, and they say everything is
fine, quality is fantastic; you speak to the business owner and they say, well,
it's a nightmare, because we don't do this, we don't do that.
But because these two aren't talking in that session, you're missing that, and you're

(17:22):
never going to fix it. So I agree with that. I think it's really worth
defining as well what we've put there in reference to testing.
And I don't think that necessarily just means,
how will we design a test case, right, or something like that.
It could be much more nuanced than that.
I think maybe business, BAs, and non-technical people sometimes don't participate

(17:44):
because they feel they're not technically minded enough to contribute.
And it could be something as simple as, when we're doing three Amigos,
can we just make sure the sessions don't run more than 30 minutes?
Because we're overrunning all the time, we're not really getting to a conclusion,
we just keep going on and on in those circles. Or, can we move retrospectives
to a certain part of the day to get the most out of them?

(18:05):
Because even that's valuable for testing, because it's just encouraging people
to participate in it, be valuable, and contribute.
I think a big part of it is trying to get people to contribute to these lessons
learned, so it's not just an echo chamber of two QAs and one of the devs
going back and forth with each other about one or two testing issues that the

(18:26):
whole squad's aware of, but trying to get the whole team involved and looking
at it more holistically. Yeah.
So this conversation is actually around the QA activities in Scrum,
right, in a typical sprint.
But if we do talk about a tester, or whoever is responsible for testing,

(18:51):
what would you say their role is?
Is role the right word? What should their activities be in that retro?
So when you have your retro or you have your review of your sprint,
what role should the person play that has that testing hat on? Does that make sense?

(19:13):
I understand your question. I think the answer is also a topic,
a separate topic, for a different day.
But I hear your question, so maybe we can just answer the question there, rather
than going into the nuances of testing and tester and things like that. Yes, please.
I think if you go into the mechanical nuts and bolts of it, it's the creation

(19:35):
of test cases, the generation of defect reports, the estimation of how much
testing effort is going to be required for a given story,
running a UAT session with a user, those kinds of practical elements.
I think that's what you're alluding to, right?
Or are you thinking of a more nuanced perspective than that, Stefan? I'm thinking
more, and I don't want to overstate it, but I think...

(19:56):
owning your space within that
retro, i.e. standing up for testing.
Even if there's four developers or
five developers and you're the only person in
that team, you have to have almost the loudest voice
there if you don't feel people are contributing from a
testing or quality perspective. So really having

(20:18):
confidence and having a central voice in
that session. At the end of the day, when you're looking at what's gone
wrong in a sprint, and it's obviously not only about what's gone wrong, but if
you do look at what's gone wrong, it's probably always, we haven't delivered
this as expected, or, there was this defect that

(20:41):
still got through, et cetera. So a lot of it is talking to quality.
And again, quality being the responsibility of the whole team,
I just feel that whoever's wearing that testing hat,
if it's not a mature environment where everyone is really taking responsibility

(21:04):
for quality and everyone's playing an equal part in testing,
I just think it's very important for that person to be confident and have a
loud voice in that session.
Almost fight, to some degree,
to always mature quality more, and
show the things
that didn't work from a quality perspective. The other thing is to be honest,

(21:28):
because I've sat in retro meetings before where everyone in the team
knows exactly what's gone wrong, but everyone's too afraid to say what it is.
And that might be a tester and a
developer that clashed and couldn't agree about something, or
two developers that, I don't know, both wanted

(21:49):
to work on the same thing, and at the end of the day it
got delayed by two days, or whatever that may be. Just be really honest
with each other. Because why is the retro there? It's for us to improve as
a team. So if we're not addressing the real issues which are contributing
to us not delivering, or delivering on time, then why have

(22:11):
it? And it's not about ego, it's not about personalities, it's about improvement
as a whole. So I think that's important as well. Yeah, it shouldn't really be a box-ticking exercise.
Right, I'm just following the Scrum guide methodically and it says to do a retro,
so I'll just tick that box, we've done that. Exactly, I've got that 30-minute call
on the calendar, I've done that, let's move on with our lives. It's really important

(22:34):
to have those difficult discussions if you need to. Yeah, but I've also seen a
lot of times, like you say, Steve,
it becomes a bit of a tick box: you have this amazing session,
people are honest, and the Scrum Master just puts it somewhere in a random link
or page on Confluence, and it gets forgotten come Monday morning.
You rush into the new sprint and everybody's completely forgotten about those

(22:55):
great recommendations.
It's almost like, I sometimes want to say, create a user story for that recommendation
and let it be there, almost, in your stand-up every morning,
so you think about it. Otherwise you're just going to come to the end of the
sprint and the same issues will repeat themselves. So I think there's a big...
Well, actually, I guess it's the responsibility of everybody,

(23:16):
but I think mostly it's almost like the Scrum Master
needs to constantly remind the people of what we
all agreed the improvements should be. But I guess
everybody should take that ownership. I've seen a lot of the time those
things just get parked and forgotten, and if people pick up that things never change,
that's when people start declining retros, or they just stop listening.

(23:39):
What I've seen as good practice would be, let's say we're in Sprint 2,
there would be an epic or feature on the board that would be improvements from
Sprint 1, and they would be broken out into user stories,
and they would actually be action items that we would track as part of our work in progress,
with accountability for someone to do them.

(24:01):
Like you said, otherwise it just gets parked on a Confluence page somewhere:
there's seven things we think we should do, and nobody takes ownership of them.
No one's checking the accountability of that like you would with any other work item.
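Purely as an illustration of that practice, a minimal sketch of scripting retro actions into tracked work items via the Jira REST API might look like the following; the instance URL, credentials, project key, and action texts are all hypothetical, not anything discussed on the show:

```python
# Minimal sketch: turn retro recommendations into tracked board items via
# the Jira REST API (v2). URL, project key, and credentials are hypothetical.
import requests

JIRA_URL = "https://example.atlassian.net"  # hypothetical Jira Cloud instance
AUTH = ("bot@example.com", "api-token")     # hypothetical email + API token

retro_actions = [
    "Keep three-amigos sessions under 30 minutes",
    "Product owner available for mid-sprint mini-demos",
]

for summary in retro_actions:
    # Each action becomes a story so it is visible in stand-ups and tracked
    # like any other work item, instead of living on a Confluence page.
    payload = {
        "fields": {
            "project": {"key": "TEAM"},     # hypothetical project key
            "summary": summary,
            "issuetype": {"name": "Story"},
            "labels": ["retro-action"],
        }
    }
    resp = requests.post(f"{JIRA_URL}/rest/api/2/issue", json=payload, auth=AUTH)
    resp.raise_for_status()
    print("Created", resp.json()["key"])
```

Run against a real board, the created stories would sit under an "improvements from Sprint 1" epic, as described above, so their progress gets checked like any other delivery work.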
Couldn't agree more. I mean, imagine the frustration if you're sitting in a
session for an hour, or however long a retro is, putting it out there

(24:22):
in terms of what you feel isn't working and then nothing gets done about it.
Especially if you're somewhere or you're someone who's trying to really mature quality.
So it absolutely has
to be a focus, not just for that following sprint.
I always feel that a
good Scrum Master would make sure that those

(24:45):
things are always front of mind, always reminding the team: what did we discuss
in the retro, what did we agree that we were going to improve on? And when
he or she sees that it's happening again, just bring it up again. But it's
very difficult for one person to always play the policeman.
So I think, I mean, it comes down to the fact that it is a team, right?

(25:12):
And a team works for each other.
So if you as a team agree that this is something that we want to improve on,
then you need to hold each other accountable.
And if you see that we're not doing that anymore, then every person
should be doing what they can to improve, holding each other accountable,

(25:32):
and raising it if it's not working.
Okay, shall we move on to the next one? Deploying the completed work:
assist with handover and training for the team where relevant.
Yeah, so I think this one is important at certain stages when you're actually

(25:52):
going to be doing a release into production.
Just always keep top of mind that, even though
everybody's responsible for quality, as quality custodians we should always ask the
questions: have the handover documents been written? Has the training been completed?
And I think there's a big role for us as quality engineers also to help make

(26:18):
sure that the business is comfortable,
whether it be the ops team or the business users, that they are comfortable and
really aware of the changes that are coming.
So I don't know if it's every team that expects the testers
to participate in training or handover, but I think it's definitely a role
we can play as testers in a team.

(26:38):
I don't know, have you guys seen testers often helping with training and handover to business and ops?
I've seen testers suggest having it in the first place, where there wasn't going
to be a handover or any training documentation.
So we flagged it as an OAT task,
an operational acceptance testing task, to have some documentation to make sure

(26:58):
we can actually use the system.
And another thing I've had, which was, I think, really useful:
one of my testers used to keep a tally of every time he asked the same question
about something within the system, let's say the UI.
Or, it's weird that when I upload this file I get this dialog box;
I just don't understand it.
And he had to get it explained a few times by the developer,

(27:20):
and he said it would be worthwhile to put an FAQ in the user guide to address
some of these questions that he kept bumping into,
because, I imagine, when it's rolled out to the business, those same kinds of things
will happen. Don't worry, that's a bit of an
idiosyncrasy of the design, it's going to give you this pop-up, don't worry about
it, this is what the pop-up means, to give you some context. I think the testers

(27:41):
can help with that, effectively testing that guide and that material to
make sure the material fits the purpose, like we would anything else.
Especially, I think, since we are usually pretty close to UAT as well,
helping the business do UAT, you also see the kinds of things that they
ask, and you can bring that into the training material.

(28:03):
Because sometimes you have a dedicated team that builds training material,
but you can just say, have you put this specific detail in there?
Because that's a question they all came and asked me while I was helping
them with UAT, for example. So I think there is a glue factor that a tester
can definitely provide between the training content and the business.
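To illustrate the tally idea in the simplest possible way, a throwaway sketch along these lines might do; the logged questions are hypothetical examples:

```python
# Minimal sketch of the "question tally": log every question a tester has to
# ask about the system, then surface repeats as FAQ candidates for the user
# guide or handover pack. The questions below are hypothetical.
from collections import Counter

questions_asked = [
    "Why does uploading a file show this dialog box?",
    "Why does uploading a file show this dialog box?",
    "What does the warning icon on the dashboard mean?",
    "Why does uploading a file show this dialog box?",
]

tally = Counter(questions_asked)

# Anything asked more than once is a likely FAQ entry before rollout.
for question, count in tally.most_common():
    if count > 1:
        print(f"FAQ candidate (asked {count} times): {question}")
```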
Yeah, definitely. But it's not the testing person creating training content

(28:24):
and doing training, right?
I think that's important. I've seen that, I've done that, I had to do it.
But that frustrated me quite a bit.
Because now they say, oh, but you are the super user, you know the most,
you might as well go and write the training material and then train the business
on it. I'm not a big fan of that.
I don't like documentation as it is. The other thing that I've seen post-deployment

(28:48):
are testers, because they are super users, getting involved in the incident
process, which I don't necessarily mind.
I do think that it's a good second-line support function for testers, because
you do know the system quite well, not necessarily on a technical level,
but on a functional level.
And getting involved in that, I've seen testers do as well, which I'm not against.

(29:08):
Yeah, I was just going to say that exact thing: that
the better your training or your handover
into the ops team is, the less they're going
to come to you for second-line support, because that
typically does happen. I guess
if that handover between your team
and the ops team, who need to support this

(29:31):
in production, isn't good, then you're
going to constantly get sidetracked with production
issues when you're trying to deliver a sprint, and
that just has a knock-on effect, because if
you're having to support three, four, five different
things or incidents, or they potentially
don't know how to support the system, then you're not

(29:54):
going to deliver on what you took into the sprint. So I think that's a very important
step, actually, that handover. It doesn't necessarily have to be a formal handover, but
however you get that information transfer, that knowledge transfer, from your
team to whoever's going to support the system, I think that's a valuable step.
Otherwise, it just has a lot of knock-on effects.

(30:16):
It could be useful for, let's say, where we've had performance issues,
but we've not raised them as defects because, let's say, the requirements are quite ambiguous.
But as a tester, I'm just a bit uneasy about the performance of this
system and what it's going to look like in live.
This is a good opportunity for you to say to the support desk,

(30:36):
there are no performance defects,
but I've just got concerns, due to the nature
of the design and what I've seen during UAT, that in high-traffic
instances we may see performance issues. So just
keep an eye on that; have a look at whether it's
starting to come through on your service desk tickets and your support tickets. It
then gives me more evidence as a tester to come back into
the team and go, look, there's something that I thought might have been a problem,

(30:59):
but I just couldn't get the evidence for you at the time, or we didn't have
the requirements to test against. Now I've got evidence from
the business that the performance is a problem and we need to address
it. And that's a good opportunity in this period to have those kinds of conversations
up front. But obviously not leading the horse to water, not getting them to look
for a problem that you think is there when it's not. It's more just
an awareness piece; it's not steering them towards an answer that you want to get out
of them. I'd caveat it with that at the end.
(31:21):
of them i'd caveat that at the end.
Okay, so then on to the last step, Stefan: the test completion report. Does this always happen?
This one is a bit of a... I almost didn't
want to put it in here, but for completeness' sake I think it's important.
For big projects I think it's still relevant, and I

(31:43):
think there are still people that would like this. But to be
honest, it's something that I really haven't seen a lot. I've worked at corporates
where it's an artifact, where we've got the template, but nobody ever asked us
for it. I think not a lot of people ask for it, but I think it depends on each
company how important this is for them. I think it's worth doing. I think it's almost, in a way,

(32:05):
a bit of a retro document, almost saying, this is the project, this is what we've done.
I think it's a good artifact to have if you ever were to tackle a similar project going forward.
I think there should be findings or lessons learned in a way in such a document.
There's quite a few things that you can put in there, but I think there is a place for it.
For smaller projects, I would probably say it's not necessarily required,

(32:27):
but for big, complex projects, I think there is a place for it,
without having spent days on an artifact.
And obviously, there needs to be a balance with these kinds of documents.
But I don't know if you guys have seen this in big projects that work in an agile methodology.
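For what a lightweight version of that artifact could look like, here is a minimal sketch assuming only the sections mentioned above (what was done, findings, lessons learned); every field value is hypothetical:

```python
# Minimal sketch of a lightweight test completion report skeleton. Sections
# follow what's discussed above; all contents are hypothetical placeholders.
from datetime import date

report = {
    "project": "Example Project",  # hypothetical project name
    "date_issued": date.today().isoformat(),
    "scope_tested": ["Release 1 end-to-end flows", "API regression pack"],
    "outstanding_risks": ["Performance under high traffic unverified"],
    "lessons_learned": ["Involve UAT users earlier than the final sprint"],
}

# Render the report as plain text, one heading per section.
for section, content in report.items():
    print(f"== {section.replace('_', ' ').title()} ==")
    if isinstance(content, list):
        for item in content:
            print(f"- {item}")
    else:
        print(content)
```

Kept this small, it doubles as the retro-style record described above without becoming the kind of exhaustive document nobody reads.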
Some projects, though, where there's an auditing requirement and you know that

(32:51):
you're in an industry where there are regular audits, absolutely.
So this becomes an important artifact. But then you are doing it for a specific reason and purpose,
and then I'm not against it. If it is, like you said, Stefan, a template
somewhere, and tell me everything that has happened, and 300 test cases and so
on, no one will ever read it.
But if there is an audit requirement, sure, absolutely. And I've seen that quite a bit.

(33:14):
We know that that's one of the quality gates that will be audited.
And then, of course, I don't mind it.
But in normal projects where that requirement isn't there, I don't see people
scoring a lot of points on this.
Yeah, pretty much the same. When there's been a governance requirement or a
regulatory requirement to produce it, we've produced it.

(33:36):
I think the only time I would also advocate it outside of that would be if you're
in a problematic team where testing isn't valued to the extent that you wish it was,
where quality isn't valued. It's your last opportunity to articulate a risk,
and it's your last opportunity to delineate

(33:59):
ownership of that risk to somebody else. Because what
it could do for you, let's say something happens in live and the question
gets raised of why this wasn't captured in testing, is that you have a document.
This may already be on your board, so maybe you're duplicating
effort, but you can say, well, I issued the report, and in the report I flagged
these three problems. I took it to the business, and the

(34:21):
business decided to downgrade them to a lower priority.
So I've done everything I can, and I've got evidence that I've
done that. Hopefully you're in an environment or a team that's not combative
and not accusatory in its culture, but it does happen. So it
can sometimes be used as something to, defend yourself is the
wrong word, but just to kind of dot the i's, is what I would say about that.

(34:45):
Okay, excellent. I think let's wrap
it up there. But if
I can find a common strand through everything
that we've discussed over the three sessions,
I would say the message
that I'd like everyone to take away is to try

(35:07):
and create a culture where you are
not the sole tester doing the testing and having responsibility
for the testing. Really try and create a culture
where, and we've said it so many times, I know we're sounding
like a broken record, quality is the responsibility of
the entire team. And the sooner you
can get into that position where everyone is taking accountability

(35:29):
for what's getting produced and
contributing to the testing, I think you'll be
in a better place for it. I don't know, do
any of you want to have any closing remarks, or
do you have any closing thoughts on anything
that we've discussed over the three sessions? I
think you summed it up quite nicely, Leon. I think there are

(35:52):
things that we've highlighted that we will discuss in subsequent sessions,
but I think as a summary that's perfect. Okay, excellent.
Thank you very much for joining us. If you haven't watched the
first two, please, if you can, watch
them. It does make sense to watch them one, two, and three,
because we did start with the leftmost information, in terms of

(36:15):
the start of the project, and we've finished now with the end. So yeah, thank
you very much for tuning in, and until next time, bye-bye. Bye-bye. This has been
an episode of Testing Experts with Opinions, an Inspired Testing podcast.