Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Welcome to Testing Experts with Opinions, an inspired testing podcast aimed
at creating conversations about trends, tools, and technology in the software
testing space. And we're live again.
Everyone well? Good luck. Thank you.
Very well, thanks. It's actually so weird for me to ask whether you're well,
(00:21):
because I mean, I'm sure everyone knows that we're having discussions throughout the day.
It's not the first time. I'll keep it that way and just be polite. Okay, so
an idea we had was to look
through, I guess, a bit of a diagram
or infographic that we have internally, and it's
(00:42):
around the role of QA within, I'm going to say, a typical Scrum team, because we
know that Scrum teams are also sometimes very different depending on which
organization you're in. Certain organizations do Agile a lot more or better compared
to others. I don't want to say better, differently.
(01:02):
They do Agile differently. And the role within an Agile team,
and specifically Scrum, which we'll look at today, it also very much differs
depending on which organization you're in.
So Stefan will actually go ahead and share something, which we'll talk through,
and we'll debate it a little bit.
I think this is not cast in stone.
(01:25):
It's not something that needs to be followed to the T, but it's kind of our
stab at something that should work for most organizations,
maybe something which is fairly generic and fairly easy to apply in terms of
the QA role within an organization, how that plugs into Scrum.
(01:46):
So I think, Stefan, if you can go ahead and start sharing,
then we can take a look. Thanks, Leon.
Do you just want to start by maybe just walking us through it?
And then as you go through, we can maybe just, if anyone wants to mention anything,
give their opinions, challenge you on anything, et cetera, then we'll just do that. For sure.
(02:07):
So I think just from a bird's eye view, if we just look at the top layers,
we try to visualize it as saying for any agile project, there's usually a start of a project phase.
And then obviously when the project starts, we go sprint by sprint,
each sprint having pre-sprint activities, in-sprint activities,
(02:29):
and then post-sprint activities.
And once the actual project has been delivered, well, that's all.
It could be end of sprint or end of project.
If it's only end of sprint, we just run back to pre-sprint activities.
So just to give that sort of bird's eye view.
So what this diagram tries to show, for each of these steps or phases
in a project that uses the Agile methodology,
(02:54):
is just to sort of give an idea or guidance as to what the QA team's responsibility should be.
So I guess let's jump into it. So right at the start of a project,
one of the first steps should be to, well, as a team, define the initial product backlog.
And that typically is based on something like documentation,
(03:15):
like a BRS and FRS, those kinds of documents. But from a testing point of
view, that's typically the stage where a test plan is created.
So I know a lot of the time there's debate, like, is a test plan really still needed?
I would definitely say yes, maybe to some degree, some companies like a more
detailed test plan than others, but nevertheless, definitely a test plan.
(03:38):
I mean, I was just busy talking to a client this morning,
and I think sometimes people don't realize that if you properly write the test plan,
you are actually busy doing static testing of the documentation, because to write
a proper test plan,
you need that clarity in terms of what the scope of the system is,
how that integrates with other systems within the organization.
(04:00):
Because if you don't have that view, and if those diagrams and the architecture aren't
available, you know, then your test plan is going to be very vague.
So you are actually really deep diving into a lot of documentation,
trying to understand the different non-functional requirements and functional
requirements, because that needs to be stipulated in your test plan.
So I think this is a very important step.
(04:23):
And people overlook the static testing aspect to it.
I think that's very important. I just want to talk about this a little bit,
Stefan, because it's a concept I struggle with and get a lot of questions about, and that is this test plan.
Now, there are two test plans.
(04:44):
You're talking about a start-of-project test plan, and maybe the word test
plan I struggle with a little bit. So maybe how do we want to test the project
from a strategy perspective or from an approach perspective?
And then a lot of clients ask for, before we start the sprint,
what is your test plan as well?
So if you're talking about create test plan here, which specific one are you referring to?
(05:08):
That's a good question. I made an assumption there, which is dangerous.
I know some companies talk about a test plan, they actually refer to the test cases.
In this case, definitely it's the strategic plan. It's the overall approach
to how you're going to handle testing.
It talks about what's in scope, what will you be focusing on, what is out of scope.
What are those test items? Like I mentioned, if I don't have an architectural
(05:30):
diagram of where this piece of new system or functionality fits into the bigger
picture, I don't know what to test.
I don't know what integration to test. I don't know the scope of my integration to test.
You know, there's a lot of components. You need to understand what are the functional requirements?
What are the non-functional? Do we care about performance? Do we care about security?
Who's responsible for what? Timelines. Sometimes, I mean, my personal opinion
(05:54):
is not to include timelines specifically in a test plan.
I would rather have a link to the overall project plan, and then those testing
activities should slot into the central project plan in terms of milestones
and deliverable timelines.
But to answer your question in short, it's definitely the strategic document,
not the test plans in terms of test cases.
(06:14):
Then I like it because I think that word test plan sometimes confuses us.
As soon as we start talking about test plans, we want resources and timelines
and things. I think what you're positioning here is the right one.
So if you're looking at this piece of work, looking at the integrations,
inter-system and external system integrations, what is your approach and what
(06:37):
kind of testing would you recommend we do, just to start planning?
And I agree, the architectural diagrams play a big role, but it's not "we're
going to test this feature here, Stefan's going to test this,
it's going to be tested in sprint three."
It's not that kind of thing; it's high level.
Thanks. I agree with that. I think a lot of the time people make the mistake
(07:01):
of treating a test plan as a burden.
It's just a document, an artifact that you write once off just to get it out
of the way as part of your quality gate.
And then they forget about it. And then maybe pick it up again when you submit
it at the end of your project.
To me, when I talk to test analysts and say write a test plan,
they should see it as a practical document. I'd rather have a test plan that
is 10 lines long and you keep it in your back pocket and you refer to it every
(07:25):
day rather than a 50-pager and you never look at it again.
So it always needs to be practical. It's not about the length,
it's about can you practically apply that strategy in a project?
If it's just a fluffy theoretical thing, nobody's going to bother about it or
look at it again. So I think that to me is the important bit about this step.
I was just going to add, there's the pushback from the client, because
(07:46):
I've had clients where the pushback on something like that
would be: Agile doesn't advocate lots
of process and lots of documentation. So I like to refer to them as purists,
like they treat the Scrum Guide as gospel and you shouldn't deviate
from it. And I would always challenge them with: that's not what it says, it just
says no pointless or frivolous, unnecessary documentation for the sake of it.
(08:10):
Like, a one-page approach, or a couple-of-pages approach, of how you plan to tackle
testing for the team, for every sprint they run, is a really valid artifact
to have and is really valuable. It's not going to slow the sprint down.
If anything, it should speed it up because you're not going to get questions
from, for example, people new to the team. What do we do around performance?
Well, our approach to performance testing is in the test plan or approach document
(08:35):
or our Confluence page or wherever it is you store that.
Yeah, and that makes sense because I think, again, the test plan thing,
when we grew up, a test plan was a 20, 30-page document,
as Stefan has said, and an artifact saying what do we want to test,
which testing types, how do we want to tackle it, if we do performance,
(08:56):
do you do it in sprint, are we going to leave that? So those kind of things
are extremely valuable.
And this artifact, Stefan, you said it should be a short thing that really just
guides us, rather than this historic test plan that feeds into an approach that
goes into a strategy, sitting at 90-page documents.
So then I absolutely agree. Sorry, Leon.
I was just going to say, I think you can very easily replace the word test plan. Not replace, maybe;
(09:23):
alternative words could
be test strategy, test approach, test plan. Companies
will refer to it as different things, but what's
important is that the activity drives conversation,
and that for me is the
massive benefit of having that. It's
going to make people, it's going to force people to
(09:45):
think about a project and what's required for a
project, what all the considerations are. Have we thought about
this? Have we thought about that? So I completely
agree. It should not be a tick-
box activity, and it should
be something which is a living document. So let's argue that this
(10:07):
is an approach for how we're going to test this project. I don't
think that's something that you do and you complete. It's something that
you're going to continuously work on.
It needs to be an organically growing document.
As you're going through the project, you're making changes and maybe you're adapting it, etc.
But again, it's good because it drives conversation.
(10:30):
And as you're having more conversations, your test plan, test approach,
test strategy, whatever you want to refer to it as, it actually adapts with you.
And it should, because if you look at that start of the project,
it's the initial product backlog.
We're only getting to the nitty-gritty detail in grooming, but mostly sprint
(10:51):
planning, where you really understand integrations, for example.
You really understand, is there accessibility here? Is there performance here?
Is there functionality here? And then when you get there, you can refer back
and update that according to the new decisions. And I like that. Yeah.
But I just want to reiterate what, I can't remember whether it was Jan or Stefan
that said it, but I'm not a subscriber of this being a 20, 30,
(11:15):
40, 50-page document anymore.
It's a couple of pages maximum because we know in the past how difficult it
is to firstly get people to read a document of that length.
Let's be honest. Everyone outside of testing doesn't really care about testing
that much or they don't have a massive interest.
They do not want to go and read a 50-page document about testing and how we're going to approach it.
(11:38):
So you need to keep it light for your audience, but it needs to have the pertinent
information in there where you're getting your point across.
Everyone knows what you're planning.
Everyone's on the same page. And like I said, it's driving the conversation.
So I definitely think there's still value in doing it, but be pragmatic in terms
of the length and what actually goes into it.
(12:01):
That kind of brings me to my second question. Sorry, Stefan, I'll ask after you've finished your thought.
Yeah, thanks, Jan. I'll be quick now. Just,
sometimes the devil is in the detail; sometimes the test plan shouldn't be too light.
Like, I always talk about reporting: what will be in those reports when
(12:23):
stakeholders ask us, two or three months down the line, for a report, if we
agree, for example. Sometimes we need to go into a little bit of detail depending
on the section of a test plan or test strategy.
Because if you know, let's say for example, three months down the line,
somebody asks you, tell me, I want to see for each user story,
what's your test coverage and clearance on that.
(12:43):
If you didn't know that was a requirement, you probably wouldn't have necessarily
made the point of linking each test case to a specific user story and then link
it also to the requirement which is for that user story.
If you don't do that metadata linking upfront, you're going
to be scrambling around three months later trying to backdate your test
cases. So certain things you have to be very pedantic
(13:03):
about agreeing with your stakeholders: these are the things you said
you wanted to see, I've prepped for this, I've got the metadata, but for
these new reports, yes, I can do it, but it's probably going to take me some
time. And just to cover yourself. So some sections can stay high-level, but sometimes you
need to think a little bit into the future and cover yourself. So anyway,
that was just a side note, maybe coming from a different angle,
(13:26):
but it's just sort of a "be aware of going too light, too far to the other side" as well. Yeah.
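To make that metadata-linking point concrete, here is a minimal sketch of the kind of traceability report Stefan describes; the field names (story_id, requirement_id, status) and the data are illustrative assumptions, not any particular tool's schema.

```python
# Illustrative sketch: per-user-story coverage from test cases that carry
# story and requirement metadata. Field names and data are made up.

test_cases = [
    {"id": "TC-1", "story_id": "US-101", "requirement_id": "REQ-7", "status": "passed"},
    {"id": "TC-2", "story_id": "US-101", "requirement_id": "REQ-7", "status": "failed"},
    {"id": "TC-3", "story_id": "US-102", "requirement_id": "REQ-9", "status": "passed"},
]

def coverage_by_story(cases):
    """Group test cases by user story and count totals, passes and linked requirements."""
    report = {}
    for case in cases:
        story = report.setdefault(case["story_id"],
                                  {"total": 0, "passed": 0, "requirements": set()})
        story["total"] += 1
        story["passed"] += case["status"] == "passed"
        story["requirements"].add(case["requirement_id"])
    return report

for story_id, stats in coverage_by_story(test_cases).items():
    rate = 100 * stats["passed"] / stats["total"]
    print(f"{story_id}: {stats['total']} tests, {rate:.0f}% passed, "
          f"requirements {sorted(stats['requirements'])}")
```

If those links exist from day one, the report three months later is a query rather than an exercise in backdating test cases.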
Yeah, I think it's very much, it's horses for courses. It's being pragmatic.
So if you're in an organization where there's a lot of rigor,
there's a lot of compliance, there's a lot of, you will know.
There's a lot of red tape typically in your day-to-day. You're probably going
(13:49):
to angle more towards having a much longer, more detailed one.
But if it's a much more agile environment,
maybe small organizations or not necessarily small organizations,
but just organizations which embrace agile more, then maybe it's slightly shorter
and there's not as much detail in there.
(14:10):
So again, I think it's just about being pragmatic and agreeing what the plan
is in terms of that pragmatism.
So this is what we're going to produce.
Does that work for everyone? Yes. Okay, let's go for it.
I quite like a wiki format for that, or like a subpage format. So you could,
like you said, Stefan... and there might be, say, defect management,
(14:31):
where we need to be really clear and concise on that.
So you could just have a breakout page or a sub-page or a sub-thread that would
go into that more detail because not everyone cares about that.
But let's say your CAB approval process: they need to see what your defect management is.
So that needs to be in a bit more detail. And doing like a wiki page or sub-threads
is another way of just making it easy to navigate and so people can just skim
(14:52):
read or read the bits that are pertinent to them.
Good point. Okay. I'm going to have to get us to move on.
Otherwise we're going to be up for four hours, and I'm sure no one's going
to watch that long. I do want to ask the
second question, and I know you'll move on, so we'll move on from that one. Should we
then have a plan per sprint? Because
that's the contentious part. If I speak to clients, it's,
(15:13):
okay, I get that, I get the first one, and I think most do, but
should you also then have a test plan per sprint?
I would say,
if you're part of, like, a release
train and the purpose of the team changes, so
let's say you're an API team, right, and then you are
moved to back end after, like, a year, then yeah,
(15:35):
you should change it. Or if there's, like, a really narrow or specific scope,
for, like, the next four sprints we're going to be doing this block of work, and
then the next four months of the year we're doing a different block that's unrelated,
that's when I would look at updating it. But I wouldn't do it sprint to sprint.
I've rarely seen that in reality.
Agreed. I think it's very much project-based. So as long as you still,
(15:58):
as long as your sprints are still working towards delivering that project,
there's no need for a new test plan.
But as soon as you're tackling a new project or maybe like Steve mentioned,
you're moving away into a different area, then I think there's a need for that again. Hmm.
Leon, should we move on to the next one? Yes, sir.
(16:20):
Okay, so talking about product backlog grooming, and that's the ongoing thing, right?
But whenever we do product backlog grooming, we should always think about the
three amigos exercise where BAs, devs, and testers are all put together in these
meetings to break down these user stories.
And the QA team should always be involved when the acceptance criteria is defined.
(16:45):
And I think mostly yes to ask, like, have you thought about this or that?
But I think to me it's always important, is that acceptance criteria testable?
We need to make sure it's practical that we can actually tell you,
yes, this is actually meeting the criteria or not. That's super important.
And I think often teams make the mistake of when they're sizing stories,
they only size the development component of that story and not always the testing component.
(17:10):
And it's sometimes difficult to combine everything together.
I've worked in teams where they would have a parent story, but then they would size the subtasks.
They would size the dev subtask and the test subtask together,
and then almost get a feel for it and to just understand the differences in
complexity, because sometimes you can make a small dev change,
(17:31):
but it has a massive testing impact, right? So we need to consider that.
And then also I think in terms of the product backlog grooming,
always ensure that the definition of ready and the definition of done has the necessary QA elements.
These are typically things that are generic across all user stories,
other than when we talk about acceptance criteria, which is specific per user story.
(17:54):
But when I say ensure definition of ready and definition of done has the necessary
QA elements, when we talk about these things, that some examples of definition
of ready is, is the user story well-written and easy to understand so that you have context?
Does it, like I mentioned, does it have clear and testable acceptance criteria?
Are, you know, all the external and internal dependencies identified and noted?
(18:17):
These kind of things are important.
Is the testing, you know, is the testing effort clearly estimated as part of
your definition of ready?
And also, is the story prioritized? So that's also part of your definition of ready.
So that if you have 20 user stories to test in a sprint, which ones do you pick up first?
So this definition of ready and definition of done elements might be different
(18:38):
per team, but I think it's important to define this so that there's clarity.
In terms of definition of done, it could be things like, has a unit test been
written and passed? Has functional and non-functional testing passed?
And are there any critical bugs, or rather, are there no critical bugs, before you can
actually pass a user story at the end of the sprint?
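As a rough illustration of those tick-box checks, a definition of ready (or done) can be written down as an explicit checklist and evaluated before a story is pulled into the sprint; the items and story fields below are assumptions for the sketch, not a prescribed list.

```python
# Illustrative sketch: definition-of-ready items as a simple quality gate.
# The checks and the story fields are example assumptions; each team defines
# its own list (and a matching one for the definition of done).

DEFINITION_OF_READY = {
    "well written and understandable": lambda s: bool(s.get("description")),
    "clear, testable acceptance criteria": lambda s: len(s.get("acceptance_criteria", [])) > 0,
    "dependencies identified": lambda s: "dependencies" in s,
    "testing effort estimated": lambda s: s.get("test_estimate") is not None,
    "prioritised": lambda s: s.get("priority") is not None,
}

def unmet_ready_items(story):
    """Return the definition-of-ready items this story does not yet satisfy."""
    return [name for name, check in DEFINITION_OF_READY.items() if not check(story)]

story = {"id": "US-101", "description": "As a user...", "acceptance_criteria": ["Given/When/Then"]}
missing = unmet_ready_items(story)
print("Ready for sprint" if not missing else f"Not ready, missing: {missing}")
```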
(18:59):
So I'm not sure what your experiences are in terms of these definition of ready
and definition of done or some of the other elements.
Your thoughts? That's an interesting one, Stefan.
The way you explain it now is more, I suppose, the theoretical version,
but I've seen increasingly so a lot of clients saying that testing is not part
(19:22):
of the definition of done.
So they have a definition of development done, and then those things move to
a testing backlog, which will then have their own definition of testing done.
Because for various reasons, and I've tried to debate it, but for various reasons,
we're developing now, we don't want to deploy now, we don't want to test now,
(19:43):
testing will have their own priorities and things like that.
But just for the room, what is your view on that? So they have a development
done, which they say is the definition of done, and then testing will have their
own definition of done somewhere else.
I personally don't like that because that's us and them.
(20:03):
And it breaks the team coherency. It's one team with one combined goal,
as opposed to we're waiting on QA, or we're throwing it over the fence,
and it's in their backlog now.
It creates a nice, clean waterfall cycle within the sprint, which kind of defeats
the whole purpose of it being a collaborative participation of all members, is what I would say.
(20:26):
They kind of take it one step further. So the testing sprint for the work that
was developed in Sprint 1 may only be in Sprint 7.
So they don't even make the definition of done for that sprint,
testing and development together.
So they do test analysis and design in the sprint that the dev happens,
but the test execution will happen six, seven, however many sprints later,
(20:50):
which is quite an interesting one.
Makes it a nightmare when you find defects, of course.
Should that story have been pulled into the Sprint in the first place,
then? That's what I was going to ask, yeah.
The thing for me is one of the principles of Agile is to be able to release
something after every sprint.
So in theory, you should have been able to deploy something.
(21:13):
Now, for me, that's just creating technical debt.
And it's adding so many issues in terms of really being able to measure your sprint velocity.
It's definitely,
it's moving away from Agile, right? So the whole
idea of Agile is being able to deploy
(21:35):
after every sprint, in theory. I know a lot of companies don't do that,
a lot of teams don't do that, but it's the whole collaboration thing, and it's
getting everyone to collectively accept responsibility for quality as well.
Now it's again driving a wedge between the development function and the testing function.
(21:56):
And it was actually one of the things that I wanted to mention when Stefan went
through this in the second swim lane, in that, yes, we're referring to a QA team,
but actually it's a quality function.
It could be developers or testers doing those activities.
(22:16):
It doesn't have to be someone that's designated a tester.
But Jan, in that example where you're having development done now and in seven sprints'
time you're doing the testing, it feels more like Waterfall than it does Agile.
But it's almost an even a prolonged version of Waterfall.
(22:41):
And I would then say, get the testers to work in a completely different team
on their own and move back to what worked in the old days.
Because now you're sort of stuck in between two worlds,
and I could probably debate this for
the next two hours. It just doesn't make sense to me. Well, that's the
point of the sprint, isn't it, the point of Agile: it's just
(23:02):
a Deming cycle, right? So the point is you
release fast and often to get feedback fast
and often, so you can course-correct on that feedback. The testing is part of
that feedback cycle, so the longer you wait for the feedback to come, the worse
it is. So, like, I'm just thinking practically: the testers have found
something, they've found a defect, and the devs are like, yeah, we wrote that code
(23:25):
four weeks ago, what are you talking about?
Surely that's going to slow the mechanism down, right? Plus
they've now got new sprint deliverables to focus on and
the pressure for that, so now they're going to try and just give you a quick fix, not
really think it through, because the pressure's on. I was
just going to say that exact thing. This doesn't
work for QA, but it doesn't work for the development function either,
(23:47):
because for that exact reason: we've done it, we've
completed it, and now two months down the line, or five weeks
down the line, you're saying, oh, and now I need to go back to it,
I need to go and refresh myself around what I
did, why I did it in a particular way, fix it, and then potentially in another
week or two you're going to let me know whether it works or not, and I'm going
to have to go back, go away from what I'm currently doing. It's just, I would like
(24:12):
to understand what the benefits are of that, as opposed to maybe all the cons
we're seeing at the moment, but I
struggled to see the pros.
I've seen it, and I agree with everyone here, but I've seen it in,
I want to say, 50% to 60% of the clients in the last month that I've spoken to are doing this.
So why are they doing this? And I didn't give you that context.
(24:34):
So we had a webinar about testing and who testing should report to and the autonomy thereof.
But they are saying that in a sprint, testing is in the way.
So the developers are saying, I can't develop at the pace or cadence that I
want to because testing is in the way.
So they move testing out of the sprint. So now development is saying,
well, great, look at all the work I'm pushing through.
(24:56):
And they're happy with the comebacks, a couple of sprints down the line.
Because from a development perspective, and because development people are looking
at the sprint, and they're heading up the sprint more often than not,
they are saying, look at our development effort, and it's going well.
Testers are in the way, so let's remove them. And let's give them a bone when
they need one later down the line.
(25:17):
And that's really where this is. And I agree with you, it's not the right way,
but that's where this is coming from.
But it's like I can drive my car 100 miles, but the tyres are going to fall off at 60.
So I'm more bothered about seeing the pseudo-progress,
the fakeness of progress, when in reality, I've got all this stuff behind me
that needs to be sorted out, and eventually it's just going to crash into the back of it.
(25:40):
That's just a strange way to think about it. And it does make it an us and them
thing. It definitely makes it: QA are the problem,
QA are the thing that slows it down. Instead of thinking, how can
we help the testers test faster, how can we speed you up, us as developers, what
can we do, can we automate more of our code, can we do more unit testing, can we
pair test with them so we can create more depth in that automation pack, what
(26:02):
can we do to speed them up, as opposed to, let's pull them out of the team
and put them in a different cadence. That's a strange way of going about it.
And that was my approach, Stephen. I think you hit the nail on the head.
And my response to that is: absolutely,
I get that testing seems to be in the way at the moment. But that is because
you're testing at the wrong level.
If you move testing to where it should be done, i.e. left, or wherever down the pyramid,
(26:25):
if we move testing to the responsibility of everyone, and we see what developers
can do in the testing world, then your testing load for a traditional tester is a lot less.
And then we can get through the sprint. So if you move the testing or quality
tasks, as Leon said, the quality tasks to where they should be, then there shouldn't be an issue.
But because testing are still doing everything quality-wise,
(26:48):
they are becoming the bottleneck. So I think you hit the nail on the head there.
Maybe it's a bit controversial, but sometimes I think people write user stories incorrectly.
They would maybe write an end-to-end story that says,
as a person, I want to do this,
but it's sort of this end-to-end kind of user story, and they put it in at the beginning and
(27:09):
it's got all the pieces on there, so you have to park it. People need to
write the user stories as small components, small components that can be tested
in isolation at that point in time in the project. And sometimes people really
misuse the way that the user story is written, and that's
why I say it needs to be testable, but testable at that point in time, not testable in 10 sprints' time.
(27:31):
So I think that's also where we as QA need to almost guide the people in the
way that they write user stories and prioritize them.
So, you know, it's interesting.
Yeah, I would really like to debate that in another session because I think it's very interesting.
(27:52):
And someone said it's wrong.
It's not necessarily wrong. It's just different.
And maybe it's not wrong. Maybe there are merits to it. So maybe something we
should discuss in future.
I mean, at the moment, we are also being a bit theoretical, yeah,
in that this is the right blueprint, this is the right thing to do.
(28:14):
But maybe there are merits to doing it.
And I think that's how testing evolves, right, to actually look at how those
companies are doing it and taking the good parts out of it and keeping the good
parts and maybe improving the parts that could work better.
So, no, it's a very interesting concept. There is, of course,
and I mean, pure Agile says it needs to be done,
(28:38):
done, like testing, everything, done before.
But in theory, you could potentially say only dev is done in the first sprint
and testing is done the next sprint.
But then they have to sort of say, if I can work at a hundred-points cadence,
I only need to pick up 75 and allocate those other 25 for bug fixes and things.
And maybe it's not wrong, at the end of your release,
(29:00):
to maybe allow another sprint just to realign and catch
up everything. And I think teams are doing things like that.
I've seen many flavors of what works for the team. I don't think we need to be
hard and fast about it. Like you said, Leon, it needs to be practical. We're not
talking about perfect-world, purist, almost theoretical stuff, but people are creative,
and what I've seen in projects is like that. So we do the same with automation,
(29:23):
right? Typically, when it's UI automation,
we always say n minus one. We don't finish UI automation
typically in the same sprint; we do it the next sprint. And that's also, in a way,
not finishing the testing, so why is
that okay? I'm being sort of devil's advocate
here. But you're right. Well, I think there's always going to be stuff that you're
going to carry over to the next sprint, but when I say no, I'm kind of
(29:45):
coming with the perspective of, you don't want to carry much of that technical debt.
That baggage will quickly accumulate, and you'll have to have more and more of
those rebalancing sprints or reset sprints to clear that debt down and keep going.
It's not just about releasing fast. It's about reducing waste.
And technical debt is waste material, waste activity that you've not been able to complete.
(30:05):
So it's just something that will accumulate, that's all.
But it's only waste if you see it as waste. And I think in that scenario,
they don't actually see it as waste. They see it as a way of working.
But it's interesting. I was just thinking most companies we speak to,
they either want to reduce the cost of testing or they want to release more frequently and faster.
(30:28):
That example goes completely against that because you're not releasing faster.
You're releasing less frequently.
And I'm just thinking from the business perspective, the business will want
their changes on the site as soon as possible.
So you're delaying that as well. So I wonder how much internal pressure there
is on sort of that way of working in terms of actually getting into production more often.
(30:52):
So in these cases, these clients that
I'm talking about don't have the luxury, if
that is a luxury, to release often. So these are big
organizations working with financial institutions, for example,
and they say, well, we release once every three months, we
don't really care about that, right? And again, we
said it, it's dependent on your organization, your operating model, where
(31:15):
this is. And the argument should then be, should you do sprints, I suppose, should
you do Agile? That's a different conversation. But they don't have
to deploy that often, and therefore they just want to get as much out from a
dev perspective, and they'll catch up.
Okay, I'm going to have to force us to move on. Yeah, we're going to have to
move on. Stefan? Yes. I think, I don't think there's much pushback from that
(31:40):
third one, other than, sorry, which one, the second one, other than the fact that I think
it's a QA capability as opposed to necessarily
the QA team doing that, or a tester with a designated hat
on. But I think we could definitely go to the sprint planning phase. Let
me see what we said here: ensure that there is sufficient capacity in
(32:03):
the team for QA tasks related to the upcoming sprint user stories. So once again,
it's just, you know, how much effort is the story really going to take? It
might be one day of dev,
but it might require a lot more time to actually test the code.
So make sure that you understand the cadence of your team, especially in your
(32:25):
testing side, and make sure that you're not over allocated in terms of the testing effort.
I think that's actually important because often we get stuck in our ways, right?
And as an agile team or as a scrum team, we've become used to delivering X story points every sprint.
And maybe your velocity is 20 or 30 or however you measure that is kind of irrelevant.
(32:49):
But now you have someone who's going to be on holiday for two weeks.
Specifically within the testing capability or maybe just in the team if everyone's
doing testing collectively,
but actually go and adjust your velocity for that following sprint when you're
having a holiday as opposed to just relying on what we deliver every sprint
(33:10):
because then often you see failure and you've not actually considered the fact that,
well, actually we were a person down in the sprint or two people down.
And in smaller scrum teams, that has a massive impact. If you have a team of
three or four, then a person not being there, I mean, you're losing potentially 25% capacity.
So I think that first point is very important in that you need to ensure that
(33:33):
there's sufficient capacity and it comes back to the planning.
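A back-of-the-envelope version of that capacity adjustment might look like the sketch below; the team size, leave days and focus factor are made-up numbers purely for illustration.

```python
# Illustrative sketch: adjust testing capacity for a sprint when people are away.
# All numbers, and the 0.8 focus factor, are example assumptions.

def testing_capacity(testers, sprint_days, leave_days, focus_factor=0.8):
    """Available testing days: person-days minus leave, scaled by a focus factor."""
    return (testers * sprint_days - leave_days) * focus_factor

usual = testing_capacity(testers=2, sprint_days=10, leave_days=0)        # 16.0 days
next_sprint = testing_capacity(testers=2, sprint_days=10, leave_days=5)  # 12.0 days
print(f"Usual: {usual} days, next sprint: {next_sprint} days "
      f"({100 * (1 - next_sprint / usual):.0f}% less to plan for)")
```

The exact formula matters less than actually doing the adjustment before committing to the sprint.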
For sure. I think the thing I would add to that is there are other QA tasks other
than those associated with the user story.
So prepping data, making sure the environment is stable,
like general housekeeping around other bits and bobs, those are all activities
that need to happen in that sprint, not just those associated with the story
(33:54):
itself. Exactly, and people that aren't in the testing world don't necessarily
realize the effort to prep tricky or very difficult combinations of
test data for your testing, and that's why you need to say, okay, this
is actually, whatever, five or eight, or whatever the
point is, for that effort. My next comment is going to sound straightforward
(34:17):
and something obvious, but that should be done by the person testing.
You're going to say, well, of course it should be. So many organizations I talk
to, testing will be told how long it would be depending on the development effort.
So when I'm saying that, about those estimations, because it's not clear on this slide and I'm saying this:
(34:37):
sizing QA tasks and
QA capacity, that input should
come from the testing team and not be
handed down as, the development effort
is 10 points, so three points for testing, because it doesn't work like that, right? Exactly,
it goes without saying, but I still talk to a lot of people who say, oh wow, we
(35:00):
didn't know that testing should really be in there and tell us how long it's
going to take. Because, coming back to the previous point, it's going
to throw a sprint off if you want to test for longer than we thought you should have tested.
So for swimlanes 2 and 3, testing should be there and doing those estimations for themselves.
A good comeback on that is testers don't tell the devs how long it's going to take, do they?
(35:21):
The BAs don't tell the devs, that'll take you two weeks to code that.
That's not a two-way road, is it? Everyone estimates their own work,
except for QA, I've had that experience as well. It's a strange one.
But also, I've used this example in the past where,
oh, testing should take 20% to 30% of the development effort,
(35:44):
but I can make a single change in a CSS file or a JavaScript file,
a single line, which is going to take an immense amount of testing.
In that case, you can't say, oh, that one line change has taken 10 seconds and
therefore we don't need to test it.
Well, actually, you now need to go and potentially do testing on different browsers
(36:06):
and you need to check on mobile, et cetera, et cetera. So I think,
again, it's about pragmatism and being pragmatic around things.
But absolutely, testing should come up with estimation.
They should have an equal voice in terms of how long an activity should,
sorry, what the size of a story should be, how long an activity should take.
(36:28):
And if there's a testing-specific activity, they need to have the loudest voice
in terms of how long we actually need for that.
Because they would have thought about it much more than the developers. Yeah.
Leon, don't get me started on sizing. I was at a team where we eventually said,
drop these points. Just put in hours.
Like, this is going to take me two days. I'm putting in 16 hours.
(36:49):
To me, that's a much more accurate estimation. That's just me.
But I don't know. Sometimes it gets very fluffy.
And especially, because then you can actually really size it:
the dev is going to take me X amount of hours, the testing this many hours.
And you can actually say, I've got X amount of hours in the week.
This is sort of my estimation.
Then it's like hard and fast. But maybe that's a topic for another day.
But again, I think it's about what works for you, what works for you as a team.
(37:13):
And if you as a team decide, we actually want to estimate in hours as opposed
to story points, then why not?
As long as you are getting through your work, as long as you're delivering what
you said at the start of the sprint, then why not?
I don't really care how you deliver it as long as everyone is in agreement.
But I think the problem comes in where one person wants to estimate in story
points and another person says hours is good and no, let's go minutes, et cetera.
(37:37):
Let's use planning poker. No, let's use T-shirt size, et cetera.
Then it just starts going wrong.
But I think if everyone can agree to a single way, then I think it's fine. Yeah, I agree.
Again, pragmatism. I think I've said it about 15 times already.
Might say it another ten times. It's like your catchphrase, like Stephen's
(37:59):
is "it depends." It depends, absolutely. It's going to be on a t-shirt.
Alright, okay. Stephen, just quickly, sorry, I don't think we're going to get through everything, so
I suggest maybe we do one more and then we can look at the rest in
(38:20):
a next session. Otherwise this is going to become very long, and we know that people don't have
that much time to watch podcasts. So we'll give
them the benefit of the doubt that they'll actually watch the second one.
So I think maybe let's just discuss one more, and then we
can park it for today. Sure, and
maybe we should just wrap up this swimlane. I've still got two more points
(38:42):
that may also, well, may also spark debate. Ensure definition of ready is met:
so whatever you define in your definition of ready, once again, like, for
example, clear and testable acceptance criteria or a stable test environment.
I mentioned a few other examples and there's lots to read up.
So that's quite an obvious one.
(39:03):
I think the second one or the third one is quite important and sometimes missed.
Coordinate any externally required testing. So sometimes, especially with quite
clear non-functional requirements, user stories in a sprint might require specific
things like performance testing or security testing or later down the line, UAT or OAT.
Somebody needs to coordinate that. Sometimes that falls
(39:24):
between the cracks, and the testing team thinks it's
the Scrum Master's job, or vice versa. I think primarily the
ownership should sit with the QA team to drive it, and make
sure somebody drives it at least. Almost like, when you say RACI, they could be
accountable but also responsible, typically both, but maybe it's a bit of a combination
with the Scrum Master or whoever else. But I think they need to play a role
(39:46):
in it. Especially if it's in the test plan, they should know that it should be coming
up, they should keep an eye out and understand that that's a requirement.
If it's security testing, for example, I've worked at companies where you need
to book out the pen testing team three months in advance because they're so busy.
If you know that's going to happen three months down the line,
start booking that slot, even though it's not maybe in this next few sprints. I think that's it.
(40:09):
I think just, sorry, on the second point, I think what's very important to realize
there is where you have clear and testable acceptance criteria,
this is not the time to actually start looking at whether it is clear and testable.
That needs to be done during grooming stage, during backlog refinement stage.
(40:29):
So this is just saying, oh, we have a definition of ready, and this is one of the things.
But you can't start doing the work when you're doing sprint planning because that's way too late.
You can't now go back and say, oh, this requirement isn't clear,
or this is maybe not that testable.
So I just want to be clear in terms of what that means. It means ensuring that
(40:49):
it's met. Yeah, that's right.
I think what this means is, like you said, Leon, this is a quality gate.
We are checking that those tick boxes in the DoR are met. If it's not ticked,
we cannot pull it into the sprint.
We need to pick up the next one or whatever the case may be.
So it's a quality gate concept.
(41:10):
Stable test environments. We might discuss that for another day.
I mean, that's a conversation completely on its own.
Leon, we've got five minutes. Would you like to jump into the next one?
I'm not sure if we're going to land. I think let's finish this one,
and then next time we can discuss the next four, starting with in-sprint.
(41:31):
How would you – I'm going to play devil's advocate here now.
So if we put in our definition of ready, a stable test environment,
how are we defining whether it's stable before we start testing? Okay.
It's available to the right people. I don't want to go into detail here.
(41:51):
The data within it allows me to complete the tests that I need to do.
And people are aware of any other activities in that environment when they're using it.
So it could be ready, it could be full of data, but there's 12 of the testers
using it at the same time. We all trash each other.
Those are the kind of three standouts I can think of off the top of my head.
Well, yeah. I think also if you look at the definition of ready,
(42:14):
one of the things that I think is important is to say all external and internal
dependencies are identified for those user stories.
If you can say upfront, as part of me having to sign off user stories
A, B and C, I need to integrate with this or that API, and that's not ready.
And to me, that's not a stable test environment.
Yes, maybe I can create mocks and stubs and whatever I need to do,
(42:36):
but then we need to raise it and say, it's not really testing what we should be testing.
So you need to understand the complexity of integration. It's not only the little
bit of your application; it's almost like a systems-of-systems environment,
and you don't understand to what point you need to test. So, coming back to that
architectural diagram all the time, if you don't understand the bigger picture
(42:57):
and how things fit in together,
you might think you have a stable environment until you start doing your
actual testing. So I would challenge
that a little bit, because this is sprint planning, so I
will say that you will have no code
that is ready for you to test, because the sprint has not
started yet, right? Data will probably not be
there, because again, this is sprint planning. So I'm
(43:20):
hearing that we want code and data and
stability before the sprint starts, and I don't
think you'll get that, because the developers are only developing the things
that you want to test during the sprint. So I don't think that a stable
test environment is necessarily a quality gate thing. Yes, from a "do we
have an environment that we can test on, should there be something to test?"
(43:44):
Maybe that, but I don't think you're
going to get any code for that sprint before the sprint has started.
It depends. Sorry, Steve.
Sorry, I get scared every time you say it. It depends. Maybe there's an API
where there's no dev involved in the API.
It's just an API that's not been set up or configured yet for you to test on.
(44:05):
I understand if there's lots of bits and pieces that still need to be developed, that's fine.
But your baseline, before you start any dev, there should be some level of stability
on that. Like, yeah, anyway, maybe it's very much a "depends" one.
But yeah, and that's a good point you're making. Obviously, if the APIs you need
to integrate with still need to be developed, then you won't know, everything is still up in the air, but...
(44:29):
But again, Stefan, just to come back to that, so say all the APIs that you need
to integrate with exist, but you just say that development and testing of that
development happens within the sprint.
So you can't realistically expect any of that to have happened before that sprint,
because you are testing what they are developing.
And what they are developing, they need to deploy to that environment.
(44:52):
So that environment will not be ready with the code before you go into the sprint,
because again, the code will be developed and tested in that sprint.
Yes, obviously, from a regression-testing perspective, I think maybe there's an argument to be made.
But for any new work in this project, we are only going into the sprint,
and I need to test it when it is available, not before it is available.
(45:13):
Fair enough. I think it's more about availability of a test environment.
Sorry. It's about making sure that during your sprint planning phase,
you know which environment you're going to be testing on.
You know that you have availability to it, there's no other team
in that environment, there's maybe not a...
(45:34):
So let's go back to the other example, the one
where we said, oh, certain teams do testing down
the road, they do it in a couple of weeks, a couple of
months. Maybe that's a separate team that's currently in
that environment. Knowing that that environment's going
to be ready for you to start tomorrow when the
sprint starts, or later today, I think that's what
(45:54):
that is in terms of the definition of ready.
We can debate the actual stability of a
test environment and what it needs, etc., but I think it's
slightly different. It's a good point. So I
think that's good. Sorry, Stefan. No, no,
it's a good point. If you have 10 teams and
everybody shares one QA environment, definitely there needs to be
(46:16):
some kind of a test environment management system where you can book it or
reserve it. So, good point. Yeah.
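For that booking idea, here is a very small sketch of what an environment reservation check could look like; the environment names, teams and dates are invented for the example, and in practice this would usually live in a test environment management tool or even a shared calendar.

```python
# Illustrative sketch: avoid two teams trashing the same QA environment by
# recording reservations and rejecting overlapping bookings. All data is made up.

from datetime import date

bookings = [
    {"env": "qa-1", "team": "payments", "start": date(2024, 6, 3), "end": date(2024, 6, 14)},
]

def book(env, team, start, end):
    """Add a booking unless it overlaps an existing one for the same environment."""
    for b in bookings:
        if b["env"] == env and start <= b["end"] and end >= b["start"]:
            return f"Rejected: {env} is booked by {b['team']} until {b['end']}"
    bookings.append({"env": env, "team": team, "start": start, "end": end})
    return f"Booked {env} for {team} from {start} to {end}"

print(book("qa-1", "onboarding", date(2024, 6, 10), date(2024, 6, 21)))  # overlaps, rejected
print(book("qa-1", "onboarding", date(2024, 6, 17), date(2024, 6, 28)))  # free, booked
```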
Okay, I like hearing "good point, Leon," and
therefore I'm going to leave it right there. That's
the last thing that was said, and I'll remember
it until the next podcast. I think that's
(46:38):
a good point to pause, because next time
we can then actually start looking at the actual in-sprint activities
and what that entails, and then
obviously, once you get past your sprint, your sprint demos, your
retros and that type of thing. So I
really enjoyed this conversation, thanks a lot, and I'm looking
forward to the next one. So thanks to anyone that watched, and we'll see you
(47:00):
next time. Thank you. Thank you. This has been an episode of Testing Experts with
Opinions, an Inspired Testing podcast. Find us on LinkedIn, Twitter, Facebook, Instagram,
YouTube and TikTok, where we're driving conversations.