Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Music.
(00:05):
Welcome to Testing Experts with Opinions, an inspired testing podcast aimed
at creating conversations about trends, tools, and technology in the software testing space.
Welcome, everyone, to our fourth episode in this podcast series.
I think it's only right for me to set the scene for what we're going to be talking about today.
(00:32):
And it's something which is always topical, and it's around test maturity and
why test maturity is important within organizations.
I think it's important to also understand what we mean by test maturity.
And once that foundation is set, then it's easier to have a conversation around
(00:53):
what the potential benefits are, the different models of test maturity,
how test maturity is typically measured.
And why we've actually taken the time to talk about it today.
So we have our usual panel here today. All of you well?
Yes, thanks, Leon. Yes, I'm well, thank you. Good. Good.
(01:16):
Very well, thank you. And then we also have a new starter or a new joiner to
our podcast, which we really want to take the time to welcome.
I mean, we decided that it's just wrong for us to only have South Africans on here.
So we've managed to find ourselves someone in the UK that can join us.
And that person is Mr. Stephen Platton. Hi.
(01:39):
Steve, do you just want to introduce yourself briefly? Yeah,
sure. Stephen Platton, I've recently joined the team as one of the principal consultants.
UK-based, if you can pick up the slightly Yorkshire twang in my accent.
I'm looking forward to talking to everybody about this topic today and getting to know everyone.
To our one or two South African listeners that we do have: we're going to have to get a translator for you. Yeah, the
(02:00):
transcript will cover that off, I think. Exactly, exactly. All right, Steve, do you want to set the scene in terms of, when we talk about test maturity, what do we actually mean by that? What is test maturity? Yeah, okay,
so test maturity, which is something I've done with a number of companies and clients throughout my
(02:20):
career, is effectively a framework, or a set of frameworks, for you to understand where your organization is in terms of its testing, right? So it's a scale. The way of kind of thinking about it is: is our testing brand new, we understand it to some degree, we don't really do it; all the way up to, we are a world leader in testing, recognized by most other companies as being best practice
(02:44):
in what we do, and we're constantly optimizing our approach. So it's a way of measuring yourself against a set of frameworks to find out where you are on that scale.
So from something like 'initial', where your testing is ad hoc, very pragmatic and very disjointed, to something that is a large centralized QA function that has lots of processes and tools and frameworks and methods
(03:08):
of practice that everyone follows and adheres to. And then anywhere in between, on that big sliding scale. So that's test maturity in a nutshell: situational awareness for your company, to understand where you are on that testing journey.
Okay, awesome. Yeah, and I'll just briefly ask you, and once you've answered that, the rest can give their opinions as well: if we talk
(03:30):
about it, why is it important? And, I guess, what are the advantages of being mature in your testing? Yeah, I think it's a good question. And as Stephen has said, it's good for your organization to know where you are on a maturity scale, and that's one thing. And just as we do with CMMI for big organizations, we do it from a
(03:50):
testing perspective. Why is it important?
And a lot of times we do these assessments. People tell us, okay,
I'm on a two, I'm on a three, but so what, right?
So what we try and achieve through this maturity and understanding an organization's
test maturity is to say, are we doing the right things from a test organization perspective?
And are we doing them in the most effective and efficient way?
(04:12):
Because just as the world has moved on and the software development lifecycle
has gotten shorter and shorter, as a test organization, you want to contribute
to those efforts and you want to fit into those models.
And it's only by understanding what you are doing, and whether what you are doing is a mature thing, i.e. are we doing it effectively and efficiently, that we see how we contribute to
(04:37):
those efforts and how we fit into a more agile environment.
So it's not only about the rating, but it's making sure that when we do something
as a tester, and I often refer to that as purposefully testing, right?
So if you're doing something as a tester, that task that you are doing,
what is the purpose of that?
And does that add value to the organization and the software development lifecycle?
(04:57):
And it's through these assessments that we look at these items and aspects of
testing to see that and highlight the areas and elements where you can actually
do things more effectively and more efficiently.
Yeah, if I just pick up on that point that was made there: you can be really guilty of spending a lot of time and effort in an organization on testing a specific aspect, or on a specific methodology or a tool, whatever that is,
(05:21):
and then not realize that it's not really adding a lot of value to what you're doing.
So you could spend an enormous amount of time, let's say, on usability testing when you actually have a very small number of customers and that's not really something they're focused on; the functional aspects of the software are the most important things.
And you're spending your time and efforts in the wrong place.
So test maturity, if you do something like an assessment,
(05:43):
is also a way of going: okay, we're spending a lot of time here on this thing over here, but it's not something the business even cares about, or it's not something that's really valuable for us. So it lets you highlight that as a problem.
Yep. So Steve, just to ride on what you've just said there: I think one of the reasons why I see companies requesting test maturity assessments is simply because, quite often,
(06:05):
they may be trying to engage consultants to come and do a certain piece of work.
But the reality is you're hiring somebody to do something, but you don't know
the starting point. You don't know where you are exactly as an organization.
So yeah, I think in most cases, people might be busy, but busy doing what?
So at some points, organizations may need to draw a line to say,
(06:26):
okay, we are at this point.
Let's assess exactly where we are. What tools are we using? It can be an assessment of tools or it can be of processes.
In other cases, it might also touch even the people.
So for me, companies that are doing health assessments, or assessments in general, are possibly heading in the right direction, because you cannot continue sinking money into projects.
(06:51):
Say you want to implement test automation: you have to assess exactly where you are in terms of maturity, be it of your test cases or of the structure itself within the organization.
Okay, so now that we understand what test maturity is, and we've kind of alluded
to some of the advantages and why it's important to do, what are some of the
(07:12):
different models that can be applied to understand your test maturity?
So obviously, TMMI and TPI are two of the more prominent ones.
Does anyone just want to have a stab at, I guess, how those models are applied to your testing practices, and maybe what the benefits
(07:34):
are of using something established like TMMI versus maybe a custom model, or maybe something that you've come up with yourself?
If you look at the TMMI one, they tend to structure things in levels. So you have level one, which would be something like Initial, all the way to level five, which I believe is Optimization, off the top of my head, with different frameworks to get you through those levels. They recently updated
(07:55):
it to be more agile-focused in scope, because the original model was very much catered to waterfall. And if we go to the advantages of something like that: the distinct levels have distinct criteria, KPIs and metrics for you to measure yourself against, to understand where you are on that journey. So it gives you a very clear, concise map
(08:16):
to get to that point. It also has a corresponding set of actions and processes to improve, change and create, to get you to the next level. And there's also a benefits analysis with each level as well. So level five might not be the right decision for your business, and part of managing that leveling system is saying: well, maybe we're level two right now, and three would be perfectly
(08:37):
satisfactory for us; we don't see that spending the extra cost on level four attainment would actually give us that extra return on investment, that ROI. So I do like that kind of framework around it. It's very useful.
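For listeners who like something concrete, here is a minimal sketch of that level ladder as data, in Python. The level names follow the published TMMI model as we recall them; the one-line focus summaries are our own shorthand for this episode, not the official specification.

```python
# Illustrative only: TMMI's five maturity levels, with informal one-line
# summaries of what each level focuses on (our shorthand, not the official text).
TMMI_LEVELS = {
    1: ("Initial", "testing is ad hoc and undocumented"),
    2: ("Managed", "basic test policy, strategy, planning and monitoring exist"),
    3: ("Defined", "testing is integrated into the development lifecycle"),
    4: ("Measured", "testing is quantitatively measured and evaluated"),
    5: ("Optimization", "defect prevention and continuous process improvement"),
}

def describe(level: int) -> str:
    """Return a readable description of one maturity level."""
    name, focus = TMMI_LEVELS[level]
    return f"Level {level} ({name}): {focus}"

if __name__ == "__main__":
    for lvl in TMMI_LEVELS:
        print(describe(lvl))
```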
The other kinds of frameworks are a bit more agnostic to delivery models, so they cater for and fit within, say, DevOps delivery, or if you're using SAFe
(09:01):
or something like that; they're a bit more lifecycle-agnostic. But they're all around the same principles: you create a level of maturity, or a level of confidence, in your testing at that specific stage, and you build up and down from that framework depending on what you want out of it. So it's very ROI-driven, which I like. I like the kind of formal frameworks for that side of it. It's very easy for a business to rationalize.
(09:24):
I'm going to spend some money here because the ROI is going to be this,
which will lead me to this.
And that clear leveling is a way of really illustrating that point.
But I guess to touch on some of the negatives of adopting some of those models, which I've seen with customers and clients and people that I've worked with: they're probably around how difficult it is to attain those levels.
(09:48):
So there's actually quite an intensive amount of work to push the levels up,
depending on where you are.
To go from level one to level five is not something that will happen overnight.
That's going to take a significant amount of time, years more than likely.
And the ROI is sometimes very difficult to justify back into the business.
Why are we spending so much money on achieving this formal level when the ROI
(10:09):
is actually quite small in comparison to the amount of work that we've done?
And I would also say that some of the frameworks, probably TMMI in particular, are quite heavy duty.
It's not something like, say, a small tech startup with 100 people can just pick up and run with.
It's quite laborious. It's quite a lot of documentation to follow and assessments to complete.
(10:30):
It's quite heavy going as well. So, again, that's probably not an approach that
you want to do if you're a smaller company or a startup business where speed
to market is something that you care about.
Yeah, just to add to what Steve has said around TMMI and TPI. So those models are good. And I think there are use cases where it is important for an organization to formally
(10:51):
assess its test maturity against those models.
There are also sometimes regulatory requirements and auditing requirements that say we as an organization need to be at a certain level. It helps frame testing in business speak as well sometimes. So it's easier, as Steve has said, to go to senior stakeholders and explain the concept of maturity and moving up levels.
(11:13):
So from a formalized perspective, it is good. I agree.
The other side of the scale and what we've done before as well is to have a
more informal maturity check.
And I get that maybe you can argue that's more subjective, right, where those standard frameworks are not.
But there's also a need for small organizations, even bigger ones,
(11:37):
to say, am I doing the right thing?
Going back to the original conversation, am I doing the right thing?
And what can I do to be more effective and efficient now, not worrying about formal levels?
And what I've seen in the market, there's a big appetite for that.
Organizations have started a QA department, a QA competency, and they've done their best effort up to now.
And at some stage, they want experts to come in and see, are we doing the right
(12:01):
things in the right way? Are we mature?
It's not necessarily an official standardized maturity level from one to five, but it is a maturity check to say: from your experience in the testing realm, and as someone who's doing a lot of R&D in the testing world, come and look at our testing process and see if we are doing it in the right way.
(12:22):
And are we adding the most value? And are we then mature in what we do? And not necessarily only against formal levels, but also at that informal level.
And that helps organizations implement low-hanging fruit to say,
let's do a couple of things now that shows business benefit and shows SDLC value,
and then we can build from there.
(12:42):
So that's the other part, the non-formal frameworks that I've seen.
I wanted to come in on the angle of maybe how rigid some of these frameworks can be.
I'm not shooting down TMMI. I've used it. I love it. It's structured.
And when you follow its guidelines, it helps you really to come up with a very
(13:03):
fair and objective outcome of any assessment.
So initially, I tried just using the one which was following more like the waterfall model. Well, then I realized it wasn't really gelling with the Agile way of doing things, which the company was following. And then I adopted the Agile version.
So it was very good. I liked it. Everything's very objective.
(13:26):
Even the scoring points to say, if you've got a strategy, at what level is it?
And then you can score all the way.
And also depending on whether you're trying to target a level one or a level
five, like what Steve said earlier. But what I do like, though, about this framework is it's very structured; at the same time, it can also be very heavy.
(13:46):
Just for you to go through the TMMI assessment for an organization, it's demanding. And the number of people you have to meet, the interviews you have to conduct.
So at times, that flexibility also might not really be there.
So maybe, I don't know if anybody has used any other hybrid kind of health check or health assessment.
Those ones maybe might cover all those issues which I'm talking about,
(14:10):
like the rigidity of some of these frameworks.
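To make the 'scoring' idea from a moment ago tangible, here is a rough, hypothetical sketch of checklist-style maturity rating. The criteria and the all-or-nothing rollup rule below are invented for illustration; they are not the official TMMI assessment method.

```python
# Illustrative sketch of checklist-style maturity scoring; the criteria and
# the rollup rule are invented examples, not TMMI's official assessment rules.
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    level: int       # maturity level this criterion belongs to
    satisfied: bool  # outcome of interviews / evidence review

def achieved_level(criteria: list[Criterion]) -> int:
    """Highest level whose criteria (and all lower levels') are all satisfied."""
    level = 1  # level 1 ("Initial") requires nothing
    for lvl in (2, 3, 4, 5):
        at_level = [c for c in criteria if c.level == lvl]
        if at_level and all(c.satisfied for c in at_level):
            level = lvl
        else:
            break
    return level

findings = [
    Criterion("Test policy and strategy documented", 2, True),
    Criterion("Test planning and monitoring in place", 2, True),
    Criterion("Testing integrated into the dev lifecycle", 3, False),
]
print(achieved_level(findings))  # -> 2: the level 3 criterion is not yet met
```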
Okay. So Matt, I haven't worked with or I don't have any knowledge of hybrid assessments.
But then I actually wanted to talk about new testing organizations,
because I think everybody is talking about existing testing organizations.
(14:35):
So I wanted to ask, what about new testing teams or organizations?
Would it be advisable to set up a new organization by implementing a level five
TMMI model, for example,
meaning to frame testing at a higher level from the beginning? Yes.
(14:57):
So maybe just to quickly answer you, Mamatla. So a level five is quite demanding.
It is a whole lot of things. For example, for you to have a level five, you should have policy documents, a strategy document; everything should be done to a T.
So it's too demanding for you to start at a level five.
You'd rather start at zero to say, you don't have anything, you are starting,
(15:20):
which is the right base actually. Then you work your way upwards.
But if you start at a level five,
it is too demanding and you may not necessarily meet the requirements.
But then, Matthew, if I am going into an organization, you actually start with a test strategy before you can implement anything. So if I go and I'm going to start, let's say, for example, a TCOE,
(15:45):
isn't it better to ensure that I already have the policies in place, the strategy in place, while setting
up the new organization? Why should I start from zero if I can go straight to five from the beginning? Yes, I get it. Maybe I'll bounce off that point. I think you're right. It's not,
(16:05):
obviously it'd be great, it's a good idea to set out a test policy and a strategy for the organization. Let's say with a new company, right, we've got a new QA function: you want to start with those basic grassroots. And I think if you went and looked at level five and all the things that you would need to do to achieve that, I don't think that's a bad thing; just roadmap out what we
(16:28):
want to do, starting from day one to day 100, right, and map it out against those level five criteria. But I think Matt's point was maybe more that you won't get that level four, level five assessment done in the first six months or year of your QA function existing. There's got to be an element of: I'll start with a bare minimum of best practice, like a strategy, a policy, some templates
(16:49):
and some processes, and bed that into the organization over three to six months, nine months. Let them use it, adapt it, learn it, understand it, see if they're actually following it first, before we progress anything else. So I think a level five from scratch is more about the amount of time your organization needs to adapt and live within that framework before you get that level five assessment. I think as a long-term roadmap it definitely works, but I think it would be quite difficult
(17:11):
to spin up a QA function from scratch and go, okay, everyone's now at level five. Because you've got to be maturing, you've got to be doing the do, is kind of how I would phrase that. You've got to be living that framework for six months, a year or even longer, depending on how big you are.
Yeah, I agree with Stephen. I think it's a luxury to build out a competency
(17:31):
up to level five before you can start using it.
So I would also take that agile approach, minimum viable product,
and then build it out from there.
But it's a good point, Mamatla. You made me think there for a second.
It is good to have that roadmap from the start. Then at least you've got a plan.
You're not sort of falling over your own feet.
You know exactly where you're going with it. But at least start small and build out from there.
(17:54):
Jan, I wanted to mention something about the hybrid model; I'm not sure if you wanted to respond to Mamatla's comment first. Yeah, thanks, definitely, I'll just comment and I'll pass back to you around that. And I think we've said it, but we are always trying to articulate the value of testing, right? So in a new organization, if you have a testing department and you're going to
(18:16):
start with trying to implement all the level 5 things, you're going to end up three months down the line not having done testing and not having added anything to the software development lifecycle.
So I think it's going to be more difficult then to say, okay,
you've spent three months, you're on a level five from a documentation and process
level with two releases or five releases into production, and there was no testing done.
(18:36):
So it is, as Stefan and Steve have said, I think it's slowly but surely maturing
and implementing these things and deciding then at which level,
because that's the other thing,
is if you get to level three, for example, and you say, well,
my test organization is good enough,
then that's good enough for you, right?
Rather than trying to get to a five, getting there and realizing,
(18:57):
oh, it was actually okay to be at level three or level four. Thanks, Stefan.
Yeah, I think you're going to have to wait, Stefan. I think there's still more
to say on this. Hold your thought.
Yeah, sorry. I was just going to add to Jan's point and maybe give a non-testing analogy, or an example of that, which I think the non-testers on
(19:20):
the podcast would understand. I go to the gym a lot, so I'm weight training, powerlifting, I do a lot of that. Test maturity is like people going: I've never been to the gym before, what supplements do I need to take, what routine do I need to follow, what exercise is the most optimal? And I'm like, you're not there yet. You just need to go to the gym. You just need to do the do,
(19:40):
right? You need some basic principles; just do the doing for a little bit of time, and then slowly but surely start to add things in to improve yourself. You don't need to look at optimizing which supplements are best for you when you don't go to the gym four times a week. Get the basics nailed down first, start to improve from that solid foundation, and then build yourself up. And I think that's another example of why you would do that. Exactly. I mean, especially if you think you might need,
(20:07):
let's say, stronger biceps, but what you actually need is calves, then you'll soon find out that you need the calves. So you might run the risk of building out or strengthening your biceps when you never needed them. You're building something that's maybe going to be used once in six months.
So if you do it in an agile way, you'll soon see what the burning issues are
and focus on those first.
(20:30):
I think maybe there's a good gap here just to go over to the hybrid model. I mean, I've had experience using the TPI Next model, Steven, which you referred to among some other models, and that one is also quite heavy.
Actually, we recently just finished a maturity assessment with a company, and for that one we literally just had two days; not even two days,
(20:50):
a few hours across two days at the client site.
And in cases like that, we don't always have the luxury to do a proper maturity
assessment that could take hours going through a lot of questions with many people.
You know, sometimes just asking the right leading questions can almost give you 75% of the issues already and help you to flesh them out.
(21:11):
I think there's a place for those heavy, heavy assessments.
I think if you just listen carefully and ask the right questions and actually
let it be a more open-ended flow and say, you know, where do you fit in the organization?
What's your day-to-day process? And what are your burning issues?
You know, if you just ask a few leading questions, it's almost like they're
going to spill the beans very quickly.
(21:32):
And within a short period of time, you will very easily see the burning issues and help to address them. And it might not be all the right recommendations, touching on all the points, but it will be the core ones.
And that's always a good start.
So anyway, just my thoughts on a hybrid model or even just going purely more
light touch, like a fluid conversation kind of assessment.
(21:56):
I agree with that. And just to go back to the hybrid thing. So I think the benefit
we have there, Stefan, is that we come with a wealth of testing experience and
a lot of years in the testing field.
And although it sounds like it's very informal, you still have these models
at the back of your head, right?
So it's not to say that... I think from a TMMI perspective, it's easier for someone
(22:18):
to go in because it's regimented: you ask the questions and you get the answers and there's a formal rating.
The hybrid way we approach it sometimes, around the health check or the informal QA maturity check, is that we go in there with our testing experience, but having done research on these models and bringing that into the conversation.
(22:41):
So it's not that it's a completely unstructured way. It's just not as rigid.
And you can react to what the person is saying and ask the right questions based
on that. So all I want to say is, you have to be careful to distinguish: there is the very formal structure of a TPI or TMMI assessment, and then there's the way that we spoke about now, which is more informal,
(23:04):
but that informal approach is still based on experience and research that we bring into the organization, just to know what to look out for.
Yeah, I definitely agree. It's not that an informal approach isn't based upon some kind of rationale and reasoning and process. You've definitely got those things in the back of your head. But within teams, squads and clients, I
(23:25):
quite like to use quite open questions to understand where I think the problems are. So I'll ask things like: if you could wave a magic wand and you had infinite budget, infinite time, what would the testing organization look like? Oh, we'd have stable environments. Okay, so environments are something you're worried about. We'd have a test data tool that produces all our data for us. We'd have independent QA. Okay. So they're just listing off the things that they want,
(23:47):
and then you can map that back to a formal model and go: okay, these are the things that they're prioritizing. And that open question allows them to kind of air their laundry: here are the things that we're really struggling with, here are the things that are problems for us. Defect management is always something that's a really big issue in the company; we don't know how to do that well. Okay, that's somewhere we need to focus for you as well. So you can use those open questions to allow the conversation to
(24:09):
naturally lead to the answers that you need from the client.
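As a toy illustration of that 'map it back to a formal model' step, here is a sketch of tagging free-form magic-wand answers with the process areas they hint at. The keyword-to-area mapping is invented for this example; in reality an assessor does this with judgment and context, not string matching.

```python
# Toy sketch of mapping interviewees' free-form wishes to framework-style
# process areas. The keyword map is invented for illustration purposes.
PAIN_POINT_MAP = {
    "environment": "Test environments",
    "test data":   "Test data management",
    "defect":      "Defect management",
    "independent": "Test organization / independence",
    "automation":  "Test automation",
}

def tag_answers(answers: list[str]) -> dict[str, list[str]]:
    """Group open-question answers under the process areas they suggest."""
    tagged: dict[str, list[str]] = {}
    for answer in answers:
        for keyword, area in PAIN_POINT_MAP.items():
            if keyword in answer.lower():
                tagged.setdefault(area, []).append(answer)
    return tagged

wishes = [
    "We'd have stable environments",
    "A test data tool that produces all our data for us",
    "We'd have independent QA",
]
print(tag_answers(wishes))
```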
So if we're saying that TMMI or TPI, for that matter, is maybe not perfect for
that company just starting off,
I guess at what point do you feel that a company could actually reap the rewards
(24:32):
of following a structured approach such as TPI or TMMI?
So, I guess, is there a tipping point in terms of company size or team size where this actually starts making sense?
Yeah. What I want to say on that is that we have to be mindful of not confusing
(24:53):
a structured approach with a structured way of measuring your test maturity.
So do we need a structured approach from day one? Absolutely.
That's why you get senior test leadership into an organization to set up your
test competency or your quality engineering competency.
And there are nice models. If you look at TMAP, for example,
that tells you what is important in setting up a test competency and what are
(25:17):
the phases through it and how do you monitor your test organization.
So, I think to answer your question, we should always start with a structured
approach to testing and making sure that we get the right people in place that
can implement a structured testing organization.
But the second part of your question, when should we start measuring that?
(25:39):
I think after about six to 12 months to say, all right, I've implemented a test
organization based on my experience as a test leader.
I've built it on my experience and I've also built it on, for example, TMAP.
So I know that at least it's based, the foundation is based on the right things.
(26:01):
But now six, eight, 12 months down the line, let's do a health check to see
if we are still on the right track and if what I've implemented is actually
mature enough to take us forward and to support the business strategy.
Yes, and anything I would add to that is, well, Leon, to your question of is there a tipping point: I think a tipping point in my head would be scale and size of the company.
(26:24):
So, again, you could probably get away with less rigour when your test team is two or three testers in a smaller organisation. When you're getting into the thousands of employees and you're cutting millions of lines of code a week, that's when you've got to look at it; you need to start to do some sort of assessment of where you are.
You're going to start to introduce lots of bad practices, potentially, and
(26:46):
that scaling problem is only going to hide those problems from you. Doing an assessment, either formally or informally, is kind of peeking in the cupboards and going: oh look, we've been doing this for years and this is actually a really bad way of doing it. So I think a tipping point, in my head, is when a company gets to a certain size or scale, or even amount of turnover and revenue; then an informal or more formal assessment is probably a good thing to start to look at. And then I want to go back to a point that Jan made, I think right at the start of the conversation,
(27:10):
which was: if you're not following a structured model and you're doing some sort of custom model, or doing something based on your experience, it becomes subjective very quickly.
And especially for large organizations, they may not like that.
(27:33):
They may not like the fact that you've given me a report or you've shown me where my maturity lies, but actually, what are you basing that on? You're basing that on your frame of reference. In my mind, any custom model or any assessment needs to have the principles of some of those more structured
(27:54):
models within them. Is that a fair comment?
Yeah, I would say you'd probably want to build it out from some industry best practice or wider frameworks. But what I would say is: yes, it is subjective, it is our opinion of where you are. That
(28:14):
doesn't mean that it's not an evidence-based assessment; that doesn't mean it's not a rationale-based assessment. It is our subjective assessment, but we've measured you against these things that we've judged you on. So again, I'll use defect management: when we looked at you, we think defect management is a problem for you, and we've made that assessment on the fact that there's no centralized framework within QA on how defects are raised and managed; it's ad hoc and it's different depending
(28:36):
on where you are in the business. That is not a good way to do things; that needs to be centralized, and here is our evidence of why that would improve things. That is subjective, but we are still backing it up with some concrete evidence that's contextualized as well.
In my opinion, it's better to use or to benchmark against some renowned, tried and tested frameworks than building our own framework.
(28:58):
Simply because, okay, looking at the example of TMMI, you know, it guides you to say: if this and that is available, this is how you score or rate that particular activity.
So, if it's not structured to that extent,
you can easily create chaos in the organization in the sense that,
say, the test manager or the head of QA perceives themselves as if they are on level five.
(29:23):
Yet, when you do your assessment, which may be subjective or subject to interpretation, you rate that organization at stage two or level two. So that alone will then start bringing so many questions, your model being challenged. So the advantage of using or benchmarking against the renowned frameworks is you
(29:46):
get objective answers as to why you are rating the way you are. Being subjective will leave it subject to any interpretation and open to challenge.
So, Leon, to answer you from my perspective, I totally agree. We should benchmark our hybrid models against the world-renowned and tried frameworks like TMMI or TPI. So then the other point is, we know testing doesn't operate in isolation.
(30:13):
We don't have our own small world within an organization, maybe like in the old days.
These days, most organizations are following agile and you've got a lot of collaboration
with development teams and cross-functional teams.
So how do we, I guess, incorporate and involve the development aspect,
(30:38):
the software engineering side, into these assessments and into these models? Because, yes, it's called a test maturity assessment, but we know that testing isn't just within the testing team. It actually spans much wider than that. Okay.
Leon, if you refer to the TMMI model, for example, level three of the model
(31:03):
actually says testing should be incorporated in the development lifecycle. So if you still have a testing organization that is running in isolation, it means you are maybe on level one or level two.
So for organizations that are already integrated, we can still measure or we
(31:24):
can still assess them, but at level three of the TMMI model.
Yeah, I think it's a really good point. QA can go off and do its own thing and become a leading QA department, recognized by others. But if your development and business practices are still behind, or immature
(31:46):
or problematic, there's only so far your improvements will actually go; the return on investment starts to go down. I think the way to tackle that, and what I've seen work with clients, is for QA to be very transparent that they're doing this.
So reaching out to other parts of the business: hey guys, we're going through this maturity
(32:06):
process to try and improve our ways of working, and this is how we're doing it, and these are the outcomes that we found. What are you guys doing in your areas that are similar to that, and how do they link to it, and how can we work together as a cohesive unit of people and delivery, to make sure that our optimizations are working with yours and not against each other? You don't want to create a big laborious
(32:28):
test process to improve things that clashes massively with how the business practices work, right? It's got to be done in unison. So being transparent and sharing those changes with the business, I think, is a way of doing that. Yeah, I agree with that. Just to add on: one of the hybrid models that we use here, as Stefan said, we did that one recently, so we spent as much if not more time with developers,
(32:52):
architects and DevOps engineers in this maturity process, for exactly that point. So we don't see it as a testing organization on its own, a siloed thing; we see it as a quality organization throughout the software development lifecycle and the stakeholders in it. So when we do these types of health checks or maturity assessments, we involve all the stakeholders
(33:15):
in the engineering department and understand how they tackle quality. Because, as you often say, Leon, quality is a mindset and it's everyone's responsibility. And when we assess, we assess with that mindset as well, to say: developers, what are you doing? Architects, development leads, and then even up to CIO level, around quality and testing.
(33:36):
So we consult all those stakeholders to make sure that when we put recommendations
out there, that we've considered everyone in the software development lifecycle and not only testing.
Our main aim is obviously maturing the test organization, but even in saying
test organization, it's that mindset of everyone in the software development
lifecycle is part of that test organization, not only testers.
(34:00):
Agreed. I think the other important point to make here is: any sort of assessment is only the starting point. It tells you where you are today and what you need to improve on, but the hard work starts after that; you actually need to start implementing those improvements. And I think that's often where a lot of it falls down. What you just said, yeah, and that becomes even more pertinent
(34:24):
because now you have something where you need to start getting other people in the business, potentially outside of testing, involved, and you kind of need to get them to go on this journey with you.
But that measurement that you do upfront, it's equally important to try and
implement what the recommendations are, get on that roadmap in terms of what needs to be improved.
(34:46):
But then at some point, you need to remeasure again to see, have you actually
made improvements? So what is that?
I guess it'll depend on the size of the organization and how many people are within the testing space, et cetera.
But generally, how often should you, I guess, re-undertake that assessment,
redo that assessment to see whether what you have tried to implement has actually,
(35:11):
I guess, worked or maybe see, well, what's next?
We've achieved everything. We've implemented everything that we set out.
Now, what is the next set of activities?
Yeah, that's a good one. Because I think when you see those levels laid out, it's: we're going to get to level five, like we're finished, we're done, we've got level five, that's it,
(35:33):
we can stop now. Whereas in reality it's a continuous process. Part of level five is optimizing: continuously reviewing the way that you do things, continuously trying to improve those things, all the time. It's not just a destination, it's an ongoing journey, like you said. And when do you kind of step back and go, we need to do a review? I think if you follow a formal framework, they'll
(35:54):
actually tell you when that accreditation will lapse, and I think they'll last potentially up to three years or maybe less than that, and that's when they would come in and do another assessment.
If you're going down an informal route, I think it's subjective and,
again, probably down to the client.
So if you're working with us, okay, you reviewed us after year one,
we've implemented stuff, we've worked for another year, can you come back in
(36:15):
year two and have a look at it with us?
And that would depend on the organisation, the client size, and things like that.
So I think for a formal assessment, it's quite structured as to when those reviews happen.
Informally, it'd be down to the client and the environment that we're working with.
Just to add to that, yeah, so it also depends on the size of the organization
and maybe the desired outputs.
So, for example, one of the things which you'll find in the TMMI model is
(36:38):
that the testing department should be properly structured, and even things like reporting: who does the test manager report to? And also whether the job descriptions are there, for the testers to know what they are doing. So those things are pertinent. So as part of maybe the KPIs, before the year
(36:59):
ends, you would want to review and assess whether they have been put in place. So like I said earlier, it's not only about the tools and technology but the people also. So if that aspect has to be reviewed or looked at, I would say maybe an assessment can be done twice a year, depending on the size of the organization. Or it can be narrowed down: maybe the follow-up one just goes specifically through the action items to
(37:21):
assess what has been actioned and what has not been. But ideally once a year, or in other cases twice, depending on action items.
This line of thought has made me think now. I've completed assessments, and there's one specific assessment, quite a few years ago, that I did at a client, and it was a laborious one. It was also TPI Next, and a lengthy report was written, many interviews,
(37:47):
and that was the only engagement with the client. We walked away, and a year later I spoke to somebody in the team and they said none of that was ever implemented. It was just a tick box, a formal document and a nice-to-have, and then they walked away.
So my question to you guys is, from what you've seen, who actually, at the client
(38:08):
side, implements it, if it's not, for example, us as a consulting company? Sometimes the client asks us to now actually do the implementation. Then it's not easy, but then it's an actual dedicated task, right? But sometimes clients just have the document. And from your experience, who's the best person to actually implement that?
Some companies have the luxury of a centralized TCOE, or like a head of testing or a test manager,
(38:30):
but sometimes people are purely in isolation, in teams or pods, you know, in agile teams. Who do you think would be the best in that kind of scenario, where teams sort of run as separate silos? Who actually should take the responsibility of driving that out within an organization? Yeah, so that's a
(38:51):
really good point. If you have got a centralized quality department that sits alongside or as part of QE, or you've got a head of QE or a head of test or anything like that, it obviously would sit with that person. If you don't have that, in my head it's the same thing as: how do you as a tester decide where you want to focus? Are you passionate about it? Then you're probably the person that should be doing it. I want to improve my accessibility testing
(39:14):
skills, so that's something I'm going to go and do myself. For the maturity, who owns it, who does the maybe informal audits? It's someone in the organization who's actually passionate about the company doing it. I think that's the right person, and it's just trying to infer who that person is.
Yeah, Stefan, it also depends on the structure of the organization.
But basically, the outcome of any assessment, be it TMMI or TPI, is very strategic.
(39:41):
For example, if there's a need for the organization, or the testing department, to be structured in a certain manner, that's a strategic decision which a QA lead or a QA manager might not make.
That's why I said it's a very strategic outcome. So implementation-wise, the executive level should oversee it, but the guys who do the actual work
(40:03):
may be the head of quality assurance, test managers, or whoever, in whatever structure the company is laid out.
You just said something, Matt, that I think is important, and it made me realize: with this whole strategic initiative that you spoke about, what I've seen is that where companies have a
(40:25):
problem and we do these assessments that Stefan alluded to, it's a very tactical thing.
I have a problem because of a symptom that I've seen; help me mature to make sure that that problem doesn't happen again.
So we see a lot of that. I've started a test organization.
I'm not sure it's working well for the following three reasons.
(40:46):
Come and look at my maturity to make sure that we are doing the right thing.
So it's a very tactical thing.
And because it's a tactical thing stemming from a problem, we see that a roadmap
of action items is owned by a very senior stakeholder, often a C-level stakeholder.
And they put those KPIs on the people responsible for testing.
(41:07):
They put it to them and then they measure that because they don't want that
problem to happen again.
The formal things like the TMMI is very often strategic, right?
You do this, you put it in report, they put the report somewhere and you present,
but no one has really faced a big issue that led to this point.
So they just want to understand their maturity. And when you just want to understand
(41:27):
something, you don't often act upon it as quickly as you would if there was a real problem.
So where we've seen, to answer your question there, Leon, but where we've seen
people implementing these things very quickly, it's when it was that we had
a problem. I think there's a problem with QA.
Come and look at our maturity and tell us what we need to do now to make sure
that problem doesn't happen again, whether it's people, process, or technology.
(41:50):
And that's where we see these things really being implemented. The other one is the more strategic vision. That takes longer.
And that is why we periodically re-look at these assessments to make sure that we are moving forward.
But when it's the more hybrid assessment, it's: we have a problem; we want to make sure that we track that so that problem doesn't happen again.
And that's a constant improvement cycle. Just to follow up on Stefan's question,
(42:14):
is it advisable to have external consultants to do the assessments or can it be done internally?
Because I think if we do it internally, we're going to be biased.
Good point, Mamatla. Definitely, if you're doing it internally, most likely you'll be marking your own homework.
(42:35):
And it's always good to just bring in an independent view. You know, just avoid those cases where you've got context. You're 'understanding', and I'm saying understanding in quotes: you're understanding the current situation, you understand that there are no resources, you understand the company is still small. You then tend to compromise and be soft on how you should, ideally, objectively rate or assess yourself.
(43:00):
So engaging external consultants is usually the best way to go. They don't have bias in terms of the current environment or the financial situation of the company. So if there's a need for somebody to do this role, that person should do it.
I remember when I was doing one assessment using the TMMI, the head of QA was
(43:21):
reporting to, in fact, the QA lead was reporting to the dev manager.
And there were so many conflicts, even when I was just interviewing them,
simply because at that point, that company could not afford to hire a QA manager.
So the QA lead was reporting to the dev, and there were always fights and conflicts.
So if you are internal and you're doing such an assessment, you understand that
(43:46):
the company doesn't have money, but if you're objectively looking at TMMI,
you'll be saying, okay, what does it say for A, B, C, D?
And you have your own objective assessment.
So independence is possibly the best way to go in my perspective.
Yeah, and I'll just finish that off with: yes, an independent assessment is always
(44:06):
going to be preferable, because you don't have that bias. But I would always try and partner that person up. So if we came into a client, it would be partnering up with someone within their QA function or test organisation, because that will help that independent assessment: what's the culture like, who are the people I need to speak to; you get an insight into the company that you won't get as an outsider looking in.
(44:28):
So we bring the impartiality; you bring the business culture and knowledge, and the know-how of all your systems of work and practice as well. So I think that's a really useful way of doing it. Great conversation. I think if I was to summarize the conversation, it sounds like there isn't necessarily a one-size-fits-all model; it's very dependent on, I guess, the size of the company,
(44:52):
whether you're at the start of the journey or whether you've already undertaken the journey; maybe you're a much bigger organization and you're fairly mature already. And depending on where you are, you might use a different model.
I think what I've heard is the hard work starts after the assessment is actually done.
And I was going to add that I think if you get the assessment done and just
(45:15):
go about your day job normally, none of that's going to improve.
You're not going to mature. You have to have someone who's tasked and made accountable
to actually ensure that those recommendations are implemented.
Otherwise, you're going to
see yourself in the exact same spot six months, 12 months down the line.
Unless you have someone doing that hard work, working with the test team,
(45:40):
working with the development team, working with whoever is required to improve that process,
I think it's often just a tick box activity and that's all it's going to be.
I think it was a really great conversation in terms of the different models
and the pros and the cons in terms of using a more structured approach like the TMMI and TPI.
(46:04):
But also there's a place for, I guess, custom models.
And maybe the custom models are more appropriate for the smaller companies, while TMMI and TPI are maybe more suited to the bigger companies.
But like I said, it's not necessarily always the case in the fact that there
(46:24):
might be a small organization really benefiting from understanding what that
TMMI roadmap looks like and what they need to work towards.
So thank you very much for participating. A really good conversation.
I was trying to keep this podcast to half an hour, but we've,
I think, run almost double that time.
(46:44):
But as long as it's a good conversation, then we're always willing to extend.
If you listen to this podcast and you have any specific themes or topics you'd like us to cover,
please either reach us on LinkedIn or put a comment on wherever you listen to the podcast.
(47:05):
We're always open to exploring different topics and themes.
And then, yeah, we very much thank you for listening.
So until next time, thank you very much. This has been an episode of Testing
Experts with Opinions, an inspired testing podcast.
Find us on LinkedIn, Twitter, Facebook, Instagram, YouTube, and TikTok,
(47:25):
where we're driving conversations.