
August 19, 2025 · 28 mins

"I'm not here to catch your mistakes, I'm trying to get customers to stop yelling at you." - Amanda Drws

That’s how Amanda Drews thinks about QA, and it’s a big reason her teams trust the process instead of fearing it.

In this episode, we dig into how Amanda built a QA approach that actually makes life easier for agents and more valuable for the whole company. She shares why she audits a tiny fraction of tickets, how she decides what’s worth flagging, and the surprising ways QA can uncover customer trends you’d never think to track.

What we cover:

  • Why “less QA” can lead to more insight
  • How to make QA a culture-builder instead of a compliance drill
  • A simple way to catch big issues without nitpicking typos
  • Using QA to surface trends before your dashboards do
  • Getting other teams to actually care about support insights

If you’ve ever thought QA was just about catching mistakes, Amanda’s going to change your mind.

Take the Next Step:

📬 Subscribe for weekly tactical tips → Get Weekly Tactical CX and Support Ops Tips

🔍 Follow Amanda Drews on LinkedIn for more insights → Amanda Drews on LinkedIn

🎙 Keep listening → More Episodes of Live Chat with Jen Weaver

🗣 Follow Jen for more CX conversations → Jen Weaver on LinkedIn

🤖 Sponsored by Supportman: https://supportman.io


Episode Time Stamps:

00:00 – Why QA Shouldn't Catch Mistakes

01:32 – Amanda’s Weekly QA and CSAT Ritual

04:15 – QA Is Not Stress Relief

07:48 – Audit Less Than 10% of Tickets

09:55 – QA Insights for Marketing Wins

13:12 – Training Doesn’t Stop After Onboarding

16:44 – How QA Builds Team Safety

20:08 – Weighted Scorecards, Not Gotchas

23:31 – Share QA Gold in Slack Channels


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Hey, I'm not here to catch your mistakes.
I'm trying to get customers to stop yelling at you. Like, that's my only goal: for you to have as smooth of a day as possible, with as few customers who are angry and stressed out. And like, this is how it gets done, because all of these things, all of these errors, you're not treating it as, well, the agent made a mistake. It's:

(00:21):
How do I set up systems that make it impossible for this to happen?

Speaker 3 (00:25):
Hey friends, welcome to Live Chat with Jen Weaver.
Today I'm chatting with Mandy Drews, an absolute expert in QA and a support leader who treats quality assurance less like a compliance drill and more like a calming force for anxious teams.
She argues that QA over 10% of tickets is really just a signal,

(00:49):
not a solution.
Instead, she focuses on amplifying the QA wins, what's discovered during your QA process, to many different teams at your company.
She walks us through her weighted, point-based rubric that dings a stray typo lightly but flags systemic errors loudly.

(01:11):
Even better, she shows how a lean QA loop can surface trends, like sudden spikes in certain types of questions (specifically, menopause-sleep connections; we'll get into that), before any dashboard knows how to track them.
So grab your headphones.
Let's learn how less QA can drive more insight, more safety

(01:34):
and more trust on your team.
Before we get started, though: our QA tool, Supportman, is what makes this podcast possible. So if you're listening to this podcast, head over to the YouTube link in the show notes to get a glimpse.
Supportman sends real-time QA from Intercom to Slack, with daily threads, weekly charts and done-for-you AI-powered

(02:00):
conversation evaluations.
It makes it so much easier to QA Intercom conversations right where your team is already spending their day: in Slack.
All right, on to today's episode.
We're here with Amanda Drews, who is close to my heart, a QA

(02:20):
expert, and I've been so inspired by Amanda's work.
We're doing a new feature, not really new anymore, about a week in the life, and I love that you can give us a perspective on a week in the life of a support leader in your previous role.

Speaker 1 (02:36):
Yeah, sure.
So at my last role I was the head of support for a consumer tech product, a wellness device.
It was definitely pretty chaotic. I think anybody who works at a startup knows just how different week to week can go.
But at the same time, you know, there were some things that were pretty regular.

(02:57):
And in a normal week, Monday was always kind of dedicated to what went wrong over the weekend and getting caught up.
Tuesday and Wednesday were typically the days where I was meeting with other departments, giving them updates about what we had found in the previous week and hearing their questions

(03:17):
for the next week, and also meeting with my team to just touch base and make sure that everybody knew what was coming down the pipeline.
And then Thursday was always my QA and CSAT overview day. I would meet with my quality assurance specialist, I would talk to all of my supervisors, and we would all sit

(03:38):
down and say, okay, how is QA going, how is CSAT going, what are we seeing in social media, and what questions did the other departments have that maybe we can focus on answering in the next week.

Speaker 3 (03:50):
I know we jumped the gun a little bit with the week in the life, but for folks who don't know you, could you share a little bit about your previous role?
You were the senior manager and head of support, and I think I remember you telling me you grew the team quite a bit during your time.

Speaker 1 (04:07):
Yeah, so at my most recent role, we had a crazy backlog of tickets.
We were building this department and also, kind of at the same time, finding out what even were the technical issues that our customers were running into, what were the issues that we were running into with e-commerce.

(04:27):
You know, this was June of 2020.
So USPS was losing like half of our international orders.
What do we do about that?

Speaker 3 (04:39):
So as you grew that team, you also worked on QA quite a bit and became pretty opinionated.
What do you think the most important part of a QA program is?

Speaker 1 (04:51):
My impression of QA, from conversations with other people, is that a lot of it is really just stress soothing, is what I call it.
People are anxious. It's an anxiety tax.

Speaker 3 (05:08):
An anxiety tax.
I love that.

Speaker 1 (05:10):
Yeah, and it's not necessarily about improving training or improving our messaging or anything like that.
It's really just a case of, like, people are worried that there's something going wrong in the background, and they want to soothe that anxiety, to make sure there's nothing going wrong.
But they're leaving it at that, because their

(05:35):
goal, the question that they're trying to answer, is: is there something going wrong in the background? Which

Speaker 3 (05:38):
they're anxious about. And so, like, if I just give myself the illusion that I'm doing due diligence here.

Speaker 1 (05:45):
Yeah, and then there's also not really, like, a set metric or understanding of, like, well, what does it mean if things are going wrong?
What's an acceptable error rate?
They don't go into QA trying to think about that or trying to answer that question, and so it never gets answered.

Speaker 3 (06:03):
You have to know what questions you want to answer as you go into it. Exactly, and they have to be pretty specific. Otherwise, again, you're not going to end up answering them.

Speaker 1 (06:14):
So when I go into QA, a lot of the things that I'm thinking about is, you know, how good is my training? And this new hire that is getting more QA than the rest, are they really internalizing and understanding the training, or do I need to make a change?

(06:34):
Do my agents understand that feature? Did I fully, adequately prep them for this

Speaker 3 (06:51):
Big feature launch or no? And if you're asking that question in QA, it's kind of a last catch of whether that worked, but really you should be answering that question in training.
Exactly, exactly.

Speaker 1 (07:06):
And so that's kind of that last calibration, to say, like, yes, I got it right.
And then the other part of that is you can't just say, like, yes or no. You have to say, okay, if the answer is no, what do I do next time, so that we are not catching it after we have already made this mistake?

Speaker 3 (07:22):
Right, yeah. If you just go, well, we caught this and the specialist just needs to do better, then that doesn't really solve the systemic problem.
Yeah, exactly.

Speaker 1 (07:34):
And additionally, you know, one of the things that I've been seeing a lot is companies talking about, like, being customer obsessed, and that's something that gets thrown around a lot.
But then I also have to ask, like, okay, well, how is your QA supporting that?
Because there are plenty of opportunities in QA to not only

(07:56):
be finding out, like, what is going wrong, but also what's going really well, and making sure that one customer who has this amazing experience, that's maybe not being captured in your data collection or not being captured in CSAT, is still being distributed to the rest of the team, to marketing, to product, to give a better understanding of, like, what makes a truly

(08:17):
exceptional experience for a customer.

Speaker 3 (08:20):
Yeah, and a lot of teams are doing a small number of QA conversations, a small percentage of their total conversations, and even if you do a large percentage, you're not guaranteed to catch every issue.

Speaker 2 (08:44):
Yeah, exactly. I definitely have the opinion that QA should be like 10% or less of your tickets.

Speaker 1 (08:46):
It should be really like a spot check of confirming anything new is working well.
It should be a little bit of a check to make sure that old ideas and old concepts haven't gotten rusty.
And there should be, you know, that last quarter or so of QA should be like, okay, what's working well, what happened in these really good CSATs, what happened in,

(09:09):
hopefully, I mean, I love when my agents flag conversations and say, hey, this went really well, like, I want to brag about it, put it in a brag box.

Speaker 3 (09:41):
So support QA has implications for every team. Can you break that down? Like, what are some of the teams that really should be utilizing QA data?

Speaker 1 (09:45):
If marketing is doing A-B testing, they know, based on click rates, based on open rates, which of their A or B worked better, but they don't necessarily know why, and that is data that support can provide.
They can say, this is what resonated.
But there will be times where there are concepts or things

(10:08):
that just really resonate with customers, where either it's not easy to collect automatically in any kind of sidebar, or we don't know that we should be collecting it.
And that's where QA comes in.
We were, you know, working for this wellness company. A big concept was sleep, so we did have some kind of general

(10:28):
data collection happening around: is this helping people sleep, yes or no? But it was QA that came to us and said, hey, I'm seeing a lot more questions around sleep for women in menopause. And, like, why would we collect that data? We wouldn't know to collect that data.
But QA was the one who said, I'm looking at all of these general questions, all of these general tickets, and here's a trend

(10:51):
that I'm seeing. And that means I can go to marketing and I can say, hey, now that I'm looking for this, menopause is being mentioned in like 10% of our tickets around sleep, just doing a word search for menopause. And then marketing can turn around and say, okay, let's have a webinar about this, let's write some articles about this. And then we ended up

(11:12):
having like a 500-person webinar talking exclusively about how to manage stress and sleep for women in menopause.
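For readers who want to try this, here's a minimal sketch of the kind of "word search" trend check Mandy describes, counting how often a keyword shows up across a batch of tickets. The helper function and the example tickets are hypothetical illustrations, not anything from her actual stack.

```python
# Minimal sketch: what share of tickets mention a given keyword?
# (Hypothetical helper; the example tickets are made up for illustration.)

def keyword_share(tickets: list[str], keyword: str) -> float:
    """Return the fraction of tickets that mention the keyword at least once."""
    if not tickets:
        return 0.0
    hits = sum(1 for text in tickets if keyword.lower() in text.lower())
    return hits / len(tickets)


if __name__ == "__main__":
    sleep_tickets = [
        "The device isn't tracking my sleep since the update.",
        "Since menopause started, my sleep has been terrible. Can this help?",
        "How do I export my sleep data?",
    ]
    share = keyword_share(sleep_tickets, "menopause")
    print(f"'menopause' appears in {share:.0%} of sleep-related tickets")
```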

Speaker 3 (11:19):
That blows my mind, because what you've done there is, marketing, maybe that's not even your ICP, right, that age group of women particularly, but marketing has learned from customer support interactions what actual customers are really interested in, which means potential customers would be interested in that. And that's something you can't really get from anywhere

(11:41):
else.

Speaker 1 (11:42):
Yeah, exactly, exactly, and they weren't our ICP. They're not even our main, I guess, like, purchasing demographic, but they are a main demographic of people reaching out to support.
And so, you know, here's this whole segment of our market that was not being utilized, and whose questions were not being answered, needs not being met. And yeah, it was great. It was super interesting too.

Speaker 3 (12:06):
That's a great example.
I know we also talked about, like, the support training team a little bit and how they can benefit from QA.
I think a lot of times, people go from training to graduating their onboarding, and then that data

(12:27):
is not returned to the training team. But what kind of a program would you recommend for that, for getting that information back to the training team to iterate?

Speaker 1 (12:38):
Yeah, so my trainer was part of the QA meetings.
You know, it's not just about experience or practicality, it's also getting that data and getting that feedback.
And training doesn't ever stop. It's really important to make sure that, you know, when you're

(12:58):
going through your training, hopefully your training is good: you're looking at example tickets, you're looking at example questions.
That doesn't need to stop when training ends.
It's really helpful for agents to see how other agents are answering those questions, how they're handling de-escalation, how they're handling issues where they don't know how to

(13:19):
answer it.
And when you kind of set up this rolling system, where your trainer gets to see how these new hires are doing in QA and CSAT and rolls that into training, you can start doing interesting things, like these listening sessions.
And every other week we had a CS-only listening session, and

(13:41):
that was for calls that went really well.
And again, QA would be supplying reasons, like very specific reasons of, like, what exactly went really well and what exactly did this agent pick up on that made that experience better.
And it also gave agents an opportunity.

(14:03):
You know, we set out bonuses. We're like, hey, did you have a call that went a little wonky and you don't know why? Like, we'll give you a bonus if you share it with the team and you describe what you're not sure about.

Speaker 3 (14:13):
That's great. I love that.
It incentivizes something that is usually disincentivized, which is being vulnerable in public about something I'm not sure I did well.

Speaker 1 (14:23):
Exactly.
And you know, I always say, like, low stakes, high quality. Because when you say things like that, when you're like, this went wrong, and the stakes are really low, then you can bring the quality of everybody up.
I know one of the things I was always trying to tell my agents is, like, we don't all need to have the same bad phone call, we

(14:43):
don't all need to make the same mistake. We could just have, like, one person do it once and we could learn from that, and that would be fine.
And so we had these listening sessions, and, you know, even the ones where the conversations went well, you open the floor and you say, hey, have you had a call that went like this? What worked well for you?

(15:04):
This worked really well in this situation. When would it not work? Like, what are other times?
And then once a month we also did whole-company listening sessions. We gave bonuses for those too, because the agents were horrified. They were like, what are you talking about? I'm going to listen to my call? And we're like, this is a call where we think you did so great, though. We don't care.

Speaker 3 (15:25):
We hate this.
Yeah, I totally understand that. But on the other hand, it's the customer information that the rest of the company really needs. So often in support, we have this real closeness to the customer, and it's hard to get that across to other teams.

Speaker 1 (15:43):
Yeah, and there's certainly something about, you know, if you can tell people, like, oh, this problem is a real pain point for us. But then if you hear a customer on the phone with their voice breaking because they are so stressed out, like, it's a different experience.
And when you see, or when you hear, an agent picking up on

(16:06):
these clues and picking up on these, like, undertones, it also does a lot for really hammering home the level of skill, experience and professionalism that customer support agents have to bring.
I think it's really considered an entry-level job. It can be really hard to convince other departments that

(16:30):
this is an exceptionally skilled role and a highly technical role. And so having that opportunity for these agents to really show off the level of things that they are picking up on, that other departments don't even hear in that first listen to the call, I think that's also just a fabulous opportunity for support to really show its value.

Speaker 3 (16:51):
Yeah, and we think of it as just another day, but often other teams are blown away by the ability we have to deal with issues. It's a complex job.
So, with QA and data collection, are there interesting trends or feedback that you want to kind of talk through?

Speaker 1 (17:11):
I think that ops, you know, that kind of systems ops, runs into the same issue as QA, of, okay, but what are you doing with that data?
Are you just collecting it because you know it's there, and you feel like you should be collecting it, and you have all this data around your customer to feel good about it? But, like, what are you doing with it?

(17:32):
Are you customer obsessed, or are you customer good enough?

Speaker 3 (17:36):
A lot of teams would be grateful to shoot for customer good enough, unfortunately. But a lot of teams talk about being customer obsessed.
QA is not just the data, and it's not even just those projects where you prove value to marketing, for example.

(17:56):
It's also a culture tool.
So when you have a culture of ongoing learning as a team, instead of just coaching mistakes, right, how does, I guess, how does QA support that? How can you structure a QA program so that it does that?

Speaker 1 (18:15):
One of the other pieces of commentary that I've seen around QA, and around QA and AI in particular, is that using AI means QA is no longer a popularity contest, and that immediately put my hackles up, because I was like, absolutely not.
You have learned something very valuable there, which is that

(18:37):
you have a department problem, not a QA or a tool problem, because certainly QA should not be a popularity contest.
If it is, I want to know. I don't want AI to fix it.
It's the idea of, like, well, do you need to fix QA, or do you need to fix your department or your customer journey as a whole?

(18:58):
These are very different things. You know, very band-aid, treat the symptoms, versus treat the root cause.
And QA can be this system that is set up as support, particularly if you do kind of go into it with the assumption

(19:19):
that your agents are professionals, that they are the experts in their field, and your QA is there not to catch their mistakes but to align their expertise with your training, and to make sure that when things are going particularly well, when an agent does go a little bit off script and a customer

(19:40):
loves it, that everybody does find that out, that that's celebrated.
I mean, like I said, when there is something behind these hard numbers and this data that really points to the humanity and the soft skills that come in with support, that are so valuable, like, that's something that QA can be doing. And then

(20:04):
it turns into this much more open and supportive culture where agents are less concerned about making errors.
They're a lot more open when the idea is, like, hey, I'm not here to catch your mistakes, I'm trying to get customers to stop yelling at you. Like, that's my only goal: for you to have as

(20:27):
smooth of a day as possible, with as few customers who are angry and stressed out. And like, this is how it gets done, because all of these things, all of these errors, you're not treating it as, well, the agent made a mistake. It's: how do I set up systems that make it impossible for this to happen?

Speaker 3 (20:56):
So QA really needs to ally itself with the customer support reps, so that it's on their side, not here to nitpick at them.
Um, I like what you've shared with me about your rubric, a very beautiful spreadsheet that is amazing, that I was just like, okay, can I have that?
But one thing about it is, sometimes what can make QA feel adversarial is, if I make one typo, then I get dinged for that,

(21:20):
right? And on the reverse, sometimes it can miss serious errors that do happen only one time, but it was about billing, or it was really difficult.
And so what's your solution for keeping QA on our side, right, it's here for me as a specialist, but making sure it

(21:41):
finds the right problems and not just randomly some things?

Speaker 1 (21:47):
So I have looked at some QA tools, and my process is certainly that of a startup with no money. So we had Google Docs, and that was what we could afford when we were first building this QA template.

Speaker 3 (22:03):
I think a lot of teams are in that position. For sure, QA is scrappy more often than not for small teams, I think.

Speaker 1 (22:10):
Yes, we built this template. Instead of, you know, starting at 100% and ticking down, what we did was this weighted QA with points that added up, and we made it pretty granular.
So there are things that, you know, like you said, a typo, I don't care about

(22:30):
a typo. Like, one typo is not a big deal unless it's coming up in every single ticket you QA. You know, like I said, some typos are a one-point error. It's not a big deal unless it's trending across a lot of tickets.
Other things, like accidentally leaving in a macro insert that you forgot to edit out,

(22:51):
maybe you weight that more as a four or five. I have done that, and it's so painful. Who among us has not done this? I always tell my agents, like, if you haven't done it, don't worry, you will soon. Not a big deal, unless you're doing it all the time.
And then there might also be things that you would consider either auto-fails or much more significant errors that you want

(23:15):
to weigh a little heavier.
And it also kind of gives you the option to decide, okay, what's your hard cutoff of how many errors can occur in a ticket before you get properly concerned?
And if you're also tracking, are they happening multiple times over the course of the month, are they happening in any particular type of ticket, then you can get this really good understanding of exactly

(23:41):
how your team is functioning.
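For readers who want to see the shape of this, here's a minimal sketch of a weighted, points-based rubric along the lines Mandy describes. The issue categories, point values, and cutoff below are hypothetical placeholders, not her actual spreadsheet.

```python
# Minimal sketch of a weighted, points-based QA rubric (hypothetical values).
# Minor slips cost a point or two, systemic errors weigh more, and anything
# past the cutoff gets flagged for follow-up rather than "failed".

from collections import Counter

ISSUE_WEIGHTS = {
    "typo": 1,                    # cosmetic; only matters if it trends
    "unedited_macro_insert": 4,   # forgot to edit a macro placeholder out
    "missed_escalation": 8,
    "wrong_billing_info": 10,     # effectively an auto-fail
}

REVIEW_CUTOFF = 8  # hypothetical: total points that warrant a follow-up conversation


def score_ticket(issues: list[str]) -> tuple[int, bool]:
    """Sum the weighted points for one reviewed ticket and flag it if it crosses the cutoff."""
    total = sum(ISSUE_WEIGHTS.get(issue, 1) for issue in issues)
    return total, total >= REVIEW_CUTOFF


def trending_issues(reviewed_tickets: list[list[str]], min_count: int = 3) -> dict[str, int]:
    """Surface issues that repeat across the month, even if each instance is minor."""
    counts = Counter(issue for issues in reviewed_tickets for issue in issues)
    return {issue: n for issue, n in counts.items() if n >= min_count}


if __name__ == "__main__":
    month = [["typo"], ["typo", "unedited_macro_insert"], ["typo"], ["wrong_billing_info"]]
    for issues in month:
        print(issues, score_ticket(issues))
    print("Trending:", trending_issues(month))
```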

Speaker 3 (23:43):
For someone who is listening, who's on a team that maybe does have a QA program, but it's really just scorecards and compliance, what would you say is one thing that they can do to really make QA have more impact than just, we're testing out whether a rep is doing a good job or not?

Speaker 1 (24:01):
I think that, particularly with, like, being online and the fact that we are all on Slack all the time, or whatever you use; we used Slack, so that's my example. If you want to show value, if you want

(24:22):
to show value, don't wait to be asked.
And I would say, don't necessarily put it in an email, but put it in a Slack message. You know, maybe if you're part of, like, the marketing general Slack channel or something like that, I mean, hey, find those channels, if they're public, join them, and just say, hey, here is something that I found, here is something that might be useful for you.
And if you are consistently doing that, if you are

(24:44):
consistently paying attention to the questions that they're asking in their meetings, in their Slack channels, and being able to come back the next week and say, our QA found this answer, even if they weren't asking you, right, you have to just kind of announce it.
And you know, that's one of the things that we did with these

(25:04):
whole-company listening sessions. Like, nobody asked for it, we were just, we think that this would be useful for you. And sometimes, like our first meeting, I think, like, one person from another department showed up, but we recorded it, we went through it.
We posted a list of, like, some of the findings from that call

(25:27):
and said, here's what we found, here's some things that might be useful, here's the recording. And the next month, a lot more people showed up.

(25:58):
I love that. That's a trap that, you know, I can raise my hand and say I have fallen into as well, and in the process of doing data analytics and QA, got humbled.
But I think that's really what I wish that they understood: investment in QA, and well-done QA, will save you so much time

(26:22):
and money in the long run.
You know, maybe this month it does not impact your profits, maybe this quarter it's net neutral, but as time goes on, if you really do want to be customer obsessed, you have to invest in QA, you have to invest in data analytics.
Yeah, it's necessary, it truly is, and it will, assuming you're

(26:45):
doing it right, pay you back tenfold.

Speaker 3 (26:50):
I can't thank you enough for being on the podcast.
You're a delight to chat with.

Speaker 1 (26:56):
Thank you, Jen.
You're such a great interviewer. I will come back anytime you want me. Thanks.

Speaker 3 (27:01):
I appreciate that. Thanks for listening.
Before you head out, here's a quick recap of Mandy's proven QA playbook, because small, repeatable moves will help your team more than giant wishlist items.
So first, you want to audit under 10% of your tickets. Keep reviews below a tenth of total tickets so QA stays a

(27:24):
spotlight, not a bottleneck. This is what Mandy recommends for teams who are doing manual QA.
Two, aim at what's new. Point a spotlight at fresh launches or training rollouts so you can make sure you catch problems as they crop up.
Three, weight what matters. A granular, points-based rubric lets a missed macro or accuracy

(27:50):
problems outweigh something like a stray typo.
Four, automate the basics. Dashboards can track first response and other easy metrics, so humans can hunt for other insights.
Five, broadcast what's gold. Put findings back to product, marketing and training so

(28:13):
that wins and fixes can scale.
Six, recalibrate your QA often. When your patterns shift, new recurring typos or new stressors for customers, tweak your systems before tickets start to pile up.
If you put just those six habits into action, you'll trade catching mistakes for empowering your specialists to

(28:35):
solve problems ahead of time.
I hope that's super helpful. I'll see you in the next Live Chat episode.