
September 16, 2025 31 mins

This podcast is sponsored by Supportman, which connects Intercom to Slack and uses AI to give agents feedback and surface problems in real time.

“Consistent quality review, I think, is better than no quality review.”

That’s the principle that guided Stacy Justino, Product Support Manager at PetDesk, when she launched a brand-new QA program for veterinary support in just weeks. Drawing from her experience at companies like Wistia and Loom, Stacy created a lightweight system her team actually enjoys and that she can sustain over time.

In this episode, we cover:

  1. Why ten interactions and two reviews each month provide just the right amount of feedback without overwhelming the team.
  2. How to select tickets that are random, recent, and representative so reviews reflect real work.
  3. The four pillars of Stacy’s rubric—accuracy, completeness, customer excellence, and empathy—and why they matter.
  4. How private coaching helps agents grow while public kudos reinforce team culture.
  5. When and how to scale QA up or down depending on changes in performance, products, or processes.

If QA has ever felt overwhelming, Stacy’s fearless ten-ticket method shows how to keep things simple, fair, and effective.

For more resources related to today’s episode:

📩 Get weekly tactical CX and Support Ops tips → https://live-chat-with-jen.beehiiv.com/

▶ Keep listening → https://www.buzzsprout.com/2433498

💼 Connect with your host, Jen Weaver, on LinkedIn

🤝 Connect with Stacy Justino on LinkedIn

🔧 Learn more about our sponsor Supportman → https://supportman.io

Chapters:
0:00 – Intro: Consistent quality review in support  
2:27 – Meet Stacy Justino: Product Support Manager at PetDesk  
5:05 – A week in the life of a support leader  
8:41 – Ten interactions and two reviews each month  
13:41 – Random, recent, and representative ticket picks  
19:00 – Building a concise rubric to define quality  
22:11 – Private coaching paired with public kudos  
24:12 – QA only what you need to review  
30:36 – Key takeaways for support leaders


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Consistent quality review, I think, is better than
no quality review.
Since we are a small group of people who are carving out time to do this consistently, we decided to go with a cadence of 10 support interactions a month.

Speaker 2 (00:18):
Hey friends, welcome back to Live Chat with Jen Weaver. It's not often that I get to do a podcast episode with somebody who I admire as a professional in our field, but who also has been a great friend and a mentor to me, so I'm super excited to talk today with Stacy Justino, who's the product support manager at PetDesk.

(00:39):
Stacy arrived last fall and spun up a brand new quality program for their veterinary support in just a few weeks. Now you know we're all about QA here at Supportman. That's what we do, so I'm also super excited to unpack her topic: her 10-ticket cadence, the four-point rubric her team

(01:00):
actually uses, and the simple Google Sheets trick that keeps her QA reviews fair and random. Stacy spun up this program to be lightweight so that she could actually keep it going over time with minimal overhead. I think it will be really helpful for support leaders who don't maybe have a ton of resources for building a QA

(01:22):
program. Let's be real, that's probably most of us. Before we get started, though, our QA tool, Supportman, is what makes this podcast possible, so if you're listening to this podcast, head over to the YouTube link in the show notes to get a glimpse. Supportman sends real-time QA from Intercom to Slack with

(01:43):
daily threads, weekly charts and done-for-you AI-powered conversation evaluations. It makes it so much easier to QA Intercom conversations right where your team is already spending their day: in Slack. All right, on to today's episode.

Speaker 1 (02:26):
For a week at work, first there are the things that happen multiple times a day. Checking on the chat queue, checking out our ticket queues: is there any ticket that's been sitting there at the top for a while? Let's make sure that somebody gets on that. Oh, we have six people waiting in chat, we have wait times of about three minutes. Let's post in Slack to say, hey, we need anybody who's not on

(02:50):
break, at lunch or in a meeting to grab a chat. So that's something that's happening, you know, multiple times throughout the day. Beginning of the week, there's, you know, some weekly reporting that needs to get done, so doing that on Monday.

(03:10):
In terms of other stuff, you know, there's projects to be done. We call our quarterly projects, or OKRs, rocks. So I make sure that I have time, and I try to book at least a few hours in my calendar every week, to focus on those things.

Speaker 2 (03:26):
And where do you keep that? What is that? Do you do that like on paper, or? In a notebook. Nice, same girl.

Speaker 1 (03:37):
And then I have another colored pen where I check it, so I can check off the boxes.

Speaker 2 (03:40):
Of course, yeah, colored pens are where it's at. So you're talking about planning out your week. I'm going to try that out, that simple little thing: just writing down what needs to be done today and then this week, with due dates, sort of like for weekly reporting that needs to get done.

Speaker 1 (04:07):
And then I have one-on-ones with each of my direct reports, 30-minute one-on-ones each week, so I have those sort of sprinkled throughout. I try to put them in blocks of two or three, so not too long of blocks, but so that I'm in that mindset: okay, this is my one-on-one time with the people who report to me. So I really aim to be present and not be distracted during
(04:31):
those times. And then, of course, we have our team meetings on Tuesdays and Thursdays. I also have a weekly meeting with our senior product support specialist and our product solutions advisor on Tuesdays, as well as a Monday and Wednesday meeting with the support leadership team. So it's with our boss, the senior director of global support, and the support managers of all of the PetDesk

(04:53):
support teams.

Speaker 2 (04:54):
And the other support teams serve other parts of the business?

Speaker 1 (04:59):
Other tools, yes. We collaborate a bunch because of the way the products interact with each other. We now have a weekly PetDesk Communications support and customer success leadership meeting, which has been really great.

Speaker 2 (05:14):
Today, we're doing something a little bit different. We're going to take a deep dive into your QA process for your support team. Will you just kind of get me started on where you work and what the context for the conversation is?

Speaker 1 (05:33):
Yeah, so I am a product support manager at PetDesk. PetDesk is a company that has multiple products, but I specifically support our PetDesk Communications product, which is basically kind of like a CRM for veterinary providers that integrates with their practice management system. And we also have an accompanying app for pet parents. So a veterinary provider, our client, has booked

(05:56):
appointments, and then, if they have a PetDesk Communications subscription, we can handle appointment reminders, health service reminders (oh, Pixie is overdue for a rabies vaccine), and we can send those kinds of reminders. And then the pet parent can see all their appointments in the

(06:16):
app, and see a lot of other information. If the clinic has it set up, they can earn loyalty points or reorder prescriptions for their pet. So that's kind of the scope of what my team does. And we hadn't had a quality program before I got here, and I started at PetDesk

(06:37):
at the end of October 2024. And so we spun one up.
And so we spun one up.

Speaker 2 (06:45):
So that's exciting. But putting that aside, the quality program that you created: can you tell me a little bit more about how you got started? It must have been that you saw the absence of a quality program and you thought, this is an initiative I can run. Did I get that right?

Speaker 1 (07:01):
Yes.

Speaker 2 (07:01):
Yes.

Speaker 1 (07:02):
So we already had some loose sort of quality standards. One of our senior product support specialists, those are the folks who do training and onboarding of new people, David, he would always go over quality: what does quality mean when it comes to PetDesk support, and how do you do that? So we already had a sort of basis for that, and I was lucky

(07:23):
to come into an organization where quality was already sort of emphasized. You know, it wasn't straight up, answer as many tickets as you can, so that foundation was already there. I wasn't starting from a point of, oh, our quality needs tons and tons of work, we need to put this in place because where we're at versus
(07:45):
where we need to be is a prettybig gap.
So that was a good startingplace and I think the team, like
all of the specialists on theteam, were eager for more
regular feedback.
Right, because basically thefeedback mechanisms up until we
launched the quality program wasoh, ticket got escalated by a

(08:07):
CSM or negative CSAT, right.
So or in the cases where youknow specialists on the team in
their one-on-ones with theirmanager would be like, hey, I
think I could have done betteron this.
Can we talk about this ticket?
So a little haphazard and notconsistent and very reactive.

Speaker 2 (08:24):
Okay, cool. Yeah, and so you saw a big opportunity to systematize that. Where did you start?
Speaker 1 (08:30):
Well, I drew on a lot of my previous experiences.
So when I was at Big Fish Games, I ran the quality team within the customer support team. (So you have some experience with that. Yes, yes.) And also at Wistia, I spun up a quality program there. So I was able to take a lot of what I'd found success with in

(08:52):
the past, and we decided to start with the quality guidelines that were already in place for the team. So what did we cover in training, what did we have in our one Guru card around quality, rather than totally rework it. But then I incorporated some of the things that I found pretty

(09:14):
important and foundational in a quality rubric. And then, since the folks doing the reviews are the two managers and three senior product support specialists who, spoiler alert, all have a lot of responsibilities already on our plates, I really wanted to focus on

(09:36):
something we could do and deliver on consistently.
I know a lot of organizations, especially ones with a dedicated quality team, or that are using software that enables you to do

(09:57):
things a little more easily. I know that those orgs tend to do like a percentage of total interactions. Since we are a small group of people who are carving out time to do this consistently, we decided to go with a cadence of 10 support interactions a month: two QA reviews each month, five tickets in the first

(10:18):
review, five tickets in the second review, so that people get regular feedback, but it's not too cumbersome for the folks doing the reviews.

Speaker 2 (10:31):
Yeah, that makes total sense, because you're trying to initiate this program and get buy-in, and you don't want people to, you know, feel overwhelmed and feel like it's an addition to their workflow. But that brings me to a burning question a lot of people have about QA: how do you choose which conversations? Is it the ones with negative CSAT, or is it random?

Speaker 1 (11:00):
It should be random, but with some parameters to make sure that you are selecting tickets that are representative of the work that they do, right? So in our case we have live channels: we have folks who do phone support and tickets, and folks that do live chat support

(11:21):
and tickets, and the phone volume and the chat volume are a greater percentage than the ticket volume they handle. So for folks who are working live channels, over the course of the month for the first QA we will pick three chats or phone calls, depending on which channel they are on,

(11:42):
and then two email tickets. So that's kind of how we make sure that it's representative of their work over the course of the month. And then, in terms of other things we look at: recency is pretty important, right. I always want the person whose ticket is being reviewed to be

(12:03):
able to remember that ticket, right? Of course, yeah, that makes sense. And so if you pick a ticket they handled at the beginning of the month, and this is their second QA review, then it's not as impactful, right? So we're using Zendesk, and I've created a report in Zendesk. Unfortunately, I have to create two reports, one for chats and

(12:24):
one for tickets, because of how the data sets work. But we look at tickets that were solved within the past seven days, and exclude some ticket types. Like, we have a not-for-support ticket type where it's meant for another team, or we're just escalating to the CSM. I haven't gone too far into adding more filters, but

(12:52):
that is sort of how we select them. And then I make sure that the ticket created date is the day after they were emailed their feedback, so that we're reviewing tickets that they interacted with after their last round of feedback.
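
A minimal sketch of that filter-then-pick logic, in Python. This is hypothetical, not Stacy's actual Zendesk reports: the field names and the excluded ticket types are illustrative stand-ins, while the seven-day window, the post-feedback cutoff, and the three-live-plus-two-email split come from her description above.

```python
import random
from datetime import datetime, timedelta

# Illustrative exclusions from the episode; real definitions would
# live in the Zendesk reports, not in code.
EXCLUDED_TYPES = {"not_for_support", "csm_escalation"}

def eligible(ticket, last_feedback_date, now=None):
    """Recent, in scope, and created after the agent's last QA feedback."""
    now = now or datetime.now()
    return (
        ticket["solved_at"] >= now - timedelta(days=7)   # solved in past 7 days
        and ticket["type"] not in EXCLUDED_TYPES         # skip out-of-scope types
        and ticket["created_at"] > last_feedback_date    # after last feedback email
    )

def pick_for_review(tickets, last_feedback_date, live_channel="chat"):
    """Three live interactions plus two email tickets, mirroring channel mix."""
    pool = [t for t in tickets if eligible(t, last_feedback_date)]
    random.shuffle(pool)  # randomize, then take top-down
    live = [t for t in pool if t["channel"] == live_channel][:3]
    email = [t for t in pool if t["channel"] == "email"][:2]
    return live + email
```

For a phone-support specialist, the same call with live_channel="phone" keeps the mix representative of that person's channels.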

Speaker 2 (13:12):
Yeah, so it's not reaching back into the past. So you have these sets of time that you're QAing. That's really interesting, and the filters you're using are essentially: it needs to be a recent conversation, and it needs to be representative of wherever they're working. So email and chat, or email and phone. That makes total sense. And so, just to get a handle on what you're working with, what's

(13:36):
the typical volume of tickets or total conversations that your team is handling, on an average month?

Speaker 1 (13:46):
A single support specialist on our team is probably doing between 350 and 400. So it's a pretty small percentage that we're looking at over the course of the month. But consistent quality review, I think, is better than no quality review.

Speaker 2 (14:02):
Yeah, absolutely. And it makes sense to me that you're starting this program when you don't have a QA team, right, and so you're being very mindful of the workload. You said it's team leads and managers who are doing QA?

Speaker 1 (14:18):
Senior product support specialists; they're kind of like a tier-two type role.

Speaker 2 (14:22):
That makes sense. They've been there a while; they know what a good ticket looks like. Exactly. So, as far as the people who are doing QA, it's the senior specialists. What about who receives QA? Is it everyone, or is it new people?

Speaker 1 (14:35):
All specialists. We have level one, level two, and then we have our senior product support specialists. So all level one and level two folks get QA. The way we've mapped it out is, you don't get QA during your training and onboarding. And then we recently decided that for that first month post-onboarding, the QA will be done by the senior

(14:57):
product support specialists, because those are the folks that they've been interacting with the most, and they'll also deliver that feedback, right. And then after that first month, they're fully trained, and the next month it depends on how we've assigned who's going to QA who for the month. But then the manager will be the one who has that feedback

(15:20):
session going forward, because then they'll already have a couple of one-on-ones under their belt with their manager and have built some rapport before the manager takes over that QA feedback.

Speaker 2 (15:27):
Yeah, so I'm getting a picture of a really gentle process for everyone. Right, it's gentle for the folks who have onboarded; it eases them into it, and then it allows them some time with their manager to gain rapport and get to know them before they start doing QA with their manager. Yeah, that makes total sense. And is it the senior specialist who does training?

(15:51):
Like, are they doing the onboarding of new hires? So it's an extension of training?

Speaker 1 (15:53):
Exactly, exactly. That first month, getting folded into the QA process is an extension of the training program.

Speaker 2 (16:10):
If I'm a customer support leader who's looking at a team that doesn't have a quality program, what would you say is the biggest mistake I'm likely to make that you could maybe help me avoid?

Speaker 1 (16:18):
Well, I think the first thing would be focusing on negative CSATs, because I think that's what people think they should do, but that's not really representative of the majority of their work. I think it's still important to review those, and I was actually just having a conversation earlier today with someone about this, saying, hey, if you want to review those against

(16:39):
your quality rubric, that's perfectly good and okay. But if their quality score is going to be part of their performance guidelines and performance expectations, it is not fair and equitable to have your ticket selection criteria be all negative CSATs plus a handful of random ones, because that's

(17:01):
going to tank their score and is, like I said, not representative. So to me, one of the keys to success is how you select the tickets.
Google Sheets has a really cool feature where you can randomize a range. So you just copy all of the rows that you want to do this to,

(17:37):
and you right-click. At the very bottom there's, like, additional options, I think it's something like that, and it says Randomize range. So I just filter by the one specialist, randomize the range, and then click through the tickets top-down, because they're in a random order. I do sort of look into them. Yeah, I figured that out like two months ago when I was first

(17:59):
doing this. I was like, there's got to be a way to do this. I was trying to use the RAND function, but that's annoying because it refreshes, and so you have to use the RAND function, copy and paste the values, and then sort. So this was much easier to do.

Speaker 2 (18:14):
Yeah, that sounds great. You copy it into a new sheet so that you're working with separate data.

Speaker 1 (18:19):
Yes, yeah. And so what I also do is I'll click through those tickets, because, because of certain situations with our software, we get a lot of tickets from providers who are reaching out to support to say, hey, can you update this client's phone number, can you update this client's name, and that's a big percentage.

(18:40):
That is the one case where the tickets I'm selecting aren't maybe quite representative: something like 30 percent of our tickets are these account modification tickets, but I don't want 30% of the tickets I QA to be those, because those are very much standard, use a macro, update the thing in our tools. So I try to make sure that there's a balance, so it's not all of one ticket type, because that's not

(19:05):
helpful to them. So still random, in terms of still going top-down. But what I'm working through now, now that I've gone through this for a couple of months, is adding the support interaction selection criteria to our Guru card, because I think one of the things, to your question, is transparency, being really transparent about

(19:26):
the process. So the cadence, the support interaction selection criteria, all that stuff is in our Guru card about our quality program that everybody has access to.
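
The point of Sheets' Randomize range is that it shuffles once, which is exactly why it beats a volatile =RAND() helper column. A rough script equivalent of that shuffle-then-walk-top-down approach, including the type balancing Stacy describes, might look like this; the cap of two per ticket type is an illustrative assumption, not a number from the episode.

```python
import random
from collections import Counter

def select_balanced(rows, n=5, max_per_type=2):
    """Shuffle once (like Sheets' Randomize range), then walk top-down,
    capping any single ticket type so that, e.g., account-modification
    tickets can't dominate the sample."""
    shuffled = rows[:]        # copy, so the original sheet order is untouched
    random.shuffle(shuffled)  # one-time shuffle; =RAND() would keep recalculating
    picked, seen = [], Counter()
    for row in shuffled:
        if seen[row["type"]] < max_per_type:
            picked.append(row)
            seen[row["type"]] += 1
        if len(picked) == n:
            break
    return picked
```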

Speaker 2 (19:37):
So, speaking of transparency, do you feel like it's always a good idea to do QA more privately? If I'm giving a specialist feedback, what are the pros and cons to doing that in a DM or in a meeting, versus maybe sometimes shouting out that the specialist has done a great

(19:59):
job and putting that in a team channel? Do you have thoughts about that?

Speaker 1 (20:03):
I think there are situations where both of those are probably good mechanisms. The one-on-one session just gives them this face-to-face time to talk through it, and one of the things that I think is really important is you need to give mechanisms, and your team really needs to feel that if they don't agree with something, they can bring it up, right,

(20:26):
and you can establish that sort of rapport in a one-on-one session. But I think, as a team culture, shouting out when people do a really great job is something that should be happening, right. So we have two team huddles every week, and in the

(20:46):
first one of the week I always do a CSAT shout-out. And one of the great things about having a quality program when you do that is you can call out things that are in your quality rubric about what that person did really well. So that's the other thing I think makes it successful: everybody needs to really understand and buy into your

(21:09):
rubric.

Speaker 2 (21:10):
I love that you mentioned this rubric. I would like to dive into what your rubric is and how you determined it. Is that something that came from your previous work, or did you kind of tailor it to your new company?

Speaker 1 (21:25):
Both. So David, the senior support specialist I mentioned, he had been working on a rubric with the previous manager, so I took some of that, because it was based on the quality training that new hires on our team go through.
And then I incorporated, like I said, some of the elements of

(21:48):
the criteria that I've used in the past, that I found pretty foundational, and put those together to come up with what our PetDesk Communications support team quality criteria are. So, for example, our four criteria for the PetDesk Communications team are

(22:10):
accuracy, completeness, customer excellence, and empathy and tone. We're dealing with veterinary providers; it's a very high-stress job, it's very busy, right. So empathy and tone might not be a full criterion in some places, but for us it was really important. So there are some things that are sort of universal in terms of a quality support experience.

(22:30):
But then I would always recommend to someone that one of your criteria should be really rooted in your company values or what is specific to your company. So one of the elements of customer excellence is guidance, right: controlling the communication, guiding to the best options, an effective solution, providing additional

(22:53):
information to the customer to address their next question or issue based upon the original reason for writing in. That last one is actually one that I carried over from Big Fish Games and Wistia, but we put it under guidance, like we need to serve as guides sometimes for folks, whereas in other support orgs that I've led that hasn't been as important.

Speaker 2 (23:18):
That makes perfect sense. So it's almost the customer profile that guides that quality rubric.

Speaker 1 (23:24):
Correct.

Speaker 2 (23:26):
Interesting. I just wonder if you have any other advice for customer support leaders who are working on quality programs.

Speaker 1 (23:32):
I think one piece of advice, if you're incorporating the quality score into your monthly or quarterly performance for individuals on your team, is making sure that whatever benchmark you've set for your internal quality score, and what you're looking for in your

(23:54):
rubric, is complementary to your productivity expectations or your workflows or your business needs, right.
So I can give a real-life example of where this happened when I was at Big Fish Games. At Big Fish Games we had monthly performance, and

(24:15):
we had sort of meets expectations, exceeds expectations, and needs improvement, and we had monthly bonuses for our support team. So, to earn a level-one bonus, you had to meet expectations in productivity, quality and CSAT. And then we had a maximum threshold for average

(24:39):
number of replies per ticket, because if somebody is, like, really productive but on average it's taking three replies to solve a ticket, we should not be giving a bonus for that behavior. So that was sort of a threshold. And we had level-one and level-two criteria for

(24:59):
quality, right. So if you got an 85 to 90 QA score, you were meeting expectations; if you got 90 or above, you were exceeding expectations. And then we also had minimum standards which, if you missed one of those minimum standards, you were automatically at needs improvement for quality. And so we basically created this system, with the best of

(25:23):
intentions, where somebody could miss a minimum standard on their first QA and know they're not going to get a bonus. Why would they be incentivized to exceed expectations in productivity? They'll still want to meet expectations, because they still want a raise in the annual performance review cycle and they don't want to go on a performance improvement plan,

(25:46):
but they have no incentive to try to exceed expectations in productivity.
And I actually got this feedback surfaced to me by a specialist on the team in a skip-level meeting I had, and I was like, oh wow, this is really great. Because one of the minimum standards was around selecting the correct ticket fields, like ticket type, because reporting

(26:07):
is very important, right.
But yeah, we still believed that it was really important. So we actually adjusted it: we took that out of being a minimum standard and folded it into our

(26:28):
regular email standards.
And then there was also the fact that if you missed a minimum standard, because not all the minimum standards were the same weight, right, but we weren't going to weight them differently, I was like, okay, that's also a good point, because that also is something like: okay, my first QA I missed a minimum standard, I'm not going to bust my butt to exceed expectations in productivity.

(26:48):
And I was like, I totally understand that, that's valid. We were using my QA scorecard there, so I had a little bit more flexibility in terms of getting creative with it. So what we did is, I figured out how we should weigh a minimum

(27:09):
standard.
So basically I figured out the math so that a minimum standard would deduct 3% from your overall score, because that's what it would come out to, to be kind of double what a regular standard miss would. And so somebody could still miss one minimum standard on a QA, but they'd have to really knock it out of the park on

(27:30):
everything else; it wouldn't disqualify them. And that change was really well received, and made sure we were incentivizing the right behavior and not shooting ourselves in the foot. So that, to me, is always a really powerful example of me having a really good learning, an aha moment. I'm like, oh yeah, this was best of intentions, but in practice

(27:54):
it was actually not serving the right purpose.
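
The scorecard math Stacy describes is easy to sketch. Here is a hedged reconstruction in Python, assuming a 100-point scale: the 3-point minimum-standard deduction and the 85/90 thresholds are the numbers she quotes, while the 1.5-point regular miss is back-calculated from her statement that a minimum-standard miss counts roughly double.

```python
# Assumed weights: 3.0 is from the episode; 1.5 is inferred as "half of double".
REGULAR_MISS = 1.5
MINIMUM_MISS = 3.0

def qa_score(regular_misses: int, minimum_misses: int) -> float:
    """Deduct per miss instead of auto-failing the whole review."""
    return max(0.0, 100.0
               - regular_misses * REGULAR_MISS
               - minimum_misses * MINIMUM_MISS)

def rating(score: float) -> str:
    """Thresholds from the episode: 85-90 meets, 90+ exceeds."""
    if score >= 90:
        return "exceeds expectations"
    if score >= 85:
        return "meets expectations"
    return "needs improvement"

# One missed minimum standard no longer disqualifies the month outright:
print(qa_score(0, 1), rating(qa_score(0, 1)))  # 97.0 exceeds expectations
```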

Speaker 2 (27:57):
Yeah, and what really stands out to me about that is that you found this out in a skip-level meeting. It really highlights to me the importance of leadership listening to frontline specialists and really taking time, with intention, digging into: what is your working life like, what are these well-intentioned programs that

(28:17):
we're creating doing to your daily life, and how does that motivate or not motivate you?

Speaker 1 (28:24):
Exactly. And in terms of cadence, if somebody is fully trained and doing well and they meet their internal quality score, the next month we'll only look at five tickets. So it helps keep the workload manageable, but it also is something that somebody could be working towards, right.

Speaker 2 (28:44):
Yeah, so it's elastic. You grow the number of tickets as needed, or shrink them as needed, as people demonstrate consistent quality. Yeah.

Speaker 1 (28:53):
And I think if we had a big fundamental change in the product, or new features, then we would potentially feel like, okay, this month everybody's going to get 10 tickets, because there's been a lot of change, we made some significant shifts in process, and we want to make sure that everybody's moving forward with the new processes,

(29:19):
and so that's a way to kind of do that.
And then one other thing that I could maybe foresee in the future is, certain months, we might do sort of a sprint, where maybe we know that our account-related tickets, account modification tickets, are always really, really good, so we're going to focus on, like, technical issues this month,

(29:42):
right, but you're doing that for everyone. And the thing that you might want to do, though, is, if it's part of their performance metrics, maybe we say, hey, let's look at how everybody did, and if the team average is way lower, then maybe for this month we adjust what that internal quality score goal is, because it kind of

(30:04):
goes back to being fair and equitable and transparent.
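
That elastic cadence is simple enough to express as a rule. A hypothetical sketch follows; the function names and the team-average adjustment (including the five-point tolerance) are my illustration of what she describes, not anything stated in the episode.

```python
def tickets_to_review(met_quality_goal: bool, big_change_this_month: bool) -> int:
    """Elastic cadence: consistent performers drop to 5 tickets; everyone
    goes back to 10 when the product or process changes significantly."""
    if big_change_this_month:
        return 10
    return 5 if met_quality_goal else 10

def adjusted_goal(baseline_goal: float, team_avg: float, tolerance: float = 5.0) -> float:
    """If the whole team's average dips well below goal in a sprint month,
    lower that month's goal to keep scoring fair (tolerance is illustrative)."""
    return min(baseline_goal, team_avg + tolerance)
```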

Speaker 2 (30:08):
And if everyone is experiencing the same thing, then something's clearly going on. Exactly. Yeah, I love the emphasis on fairness and making it work for your team's workflow as it is. I think that's probably why it's been successful. Thank you so much for being here. I just really appreciate you. I have adored working with you. You're one of my favorite people.

(30:30):
I'm really glad. Aw, thanks. That's Stacy's setup story in a nutshell. Here are the steps that you can take back to your team. One: 10 interactions, two reviews every month. Keep it lightweight. Two: random, recent and representative ticket picks. Three: a four-pillar rubric.

(30:52):
So keeping that lightweight, make it simple. Hers is accuracy, completeness, customer excellence and empathy. Four: private coaching for anything that maybe doesn't go perfectly, plus public kudos for what goes well. Five: shrink to five tickets if an agent is crushing it.

(31:13):
Bump back up if things change. So only offer QA where it's really needed. If, as you listened to this podcast, an idea landed, or if you'll be using this process in the future, please share this episode with a fellow support leader and drop us a quick review and a subscribe. Until next time, keep leading with clarity and with care.

(31:36):
See you soon.