
February 18, 2025 • 29 mins


Unlock the secrets of speech audiometry and speech perception with the renowned Dr. Lisa Lucks Mendel. With over 35 years of expertise, Dr. Mendel offers an enlightening exploration into the significance of choosing the right tests for speech perception assessments. Learn why classic tests like the NU-6 and CID W-22 remain relevant and how full 50-item word lists provide a more authentic reflection of natural speech sounds. Discover the rationale behind shorter word lists and how they can streamline assessment without compromising their purpose.

Get ready to unravel the complexities of evaluating speech recognition in challenging auditory environments. The Signal-to-Noise Ratio 50 (SNR-50) test stands as a pivotal tool in understanding hearing loss and the benefits of hearing aids. As we examine the nuances of phoneme-focused scoring, particularly impactful for cochlear implant users, we offer fresh insights into setting realistic expectations for auditory device performance. This episode also delves into the scoring protocols that might just change the way we interpret hearing capabilities.

Join us as we compare the efficacy of modern MP3 recordings against traditional monitored live voice (MLV) in audiometric testing. Uncover the surprising findings from our student-led research and the implications for clinical practice moving forward. As we advocate for standardized methods in speech and noise assessments, Dr. Mendel reflects on the historical recommendations that still resonate today. This episode promises a comprehensive look at enhancing real-world hearing evaluations, leaving our listeners informed and inspired by Dr. Mendel's invaluable contributions.

Connect with the Hearing Matters Podcast Team

Email: hearingmatterspodcast@gmail.com

Instagram: @hearing_matters_podcast

Twitter: @hearing_mattas

Facebook: Hearing Matters Podcast


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Blaise M. Delfino, M.S. - (00:19):
Thank you to our partners. Sycle, built for the entire hearing care practice. Redux, the best dryer, hands down. CaptionCall by Sorenson. Life is calling. CareCredit, here today to help more people hear tomorrow.

Fader Plugs (00:36):
The world's first custom adjustable earplug. Welcome back to another episode of the Hearing Matters Podcast. I'm founder and host Blaise Delfino and, as a friendly reminder, this podcast is separate from my work at Starkey.

Dr. Douglas L. Beck (00:56):
Good afternoon.
This is Dr. Douglas Beck with the Hearing Matters Podcast, and today's guest is Dr. Lisa Lucks Mendel. She has a PhD and is a professor emerita in the School of Communication Sciences and Disorders at the University of Memphis. She is a clinical research audiologist with more than 35 years of clinical and research experience in the assessment of

(01:18):
speech perception for individuals with normal hearing and hearing loss, and she has published extensively in the area of speech perception assessment. Dr. Mendel is a fellow of the American Speech-Language-Hearing Association. She has received the honors of the Council of Academic Programs in Communication Sciences and Disorders and serves as chair of

(01:39):
the Audiology National Advisory Council for Educational Testing Service. Those are the folks out in New Jersey, right, that used to run the SATs and all that stuff. Lisa, welcome, it's so nice to see you. And by way of disclosure, I should say that 40 years ago for me, probably just a few months ago for you, we worked together at the House Ear

(01:59):
Institute doing some research projects on cochlear implants before the FDA approved them, and we worked with my dear friend and resident genius, Dr. Jeff Danhauer.

Dr. Lisa Lucks Mendel (02:09):
That was how we first started, at the House Ear Institute. I was a Ripley Fellow. I was a PhD student at UC Santa Barbara, yeah.

Dr. Douglas L. Beck (02:16):
So today I want to talk about speech audiometry, because you are among the most published people in that area, and I think that in audiology we do a lot of clinical work, clearly, and we do a lot of research work, and sometimes the research, validation and verification does not make it all the way to clinical work, and I mean that in a kind way. But I think that most people are doing word recognition score

(02:39):
testing probably incorrectly. So what I'd like to do is let's talk a little bit about the NU-6 and the CID W-22, because I think those are the most common monosyllabic word tests used. Can you start by telling us a little bit about the idea of balancing and word list length and things like that?

(02:59):
How is it supposed to be done?

Dr. Lisa Lucks Mendel (03:01):
When I think about speech audiometry or speech perception assessment, the thing that I always would tell my students is: what is your purpose? Why are you doing this? Are you trying to find out how they're doing with speech understanding in a controlled environment? Do you really want to know how this predicts their performance in a real-world environment? When you think about what the purpose is, then it helps

(03:23):
you determine what's the appropriate test.

Dr. Douglas L. Beck (03:25):
I love that. That's a great approach, thank you.

Dr. Lisa Lucks Mendel (03:28):
So, yes, you're right, we still use monosyllabic words, meaningful monosyllabic words. The NU-6 and W-22 are older than all of us. They were designed to assess telephone communication back in the 20s, 30s, 40s.

Dr. Douglas L. Beck (03:45):
Yeah, that was the Bell Labs out of Murray
Hill, New Jersey.

Dr. Lisa Lucks Mendel (03:47):
Exactly, the early ones. You know, the NU-6 and W-22 came a little bit later, but it was the same model, and so we as audiologists said, well, let's just see if we can use those word lists to see how people are understanding speech, which was a good idea back then. We still need to be moving forward. But to your question about the monosyllabic words.

(04:09):
They are word lists that are 50 items, the NU-6 and W-22. There are many others out there, and the idea behind the 50-item word list was to make sure that they were either phonetically or phonemically balanced. Most of them are phonetically balanced, meaning you are using speech sounds that typically occur at their regular frequency

(04:31):
within the English language.

Dr. Douglas L. Beck (04:33):
And that's important because it's not necessarily all the sounds of the English language. It's the sounds in proportion to how they occur in common speech.

Dr. Lisa Lucks Mendel (04:42):
Exactly, and if you don't do an entire 50-word list, you no longer have phonetic balancing. Now, if that's not important to you, then that's okay, but the bottom line is, those lists are what we are dealing with, not the words themselves. A word by itself is not phonetically balanced. It's the entire list that's balanced.

Dr. Douglas L. Beck (05:04):
Yeah, and I remember some research that was done. I hesitate to say the name because I might get it wrong, but I remember, and tell me if this is correct: about four or five years ago there was an article that came out and the researchers were comparing word lists, 25-word lists, whether they were NU-6 or CID W-22.

(05:27):
That wasn't the point. The point was that if they just used random monosyllabic words out of the phone book or out of an encyclopedia, they got the same results, because none of them were particularly balanced or particularly clinically efficacious.

Dr. Lisa Lucks Mendel (05:41):
Exactly, and so it's not meaningful. I know a lot of people want shorter word lists. I know it takes time to do a 50-word list, but if you want to be sure you're assessing regular speech sounds for English speakers, then you really do need to do that full list with that phonetic balancing.

Dr. Douglas L. Beck (05:58):
And for the short lists, there are reasonably well-documented 10-word lists. Can you talk about those?

Dr. Lisa Lucks Mendel (06:12):
Yeah, there are some who have taken the 50 items and they've rank ordered the most difficult items in that 50-word list and put them as the first 10. So if a patient can get all of those first 10, then they don't need to do the rest of the test, because they've gotten the most difficult items. I'm okay with that. But again, what's your purpose? If you just want to see how they do with 10 difficult words, okay. Is that telling you much about what their speech perception

(06:35):
ability is? I'm not sure.

Dr. Douglas L. Beck (06:38):
This becomes a very interesting discussion, because you know there are people now who are advocating that rather than doing speech in quiet, you know, a typical word recognition score list, we should probably just be doing speech in noise. That to me is a hard sell. I don't want to replace speech in quiet, because I think we have 75 years of history of taking those measurements, but I do

(07:00):
think that a far more meaningful measure is speech in noise, because that tells us how the patient is doing in the real world, more or less. And of course there is no face validity on most speech-in-noise tests. You can't really do it seriously in headphones, because then we get into the

(07:25):
localization versus lateralization discussion: is the sound inside your head or is it outside around you? Requiring that your brain considers head shadow effect, interaural loudness differences, interaural timing differences, those things all go into speech perception in noise, but they're not measured

(07:46):
if you're doing the test in just one ear. And let's talk a little bit about that. If we're doing speech in noise, some people do measure speech in noise in one ear and speech in noise in the other. I personally don't understand the value of that, because I can see that as a signal-to-noise ratio test.

(08:07):
I can see that as a figure-ground test. But if you're not doing it in an acoustic environment that involves localization, I'm not sure how that relates to anything diagnostically.

Dr. Lisa Lucks Mendel (08:18):
Well, I agree, and I think we need to again step back a second and remember that speech understanding is a bottom-up and a top-down process, right? So, am I hearing all the acoustic pieces of information? Do I hear the speech sounds, the different phonemes? But then I need my brain to fill in the gaps, perform closure, when I didn't quite hear everything exactly correctly.

(08:41):
So if you're just doing speech in noise in one ear, you're not really taxing the brain. And we say we hear with our brains, and I think there's a lot of strength to that comment.

Dr. Douglas L. Beck (08:53):
Yeah, I have to agree. I mean, most of the tests that I really like in speech-in-noise testing involve a speaker in front with the primary speech and noise in the back. Now, that's nowhere near a perfect test, nobody's arguing that it's a perfect test, but I think it's the best replication that we can do in a booth without using virtual technology, and I do think that ultimately we'll have three or four or five

(09:17):
speakers in a booth and we'll be able to do it virtually so that it is all around you and we can move the primary speaker. I mean, you could do that now in labs, but it's not clinically easy to do in most booth situations.

Dr. Lisa Lucks Mendel (09:29):
Right. I think with the improvement in technology, though, we'll be able to get that in some kind of virtual surround sound, something in the booth, probably sooner rather than later. And I think the other thing that's important is: what's the type of noise that we use in those tests?

Dr. Douglas L. Beck (09:45):
Oh, I'm so glad you mentioned that.
Tell me about that.

Dr. Lisa Lucks Mendel (09:47):
Yeah, because a lot of the tests have, well, they vary. Some of the tests have speech-shaped noise. That's not actual talking, it's just kind of a sound, and that is not typically what people complain about when they say, "I hear you, but I don't understand what you're saying because it's noisy." What they're complaining about is other people talking. So many of the speech-in-noise tests that we have today do

(10:11):
have people talking in the background. There's multi-talker babble and then there's babble, and I think babble means you have so many people talking, we used to call it cafeteria noise, where you hear the people in the background but you can't really make out too many of the voices. Multi-talker is like two, three, four talkers. You're going to hear certain words from certain people, and I

(10:33):
think that's the more realistic background noise that people complain about in a restaurant or a noisy environment.

Dr. Douglas L. Beck (10:42):
Yeah, people will think that we rehearsed this, but we didn't. But I agree with you 100%. In fact, in the Beck-Benitez speech-in-noise test, which is free, we recommended four-talker babble as well, for exactly that reason: it's very, very challenging. As you get to 10 or 12 or more speakers it becomes speech noise and it does become indiscernible. And the idea of why we wanted four-talker babble was that we wanted it to

(11:04):
be difficult, we wanted it to be challenging, to replicate what would happen at a cocktail party, what would happen at a restaurant. And I agree with you: speech-spectrum noise, narrowband noise, white noise, Gaussian noise, that sort of a sound, your efferent nervous system can actually suppress that by two or three dB, which you can't do with four-talker babble.

(11:25):
So it's much more challenging to use four-talker babble. Now, if you don't have four-talker babble, what I usually say is, okay, use speech-spectrum noise. But then the most important thing is you have to have an unaided score and an aided score, regardless of which background noise you use. Because if you're trying to use it as part of best practices, perhaps for hearing aid fittings, you want to get the unaided

(11:47):
score for your patient with nothing in their ears and see how their speech-in-noise score is, and then aided. It could be their hearing aids that they've been wearing for 10 years, it could be a new product that they're trying. And my philosophy is, if you don't improve the SNR-50, the signal-to-noise ratio needed to get 50% of the words correct, if you're not improving that, you're probably just making things louder.

(12:09):
I mean, the goal is, immediately, for the patient to have an improved signal-to-noise ratio such that they can better understand speech in noise, because that's mostly why they came to see you.

Dr. Lisa Lucks Mendel (12:20):
Yeah, and I think the point you bring up about the SNR-50: not everybody's doing that on a regular basis. We're still doing percent correct speech perception scores. I'm not sure how meaningful that is. The SNR-50, which I think people are calling either SNR-50 or speech reception threshold, not to be confused with speech recognition threshold, which is a spondee threshold, which

(12:40):
drives me crazy, because a lot of people do use those synonymously. The SNR-50 is a more realistic measure, as you said, because it's asking: where would they get that 50% correct in noise? And then how does it compare to a person with typical hearing,

(13:01):
to look at the SNR loss, and what's the difference between those with typical hearing and those who have hearing loss? That's helpful information to know. Where do they stand now? And, like you said, unaided versus aided, I think we can see those comparisons. You know, the QuickSIN used that comparison of SNR loss and said, okay, if you have a moderate loss, you need this kind of

(13:24):
hearing aid technology. Well, we're past that now, because all of it said you need directional mics, and pretty much every hearing aid uses directional mics. But we can use it more from a diagnostic standpoint, and I think that's really helpful.
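The arithmetic behind the SNR loss and the unaided-versus-aided comparison described above can be sketched in a few lines. The Python below is only an illustrative sketch, not part of any published protocol: the 2 dB typical-hearing reference is an assumed value within the roughly 0 to 3 dB range cited later in the conversation, and the 14 dB and 6 dB figures echo an example given later in the episode.

```python
# Illustrative sketch only: SNR loss and aided benefit as described above.
# The typical-hearing reference of 2 dB is an assumption, chosen from the
# 0-3 dB range mentioned later in the conversation, not a clinical norm.

TYPICAL_HEARING_SNR50 = 2.0  # assumed reference SNR-50 in dB

def snr_loss(patient_snr50, reference=TYPICAL_HEARING_SNR50):
    """How much more favorable an SNR the patient needs than a typical listener."""
    return patient_snr50 - reference

def aided_benefit(unaided_snr50, aided_snr50):
    """Improvement in SNR-50 from amplification; positive means the aid helped."""
    return unaided_snr50 - aided_snr50

# Example figures used later in the episode: unaided SNR-50 of 14 dB,
# aided SNR-50 of 6 dB.
print(snr_loss(14.0))            # 12.0 dB SNR loss unaided
print(aided_benefit(14.0, 6.0))  # 8.0 dB improvement with the hearing aids
```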

Dr. Douglas L. Beck (13:37):
Yeah, and for those not familiar, let me walk you through an SNR-50, how you would do that, and, Dr. Mendel, please correct me if I'm wrong.

(14:03):
So what we recommended in the Beck-Benitez test is about 70 dB HL of speech in the front, so that's loud for people with mild to moderate loss. And the background noise, your four-talker babble, might start at 55, so your signal-to-noise ratio is 15. That's a very easy task for people with normal hearing or mild or moderate hearing loss. Then you give them three words, let's say: say the word "went," say the word "shop," say the word "thought." If they get all three correct, that's great. Then you keep the primary talker in front at 70 and bring the background noise up five, so now you're at 60. That's a 10 dB signal-to-noise ratio.

(14:26):
More challenging, but still easy. Mead Killion published a scale 30 years ago where he said people with normal hearing and listening ability need about a zero, one, two or three dB signal-to-noise ratio to get half the words correct, the SNR-50. People with mild to moderate loss need about an 8 dB SNR to get

(14:47):
50% of the words correct, and people with severe loss need 15 or greater. So that's your basic scale, and you can look that up: Mead Killion, brilliant paper, it was in Seminars in Hearing. So now we're at a 10 dB SNR. That's pretty easy, because people with mild to moderate loss need about eight decibels, so they get those three words right. Then what I would do is increase it five more.

(15:08):
So now your background noise, the four-talker babble, is 65, and your speech in front, your primary signal, is 70. That's a 5 dB SNR. That's where things get interesting, because a lot of people with mild to moderate loss cannot repeat those words. So then what you might do is go down a little bit, make it easier on the background noise, bring that down to, let's say,

(15:28):
58. So now you've got a 12. So, more or less in a Hughson-Westlake-like approach, like we do with pure tones, you can go up and down and find the point where they get half the words correct. That's how we do it in Beck-Benitez. There's lots of ways to do it. You know, there's the BKB protocol, there's the AzBio

(15:49):
protocol, there's the QuickSIN protocol. They're all fine. I don't care which one you use, but the point is to understand how you do that. What's an SNR-50? The signal-to-noise ratio to get 50% of the words correct. And if you do it the way Dr. Benitez and I did it, we're just

(16:10):
keeping the primary speech in front stable, and we're varying the background noise to see at what point the patient falls apart, and that's pretty reliable. So I would get an unaided SNR-50 and then aided, and again, if you're not improving their SNR-50, you could argue that you're just making things louder, and louder is not clearer. Most patients want sounds clearer, not louder.
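A minimal sketch of the bracketing logic just described may help make it concrete. This is not the published Beck-Benitez protocol, only an illustration under simplified assumptions: speech stays fixed at 70 dB HL, the noise moves in fixed 5 dB steps, and present_words is a hypothetical callback standing in for the clinician presenting three words and counting how many the patient repeats correctly.

```python
# Minimal sketch (not the published Beck-Benitez protocol): speech is fixed at
# 70 dB HL while the four-talker babble moves up and down until the listener
# hovers around 50% correct. `present_words(speech_dB, noise_dB)` is a
# hypothetical callback that returns how many of three presented words the
# patient repeated correctly (0 to 3).

def estimate_snr50(present_words, speech_level=70, start_noise=55,
                   step=5, reversals_needed=4, max_trials=30):
    """Estimate the SNR (dB) at which the listener gets about half the words
    correct: harder after a pass, easier after a failure, averaging the SNRs
    at which the direction reverses."""
    noise = start_noise
    reversal_snrs = []
    last_direction = None

    for _ in range(max_trials):
        snr = speech_level - noise
        correct = present_words(speech_level, noise)
        direction = "harder" if correct >= 2 else "easier"
        if last_direction is not None and direction != last_direction:
            reversal_snrs.append(snr)        # record the SNR at each reversal
            if len(reversal_snrs) == reversals_needed:
                break
        last_direction = direction
        noise += step if direction == "harder" else -step

    return (sum(reversal_snrs) / len(reversal_snrs)
            if reversal_snrs else speech_level - noise)

# Run the same procedure unaided and aided, then compare the two estimates.
# If the aided SNR-50 is not meaningfully lower, the fitting is mostly adding
# loudness rather than clarity.
```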

(16:31):
So, Lisa, I'm sorry, I didn't mean to hijack that, but I wanted to make sure that we've explained what an SNR-50 is. Did I get that right? Do you want to add to that?

Dr. Lisa Lucks Mendel (16:40):
You can. Very good, you studied hard. Most people, as far as I know, are not doing it routinely in the clinic, and I like what you're saying about unaided speech perception and aided. And again, I don't know how many people are actually doing that in the sound field in their hearing aid fittings.

(17:00):
But it gives you the baseline to see whether you're seeing improvement or not, and to me that's telling you more than if you look at monosyllabic words or the NU-6, quiet and noise, or unaided and aided, because it's a lot.

Dr. Douglas L. Beck (17:14):
Yeah, and these are functional measures. I mean, if you have an unaided SNR-50 of 14 and an aided SNR-50 of six, you've dramatically helped that patient. You know, that's a major change.

Dr. Lisa Lucks Mendel (17:27):
And I think also setting up expectations for the patient. So if you show them, here's where you are unaided and then here's where you are aided, and show that you do see improvement. It's not perfect, you're still going to struggle in background noise, we're not quite there yet to get you 100% there, but it's going to show improvement.

Dr. Douglas L. Beck (17:46):
That's great. All right, I want to spend a few minutes on scoring monosyllabic word recognition score tests, because I think, clinically, we have a protocol that may not serve the best interest of the patient. Dr. Mendel, share your thoughts on that.

Dr. Lisa Lucks Mendel (18:01):
Yeah. So most people will score using what we call synthetic or whole-word scoring. So you say, "Say the word yard," and the patient says "yar," and you score it wrong. They get no credit for the phonemes that they did get correct. I'm a proponent of phonemic scoring, where if you take a

(18:23):
50-word list for the NU-6 or W-22, there are 150 phonemes. So if I only miss one phoneme per word, I've done pretty well and I've gotten about two-thirds of this test correct. It gives me credit for what I did get correct and doesn't harm me for the things that I didn't get correct.

Dr. Douglas L. Beck (18:41):
I love that, and that brings me back, you know, again going back 40 years. When we were trying to evaluate the sounds that a cochlear implant patient was perceiving, we actually scored them all phonemically. Because if you said the word cat and the patient said catch, you know, cat is three phonemes, c-a-t, and what they said is c-a-ch.

(19:04):
So they got two out of three right. Rather than marking that wrong, we gave them credit for the two out of three that they got correct, and that told us a lot more about what they were actually perceiving, rather than did they get the word correct or not.
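The scoring difference can be illustrated with a small sketch. This is an illustration only, not a clinical scoring tool: words are written as lists of phoneme symbols, and the cat/"catch" and yard/"yar" examples from the conversation are scored both ways.

```python
# Illustrative sketch of whole-word versus phoneme scoring. A real protocol
# relies on the clinician's phonemic judgment of each response; here words
# are simply lists of phoneme symbols.

def whole_word_score(targets, responses):
    """Traditional scoring: a word counts only if every phoneme is correct."""
    correct = sum(t == r for t, r in zip(targets, responses))
    return 100.0 * correct / len(targets)

def phoneme_score(targets, responses):
    """Phonemic scoring: credit for every phoneme repeated correctly."""
    total = sum(len(t) for t in targets)
    correct = sum(sum(tp == rp for tp, rp in zip(t, r))
                  for t, r in zip(targets, responses))
    return 100.0 * correct / total

# "cat" heard as "catch" and "yard" heard as "yar", as in the examples above.
targets   = [["k", "ae", "t"], ["y", "ar", "d"]]
responses = [["k", "ae", "ch"], ["y", "ar"]]

print(whole_word_score(targets, responses))  # 0.0   (both words scored wrong)
print(phoneme_score(targets, responses))     # ~66.7 (4 of 6 phonemes correct)
```

On a full 50-word NU-6 or W-22 list the same idea scales to 150 phonemes, which is why missing one phoneme per word still leaves roughly two-thirds of the phonemes correct.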

Dr. Lisa Lucks Mendel (19:16):
Exactly. And now, back in those days we couldn't do a lot of tweaking of the implant, but today we can. So if an audiologist is getting that kind of information, that a t sounds like a ch, then they can make adjustments, literally in the electrode array or in the mapping, to improve the perception of that particular frequency range, to

(19:37):
hopefully improve the ability to hear the difference between ch and sh and whatever.

Dr. Douglas L. Beck (19:43):
Yeah, and I think that works as well with hearing aids. You know, we want to know where the errors are, because that tells us where to apply corrections, where to reprogram.

Dr. Lisa Lucks Mendel (19:51):
Absolutely. And the other point I want to raise about scoring, Doug, that I think is really important, is reminding people that when you're doing a percent correct score on a 50-item or even a 25-item monosyllabic word list and you get, let's say, 80%, and next year the patient comes in and they get 88%, and you say, oh,

(20:13):
good, your speech perception's gotten better. Well, again, we have to go back to some of the germinal information. We know what is considered an improvement in a percent correct score on a 50-item word list. You go back to Thornton and Raffin's data, you look at Carney and Schlauch, who updated it back in '97, I believe, and we have to look at those critical

(20:34):
differences. Is that improvement truly an improvement? And most of the time it's not.

Dr. Douglas L. Beck (20:40):
This is so important. I'm glad you mentioned this. Thornton and Raffin, I think the original publication, this is off the top of my head so it's probably wrong, but I think it was JSHR, and I think it was in '77 or '78, something like that, and the magnitude of these differences is huge. If you did a 25-word list and the first score was 92, and then

(21:02):
you check them a year later and the score was 64, those are actually the same. They are within the statistical probability of each other at a 0.05 alpha level. So, thank you, that's brilliant. And the point would be, if you did the full 50-word list, the variability is much, much less, and so then you could say statistically significantly there's been a change.

(21:22):
When somebody goes from 100% to 88%, that's the same. There's no difference statistically in those two. And I know that's hard for people in clinic to say, oh, but why? Well, because it's an imperfect test. It's not a very good test.
It's not a very good test.
So before I let you go, I'dlike you to discuss two things
monitored live voice versusdigitized recorded speech

(21:44):
samples and the time involved todo monitored live voice versus
digitized recorded speechtesting.

Dr. Lisa Lucks Mendel (21:54):
Happy to do that. We have published a couple of studies looking at monitored live voice versus recorded. Back in 2011, I worked with one of my students and we compared the time it took to do 50-item word lists using the CD recordings, with four-second

(22:15):
interstimulus intervals and two-second interstimulus intervals, to doing that with monitored live voice, and those data showed that monitored live voice is faster statistically. But our opinion is that it's not a clinically significant difference, and the reason for that is, number one, it's less than a minute more to do the recorded.

(22:37):
But, more importantly, what we know about the variability of monitored live voice, people's accents, the speed of their speech, intonation patterns, all of that really affects how well they present those words, and if you were to compare clinic to clinic or year to year, you would get different results because of those variables.

(22:58):
We just did a study that came out in AJA in December, and we called it "Recorded Word Recognition Testing Is Worth the Time." Again, a student project that I did with two students, Allie Austin and Catherine Ladner, where we looked at, okay, now we can do MP3 recordings through the audiometer.

(23:18):
That's going to be faster for sure, right? You just push the button: correct, it presents the next word; incorrect, it presents the next word. No interstimulus interval. It's going to be faster. And we found out that it's not faster, that monitored live voice is still faster than that. We were really kind of surprised by those results. But a couple of things happened.

(23:40):
One, these are still the same recordings that are on the MP3.

Dr. Douglas L. Beck (23:45):
Right, they've just been digitized
differently.
Yeah.

Dr. Lisa Lucks Mendel (23:47):
Correct. So the speed of the speech that's being presented is exactly the same, right, but you have shortened the interstimulus interval, and in monitored live voice you can vary your speed as much as you want.

Dr. Douglas L. Beck (24:00):
Yeah, but here's the thing: over the 50 words presented, monitored live voice versus recorded, there was less than one minute difference total. Right, exactly, yeah. So it's not like, oh, we did it digitized and that took 45 minutes, and monitored live voice was one minute. No, no, over the whole 50-word list the difference was less than a minute, I think it was.

(24:20):
Was it 50 seconds or something?

Dr. Lisa Lucks Mendel (24:23):
48 seconds, yeah. So I know time is money in many clinics, but when you're wanting to do best practice, it's just the thing we've got to do, and these MP3 recordings are available. You can plug them into your audiometer and you're good. The other thing I want to mention is that the average score using the computer MP3 presentation of the words was

(24:47):
significantly better for those who had typical hearing compared to those who had hearing loss, and so that's showing something. But that only happened for the computer-assisted presentation, not MLV, not monitored live voice.

Dr. Douglas L. Beck (25:00):
Interesting. Why do you think that was?
Dr. Lisa Lucks Mendel (25:02):
Well, that tells me that MLV is not as capable of differentiating performance between adults with normal hearing compared to those with hearing loss. So really, MLV is not diagnostically sensitive to those with hearing loss. Maybe so with normal hearing, but not so with hearing loss.

Dr. Douglas L. Beck (25:20):
So that alone is a good reason to get it recorded. There might be some sort of, I don't want to say placebo effect, but some sort of bias from the person running the test. If you see the person struggling with MLV, maybe you do slow it down, maybe you do say it a little bit louder into the mic. You know, you do things like that because you're there and

(25:41):
it's a live situation and you're trying to maybe help. And it's hard to be perfectly objective and not smile and not, you know, be involved, because you're talking with a patient who needs your help.

Dr. Lisa Lucks Mendel (25:53):
Well, and we did measure how people with typical hearing and hearing loss responded, and it does take longer for those who have hearing loss, for exactly those reasons. We probably slow down a little; they need more processing time to provide the answer. So yeah, I would agree with that.

Dr. Douglas L. Beck (26:09):
All right, so, bottom line, just your thought on the idea. Before I let you go, Lisa, one last issue: do you anticipate that anytime in the near future we will have speech in noise actually replacing speech in quiet in the diagnostic battery?

Dr. Lisa Lucks Mendel (26:30):
I don't know that that will happen. I do think there is some valid information we get from speech in quiet, but more importantly, we need to add speech in noise to the routine. There's more information that we can find if we get an assessment of what they're doing in background noise, but we have to

(26:50):
do it with ecologically valid, standardized tests that really do tell us how people are doing in a real-world environment.

Dr. Douglas L. Beck (26:58):
Yeah, I have to agree, and this is not a new idea. The first time I read this was 1970, Ray Carhart and Tillman. Carhart and Tillman said speech in noise should be part of every audiologic assessment. And that was, oh, 55 years ago, and we haven't done it yet. All right, well, Dr. Lisa Lucks Mendel, it is a joy to see you

(27:20):
again and to work with you, and I wish for you a joyful 2025. And thank you so much for participating.

Dr. Lisa Lucks Mendel (27:27):
Appreciate it.
Thanks for having me on.