
March 18, 2025 • 67 mins


In this episode of Inside The Mix, mastering engineer Ian Shepherd demystifies loudness metrics and debunks common mastering misconceptions while offering practical advice for producers looking to improve their masters without chasing arbitrary targets. 

Ian explains the role of LUFS in mastering, why normalization matters, and how focusing on musicality and dynamics leads to better results than simply hitting a loudness target.

What You'll Learn:
• LUFS measurements explained – momentary, short-term, and integrated loudness
• How many LUFS should my master be? Understanding the balance between dynamics and loudness
• The truth about LUFS for Spotify – why 83% of users never change loudness normalization settings
• What does LUFS stand for? And why it's just one piece of the mastering puzzle
• How normalization impacts your music across different streaming platforms
• The role of audio normalization in creating a consistent listening experience
• Why AI mastering struggles to match the emotional intent of human engineers
• Spotify’s approach to loudness and what it means for your masters
• Internal dynamics – how balancing different sections of your song enhances clarity and impact
• The mastering feedback loop – why collaboration between engineers and artists is key

If you want your music to stand out in today’s Spotify-dominated landscape, don’t obsess over loudness numbers. Instead, focus on musicality, dynamics, and emotional impact. Test how your tracks sound at normalized streaming levels, and let the music, not the meters, drive your mastering decisions.

Links mentioned in this episode:

Support the show

Book your FREE 20 Minute Discovery Call

Follow Marc Matthews' Socials:
Instagram | YouTube | Synth Music Mastering

Thanks for listening!!



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Ian Shepherd (00:00):
The EQ, the amount of bass, the amount of mid-range, the amount of distortion, the amount of density, the amount of stereo width. All of these things have a much bigger influence on how loud we think it feels than the actual LUFS number, because the LUFS number has been changed, right. You might have two songs that were 8 dB apart to start with, but once they're both at the same level, then all the other

(00:21):
stuff comes into play.
You're listening to the Inside the Mix podcast with your host, Marc Matthews.

Marc Matthews (00:28):
Welcome to Inside the Mix, your go-to podcast for music creation and production. Whether you're crafting your first track or refining your mixing skills, join me each week for expert interviews, practical tutorials and insights to help you level up your music and smash it in the music industry. Let's dive in. Hey folks, welcome to Inside the Mix, or welcome back.

(00:52):
If you are an existing listener, a big welcome. Today I am honored to welcome a true expert in the world of mastering and other things and everything in between, someone who has been at the forefront of the industry for decades, as I've got my notes here, and has helped educate countless producers and engineers: Ian Shepherd.

(01:13):
Ian, how are you? And thank you for joining me today.

Ian Shepherd (01:15):
I'm very well, thanks, Mark.
Thanks for inviting me.
Glad to be here.

Marc Matthews (01:18):
Yes, yes, thank you for joining me on this. I've been looking forward to this. I've got a lot of content to go through in here, some really, really interesting stuff. So for the audience listening who might not be familiar with Ian, I'm just going to read a bit from his bio so we can get a bit of background, so then we can just dig straight into the questions for today. So, Ian is a British mastering engineer and the MD of Mastering

(01:39):
Media Limited. Over the course of his career he's worked on thousands of CDs, DVDs and Blu-rays for major record labels, TV stations and independents, including several number one singles and award-winning albums. And he also runs the popular Production Advice website and, of course, is a fellow podcaster, the host of The Mastering Show

(02:00):
podcast as well. And he's a fierce critic of the loudness wars. I love anything surrounding the loudness wars, always interesting. And on top of that he's co-developed the Loudness Penalty website with MeterPlugs. So there's a lot in there. Ian, you've been very busy, I would say, and a lot to keep you busy as well, most definitely.

Ian Shepherd (02:21):
Yeah, for sure.

Marc Matthews (02:22):
Yeah, so Ian's going to share his valuable insights about mastering loudness and the evolving landscape of AI. So there's an interesting bit, hopefully we get onto it today, about YouTube and its stable audio feature, which I wasn't aware of until I saw one of your pieces of content on Instagram, and I was like, I had no idea that existed. So that's very interesting.

(02:44):
So we're going to start off with LUFS. So my question here: do higher LUFS masters sound better, even when they're normalized? But I think it's important to start off with just defining LUFS, so maybe that's where we could start, Ian.

Ian Shepherd (02:59):
Yeah, absolutely. So LUFS is Loudness Units Full Scale, like we have dB full scale. So basically you have a loudness unit and that's relative to zero, the top of the meters, and it is our current best attempt to measure perceived loudness.
(03:20):
So not just loudness but perceived loudness, and the reason, I mean, it's actually surprisingly tricky. I mean, everybody can hear when stuff is louder or quieter. But you know, the audio signal is stored as an electrical current in a wire and we can measure that really precisely. But if you track the waveform, which is basically what the peak level does, when you see the peak levels on a meter, in a, in

(03:42):
a, you know, a DAW, that very often doesn't really bear very much relation to how loud things are. I mean, generally, if the peak levels are higher, the loudness is going to be higher, so it will sound louder. But you can have things where the peaks look huge and it doesn't sound that loud, and you can have stuff that doesn't look that impressive and it actually sounds quite loud. So that's why back in the day you had VU meters, the old

(04:02):
needle meters, and then moved on from there to RMS meters. So there's the Dorrough meter that some people are familiar with, and there are RMS features in, again, quite a lot of DAWs, and more recently than that, you have loudness units. They're basically the same as RMS but they have a filter applied to them to try and make them match what we hear more accurately, because our ears are more sensitive, especially in

(04:24):
the mid-range, so they tweak the frequency response of those, and the idea is to try and represent how loud it feels, right, to be more accurate than RMS or VU, and it's pretty good. I mean, like, for example, for me, if I'm matching stuff by ear and then I slap a meter on it, very often it will be close, you

(04:47):
know, often within like half a dB to a dB or so. And in particular, if I have two things, you know, when you're trying to match two things that are exactly the same, that works. However, they can be a little bit tricky because there are three different types of loudness unit.

(05:08):
So there is the momentary LUFS, or the short-term loudness unit, and the integrated loudness. So momentary goes really, really fast and I don't personally find it that helpful for music, so I kind of tend to ignore that one. Short-term is measured over a three-second window, so the meter moves slower, and I find that is actually the most helpful for me when I'm working, because it shows me the

(05:29):
loudness at this particular moment that I'm at, and it, broadly speaking, is kind of similar to what an RMS meter or a VU meter show. I mean, actually, when I'm working, I tend to use a plug-in version of a VU meter. I still really like them. I find they're really helpful because they're really sensitive around the middle of the range, right. You calibrate them to where you want your zero point to be and

(05:52):
I think they go 3 dB above that and maybe 20 or 30 dB below that, depending on the exact model. So it's really easy to see when you're in the right ballpark, which is obviously kind of an important thing to know when you're mastering, especially. There's just a great visual indicator, but short-term loudness is kind of similar to what you see on there.

(06:13):
So I think that's a good way of looking at it. Integrated loudness is an overall number for an entire podcast episode or song or album or whatever it is. Basically, you start the audio playing, you stop it, and that gives you an overall value, and it's kind of a good sort of general rule of thumb. You know, typically higher numbers, as in less negative

(06:36):
numbers. So minus eight is louder than minus 10 is louder than minus 12, right? So you know, minus eight is probably going to sound louder than something that's at minus 12.
The problem with it is that it is an overall number and music doesn't do the same thing all the way through. You know, it changes, you have verses and choruses, songs build and ebb and flow, and also you have different musical genres.

(06:58):
So, you know, like a really common question I get is somebody saying, well, I set the loudness to minus 14, or whatever it was, and that's not the right number, we can come back to that. Yes, yeah. But, you know, "I set the loudness to minus 14 and it sounds too loud." Very often when that's happening, it's because it was intended to sound quiet, right. It makes no sense to make an acoustic

(07:21):
guitar ballad the same level as a death metal tune, right, because they're not meant to sound the same. The measurement will tell you, oh, they're equal loudness, but that's artistically wrong. So it's really important to have your wits about you when you're looking at integrated numbers, and it's also really important, when people are talking about loudness, to make sure you understand what they're talking about. Because you see people say, oh, everything goes to minus eight

(07:42):
right now. I think that's too loud, just for what it's worth, and we can talk about that later as well. But do they mean the integrated loudness is at minus eight, or do they mean the short-term loudness at the loudest moments is at minus eight? Because that for me is the most helpful way to assess the loudness: what is the loudest part of this song?

(08:03):
How loud does it get? And if you think about a song with lots of variety, the overall level is going to be less than that, right? So if somebody says, oh, everything is at minus eight, and they mean the loudest bits, that's a bit loud in my opinion, but probably not disastrous. Whereas if they're saying the integrated loudness is minus

(08:25):
eight, right, that means overall the song is minus eight, which means it gets well above minus eight at the loudest moments, probably. That could then be an issue, because the louder you go, the less room you have to work with in terms of the peaks, and the more limiting and all that kind of stuff you have to use, and that's where, you know, there's a balancing act to be done with the sound to get the best possible results.
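
For anyone who wants to see the integrated number Ian describes in practice, here is a minimal Python sketch (an illustration, not anything from the episode) using the third-party pyloudnorm and soundfile packages; the file name is a placeholder. It prints the single overall LUFS figure that streaming services compare against their distribution loudness.

```python
# Illustrative sketch only; assumes pyloudnorm and soundfile are installed
# and "my_master.wav" is a placeholder file name.
import soundfile as sf
import pyloudnorm as pyln

audio, rate = sf.read("my_master.wav")   # float samples, shape (frames, channels)

meter = pyln.Meter(rate)                 # BS.1770-style loudness meter
integrated = meter.integrated_loudness(audio)

# One overall number for the whole file: the "integrated LUFS" value
# that platforms such as Spotify or YouTube use for normalization.
print(f"Integrated loudness: {integrated:.1f} LUFS")
```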

Marc Matthews (08:46):
Yeah, so with that you mentioned the three different types of LUFS meter readings there. So you mentioned short-term. You've got momentary as well. I know momentary's there, but it's not one that I pay attention to as much as, say, short-term and integrated. At what phase of the mastering process are you then sort of focusing on the integrated LUFS?

(09:07):
So what I'm getting at here, I guess, is, when you're actually mastering a song and you're looking at the meters, are you mainly focused on the short-term at that point, and then, once it's done, you start looking at the integrated? Or are you flicking between the two in the process?

Ian Shepherd (09:22):
Um, a little bit of both. I mean, the honest answer is I don't really care about either of them, because I don't choose my loudness by meters. I mean, I have a little kind of catchphrase I came up with, I shared on socials, which is that the LUFS should be the result of the mastering, not the target of the mastering. Right, so, master it so it sounds good. Then the integrated, I mean the integrated value, don't get me

(09:45):
wrong, is important because it's what streaming services use, and we can talk about that in more detail. So it has a big impact on what's going to happen to your music when it gets played back on Apple Music and YouTube and Spotify and Tidal and all the rest of them. So it is important to know at the end of the process and to check that you're happy with the results that you've got. But when I'm working, actually, I mean, I tend not even to watch

(10:07):
an LUFS meter. As I say, I've got my VU meter. So basically, with that, for anybody who's interested, I have it calibrated, most of them out of the box, the calibration level is minus 18, which means a sine wave at one kilohertz has an RMS of minus 18. It's just a calibration thing. That's good for mixing,

(10:29):
I would say, because the way that I use it is, if you have two things that are more or less kicking the same VU meter reading, they're going to be similar in loudness, right. That's why it's helpful. And if the meter is pegged, it might be a bit too loud, and if the meter is way down, then either it's intended to be really quiet or it might be a bit quiet. When it's hovering

(10:51):
around zero, you know that the loudness is roughly in the right ballpark. So for mixing, that's super helpful because it allows you to balance the elements and, you know, figure out what's going on. When it comes to mastering, I change that calibration, so I set it to minus 11, which means, for me, that the loudest sections are kicking up to plus one, plus two, maybe plus three on the meter,

(11:14):
right, which is up to kind of minus eight in terms of RMS, VU, but also probably LUFS, because, as I said, they're very similar and because they focus on the mid-range. So that means at my loudest moments I may be a couple of dBs louder than minus 10, but overall it's

(11:35):
kind of hovering around that range, and then I balance everything else musically with that, which means that if it's a quiet song it'll come out with a lower integrated LUFS. If it's a loud song all the way through, the integrated LUFS might get up to minus 11, minus 10 for my masters, and if it's varied, then the loudest stuff could be up at minus eight,

(11:58):
nine or ten, but the overall, you know, in general it will kind of be a little bit lower down. So that's how I work, and I don't even judge it by that. I mean, what I would say is, if the meter is pegged, so flat out, you know, just kind of banging up against the stops, that's going to catch my attention and make me

(12:20):
think, is this actually a little bit hot, you know? And I kind of take a step back, maybe take a break, compare it to some of the other songs. Or it could also mean, because VU meters are very sensitive to bass, that I've overdone the low end. For example, if the meter looks low but it sounds loud, then maybe the mids are a bit up. So, all of these meters, you have to learn how to use them.

(12:40):
There's an art to using them. But that's what I'm paying attention to when I'm actually working, how it sounds, and then just keeping an eye on the VU meter to make sure nothing crazy is happening. I mean, actually, I don't even care about the LUFS at the end, because at this point I've done it so often I know what the result is going to be, and I'm confident that the results work really well.

(13:01):
If I have an artist, or a client, who's particularly kind of concerned about this stuff, then I might do some tests, and sometimes I do tests just out of interest. But yeah, for anybody kind of getting into this and trying to get their head around it, it can be really useful, particularly because of this whole thing about online

(13:24):
streaming services.
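
As a quick illustration of the calibration arithmetic Ian describes (my sketch, not his workflow), the snippet below scales a 1 kHz sine to -18 dBFS RMS, the common mixing calibration where it reads 0 VU, and shows that on his -11 mastering calibration a loud section at around -8 dB RMS reads roughly +3 VU.

```python
# Illustrative arithmetic only, using the calibration figures quoted above.
import numpy as np

def rms_dbfs(x: np.ndarray) -> float:
    """RMS level of a signal in dB relative to full scale."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

rate = 48000
t = np.arange(rate) / rate
# 1 kHz sine scaled so its RMS sits at -18 dBFS (typical mixing calibration).
sine = np.sqrt(2) * 10 ** (-18 / 20) * np.sin(2 * np.pi * 1000 * t)

def vu_reading(level_dbfs: float, calibration_dbfs: float) -> float:
    """0 VU corresponds to a sine whose RMS equals `calibration_dbfs`."""
    return level_dbfs - calibration_dbfs

print(vu_reading(rms_dbfs(sine), -18.0))  # ~0.0 VU on the mixing calibration
print(vu_reading(-8.0, -11.0))            # +3.0 VU: a loud section at -8 dB RMS
                                          # on the -11 mastering calibration
```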

Marc Matthews (13:27):
Most definitely. So in summary there, really, you've got to be confident in your conviction. So at the end of it, you mentioned there that you're not really paying attention to the metering unless specifically the record label or the artist has requested you to do so. So, like you say, you've done it enough times now that you're confident that what has actually come out the other end is what it should be at, which is interesting.

(13:47):
You mentioned the VU meter and setting that again in mastering, and that's something I'm going to take away, because it's not something I'd ever really considered doing. So I'm going to definitely dive into that a bit more as well. In the interest of time, I think it'd be good to move on to loudness normalization, so it kind of segues on nicely to the next part, and sort of what it means for producers and engineers. Because streaming

(14:09):
platforms will normalize music, and obviously with Spotify we can go in and turn that off. I do that myself, so that way I can actually hear the level, the actual, the true, well, I say true representation, obviously it's a truncated audio file, but a representation of the audio itself. So I wonder, could you explain how normalization works and what

(14:29):
producers, or mix engineers, or mastering engineers need to understand to ensure their masters translate well across different platforms?

Ian Shepherd (14:37):
Normalization is basically, users, music fans, complain when there are extreme changes in loudness. Right, there's loads of people complaining about the ads on YouTube, for example. On TV, it's the number one source of complaints, when people can't hear the dialogue or something is super loud. So people don't like when the loudness changes too

(14:57):
dramatically. You know, it's kind of slightly different if you go to the cinema, you want the explosions to pin you to your seat, that's a different experience. So streaming services know that. So they introduce normalization, and the really simple way to say it is that on all the mainstream platforms, the loudest stuff

(15:17):
gets reduced in level. There's a lot of detail and nuance involved in that, they all do slightly different things and it works in slightly different ways, but the bottom line is, and the most common number is minus 14, which is why everybody hears this number, minus 14. Indeed, yes. So that is their distribution loudness. So what they're saying is, if you master something louder than minus 14, we will turn it down so that it is only minus 14.

(15:39):
And they're using the integrated loudness, which is why that number is important. That's what normalization is. I want to kind of pick out that you mentioned that you turn it off on Spotify. I have no problem with that, but it's important that people know that if they do that, they are doing the exact opposite of what most people listening to the music do.
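
A rough sketch of that logic, as an illustration rather than any platform's documented behaviour: normalization applies a static gain based on the master's integrated loudness, and platforms that never turn quiet material up simply clamp that gain at zero.

```python
# Rough illustration of streaming normalization; -14 is the distribution
# loudness mentioned above, and the clamp models platforms that only
# ever turn loud material down.
def normalization_gain_db(integrated_lufs: float,
                          distribution_lufs: float = -14.0,
                          turn_up_quiet_tracks: bool = False) -> float:
    gain = distribution_lufs - integrated_lufs
    if not turn_up_quiet_tracks:
        gain = min(gain, 0.0)   # never boost quiet masters
    return gain

print(normalization_gain_db(-8.0))    # -6.0 dB: a hot master gets turned down
print(normalization_gain_db(-16.0))   #  0.0 dB: a quiet master is left alone
print(normalization_gain_db(-16.0, turn_up_quiet_tracks=True))  # +2.0 dB
```

So on a turn-down-only platform, a master at -16 LUFS ends up playing around 2 dB below the loud material that has been pulled down to -14, which is the point Ian returns to later about YouTube.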

Marc Matthews (16:07):
So the stats say 83% of people on Spotify never
touch the preferences forloudness.
Wow.

Ian Shepherd (16:10):
Okay, so for four out of five listeners, that setting is on by default. They don't know it's there, they don't care. Yeah, this is why I say it's really important to test, right. For me, I master it so that it sounds good, and then, if I'm in any doubt, I will pull it into, well, I've now created an application with MeterPlugs called Loudness Penalty Studio to help me do this, but you can do it yourself in a DAW.

(16:31):
You measure what is the integrated loudness of my song, my master, and then you find your favourite reference track, whatever that is, something similar that you think sounds amazing everywhere else. Put both of them at minus 14 and play them next to each other and see how they sound, because that's how most people will hear it. And I say most people because, like I say, it's on by default and most people don't turn it off in Spotify.

(16:52):
It's on by default in Apple Music now. It's not only on by default, you can't disable it on YouTube, so the loud stuff will always be turned down on YouTube. It's on by default in Tidal. It's not on for SoundCloud and Beatport, those are two platforms, obviously, that are important to

(17:12):
people, and Bandcamp doesn't have it yet, but I wouldn't be surprised if these platforms add it in future. And all three of those, well, I mean, it depends what kind of stuff you're releasing, who the audience is. But yeah, and I mean, one of them, just as a kind of tangent, SoundCloud don't have it, but you look at the top 10 and the loudness is all over the shop. I mean, if you wanted proof that listeners don't care,

(17:35):
you know, I mean, literally I had one where the number one song in this playlist, it was just kind of popular stuff, was 8 dB louder than the thing that came after it, right? So the one that came after, even though it was 8 dB lower, still managed to get to number two in the chart. So loudness can't be that big of a factor in terms of popularity.

(17:55):
This is my opinion. So, yeah, that's what loudness normalization is: the loud stuff gets turned down. And yeah, I think it's really important, because, when you, I asked this in a Facebook group, I said to people, do you turn off loudness normalization or not? And it was, you know, a music producers and engineers Facebook

(18:16):
group. So, three quarters of them said they turn it off, like you, whereas only one in five users on Spotify, or people out there in the world, you know, the fans, the people who we care about. So there's this weird thing going on where the people who care most about the music, which is us and the artists, are

(18:39):
listening to it in a completely different way than the people who actually buy it are.
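
Ian's "compare everything at minus 14" test can be approximated offline. Here is a minimal sketch of that idea (my illustration, again assuming the pyloudnorm and soundfile packages; the file names are placeholders), which renders a master and a reference at the same integrated loudness so they can be auditioned back to back.

```python
# Sketch of the "compare at -14 LUFS" test described above; file names are
# placeholders and pyloudnorm/soundfile are assumed to be installed.
import soundfile as sf
import pyloudnorm as pyln

TARGET = -14.0  # typical distribution loudness

def render_at_target(in_path: str, out_path: str) -> None:
    audio, rate = sf.read(in_path)
    meter = pyln.Meter(rate)
    loudness = meter.integrated_loudness(audio)
    matched = pyln.normalize.loudness(audio, loudness, TARGET)
    sf.write(out_path, matched, rate)
    print(f"{in_path}: {loudness:.1f} LUFS -> rendered at {TARGET} LUFS")

# Listen to these two back to back on the same monitoring.
render_at_target("my_master.wav", "my_master_at_minus14.wav")
render_at_target("reference_track.wav", "reference_at_minus14.wav")
```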

Marc Matthews (18:44):
So there you go. Yeah, everything you've mentioned there, as soon as you mentioned it, it makes perfect sense. And also I'm thinking to myself, because I've said this on the podcast before, about how people consume music. And then you see people walking down the street with a mobile phone, or I've seen Deliveroo delivery riders with a JBL speaker on the back, playing out music.

(19:08):
There are other delivery services available. People are listening to it in different contexts and different mediums, different environments. So you're exactly right with that, it makes perfect sense. And I didn't realise that statistic was so high. Like you say, I guess a lot of people don't know it exists, and, like you say, no one really cares. It also makes me think, because I know you can change

(19:30):
the EQ in Spotify. I've never done that. That's one thing I've never done, and I don't see why you would ever do that in the first place. But I don't know, have you got any statistics on that?

Ian Shepherd (19:47):
Of people actually using the EQ on a streaming platform? I don't know. I'm guessing it's small. I mean, I guess maybe you want to do it to adjust for, you know, kind of a crappy phone speaker or little earbuds or something. But the other thing to say here, you didn't ask me this, but kind of one of the big things about mastering is translation, but I think there's a misconception

(20:07):
about what translation actually means. Lots of people think it means you're trying to make it sound the same everywhere, right, but that's impossible, because, you know, you'll never get a pair of these, you know, old Apple earbuds to sound the same as a car stereo or a set of Beats headphones or a PA system. The goal of translation is about making things sound right

(20:29):
and sound good in comparison to everything else. So if you listen to it on a pair of Apple earbuds, everything is going to sound tinny and rubbish, right. So the goal is to sound good in that context, and the same if you want it to sound good in comparison to everything else on a smartphone or on a speaker or you

(20:51):
know, whatever it might be. So yeah, I just think that's a point kind of worth making, and one of the big factors in that is the EQ, and I think this is one of the things I want to, I mean, your initial question was, do higher LUFS masters always sound louder? I mean, one obvious answer to that is no, because, as I say,

(21:14):
it depends whether it's loud all the way through or whether it's very varied. Right, those will give different loudness readings and make you feel different about the loudness. But the other thing is that when you match the loudness, which is what happens on all these streaming platforms, almost everything else becomes more important. So I did a video of this where, if people want to kind of see

(21:35):
it or hear it for themselves, the EQ, the amount of bass, the amount of mid-range, the amount of distortion, the amount of density, the amount of stereo width, all of these things have a much bigger influence on how loud we think it feels than the actual LUFS number, because the LUFS number has been changed, right. You might have two songs that were eight dB apart to start

(21:55):
with, but once they're both at the same level, then all the other stuff comes into play. And, you know, to give another example, that's why it's important to make this comparison. People also make stuff that's too dynamic, so you have a huge contrast between the verse and the chorus. Sounds amazing when you listen to it in the studio, with the speakers cranked up, yeah, but when you match the loudness and

(22:17):
play it against something that's much more consistent, the risk is that the verse will just kind of disappear, you know, into the background noise. And that's what I call internal dynamics. So it's the balance between different sections of the song, different instrumentation, different songs on the album. Getting all of that right means that things will, when I'm mastering, I'm looking for a centre of gravity, a kind

(22:39):
of line that runs through, that makes everything feel consistent. And when you get all that stuff right, that's when it translates. Because you play it on a speaker that's got tons of bass, and it'll sound super bassy. You play it on something that's tinny and awful, and it'll sound super tinny.

Marc Matthews (22:53):
And getting the loudness optimized is a key bit of that equation. Interesting, you mentioned there about, I think you described it as internal dynamics. You're saying that some music isn't translating, whereby the quieter sections are too quiet compared to the louder sections. In terms of those dynamics, why do you think it is

(23:15):
that those particular masters are leaving the mastering studio? Is there sort of a miscommunication going on somewhere with regards to information that's been disseminated, with regards to translation? I'm interested to know your thoughts on that.

Ian Shepherd (23:32):
It's a good question.
I mean it kind of leads intothe philosophy of mastering,
right?
What is the role of themastering engineer Like for me?
I'm not trying to stamp my ownsound onto anything.
When I'm mastering, I come into, I try and have empathy with
what the artist or the, theproducer, the production team
are trying to achieve.

(23:53):
So I try and listen and think,okay, this is what they were
going for and I'm going to tryand get them closer to that.
Um, sometimes that means beingvery minimalist, sometimes it
means being very hands-on.
Um, and I'm personally willjust do whatever I think is
necessary.
There are mastering engineersthere who feel that it should be
very minimalist and thatthere's a, there's a kind of a

(24:16):
line they won't cross.
You know yeah and so for me justto take an example, one of the,
I think, most important thingsthat I do um, when I'm mastering
a song, I start off with theloudest section, um, and the
kind of the fullest eq and allthe rest of it.
I'll listen to that, I'll getthat, so it's really working for
me.
And then I'll probably go backand listen to a quieter section

(24:37):
and check that, and if the mixwas absolutely spot on, chances
are that section will also soundamazing.
Um, but if you get something inthe situation you're asking
about where there is a bigdifference, you might kind of
think oh okay, so now the verseis a bit quieter, so I'm tempted
to push the level of the wholesong up.
But then when I go back to theloud section, that's going to

(24:58):
hit all my dynamics, processing,the limiting, compression, all
that kind of stuff much harder.
That might also work and soundamazing, or it might be too much
.
So then you've got threechoices you go with setting it
by the loudest section andleaving the verse a bit quiet.
You go with setting it by theverse and having the chorus too
loud and smashing up into alimiter, which happens way more

(25:22):
often than I would like. For me, I don't have a problem, I will put in some automation, level automation.
I choose the crossover pointsand the crossfades really,
really carefully, um, to justrebalance it ever so slightly.
Um, so I might bring the verseup but leave the chorus so that

(25:42):
it just hits all of the dynamicsprocessing really nicely.
And my goal is not to changethe the levels, it's to make it
feel right, um, and now thereare mastering engineers who
would say that's completelyoverstepping the mark because
I'm messing with the mix at thatpoint.
Right, the mix has made achoice.
So the answer to your questioncould be respect for the artists

(26:02):
and their decisions, whichobviously is important.
Whether or not that's misguidedis.
It could be that they're a bitminimalist and they just don't
feel that that's the role of amastering engineer, or it could
be that they I think those areprobably the two main reasons it
could be lack of experience, Imean you know or just a
different perspective, becauseat the end of the day, all of

(26:24):
this comes down to taste andopinion.
You know, something that is typical for me when mastering in general: you can spend ages working on something, and then you loudness match it with the original and do a comparison, and the difference doesn't seem that huge.
And you think, well, I did allof that work.
What happened?
And the answer is, if instead Ihad just cranked the source up

(26:46):
to into a limiter, then when youdo the comparison, they
wouldn't sound remotely closeright.
The EQ balance would be wrong,there'd be distortion, there'd
be pumping, it will be messed up.
You put all of that work in inorder to keep everything that
was great about the mix and makeit translate in the context of
the master and the finalloudness and the.
You know all the other songs onthe album and all the rest of
it.
So, yeah, I think it'scomplicated.

Marc Matthews (27:06):
Yeah, most definitely, and I like what you
said there about it's sort ofyour.
You're catering to the artistthere.
It's the artist's vision andthat's what you're trying to
draw out of it.
But I do have a question.
This is I'm going on a slighttangent here.
I do this quite a lot on thepodcast, but I've been listening
to other podcasts as I do, andother schools of thought, and I

(27:27):
was listening to one the otherday and it was regards to
feedback and I want to get youropinion on this.
So a mix engineer, an artist,sends you a song for mastering
and I think there are twoschools of thought here in terms
of some mastering engineerswill provide that feedback loop
and offer some feedback,potential advice, let's say in

(27:47):
terms of how maybe the mix could, they could change the mix to
be to help with the masteringprocess.
Alternatively, there's a I findthere's another school which is
that's the mix they'vesubmitted and that's the mix I'm
going to master using my toolsand techniques.
What are your thoughts on thetwo sort of very crude camp
descriptions I've created there?

Ian Shepherd (28:08):
I am a bit of both. What I tend to do, I think it's really important as a mastering engineer, like I say, to have empathy and to have respect for all the work that has gone into the music before it reaches us.
So I mean, there are masteringengineers, I know who.
The first thing they do for themajority of their clients is
kick it back and say, no, you'vegot, you've got to do this,

(28:29):
this, this, this and this, andtheir clients love them for it.
Right that they, they lookforward to it.
It's why they use thoseengineers.
They happily make those changesand then they end up with a
master they're happy with.
Um, that's not me, unlessthere's something that's clearly
a technical fault.
Um, so my favorite example isyou know, somebody's flipped the
polarity on one channel of astereo piano.
So which kind of sounds hugeand wide when you listen to it

(28:51):
in stereo and you hit mono andthe piano just disappears.
That, to me, is an issue thatcan't be fixed in the mastering.
They need to correct it.
I'm going to make them aware ofthat.
Or if there's blatantdistortion or I don't know any
kind of technical fault, but ifthe mix is in good enough shape
for me to be happy to master it.
Um, I will do my best um getthe best out of it, assuming

(29:17):
that everything that's happenedis deliberate right, so in that
sense I'm respecting the mix.
Now, sometimes that can be quite hands-on. I might use some quite extreme EQs or a bit of automation or whatever it might be, messing with stereo width sometimes, whatever.
But and then I'll pass thatback to them, say, here you go,
this is this is my take on it ifthere's something that I think

(29:37):
and a very common one isactually it's been.
The mix was super hot to beginwith.
You know, back in the 90s oneof the big things about
mastering was making thingslouder because people didn't
have access to all of the toolsthat were in mastering studios.
So a big thing that you couldbring was increasing the density
maybe a little bit of, you know, saturation, just everything
thicker and bigger and all therest of it.

(29:58):
These days everybody's gotthose tools.
There's a million and oneplugins and for me everything
comes in has had too much ofthat already so often it's about
trying to get back into morespace and punch and impact and
that kind of stuff into, andthere's only so much you can do
at the mastering stage in thatsituation.
So if something I think is abit overcooked, then I'll say I
do wonder whether I could getsomething even better if the mix

(30:23):
had been, you know, just takenback a little bit, if we could
ease off the final compressionand limiting or maximization,
saturation, whatever it is.
If that's an experiment you'dlike me to try, that would be
fantastic.
On the other hand, if you'rereally happy with this as it is,
I'm completely comfortable withthis master.
So I do offer that feedback,but it's not a.

(30:44):
This is wrong, it's a.
I think I could get an evenbetter result here.
Do you want to try it?
And sometimes they say yes andsometimes they say no.
Either of those is fine with me.
What I will say is that, with100% success rate, when they say
yes, we want to try it, I endup with a master that we all
like.
Better right, because I havethat original mix.
I have that original vision inmind.
I know where they're trying togo, so I'm going to maintain

(31:15):
that.
But I have more flexibility towork when I've got more room to
work with in terms of loudnessin particular, and yeah, so we
end up with something thatsounds even better than it would
otherwise have done.
So, as I say, it's kind of amixture of those two approaches.

Marc Matthews (31:25):
Yeah, I like that approach as well.
They're sort of like you'reoffering some insight into we
could try this, but it's open towe could try it or we could not
at the end of the day, which Ithink that's a nice relationship
to have, I think with clientsas well, when it comes to the
artists, record labels andmastering and whatnot.

Ian Shepherd (31:42):
You haven't asked this, but I will say.
The one thing that kind ofcompletely mystifies me is when
mastering engineers won't giveany feedback.
People have come to me and said, oh, I submitted this to xyz,
big name online masteringservice.
They sent this back and I don'tlike it.
So, and I always my first thingI would say was well, have you
asked them to?
Have you told them that?
Have you said, you know, thatwasn't really what I was hoping

(32:03):
for, can we?
And sometimes they say, oh, no,I couldn't possibly do that.
And then other times they sayyeah, and I just got this answer
about no, that's the master,you take it or leave it, and
that to me that makes no senseat all.
For me, mastering is acollaboration, right?
Yeah, I'm working together withthe client.
Um, you need a conversation?
I mean, sometimes it's justhere are the files, okay, here
are the masters.
Yeah, sounds great, thanks,fantastic.

(32:24):
You know, actually, that almost makes me a little bit nervous, because I'm kind of wondering how critically are they listening? You know, I'm almost a little bit more comfortable if they come back and say, yeah, it's great, but can we just tweak X or Y, because then I know that they're not just blindly trusting me. So when there's no conversation at all, I can't get my head around that.

Marc Matthews (32:45):
Yeah, that's very odd, because, I mean, this is obviously, I'm not privy to these conversations, but I wonder how that works in terms of the revision process. Is it a case that they just say there are no revisions? It just seems like a very, very odd cycle to not have that feedback. But, yeah, very, very weird.

(33:05):
What you mentioned there as well about feedback feeds in nicely to the question surrounding AI-assisted mastering being a valid option, which we'll get to in a bit, because that's, I think, another part of where that potentially falls down, in terms of that feedback loop. But I just want to circle back, so I've taken us on this trip slightly out of where we were going originally, which was with

(33:25):
regards to the negative 14 LUFSfor streaming platforms and what
you can share with our audiencewith regards to that in
particular.
Can you explain why and whatproducers, artists, engineers
should consider with regards tonegative 14 LUFS?

Ian Shepherd (33:49):
I think, as I mentioned before, the thing to
do is to test what your musicsounds like when you adjust it
to minus 14, along witheverything else, right?
So how does my song at minus 14sound?
Next to my favourite referenceat minus 14?
Because that's how most of thestreaming services are going to
play it back.
I mean, apple plays it at minus16, you know, but the majority,

(34:10):
and again, in fact, the minus14 isn't important at all.
The point is matched loudness,right, because if you master
louder than minus 14, it will bereduced.
Everything gets reduced tominus 14.
So that's how, like I saysomething like at least 80 of
people are going to hear it.
So that's the importance of it.
Other than that's not importantat all.
Um, it's unfortunate.
There are some guidelines onSpotify's website, which they're

(34:32):
well-intentioned, but I thinkthey're a bit misleading because
they suggest oh, I think whatthey say is if you don't want
the loudness of your music to bechanged, master it at minus 14.
That's accurate, right?
If you submit something atminus 14, it'll get played back
at minus 14.
But for me, aiming for a particular LUFS value doesn't make any sense, right, because of the genre, you know, is it an

(34:54):
acoustic ballad or is it a death metal song? But also, how do I know what the final LUFS is going to be if I haven't even heard the song? So I actually suggest people don't have any kind of target in mind at all.

(35:14):
I realise that's not terribly helpful for anybody trying to do this for themselves, though.
So if you're getting started,my recommendation is to make the
loudest moments around aboutminus 10 short term and no
louder.
And if you make that consistentfrom song to song and then
balance everything else so thatit feels good, um, my experience
is you're going to be in greatshape.
And I mean, I've beenrecommending this now for 15, 20
years and I've had hundreds ofpeople, thousands of people,

(35:34):
take the advice and hundreds ofpeople tell me that it's worked
for them and it's really helpful.
So, yeah, if your goal is to golouder, then you just choose a
louder number, right, butconsistent, short-term loudness
at the loud moments.
Balance everything elsemusically.
And another important part ofthe equation is choose your
mastering monitoring level, thegain that's going to your amps,

(35:59):
and stick with it, right.
So let's say you follow myguidelines, you pull your
favorite reference track in, youplay it back, you look at the
loudest sections, you measure it.
You go oh, that's okay, I'lladjust it down by 2 dB.
That's now at minus 10.
Then adjust the gain on yourmonitoring so that that sounds
really good and loud, but not, you know, not fatiguing, not so

(36:20):
loud that it's making your ears ring, but also really exciting.
You might have to kind of tweakit a bit over the next couple
of days just to find the perfectsetting.
But once you find that perfectsetting where you make the
loudest sections at minus 10 andeverything starts sounding
great, mark it, you know, a bitof Tipp-Ex on the volume dial or

(36:41):
the gain pot, whatever it is,and stick with that always.
So you're then combining themetering with what you're
hearing, and there's lots oftechnical reasons for that.
It's to do with the sensitivityof ears.
That's when, when the gain isin the right place, the
frequency response of ourhearing is the flattest.
Excuse me, but also you juststart to learn over time what

(37:07):
stuff is intended to sound like.
You know because and you you'llget a clue from the waveform
but you just put something onand you go, oh, that's super
loud or it's super quiet.
And, in terms of choosing theloudness, you just adjust the
gain until it sits right interms of the meters and your
monitoring level and you'regoing to be in great shape.
And they're two really simplesteps, but they can absolutely

(37:27):
transform the results you getwhen you're mastering.
And, yeah, my guideline is theloudest bits should be minus 10,
but people can take that with apinch of salt.
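
One rough way to sanity-check that "loudest bits around minus 10 short-term" guideline offline, sketched here as an illustration rather than Ian's method: slide a three-second window across the file and measure each window with pyloudnorm. Strictly speaking this measures integrated loudness per window rather than a true BS.1770 short-term reading, so treat the maximum only as an approximation.

```python
# Approximate "loudest short-term moments" check; assumes pyloudnorm and
# soundfile are installed and "my_master.wav" is a placeholder file name.
import soundfile as sf
import pyloudnorm as pyln

audio, rate = sf.read("my_master.wav")
meter = pyln.Meter(rate)

window = 3 * rate          # three-second window, like a short-term meter
hop = rate                 # advance one second at a time
readings = []
for start in range(0, max(1, len(audio) - window), hop):
    block = audio[start:start + window]
    readings.append(meter.integrated_loudness(block))

loudest = max(readings)
print(f"Loudest ~3 s stretch: {loudest:.1f} LUFS")
if loudest > -10.0:
    print("Hotter than the roughly -10 short-term guideline discussed above.")
```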

Marc Matthews (37:36):
Fantastic advice and what you mentioned there
about the gain and themonitoring level as well.
I think I was rummaging aroundand listening to various stuff
the other day and I heardsomebody mention a level of 80
dB, because you mentioned thereabout it gets to a particular
level where everything sounds atits most even.
Does that ring true with you atall?

(37:57):
I think it was 80.
It might have been 85.

Ian Shepherd (38:00):
Honestly, it's very personal and when you're
talking about stuff that's loud,the difference between 80 and
85 is a lot.
So, yeah, I think that's theright ballpark.
I forget I've got an app on myphone.
I mean, obviously the mic onthe phone isn't the most
accurate anyway, but just tokind of get a ballpark.
You can play some pink noiseand get it playing back, and I

(38:22):
think mine is more like 75 or 76on average, but the loudest
bits get up close to 80 probably.
I might have those numberswrong, but so, yeah, I I think
actually our ears are a betterguide.
You know it's pick somethingthat feels right.
Try working with it for a fewhours.
If you're getting to the end ofit and you're starting to get
tinnitus you know your ears areringing a bit just tweak it down

(38:44):
ever so slightly.
You know, basically, when it'stoo loud it'll be uncomfortable.
When it's quiet, you'll justconstantly want to push things
higher and higher.
So if you're constantly, thenthat's a clue to tweak it up a
bit and it just, you know, giveyourself a few days to figure
out where it is, and I thinkmost people will get there.

Marc Matthews (39:02):
Yeah, while you were talking there about using your ears, in my head I was sort of summarising the questions and the line of topics so far, and what it's made me realise is that, I mean, I like to think I don't do this in general, but I think at the moment, I don't know, maybe it's an indictment on me and maybe on music at the moment, but targets,

(39:28):
it's a topic of conversation, and I cannot help but keep falling into that conversation in terms of targets, and I don't know if it's because of the content I consume. Well, it probably is, to be fair, but yeah, it just seems to be targets. We're talking about the negative 14 LUFS, and then I mentioned the 80 dB, and I'm thinking, why do I keep having

(39:52):
these targets in my head?
Maybe it's because I see themso much.

Ian Shepherd (39:55):
Well, yeah, people use it.
So I'm a member of the AES, the Audio Engineering Society, and I got involved in, so we drafted some guidelines for streaming services, not for music producers, but for streaming services, about playback levels. We actually recommended minus 16 instead of minus 14 to them.
Um, but this is all about thedistribution loudness, right,

(40:18):
it's not about.
And the interesting thing was Iwas involved in the original
and then I came in towards theend of the, the update, um, and
I just dropped a huge spanner inthe works because I said can we
please not use the word target?
All the way through it saidtarget Targets for the streaming
services, right, for thedistributors.
It's a valid kind of use of theword in that sense, if you want

(40:40):
to not upset people with superloudness, have a target for the
loudest stuff.
But the problem is musicproducers read that stuff and
think it applies to them, right.
And we also added a paragraphat the beginning in fact that
was already there saying theseare guidelines for streaming
services, not for contentcreators, not for musicians and
mastering engineers and all therest of it.
But also I think it wassomething like 40 uses of the

(41:02):
word target, and that's why wecame up with the phrase
distribution loudness.
So instead of target, it saysdistribution loudness to try and
make it clear that's the finalvolume that's going to be played
at.
You can make it however loud orquiet you want going in.
I recommend you've got to testit to make sure you're happy
with the results, right, becauseif you have stuff, something we
haven't mentioned yet is thatsome of the streaming platforms
don't turn quiet songs up.

(41:22):
YouTube is a big example. And, by the way, in terms of statistics, I was saying about that 80% thing, Spotify is less than, well, YouTube is four-fifths of the online music listening market. Right, four-fifths of the users are listening to music on YouTube video, not YouTube Music, YouTube video, and that's where everything is normalized.

(41:43):
So, actually, that whole thing about Spotify, whether people turn it on or off or not, that's already a tiny little slice of the pie.
But, um, yeah, the distributionloudness is for the streaming
services.
So it's interesting and it'simportant to test because, yeah,
let's say, you master something, you want it to be super loud
and you master it at minus 16.
Youtube is not going to turnthat up.

(42:07):
Um, they will turn stuff that'slouder down to minus 14, but
because if they tried to turn itup it would cause clipping or
they'd have to use limiting.
So they don't do that, so theyleave anything that's lower,
they leave low.
So that means that your loudsong won't sound loud as you
intend, because all the loudstuff is at minus 14, right?
So if you want your music tosound loud, you do need to be at

(42:28):
least at minus 14 or above.
I guess that's another reasonto pay attention to that number.
Right, it's like if you want tobe loud it's got to be at least
minus 14.
But the reality is most peopledo their stuff louder than that
anyway, so it's probably fine.
Um, yeah, sorry, that was alittle tangent of mine.

Marc Matthews (42:42):
no, no, no, thank you, yeah, it just like.
Like it's that word, isn't ittarget?
And and I fall into that traptoo many times Not that I work
towards targets, but it just sohappens it comes up in
conversation and it's justmoving away from it.

Ian Shepherd (42:57):
I've got to take some responsibility, right,
because I've been talking aboutthis stuff for decades now, and
when LUFS first came out, I waslike, oh great, this is really
going to help people, right, because it's way better than RMS, it's not so sensitive to bass, it's, you know, closer to what we hear, and we can give these guidelines and actually there are rules, so that's going to help

(43:17):
people to understand it. The downside of that, and talking about numbers all the time, is that people do start to obsess about them. The irony of it is that when I actually work, I don't pay any attention to the numbers until right towards the end of the process, you know, when it's like, okay, I'd better do a check, much like you do your, I don't know, your car test, right.
For me as a mastering engineer,it's like, okay, let's just

(43:37):
double check.
How does it sound at minus 14?
Yeah, that's good.
Okay, move on.
Um, so yeah, I absolutely agreewe shouldn't have any targets.
It's good to have guidelines,it's good to understand how this
stuff affects what we do, butthere's no need to aim for it.

Marc Matthews (43:52):
Yeah, 100%.
I totally agree.
Wise words, I think, in theinterest of time here, and I
think it's important that wetouch on AI-assisted mastering,
which is an interesting topic ofconversation.
It's one I've had on thepodcast a few times and I find
that there are generalcommonalities in terms of the
discussions surrounding this.
So, in your experience withAI-assisted mastering, it is

(44:14):
becoming more popular.
We know this.
Where does it fall short?
And do you see I'm fairlycertain I'm going to know your
answer on this do you see itbecoming a serious alternative
to human mastering engineers?

Ian Shepherd (44:32):
I think the short answer is it falls short on being able to understand emotion and intent. You know, it has no empathy. I've used the word empathy so many times in terms of mastering and understanding what the client is going for. Currently, the machines don't have that, and I'm personally sceptical about whether they will ever have that. You know, they can do a convincing impression in terms of text, but

(44:54):
you know, because there are loud songs that are meant to sound sad, right, and there are quiet songs that are meant to sound angry. So, I think, you know, mastering for me is all about people. I guess that's my big thing about AI mastering, is it's not mastering, right, because mastering is a human being, a mastering engineer, having a conversation, you know, building

(45:16):
a rapport, collaborating with another person to get the best possible results out of the music. And so, you know, their AI optimization, or, you know, it's like the magic wand tool in Photoshop, yeah, it does the best it can. But I mean, I had a photograph I took in a studio the other day. I clicked the magic wand button and it turned me green,
(45:40):
but normally it's a slightimprovement, yeah, fine, but in
that case just got it completelywrong, did not understand what
it was, something about thecolor balance, or because it had
colored lights in studio, youknow, and the with the AI tools,
I think the other thing kind ofeven beyond that, is that they
don't have any understanding,currently at least, and again
I'm skeptical about whether theyever will of context.
So I actually did a test forthis.

(46:02):
I did a talk at a universityrecently where they were asking
specifically about AI and I havea project where the client sent
me the ai version, ai masteredversion and mastered in air
quotes that they already had andthey said this sounds all right
to me, but I'm pretty sure youcan do better.
Um, and I ended up running itthrough.
I put it in logic, I ran itthrough ozone.

(46:24):
They'd already done it throughone of the online services, I
can't remember um, so I had fouror five different versions and
the interesting thing about itwas when you being me, first
thing I did was match theloudness right, because I knew
some of them were going to belouder than others.
Some of them were going to bequieter and that would be
deceptive, because turn up theloudness, you hear more bass and
treble, you think it soundsbetter, but it's just louder.
So the first thing I did wasmatch the loudness and when you

(46:54):
compare them that way, justgoing from master to master,
there were differences betweenthem but they were kind of okay.
You know they weren't.
I didn't think they werefantastic, but they weren't
disastrous for the most part.
Then I played from song one tosong two, because it was an
album.
And, artistically speaking, creatively speaking, the second song, from most of the AI tools, was about 3 dB too quiet, because it was intended to be a big, loud song. It was following a more gentle song, but the AI had just gone, oh

(47:16):
it needs to be this, right, without any understanding of what had gone before or after, any context, any of that stuff.
So that's not to say they won'tfigure that stuff out in future
, but I think it's a much biggerchallenge.
I mean because at the end ofthe day, a lot of what we do in
mastering does feel like itcould be automatable.
You know you turn it up to acertain kind of level.

(47:37):
You broadly match the balance,the eq over the frequency range.
You know you take a listen tothe stereo image, all that kind
of stuff.
But when you get into the howhard should this song hit after
the previous one, you know whatwas the emotional intent, why
did they make?
You know, and also the, theservices don't.
I was going to say what?

(47:58):
Should it be this distorted? But AI doesn't care about distortion or clicks or thumps, or, you know, pianos that have got the polarity flipped, or, you know, verse and chorus balances or any of that stuff. They just, you know, they go through. So all of that sounds quite negative. I will say, you know, the great thing is they're really accessible, they're really affordable.
Um, and people are using themin very creative ways.

(48:21):
So there are people submittinga song, listening to the master
and going that's not quite right, tweaking the mix, resubmitting
it, listening to the new masterno, it's still not quite right.
And they do like multiplepasses through and each one is,
you know, if they've got asubscription, doesn't cost them
any extra money, happens reallyfast.
I think all that stuff isreally cool and you know if.

(48:42):
If people want to use it ofcourse they absolutely should,
and if it helps them get resultsthat they're happy with, then
that's fantastic.
But one reservation I have aboutthat, which is a kind of aspect
of mastering we haven't touchedon yet, is one of the big
challenges in mastering isworking on your own stuff.
For me, as a mastering engineer, somebody sends me something.

(49:03):
I listen to it and go, oh yeah, I think that's what you're trying to achieve, here's how I think you could improve it. I'm a different person. I'm listening on different monitoring that is set up as best I can possibly get it, in a room that I've worked in for decades. That gives me a really important perspective, a

(49:24):
separate perspective from the person working on it. If you've been working on a song for six months, you've got this story in your head about why the kick drum is a bit boomy and why the vocal doesn't quite sit right there and all the rest of it, and it's very hard to disentangle yourself from all of that, take a step back and say, okay, here's what should happen. Years ago I did a thing with, I don't know whether you know, Joe

(49:44):
Gilder and Graham Cochrane. They had a site called Dueling Mixes, where every month they would both mix the same song and talk about the differences. And I did a little thing with them called Dueling Masters, where they both did their mix of the same song and I then mastered them, and I think it was Graham's that had a big ring in the kick drum. So within the first 10 seconds of listening I was like, oh,

(50:05):
that's... actually, my thought process was, is that going to translate? If you play that on little speakers, or something that's got a bit of a bass resonance, there's a good chance that's going to crack up. You know, it's just over the top. So I went straight in with a notch and just ducked it out, and Graham was actually shocked. You know, he was like, I really like that, that's part of the character of the sound and all the rest of it. Um, if he was my client for real, we would then

(50:30):
have a conversation, and I'm not saying I would force that on him. I'd just say, well, here are my reasons, here's my opinion, here's what I think I should do. But the point here is that it would never have occurred to him to do that, and it immediately occurred to me. So that's an advantage I have, just being a different person and having a different set of experience.
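As a rough illustration of the kind of corrective notch described here, a minimal Python sketch might look like the following. It assumes the soundfile and scipy packages; the 55 Hz centre frequency, the Q value and the file names are hypothetical placeholders, not the settings actually used on that mix.

```python
# Minimal sketch of ducking a resonant "ring" with a narrow notch EQ,
# the kind of corrective move described above. The centre frequency,
# Q value and file names are assumptions for illustration only.
import soundfile as sf
from scipy import signal

audio, rate = sf.read("mix.wav")   # placeholder file, mono or stereo

ring_hz = 55.0   # assumed frequency of the ringing resonance
q = 8.0          # narrow notch so the rest of the low end is untouched

b, a = signal.iirnotch(ring_hz, q, fs=rate)
notched = signal.filtfilt(b, a, audio, axis=0)   # zero-phase filtering

sf.write("mix_notched.wav", notched, rate)
```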

(50:51):
If you're using AI tools to try and get great results, it's still you in your room, with all of its great qualities and... maybe, you know, nobody has a perfect room. Um, if you're not hearing quite enough bass, you're not going to judge things in quite the right way relative to everything else out there, and it's going to make it hard to get things to translate. So even when you think the AI has got a perfect result, it's

(51:12):
still your opinion in your room, which, if you're lucky, the room is great and your opinion is absolutely spot on and it's all good, but lots of people are unlucky, you know, and that's another challenge. So, yeah, short version...

Marc Matthews (51:28):
They're great, useful tools, but they're not there yet, and I'm not sure they ever will be. I would have to agree, and I think what you highlighted there about the feedback loop is very important. If you're not going to get that... I mean, like you say, if you've got that unlimited subscription you can keep submitting masters to it and then refining your mix and submitting again, but ultimately you're not going to

(51:50):
have that human interaction, that rapport, that relationship with the mastering engineer that you highlighted there. And I also think, as well, you mentioned there about Graham Cochrane and the ring in the... was it the bass drum? I think it was the bass drum. In terms of, he liked that and that's what he wanted, and that's a human decision,

(52:10):
and that's not something you're going to get. Like, if you're an artist and you wanted to keep that in, you wanted that ring, or you wanted that extra bass, the AI is not going to acknowledge that. I mean, at the moment; it may do with prompts further down the line, but I don't think it will. I mean, I might get caught out on this in five years' time and

(52:31):
say you were totally wrong, but I don't think it will. I just don't think so, and that's why, ultimately, it's not going to replace mastering engineers, is it?

Ian Shepherd (52:41):
You can't even... I mean, it's the conversation, right? So for me, I mean, that was my instinct. Graham was shocked by it, he told me he was shocked, and I don't remember, I probably eased back on it, you know. Well, okay... and that's one of the... I mean, again, it's a kind of general thing about mastering: how far do you go?

(53:01):
You know, that's what the mix was; here's what I hear in my head. And so one of the plugins I developed, um, is called Perception AB, and it's basically just automatic loudness matching and before-and-after A/B comparisons. Because one of the big things in mastering is you do all this stuff and you almost always make it louder. So when you're comparing with the original mix, it's really important to balance the loudness, so you don't just think it's better because it's louder. That's actually a really difficult skill.

(53:21):
So Perception does that automatically for you. You just click a button and it's done. Quite often when I do that, you know, especially on the first track on an album, I'm like, oh, this is fantastic, I'm going to do this, I'm going to do all this, oh, that's amazing. And then I put Perception on, listen to it loudness-matched, and I think, okay, I've overdone it, you know, I've got carried away. It might have sounded great to me in the moment, but actually

(53:44):
that's not what they were going for. Right, so it's about finding a balance where I keep all the good stuff, where hopefully I get all the improvement and the benefits, without pushing things too far.
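As a rough illustration of the loudness-matched before/after comparison being described (not the Perception plugin itself), a minimal sketch using the pyloudnorm and soundfile Python packages might look like this; the file names are placeholders, and both files are assumed to share a sample rate:

```python
# Rough sketch of a loudness-matched before/after comparison.
# "mix.wav" and "master.wav" are placeholder file names.
import soundfile as sf
import pyloudnorm as pyln

mix, rate = sf.read("mix.wav")      # the original mix
master, _ = sf.read("master.wav")   # the louder master (same rate assumed)

meter = pyln.Meter(rate)            # ITU-R BS.1770 loudness meter
mix_lufs = meter.integrated_loudness(mix)
master_lufs = meter.integrated_loudness(master)

# Turn the master down (or up) so both sit at the same integrated
# loudness; any preference is then about tone and dynamics, not level.
gain_db = mix_lufs - master_lufs
master_matched = master * (10.0 ** (gain_db / 20.0))

print(f"mix {mix_lufs:.1f} LUFS, master {master_lufs:.1f} LUFS, "
      f"gain applied {gain_db:+.1f} dB")
sf.write("master_matched.wav", master_matched, rate)
```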
Um, and yeah, in terms of... if that thing with Graham had been a real project, you know, if the client said to me, well, where did the ring in the kick drum go? I'd explain why, and we'd have a conversation and we'd decide,

(54:06):
and maybe I'd put it all back in, um, or maybe we'd split the difference, or maybe they'd then decide to go with what I was suggesting. That's a conversation you can't have with a machine.
You often don't even have that much control over it. I mean, I have a bit more time for things like Ozone, where it's not kind of done and dusted. It gives you a bunch of suggestions and you can preview,

(54:27):
because it gives you a bunch of different processes. You say, well, I like what that's doing, I don't like that. But then again, one of the big things about mastering engineers is you're going to somebody for an opinion. You know, if you think your mix is already perfect, you don't need a mastering engineer. You should just, you know, choose the level, find a limiter setting that works and release it.
When people go for mastering, come to me for mastering, it's

(54:49):
because they value my opinion, they think there's something helpful I can bring to the table. Um... I mean, there's another thing about AI: is there an AI mastering algorithm at the moment where you put the same song in twice and it listens and goes, yeah, that's perfect, I've already done that? I don't think there is. I think every time you put it through, it'll squash it a little bit more, you know, a bit
(55:13):
more and and so so actually isthere, even because one of the
benefits is to get a bit of anopinion right.
Rather than going to a person,you give it to the algorithm and
you say what do you think thisshould sound like?
And you listen to it.
If it sounds good to, you gowith it, but if the algorithm is
going to keep doing stuff overand over again, I mean the
algorithm doesn't have anopinion right.
It has a set of rules that it'sapplying.
Yeah, so I don't see it being aproblem for mastering engineers

(55:40):
in future, and I think it's beneficial for people who can't afford a mastering engineer or don't want to go through that process. And if it encourages loads of people to improve their monitoring and start referencing things more and loudness-matching things more, that's all going to be good.

Marc Matthews (55:55):
Yeah, most definitely. I would agree, I would totally agree.

Ian Shepherd (55:58):
But yeah, I think you need to have your eyes open when you're going into it. It's not a situation of, like, oh, this is just a perfect replacement for a mastering engineer and I've saved myself a bunch of money. I think that would be misguided.

Marc Matthews (56:12):
Ian, I realise the time here, and I think it's important, because I mentioned this right at the beginning, that we quickly, and I think we'll have to do it quickly, touch on the new stable volume feature in YouTube. So I appreciate we've moved on now from the AI mastering conversation, but I think it'd be an important one to quickly talk about, because I didn't know it was a thing, and I'd hazard a

(56:34):
guess and say a large number of the audience are probably in the same boat. So maybe you could just quickly describe what it is and what we should be aware of?

Ian Shepherd (56:36):
I think the first thing I should say, the good news, is it doesn't apply to music, or it's not intended to apply to music, so it's not a major concern for you in terms of releasing and uploading your stuff to YouTube, right? The platform seems to be pretty good at going, oh, this is music,

(56:57):
I'm going to turn this feature off, so that's good. Um, unfortunately, it's not perfect, so there's stuff that definitely is music, and quite often, sadly, it's classical music, where this is probably the worst thing that could happen. So you need to keep your wits about you: if you're uploading stuff and it sounds really odd, it's worth checking this. Um, the idea of it is...

(57:19):
It's a good idea. It's to help people with material on YouTube that has a very wide dynamic range when they're listening in noisy environments especially. So if you're trying to watch a feature film, the loud sections are really loud and the dialogue is much quieter, and if you're on a bus, or in a, you know, a tube, a subway, it's

(57:45):
possible that you will constantly have to keep adjusting the volume control to hear the quiet stuff. Um, so YouTube have added this feature called stable volume, and it basically lifts the quiet moments and turns down the loudest moments. So it is dynamic range control; it is a compressor, basically. It's not a very sophisticated compressor, it's not a great-sounding compressor, but it does the job, and actually, like I say, I think it's a good

(58:08):
idea for... particularly, feature films could be an example, but also kind of more DIY content that's up on YouTube, where maybe, you know, the person doing it is not a sound person, they're focused on the video. So for super loud moments, or bits where it's hard to hear the dialogue and the audio quality is not that great, it's beneficial. So that's why they're doing it.
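To illustrate the general idea being described, here is a toy sketch of block-by-block gain riding that lifts quiet passages and pulls down loud ones. This is not YouTube's actual algorithm; the target level, strength and block size are made-up values, and a real implementation would smooth the gain changes rather than stepping them per block.

```python
# Toy illustration of "stable volume"-style dynamic range control:
# measure each short block and nudge its level toward a target,
# lifting quiet moments and turning down the loudest ones.
# NOT YouTube's actual algorithm; all parameters are assumptions.
import numpy as np

def stable_volume(audio, rate, target_db=-20.0, strength=0.5, block_s=0.4):
    out = audio.astype(float).copy()
    block = int(rate * block_s)
    for start in range(0, len(out), block):
        chunk = out[start:start + block]
        rms = np.sqrt(np.mean(chunk ** 2)) + 1e-12          # avoid log(0)
        gain_db = strength * (target_db - 20.0 * np.log10(rms))
        out[start:start + block] = chunk * (10.0 ** (gain_db / 20.0))
    return out

# Example: a quiet tone followed by a loud one ends up far more even.
rate = 48000
t = np.arange(rate) / rate
quiet = 0.05 * np.sin(2 * np.pi * 220 * t)
loud = 0.8 * np.sin(2 * np.pi * 220 * t)
levelled = stable_volume(np.concatenate([quiet, loud]), rate)
```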

(58:30):
The problem that I want people to be aware of is there's a good chance it's being applied to videos that I've made, videos that people like you have made, anybody whose audio is important but isn't music, right? So, and I have done this, there's a video of mine, the one I mentioned, maybe we could put a link in the show notes, with the

(58:51):
comparisons of loudness, so people can hear what I'm saying about the LUFS values. I made that video, and for weeks or months I didn't realise that people weren't hearing it the way I intended, because I was saying, listen to the differences here with these two songs, but the loudness was being changed by YouTube, by this stable volume feature, as people were

(59:12):
listening. So if anybody here is watching a music production video or an audio production video on YouTube and thinking, well, this is really interesting, but I'm not 100% sure I can hear it, you need to jump into the settings and see if this feature is enabled, because they've just turned it on by default for anything that's not music. And that's why I'm making people aware of it, because, honestly, it ruins a good number of my videos,

(59:34):
because I'm talking about quite subtle differences, and, you know, as we've been saying all the way through, the loudness has a huge influence on the way people hear things. So if it's changing, and if it's adding extra compression, anybody listening is going to know you're not hearing things accurately, basically. So, yeah, it's definitely something

(59:54):
to watch out for. And, as I say, hopefully, if you're a musician, it's not going to be affecting the stuff that you're uploading, but it is worth watching out for. Um, I'm hoping they'll change it. I'm hoping they'll give people, you know, somebody like me with a YouTube channel, the option to just say, no, I don't want this for people watching my channel.

Marc Matthews (01:00:10):
That would make sense.

Ian Shepherd (01:00:11):
Yeah, that's not how it works currently, unfortunately. So yeah, hopefully that answers the question.

Marc Matthews (01:00:18):
Yeah, it does. And now that you've described it again, I can remember why it stuck in my head: it was for that reason of the tutorial videos not being consumed the way they were intended to be, because this setting is on. And you'd like to think that they would change it so that, as a content creator, you're in control, and then if people want

(01:00:38):
to enable it, then that's their choice, but by default your content is uploaded the way you intended it to be, which would make sense.

Ian Shepherd (01:00:46):
Um, yeah, and there's a tiny little important detail there: it doesn't actually change it, it's the playback that changes it, right? So when I upload stuff, it goes up and it's on the server as I intend. Yeah, this is an in-browser thing, so if people want to disable it, there's a video on my channel, on my page, my website, where people can see it. It demonstrates it.

(01:01:06):
But basically, if you're in a desktop browser, it's the cog icon at the bottom of the video window: just click that and you'll see an option to turn it off. If it's on mobile, you go to settings, and then it's, I think, in additional or extra features or something, so it's a bit more buried there. Um, and the good news is that once you've turned it off, it's off. So once people have done this, they don't have to worry

(01:01:29):
about it going forwards, unless it accidentally gets switched back on somewhere. Um, but yeah, the big thing is that most people don't know about it.

Marc Matthews (01:01:36):
That was going to be my question. I was going to ask you whether, once you disable it, it's not permanently disabled but stays off until you re-enable it. But that's good to know, that that is the case. So, audience listening, be aware of this when you are watching tutorial videos or something along those lines.

Ian Shepherd (01:01:56):
Yeah, absolutely. And just again, a tiny little detail: it's off in that particular browser, right? So if you're using Chrome and you turn it off in Chrome, it'll be off in Chrome, but it will still be on on your phone. It doesn't go with your YouTube account or your login details or any of that stuff. So wherever you want to listen, if you're bothered about it, you need to turn it off, on, you know, a different computer,

(01:02:16):
or if you use Safari sometimes and Chrome other times, or, you know, Opera, whatever it is, they all have that. That setting is individual to those playback systems.

Marc Matthews (01:02:27):
That's interesting. I mean, maybe it's because it doesn't fall within my algorithm, but I would have thought that there would have been a push on that from YouTube, in terms of letting people know it exists, or Google, rather, Google disseminating that information. It's interesting. Maybe I just wasn't part of that party, I don't know.

Ian Shepherd (01:02:46):
Well, there's an information page, yeah. But I mean, this has been YouTube's way, Google's way: they tend to introduce new features without telling people and watch what happens. Well, no, but it's a tactic, right? It's a blind test, because they then look at the stats, because the concern is people will stop listening to videos because of extreme changes in loudness.

(01:03:07):
They want to maximise views, which makes sense, so they want to avoid that. So they introduce this feature and they'll just run it and watch the people who have this thing enabled: well, do they watch more of the videos? Are they less likely to stop watching? If they do, then it's a good thing, and then they'll start rolling it out for everybody else. So there's a couple of factors there. Right, one is, it's not like it wasn't there and

(01:03:28):
suddenly it is. It's like they'll introduce it for a small number of people, test it, and gradually spread it out. The other thing to say is they do have an information page. If you Google for stable volume, you will find a page where they tell you what it is and all the stuff that I've been telling you; it's not like it's a secret. But, as you say, I mean, another option that I would like, perhaps, that they might do... if they, you know, they don't

(01:03:48):
necessarily want to give channel creators the chance to turn it off, because loudness can be a tactic to try and get certain results, jump scares, or making your stuff stick out more than others. So I guess that's why they don't necessarily want to give that option to us straight away. Another thing, maybe, would be if I could tick a box so that a little thing comes up telling the viewer whether stable volume is or isn't

(01:04:10):
enabled, and would you like to disable it? Because people are saying to me, oh, put a message at the beginning of all your videos. I could do that, but, a, I can't go back and do it on the stuff that's been up for years, and, b, if they change it in future I'm not going to want it there, because, you know, everybody says the first 10 seconds of a video is the most important. If the first thing everybody sees is a stable

(01:04:30):
volume lecture from me, they're not going to watch that video.

Marc Matthews (01:04:32):
Yeah, yeah, I would agree. In a way, it makes perfect sense, what you said there, and the way you were describing it, in my head I'm like, yeah, basically, even if they were to choose a small percentage of the YouTube consumer market, that is still a massive focus group for them to use for these particular implementations of

(01:04:53):
their tech.

Ian Shepherd (01:04:54):
So it does make sense, um, it's logical, but it's not necessarily... you know, it's a kind of Mr Spock type issue, I think. Yeah, it might be logical, but does that translate to everybody? No, there are specific uses. I mean, ASMR videos are another one. The ASMR community is up in arms, right, because their

(01:05:14):
whole thing is, you know, tiny little changes and little sounds and the contrast between... you know, all that kind of stuff. It completely messes that up. And classical music as well; it seems like most of the videos I've seen where this has been incorrectly applied, for some reason, are classical. Maybe it's just that they're the ones where it's most noticeable, but those are

(01:05:35):
the ones where dynamics are the most important, you know, where it's all about the contrast between the loud and the quiet sections and stuff. So to have those messed with is maybe even more annoying than with kind of pop and rock music. Hopefully they will improve it in time. I can say that, you know, they are aware of the issue, it's on their radar, but obviously, you know, there are

(01:06:01):
bigger factors at play, and there are priorities within the organisation for where they're going to invest their time. We can only hope that they will listen to us eventually.

Marc Matthews (01:06:10):
Very, very interesting, though. I hadn't considered the ASMR community in that, and I can see why that would put their noses out of joint, for want of a better way of putting it. But, yeah, Ian, it's been fantastic talking with you today. I realise we've gone over and I've taken quite a lot of your time here, so it's been a pleasure having you on the

(01:06:31):
podcast and talking everything mastering and everything in between, and I'm sure there are other bits and pieces we could have spoken about as well and gone on for a bit longer. But again, I just want to say a big thanks for joining me on the podcast today.

Ian Shepherd (01:06:42):
No, my pleasure, I've enjoyed it.
It's good talking to you.

Marc Matthews (01:06:47):
Yeah, indeed, indeed. From this episode, at the very least, go on YouTube and check out that feature in particular. And if you ever hear me mention the word target again, feel free to bombard me with whatever it may be that you're going to bombard me with to stop me doing it, because I need to get that word out of my lexicon. Ian, it's been a pleasure, and I'll leave you to enjoy the rest

(01:07:09):
of your day.
Cheers, buddy.