
November 18, 2024 39 mins

We look at the history of auto-tune, how it works, and how it impacted music and culture in general.

 



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to Tech Stuff, a production from iHeartRadio. Hey there,
and welcome to Tech Stuff. I'm your host, Jonathan Strickland.
I'm an executive producer with iHeart Podcasts and how the
tech are you. You might be able to tell from
my voice that I have a cold, so I apologize

(00:24):
for that. But we're going to soldier on because I'm
back from vacation. It's time to get back to work,
and I love to talk about the intersection of technology
and music. So in past episodes, I've done shows about
how electric guitars work, the history of the Moog or
Mogue synthesizer, the evolution of various kinds of recordable media,

(00:47):
and much much more. But way back, like back in
two thousand and nine, my co host at the time,
Chris Pollette, and I did a little episode of Tech Stuff about Auto-Tune, and I thought it would be
fun to go back and revisit that topic. So this
is not a rerun. It's an all new episode about
the same subject. I haven't even listened to the old episode,

(01:09):
so I have no idea how much of what I
have to say is going to be a repeat. I
imagine a lot of it will be new, but I
don't know for sure. I figure there will be fewer
puns in this version compared to the last one, because,
contrary to popular belief, it was actually Chris Pollette who made the most puns on Tech Stuff back in the day.

(01:29):
I got a reputation for it, and don't get me wrong, I won't shy away from a good pun, and by that I mean a terrible pun. I love them, but Chris, like, he loved them the way I love, you know, rich Indian food. He dined upon puns. So probably not as many in this one. But let's talk

(01:50):
about auto tune now. I think just about everyone knows,
or I think a lot of people know that Cher's song 'Believe,' which came out in nineteen ninety eight, was the first major song to prominently use Auto-Tune in an
effort to achieve a particular artistic effect, but the technology
had been around for more than a year at that point,

(02:13):
and the original intention wasn't to make a tool that would actually draw attention to itself. Rather, as the name Auto-Tune suggests, it was intended to automatically nudge the pitch
of a musical note in the right direction so that
it would be in tune. That way, the occasional wrong
note could be subtly pushed into place, and it wouldn't

(02:35):
require you to do another take and then try to
splice together a great master recording. But even all of
that is getting way ahead of ourselves. To understand the
history of autotune, we must first learn about reflection seismology
as well as the oil industry. And I am being serious,
I'm not making a joke about this. As it turns out,

(02:57):
reflection seismology has a lot to do with our story
because the man who would go on to found the
company that would create Auto-Tune was Dr. Andy Hildebrand, who had made a career in using sound and complex mathematical calculations to help oil companies, namely Exxon, locate oil

(03:17):
deposits underground. So reflection seismology is in some ways similar
to sonar. So with a sonar system, you would beam
out pulses of sound waves. Typically we talk about this
in water, right, Like using sonar on a boat or
on a submarine, that kind of thing. You would pulse
out these sound waves, and those sound waves travel outward from

(03:41):
the source, from the speaker, essentially the transmitter. And if
there's something solid in the way of those sound waves, well,
the sound waves that hit that solid object, they're going
to reflect back toward the source. They'll become an echo.
This is what we get when we hear an echo.
If you are ever in a place where you make
a loud noise, then you hear the echo. It's because

(04:01):
the sound waves have traveled out from you, bounced off
something and came back to you. Well, if you measure the amount of time it took for a
sound to leave you and then reflect off something else
and come back to you, you can figure out how
far away you are from that thing, right, because sound
is going to travel at a specific speed away from you

(04:23):
and then hit the thing and then travel back to you.
So if you know how long it took, you can
do some very simple math and figure out how far
away that object is. So, for example, if you're on
a ship and use sonar to measure the distance between
you and the sea floor, you do a little math, right? You have to divide by two because it took a
certain amount of time to travel down then back up,

(04:46):
And you have to know how fast sound travels through water.
You have to have all these important bits of information
in your mind when you do this, but then you
can suss that out. You can say how deep the
ocean floor is from the surface. That saves you the
trouble of having to do it the really old fashioned way,
which typically involved lowering a weight on the end of
a knotted line, a knotted rope, and then you use

(05:10):
the knots to keep count of how deep in the
ocean you were. That's a sounding line. That's the other way to see how far down the ocean floor is.
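To put a number on that, here's a minimal sketch of the echo-sounding arithmetic; the 1,500 meters-per-second figure is a typical nominal speed of sound in seawater, and the function name is my own:

```python
# Echo sounding: depth from the round-trip time of a sound pulse.
SPEED_OF_SOUND_SEAWATER = 1500.0  # meters per second, a typical nominal value

def depth_from_echo(round_trip_seconds: float) -> float:
    """The pulse travels down AND back, so halve the round trip."""
    return SPEED_OF_SOUND_SEAWATER * round_trip_seconds / 2

print(depth_from_echo(4.0))  # a 4-second echo -> 3000.0 meters of water
```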
But sonar made it way simpler, especially once you were
able to build that math into the sonar workstations. Reflection
seismology does something similar, but with seismic waves, and those

(05:32):
are waves that pass through the earth, and we typically
talk about seismic waves in connection with earthquakes or like
volcanic eruptions, that kind of thing. And in fact, earthquakes were what inspired some smarty-pants to say, hey, if
we made something that could you know, create a huge
vibration through the earth, and something else that could detect

(05:53):
those vibrations, and we were able to calculate how long
it took for the instrumentation to pick up on the
echoes of that initial vibration event, we might be able
to figure out stuff that's actually underground. We could figure
out what is underground without having to dig it up

(06:13):
and see. Now, that's because the seismic waves will travel
at different speeds depending upon the density of the material
that they travel through. You've probably heard things like, you know,
sound travels at a consistent speed. That's true, but that
consistent speed is dependent upon the medium through which the
sound is traveling. So sound travels at a different speed

(06:34):
through the water than it does through the air or
through solid objects. You know, vibrations travel at different speeds
depending upon the medium. So at a very basic level,
a seismic wave will travel at a constant rate through
one kind of, say, rocky soil. But let's say there's
a place underground where that rocky soil gives way to

(06:57):
a different material, say petroleum, for example. Well, then the
speed of those sound waves is going to change. Moreover,
as the sound waves hit that barrier between one type
of material and another, some of the sound waves are
going to reflect off of that and become an echo.
Some of the sound waves will continue to penetrate through

(07:19):
the new material. And through lots of observations, we gradually
began to learn about the different rates at which a
seismic wave will travel depending upon the medium it's traveling through,
and if it hits something really solid like bedrock, it
pretty much just echoes back. So here's how reflection seismology works.

(07:39):
From a very high level. You set up sensitive equipment
at different distances from a blast site, and yeah, you're
likely to use something like explosives or maybe a really
powerful air gun. It has to be something that's going
to give a real jolt to the ground in order
to do this, because essentially what you're doing is creating, like, a very localized earthquake. So this vibration travels

(08:02):
through the earth, and because you know how far away
you've set up your measuring equipment from that blast site,
you already have distance figured out, right. You know how
far away it is from the original source of the vibration,
and you measure the time it takes for your equipment
to pick up the echoes from that particular vibration event.

(08:22):
So you've got distance and now you have time. Now
you've got those variables sorted, so you can start to
work out what material is actually under the ground that
produces this particular result. And by doing that, you're kind
of like working backwards. You're using this information to draw
conclusions about what's under there, and that's where you can
start to make a determination as to whether or not

(08:46):
you're standing on top of a Beverly Hillbillies-like oil deposit, or maybe you're just on top of a
bunch of rocks or whatever. Now, doing what I just described is actually incredibly complicated. It involves an awful lot of calculations and math, and it's
a lot of work. But then you have to think
that drilling for oil is even more work. That's a

(09:07):
huge endeavor. It costs a lot of time and money
and effort to do it, and like if you drill
in the wrong place, like that's a huge loss. So
you want the best possible information before you select a
drilling site, and reflection seismology is one way to obtain
information and to help make a decision. So doctor Hildebrand

(09:27):
was making a really good living out of this work,
but companies like Exxon were saving hundreds of millions of
dollars through Hildebrand's approach of narrowing down potential drill sites,
and Hildebrand thought, you know, I'm not doing badly. I'm
making a decent living. But you know, Exxon is making
out like a bandit. They're saving like half a billion

(09:49):
dollars a year or whatever using this technology. Maybe if
I apply my knowledge and skill set in a company
that I own, I might actually, you know, do better
than just working for Exxon. So Hildebrand left Exxon in
nineteen seventy nine and he founded a company called Landmark Graphics,

(10:10):
which at first sounds like, you know, it's a company
that makes computer graphics, which is not untrue. But it wasn't just general graphics. This company was still
rooted in the oil industry. Hildebrand's team developed and produced
workstations that could take incoming seismic information from these, you know, soundings that they do and generate three-dimensional

(10:33):
seismic maps based upon the data. And again, it was
incredibly complicated. You had to analyze so many different points
of information in order to create this three dimensional representation
of what's under the ground. But it worked and it
made Hildebrand very successful. He stuck with it for a
decade until nineteen eighty nine, whereupon he retired and he

(10:57):
decided to return his attention to a different passion he
had had since he was a kid, which was music.
Now Hildebrand wasn't just a music fan, he was a musician.
He had played flute professionally. He had been a studio
musician for some time. He had paid his way through
college partly by giving flute lessons to musicians, So he

(11:20):
decided he would go back to school as a retiree
and study composition and techniques. He attended Rice University to
do this. While he was back in college, he encountered
some newer technologies in the music space, like music samplers
and synthesizers. So these were machines designed to take a
sample of a sound like a flute, and then allow

(11:43):
a keyboard musician to recreate those sounds on a synthesizer.
The only thing is that Hildebrand thought they sounded terrible,
and partly it was because there was a limitation on
how much data a synthesizer could actually handle, so it
couldn't really replicate sound naturally. The sound it replicated would

(12:04):
be like a gross approximation of the original sound, So
Hildebrand wasn't really impressed, but he thought that there was
room for improvement, and he developed a technique to compress
audio data so that synthesizers could more effectively handle information and produce notes that sounded more natural
and less synthetic. He released his software as a product

(12:27):
called Infinity, and while this tool would revolutionize the orchestration
process for stuff like film and television, it did not
revolutionize doctor Hildebrand's bank account. He didn't actually see much
of that success himself because what actually happened was other
companies purchased copies of Infinity and then bundled it with
their own audio processing tools, and then sold those audio

(12:50):
processing packages to other people and companies, and it kind
of cut Hildebrand out of the picture. So while others
were benefiting from his work, he did not see that
much success. It did, however, again have an enormous impact on orchestrations. Like, according to Dr. Hildebrand, he was the

(13:12):
reason why the Los Angeles Orchestra hit real hard times
in the nineteen nineties because his tools allowed composers to
sample various musical instruments and create a natural enough representation
of those sounds to be able to create a synthetic
orchestra that sounded more or less like a real one.

(13:33):
So there was no need to go and hire a
real orchestra to orchestrate your film or TV project. You
could do it yourself. I've actually heard some some of
my favorite music scores. When I listened closely, I can
tell like, oh, that's not a real cellist. That's a
synthesizer playing a sample of a cello that sounds almost,

(13:54):
but not quite like the real thing. Anyway, we can
thank doctor Hildebrand for that. I'll talk more about what
we could thank Dr. Hildebrand for, specifically Auto-Tune, but first let's take a quick break so we could thank some other people, namely our sponsors. We'll be right back. Okay,

(14:21):
So, before the break, I was talking about how Dr. Hildebrand had released a program called Infinity that improved the performance of synthesizers and samplers. But in nineteen ninety he decided to take an extra step. He founded a new company. He called it Antares Audio Technology, and this

(14:41):
would be his music company, his music technology company that
would ultimately produce autotune. And he knew that technology was
poised to make a huge impact on the music industry
and already had been like, that's kind of the history
of modern music is how technology has shaped it. But
he knew we were on the brink of another revolution.
He just wasn't exactly sure how that was going to

(15:03):
manifest. Now, according to an article by Simon Reynolds, it's titled 'How Auto-Tune Revolutionized the Sound of Popular Music,' and
it was published in Pitchfork, the actual birth of Hildebrand's
idea for autotune grew out of a casual lunch with
some of his friends and peers back in nineteen ninety

(15:24):
five during a National Association of Music Merchants conference. So
he's at this conference, he's meeting with other people in
the music and technology spheres, and at this lunch, one
of the attendees jokingly suggested that what Hildebrand should do
next is develop a technology that would allow her to
sing on key, like, can you make a box that

(15:47):
lets me sing well? And while this was presented as
a joke, ultimately Hildebrand would think, huh, could I do that? Now, according to Zachary Crockett's article, which is 'The Mathematical Genius of Auto-Tune,' this one in Priceonomics, this wasn't like a light bulb moment where, the moment

(16:07):
this woman says the thing, Hildebrand immediately thinks, ah, that's
what I shall do. Actually, it took like another six
months before Hildebrand really kind of revisited the concept and thought,
maybe there's something here. But in order to do that,
he would have to develop a technology that could do
a few things really well, all of which are a
bit tricky. One is it would need to detect the

(16:30):
pitch that someone was singing in. For example, if you're
using it for vocals, and so you would need to
be able to detect exactly the frequency that was being sung.
You would need to then also be able to have
a list of tones that were in whatever key you were supposed to be singing in. So I don't

(16:52):
want to get into music theory, because goodness knows, I
don't know that much about it myself, and I would
just mess things up. But you know, if you're singing
in a specific key, there are particular tones that belong
to that key. And often when we sing and we're
a little off pitch, what we need is to be
gently nudged a little up or a little down, a
little sharp or a little flat in order to hit

(17:14):
a semitone that belongs in that key. So it needs
to also quote unquote know which tones are appropriate, and
then it has to be able to digitally alter the
incoming pitch, the actual sung note, and then guide it
to match that of a target note. Now, ultimately that
all sounds like a pretty simple idea, but in reality

(17:37):
to achieve this was incredibly complex. Ultimately, also, the tool would need to work in real time for live performances.
Like it's one thing to have this for the studio, right,
because even if you don't have it automated, you could
have a tool where an engineer could fiddle with some
controls and gently alter the pitch of a performance to

(17:57):
get it closer to being where it needs to be.
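To make that nudging idea concrete, here's a minimal sketch of pitch quantization, assuming twelve-tone equal temperament tuned to A440 and a C major scale. The helper names and the scale choice are mine for illustration, not anything from Antares:

```python
import math

# In A-440 equal temperament, MIDI note n has frequency 440 * 2**((n - 69) / 12).
def freq_to_midi(f_hz: float) -> float:
    return 69 + 12 * math.log2(f_hz / 440.0)

def midi_to_freq(n: int) -> float:
    return 440.0 * 2 ** ((n - 69) / 12)

# C major pitch classes (C, D, E, F, G, A, B) as semitones above C.
C_MAJOR = {0, 2, 4, 5, 7, 9, 11}

def snap_to_key(f_hz: float, scale=C_MAJOR) -> float:
    """Return the frequency of the nearest note that belongs to the key."""
    n = freq_to_midi(f_hz)
    # Check the few notes around the sung pitch, keeping only in-key ones.
    candidates = [m for m in range(int(n) - 2, int(n) + 3) if m % 12 in scale]
    return midi_to_freq(min(candidates, key=lambda m: abs(m - n)))

# A singer aiming for A4 (440 Hz) but landing 30 cents flat:
sung = 440.0 * 2 ** (-30 / 1200)   # about 432.4 Hz
print(round(snap_to_key(sung), 1))  # -> 440.0
```

A real corrector works frame by frame on a live signal, of course; this just shows the detect, look up, retarget core.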
It would be preferable to have that automated so that
you don't have to go through there and do the
manual process. But even so, like in a recording setting,
you don't have to have it be real time necessarily,
but if you're doing a live performance, you do have
to have a real time. If someone's up there singing
and they just hit a flat note when they're not

(18:19):
supposed to, that could really be a memorable moment and
not in a great way. So having a tool that
could gently account for that and fix it in real
time would be really helpful. But this would mean that
this tool would have to be able to process a
huge amount of sound data extremely quickly to make millisecond

(18:40):
decisions, like split-millisecond decisions, relating to how to shape
a note moment by moment. Now it does help if
we also think of sound in terms of mathematics. We
describe sound in different ways, right? But some of those relate specifically to how sound looks to us if we plot sound on, like, a wave chart. For example,

(19:05):
sounds can be really loud or they can be really quiet,
and that is volume. But it can also relate to amplitude when you think of a sound wave. The amplitude of
a sound wave describes how tall those peaks are or
how low the valleys are. The distance between the furthest
point of a peak or valley and the zero line.

(19:26):
That's your amplitude. But we also describe sound in terms
of pitch or frequencies. Higher frequencies correspond to higher pitches,
And if we plot a sound wave, let's say that
we plot it so that the x axis is a
demarcation of time, so we have one second listed there,

(19:46):
like the x axis is one second. If there's one
wave that we draw so that the wave begins at
the zero point and ends at the one second point,
then we have a one hertz sound wave. A hertz is just a measurement of frequency. It refers to one cycle per second. So if a wave is one hertz,

(20:06):
it means it takes one second for one of those sound waves to fully pass a given point where you're measuring the sound waves, right? If two waves pass that point within one second, then you're talking about two hertz, you know. Just so that we know, the typical human hearing range is anywhere between twenty and twenty thousand hertz.
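To make cycles per second concrete, here's a tiny sketch; the sample rate, the two hertz tone, and the small phase offset (so both upward zero crossings land inside the one-second window) are just illustrative choices:

```python
import numpy as np

# One second of a pure tone: y(t) = A * sin(2*pi*f*t + phase).
sample_rate = 1000                      # samples per second
t = np.arange(sample_rate) / sample_rate
f = 2.0                                 # frequency in hertz: cycles per second
wave = np.sin(2 * np.pi * f * t + 0.1)

# Each full cycle crosses zero going upward exactly once, so counting those
# crossings over one second recovers the frequency.
rising = np.sum((wave[:-1] < 0) & (wave[1:] >= 0))
print(rising)  # -> 2, i.e., a two hertz wave
```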

(20:28):
So a one or two hertz sound, we wouldn't even perceive it, at least not as sound. If it was a great enough amplitude, you could potentially perceive it as vibration, but you wouldn't hear it. But between twenty and twenty thousand hertz, that falls into the typical
range of human hearing. Of course, as we get older,
we start to lose the ability to hear those higher frequencies.

(20:50):
These days, I think my hearing tops out around sixteen
to seventeen thousand hertz, somewhere around there. Like once you
get beyond that, I don't hear anything, whereas younger people
could hear it. Anyway, Hildebrand was working with music on
this mathematical level. He was analyzing music to recognize where
the frequencies were and where they should be, and to

(21:11):
then shape the sound wave so that it would fit
what the ideal would be where it would be on key.
He was not the first person to attempt to do this, however. Earlier engineers had largely abandoned the quest because the signal
processing and statistical analysis needs were so high. They were

(21:32):
so extreme that you would need a supercomputer dedicated to
the task to be able to do it. There's just
too much data to process in too little time to
be able to do anything meaningful with it. Hildebrand determined
that yeah, to fully analyze music, you would have to
run thousands or millions of calculations, but many of those

(21:54):
calculations were actually redundant at the end of the day,
and eliminating the redundancy would not affect the quality of
the outcome, and so in his words, he, quote, 'changed a million multiply-adds into just four. It was a trick, a mathematical trick,' end quote. That's from the article I mentioned earlier by Zachary Crockett in Priceonomics. So yeah, pretty

(22:20):
phenomenal that he was able to recognize that ultimately he just needed those four processes to really be able to zero in on pitch correction.
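Hildebrand hasn't published the exact shortcut, but autocorrelation is the standard pitch-detection computation in this family of techniques, and it's the kind of calculation his trick is reported to streamline. Here's a deliberately naive sketch, with all names and test values my own:

```python
import numpy as np

def detect_pitch(frame: np.ndarray, sample_rate: float,
                 fmin: float = 70.0, fmax: float = 1000.0) -> float:
    """Estimate the fundamental frequency of one audio frame by
    brute-force autocorrelation (the expensive, unoptimized version)."""
    lags = np.arange(int(sample_rate / fmax), int(sample_rate / fmin))
    # Compare the frame with delayed copies of itself; a voiced sound repeats
    # once per pitch period, so the best-matching lag is the period.
    scores = [np.dot(frame[:-lag], frame[lag:]) for lag in lags]
    best_lag = lags[int(np.argmax(scores))]
    return sample_rate / best_lag

# Synthetic test: a 210 Hz tone plus its octave, sampled at 44.1 kHz.
sr = 44100
t = np.arange(2048) / sr
frame = np.sin(2 * np.pi * 210 * t) + 0.5 * np.sin(2 * np.pi * 420 * t)
print(round(detect_pitch(frame, sr), 1))  # -> 210.0
```

The naive version does a full multiply-and-add pass for every candidate lag, which is exactly the kind of workload that made earlier engineers give up.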
So Hildebrand developed the Auto-Tune technology in nineteen ninety six. He actually used a customized Mac computer, or specialized Mac computer, is the way I've seen it explained. I don't know in what way it

(22:43):
was specialized. I just know it was a Mac. And
he brought his software to the next National Association of
Music Merchants conference. If you remember, that was the same
conference where one of his lunch companions had inspired the
idea for Auto-Tune in the first place. To say that there was interest in his product at this conference is really underselling it, and it's understandable why. So let's

(23:07):
talk about the process of creating a master recording for
a song. If you want to get a perfect take
of a song, where this is the master recording, this
is what you want to use in order to you know,
create your album. You can't just hope that everything lines

(23:28):
up when you hit record and that everyone is playing
seamlessly together and no one makes a mistake. Invariably something
is going to be off. Maybe one of the musicians
is lagging behind the others and it might not even
be detectable at first, but upon closer examination you're like, ooh,
you came in late, or you came in too early

(23:49):
or whatever. Or the drummer is not keeping perfect time,
whatever it may be. Maybe someone hits a wrong note,
either while playing an instrument or while singing, or maybe both.
But what it means for engineers is that they'll need
to get another take where that mistake isn't there, and
they'll probably need another take and another take, And if

(24:09):
you want the perfect performance, this could mean recording the
same track dozens or gosh even hundreds of times and
then slowly picking apart each recording in order to piece
together a perfect edit. And that alone is hard because
just lining up the different takes isn't always the easiest
thing to do. You don't always have a seamless point

(24:32):
where you could line up take one with take two. Like, again, if the band is playing at a slightly different pace in the second take, you can't easily line up the two different ones. You know, even if, like, one take had an accident and the other one didn't, you can't necessarily put them together to make the perfect recording. So this is a really laborious, time-consuming, and expensive process.

(24:55):
Expensive because studio time is limited and, well, expensive. Hildebrand's invention
would take a ton of that effort off the table,
at least for vocals, because rather than re-recording a
billion times, you could get maybe just one good take,
one decent take even and then use pitch correction for

(25:16):
any little flubs that might have found their way in
during the recording process. So it was a huge time saver,
and time is money. So immediately studios recognized the value
of Hildebrand's product and they rushed to get in on that,
and the tool absolutely revolutionized the recording industry. Studios that

(25:37):
incorporated auto tune were able to work much faster than
their competitors. They were able to cycle clients in and
out of their studios more quickly. That meant getting more
work done and more money coming in, and efficiency skyrocketed.
So studios that were not on the Auto-Tune train
soon found themselves getting out competed, and they ended up

(25:59):
adopting the technology as well, because it was either adopt or go out of business. It also wasn't enough to
just be able to change the pitch of a note.
Auto tune would also have to be able to adjust
that pitch on a sliding scale of rapidity. That is,
the sound would be unnatural if you were to correct

(26:21):
a note instantaneously. It would be the effect that we
associate with autotune, that robotic effect. That's if you were
to change the pitch correction super fast. You don't want
to do that if you want the tool to remain unnoticed. So,
particularly for stuff like slow ballads, you would want a more gradual approach to correcting a pitch. So Hildebrand

(26:44):
wanted a tool that would let users determine how quickly
the note would get nudged to the correct pitch, and
the scale essentially went from zero to ten. The higher
settings would have longer adjustment times, so for a really
slow song, you might go with a nine or a
ten to let the note find its way to the
right pitch more gradually. Faster songs like rock and roll

(27:06):
type stuff or a rap or R and B. You
might require a lower setting, like fast rock songs, you
might need a two, three, or maybe even down to
a one. The zero setting. Really, Hildebrand just added that
for kicks. So essentially the software would immediately correct the
pitch upon detecting an incoming signal. And this sounded weird

(27:27):
and unnatural, and it was obvious that something was going on.
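Here's a minimal sketch of that retune-speed dial as a simple one-pole glide. The mapping from the zero-to-ten setting to a glide time in seconds is my own guess for illustration, not Antares' actual curve:

```python
import numpy as np

def glide_to_target(sung_hz: float, target_hz: float, setting: int,
                    frames: int = 50, frame_rate: float = 100.0) -> np.ndarray:
    """Trace the corrected pitch over time, one analysis frame at a time."""
    if setting == 0:
        # Instant snap: the note jumps straight to the target --
        # the robotic effect heard on 'Believe'.
        return np.full(frames, target_hz)
    tau = 0.03 * setting                  # assumed seconds of glide per setting
    alpha = 1.0 - np.exp(-1.0 / (tau * frame_rate))
    pitch, current = np.empty(frames), sung_hz
    for i in range(frames):
        current += alpha * (target_hz - current)   # ease toward the target
        pitch[i] = current
    return pitch

# A note sung 30 cents flat of A4, corrected at setting 2 versus setting 9:
flat = 440.0 * 2 ** (-30 / 1200)
print(round(glide_to_target(flat, 440.0, setting=2)[5], 1))  # near 440 quickly
print(round(glide_to_target(flat, 440.0, setting=9)[5], 1))  # still gliding
```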
So this was more for fun than an intent to
create a new tool for musicians. But it turned out
that's exactly what autotune was really destined for, to become
a tool for a process called pitch quantization. But again,
that wasn't what Hildebrand set out to do. In fact,

(27:48):
according to that Pitchfork article I mentioned earlier, the idea here was to aim for perfection, at least in terms of being in the right key and on pitch. The thinking was that imperfections would somehow interfere with an emotional connection to the music,
and you want that music to be perfect so that
you can have that emotional impact. Now, personally, I disagree
with that take. Some of my favorite recordings are with

(28:11):
artists who have imperfect voices. They weren't screeching or caterwauling.
It wasn't like it was unpleasant to listen to them,
but they aren't pitch perfect either, and to me, that
adds a lot of character and emotion. So as an example,
Warren Zevon, who, you know, did the song 'Werewolves of London' and tons of other stuff. I mean, prolific
musician who tragically passed away several years ago. He has

(28:34):
a great cover of the song 'Back in the High Life Again,' which is a pretty cheesy song, but
Warren Zevon's cover is really emotional. It's great, and it's
a little bit raw, and to me it resonates far
more than a note perfect performance would have. But I
do understand where hill le Brand and his team were
coming from. You know, if you if you have a

(28:55):
take from a recording session that is almost but not
quite right, maybe there was a transition where the wrong
note came out, or you know, just a moment where
it took an artist a little bit longer to slide
to find the right pitch. A tool that could smooth
things out a little while not being, you know, noticeable, while slipping under the radar, that could prevent

(29:17):
listeners from being distracted by something that was unintentional. But
what if you took that tool that was meant to
fix errors and used it to create unintended effects. That's
what we're going to talk about when we come back
from this quick break. So we talked about how auto

(29:42):
tune was meant to fix little imperfections in music recordings
and live performance. But as I mentioned, if you had
that setting set to zero so that it would instantaneously
attempt to correct pitches, then you could create an almost
robotic vocalization. So instead of shying away from the artificial

(30:05):
sounds that could come out if you were to use
it improperly, you leaned into it. That's what happened in
nineteen ninety eight with Cher's song 'Believe.' Setting that dial to zero would create the robotic-like effect, which in this case was the goal in the first place, and that song was a smash success. I couldn't stand it, and
it was everywhere. I couldn't stand it, not because of

(30:26):
the auto tune. I just didn't vibe with the song.
No no shade on chare phenomenal artist, you know, incredibly talented,
Just that song didn't jibe with me and The interesting
thing was that this huge success not only pulled the
curtain back on a tool that was meant to correct
little mistakes and thus create a whole conversation around whether

(30:49):
or not artists were quote unquote cheating by using it,
but it launched a whole new way to create music
in the first place. I personally think the artist who
is most associated with Auto-Tune is one who adopted the technology and made it an intrinsic part of his brand. That would be T-Pain. He came to the party
a little bit late. He became interested in autotune around

(31:10):
two thousand and four, and he wasn't looking for something to help compensate for a lack of singing ability, because he actually sings very well. But he liked the thought of a technology that would set him apart from other artists, and
he could forge a vocal identity using this tool to
create a sound that no one else was really embracing
at that point. So he jumped wholeheartedly into autotune, and

(31:34):
he made liberal use of the technology and achieved tremendous
success along the way, selling like Platinum records by using
this technology. His love of the software led to an
official partnership with Hildebrand's company for a few years, and
Antares licensed the technology to T-Pain for an app called I Am T-Pain, which you could use to

(31:56):
do Auto-Tune right there on your smartphone. It cost three dollars initially, and it was downloaded by millions of users. That generated
quite a lot of revenue just on its own. Now,
eventually T-Pain and Antares parted ways, and T-Pain ultimately partnered with a different pitch correction company called iZotope. The T-Pain story also led to a lawsuit against Antares,

(32:20):
and Antares filed a countersuit against T-Pain, and ultimately
the whole thing was settled out of court and everyone
signed an NDA. So I have no details about, you know,
how that shook out in the end, but it was
one of those things where it was kind of a
smudge on the Antares name. At the least, it was awkward, right?

(32:43):
But a much larger threat to Hildebrand's company was Apple.
Apple had purchased a German company called Emagic. Emagic also
had a pitch correction tool. In fact, it was a
pitch correction tool that, according to Hildebrand, essentially copied autotune technology.
This was possible because Antares had failed to protect its

(33:07):
German patent properly, and so Emagic was able to appropriate
that technology or copy that technology without fear of legal recourse.
So then Apple acquires Emagic, which means Apple is then
able to incorporate Emagic's technology into their own products, including

(33:28):
their own sound editing software. And this meant that autotune
effectively got incorporated into Apple software without having to license
the technology from Antares, because again they got it by acquiring this German company. Now, Antares could technically have still
sued Apple. There's no guarantee that they would have won,

(33:50):
but they could have sued them. However, Hildebrand explained that
they didn't really have that option because Apple has enormously deep pockets. Apple is just an incredibly rich company, and Apple could easily just outwait Antares in the legal system,

(34:10):
while Antares would drain its resources trying to sue Apple. So even if Antares was in the right of it, even if they would have won a judgment against Apple, the chances were that Antares would go out of business just trying to pay for all the legal fees for the whole battle in the first place. So ultimately, Antares didn't go after Apple. It just would have been

(34:32):
a death sentence. Culturally, autotune began to face resistance in
the late two thousands. Some artists expressed disdain for the technology,
going so far as to say it ruined Western music.
This was partly due to an oversaturation problem. The success
of T-Pain, as well as the earlier instances of Auto-Tune,

(34:53):
inspired countless others to embrace the technology while not necessarily
doing very much else to differentiate themselves from other artists.
In other words, they were kind of leaning on it
as a crutch or a gimmick. So there was a
glut of Auto-Tuned, robotic-voiced vocals in music in the early to mid two thousands, and by the late
two thousands some folks were absolutely fed up with this

(35:14):
and there was a backlash. It actually kind of reminds me of how people began to turn against disco in
the nineteen seventies, and that in some ways the punk
rock movement was partly a reaction to disco or a
rejection of disco. I would only say partly because punk
rock also has its roots in glam rock, and I
think glam rock also kind of helped inspire disco. So

(35:37):
it's a complicated set of relationships, as you might say
on Facebook. But bands like Death Cab for Cutie actually actively spoke out against Auto-Tune. So again, some artists were
arguing that autotune was being used by people to compensate
for a lack of ability, So they're kind of casting
shade on fellow artists saying, well, yeah, they have to

(35:58):
use autotune because they can't sing, or others would say
like it was making music less genuine and sincere, like
less human, because it was going through all this digital processing.
Jay-Z famously released a song titled 'D.O.A. (Death of Auto-Tune)' in two thousand and nine, the same year when our
original Tech Stuff episode about autotune came out. As you

(36:18):
might imagine, Jay-Z's song had some pretty strong opinions about the technology inside of it. It resonated enough to win him a Grammy, so other people agreed.
But despite all that backlash, autotune continues to be a thing.
It did not, in fact, die. It's been incorporated into
software and digital audio workstations. It and similar pitch manipulation

(36:40):
technologies are often found in everything from professional audio engineering
software suites to free programs that you can download online. So,
for example, I sometimes use a program called Audacity, and
Audacity has an option under its effects where I can
manually adjust the pitch of a recorded piece of audio.
I can set what the pitch should be. Now that's

(37:02):
not autotune, right, because by definition I'm not using an
auto feature. I'm manually changing the pitch, but it's using
similar approaches to get an effect. I've actually even made
use of that tool while I was editing my friend
Shay's podcast, Kadi Womple with the Shadow People. Shay does nearly all the voices on that show. I've actually voiced

(37:22):
two characters on that show. So if you're eager to
hear other output from me, that's not a technology podcast,
go listen to Kadi Womple with the Shadow People. I
voice a couple of characters on that, but I edit
the show, so I use pitch adjustment tools in order
to make some of Shay's voices sound like different people.
So it's still her doing the voice, but I digitally

(37:45):
manipulate the voice to give certain characters their own distinct sound.
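For flavor, here's roughly what that kind of manual pitch shift looks like in code, using the librosa library rather than Audacity; the file names are placeholders:

```python
import librosa
import soundfile as sf

# Load a mono voice take at its native sample rate.
voice, sr = librosa.load("character_take.wav", sr=None, mono=True)

# Shift down four semitones for a deeper character voice;
# a positive n_steps would raise the pitch instead.
deeper = librosa.effects.pitch_shift(voice, sr=sr, n_steps=-4)

sf.write("character_deeper.wav", deeper, sr)
```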
It's pretty neat stuff. I have no idea what it
would sound like if I actually used an Auto-Tune tool. That probably would sound very different. But I have
lot of fun playing with these pitch manipulation tools. Now,
to get more into the cultural and social impact of autotune,

(38:09):
I highly recommend that article in Pitchfork by Simon Reynolds. Again,
that's titled 'How Auto-Tune Revolutionized the Sound of Popular Music.'
It's a long-form article; it's well worth your time to read it. As is Zachary Crockett's article that I mentioned earlier. Both of those are great articles about Auto-Tune, not just the technology, but its impact on music

(38:30):
in general and society and culture as well. And Reynolds
goes into much deeper detail about how the technology has
had an impact on the recording industry and the backlash
that came out as a result of that, as well
as sort of a counter movement against autotune. So check
those out. They are well worth your time. And I
could go on, but really I feel like those articles

(38:52):
do a much better job than I would of describing
all of that, So check those out when you have
some time. That's it for today. I hope all of
you out there are doing well, and I will talk
to you again really soon. Tech Stuff is an iHeartRadio production.

(39:13):
For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts,
or wherever you listen to your favorite shows.
