
November 28, 2023 43 mins

Digital disruption is knock knock knockin’ at the music industry’s door, 20 years after the MP3 and Napster made CD collections obsolete. Artificial intelligence is now filling playlists with ambient music and making pitch-perfect copies of human stars like Grimes, who Bloomberg Opinion columnist Lionel Laurent interviewed for this special episode of Crash Course. He dives into the risky race to make musical robots and how record labels and artists are fighting back with new business models, new types of music, and new ideas about copyright — which could serve as a guide for how the wider economy and the rest of society can deal with AI.

NOTE: This episode incorrectly states the name of Grimes' manager. It is Daouda Leonard, not Leonard Daouda.



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
I think a lot of people think it's just going
to be great. If we just make these things, they're
just going to be great, and we're fine. And a
lot of people think this is definitely a dystopia, and
we're definitely for it, and I think it can truly
be a third thing.

Speaker 2 (00:14):
Welcome to Crash Course, a podcast about business, political, and
social disruption and what we can learn from it. I'm
Tim O'Brien. Today's Crash Course: Artificial Intelligence versus the Music Industry. The voice you heard at the open was producer and songwriter extraordinaire Grimes, speaking to us about the good, the bad,

(00:36):
and the in between of artificial intelligence and music. The
AI hype may be starting to die down overall, but
AI isn't going away. You can literally hear it in music.
Back in April, a super convincing AI duet between rapper Drake and singer The Weeknd went viral, causing panic in the music industry, and it opened the floodgates for deep

(00:58):
fakes like Frank Sinatra singing hip hop and a fictional Oasis mixtape called AISIS. It's not just novelty memes either. AI generated background music is sneaking into your Spotify playlists, cranking out chill soundscapes, and helping the Beatles put out
one last song called Now and Then. So music is

(01:20):
again on the cusp of huge tech driven change, twenty years after file-sharing misfit Napster made CD collections obsolete. To help us delve into what that change might look like, we at Bloomberg Opinion set one of our columnist detectives, Lionel Laurent, out to investigate. Lionel is a writer who
wears many hats, one of which involves keeping an eye

(01:41):
on digital innovation and upheaval in the music industry, and he has a tale to tell. Hi, Lionel. Hi, Tim. I'll let our listeners know that you've been working on this from Paris. So before we get started, why music?

Speaker 3 (01:56):
Music is a personal passion of mine, and music is part of what makes us human. So seeing AI take on more of that is, I think, a big cultural moment. But music as an industry has also served as a kind of canary in the coal mine for wider tech disruption,
including streaming. It really holds a mirror up to the
rest of the economy, so I think it's important for
everyone to follow this.

Speaker 2 (02:18):
I'm going to hand it over to you to tell
us what you've learned, and I'll catch up with you
later in the show.

Speaker 3 (02:23):
Thanks, Tim. Now, I'm willing to bet that you probably haven't heard of French songwriter Benoît Carré. He's got a special claim to music fame: he's behind the first ever song composed by artificial intelligence. It's called Daddy's Car, and it was released in twenty sixteen.

Speaker 2 (02:58):
(Daddy's Car plays)

Speaker 3 (03:07):
This isn't one hundred percent AI. Benoît wrote the lyrics and arranged the song, and that's him singing, but the rest was composed by an AI model trained entirely on songs by the Beatles. Using all those Fab Four classics, the machine came up with its own suggestions and ideas. It's like if John, Paul, George and Ringo had a fifth robotic friend jamming away, and Benoît playing with the results.

Speaker 4 (03:31):
I played with the tool. I fed it with fifty
songs of the Beatles, my favorite songs of the Beatles,
and then I got a result eight bars and then
it was funny, and I continued.

Speaker 3 (03:48):
So I think we can agree this AI music isn't quite at the level of Let It Be. Still, Daddy's Car was a kind of milestone, and for Benoît it is like improvising or writing a regular song hunched over a piano. He's been using tech to make music since the days of the Commodore sixty four. Kids, ask your parents what that is. Today he does it all from his studio in Paris, a music geek's paradise, stuffed

(04:11):
with guitars, reel-to-reel tape and computers. It all helps inspire new ideas, which he puts out under the name SKYGGE. That's Danish for shadow, inspired by a Hans Christian Andersen tale about a shadow that's alive.

Speaker 4 (04:36):
AI helped me to explore an unexpected part of my creativity. It's like the shadow in the tale of Andersen, becoming something independent from the main character.

Speaker 3 (05:04):
This summer, Benoît gave me a magical mystery tour of how AI could revolutionize music and why it shouldn't be banned or dismissed out of hand. First stop: deep fakes like Heart on My Sleeve, that Drake-Weeknd duet we mentioned earlier. Benoît doesn't think the imitation game is all

(05:27):
that interesting musically or fair ethically, but he says it shows how AI has improved since the days of Daddy's Car to do some things incredibly well, like fooling the world into thinking a timbre transfer or vocal transfer is genuine. Here's Benoît whipping up a similar kind of song in the style of Eminem.

Speaker 4 (05:47):
I asked ChatGPT to write a song in the style of Eminem. I took this result from ChatGPT and I sung it, and it created this result with Eminem's timbre, and you will hear that there is a French accent. The weight of the world is crushing me.

Speaker 5 (06:07):
Down, trying to keep my head up, but I'm feeling
as well, the thousand fears they're always lurking in my phrasal.

Speaker 6 (06:13):
Song is kind of hurting.

Speaker 3 (06:15):
Wow, that is pretty convincing, though I should mention we're not yet at the level of being able to hit a button and get an instant deep fake back. This took several steps: a voice clone, an AI backing track, lyrics by ChatGPT, and Benoît's vocal cords. Like the so-called Mechanical Turk, a fake chess-playing robot that was all the rage in the seventeen hundreds, there is a

(06:36):
human hidden in there.

Speaker 4 (06:38):
I got in two hours something that is from nothing
with the voice of Eminem.

Speaker 3 (06:43):
Deep fakes are just the tip of the AI iceberg, though. Benoît also showed us hybridization, a way to mix his voice with that of another singer, in this case the late Chet Baker, and create a kind of weird new voice in between.

Speaker 4 (06:58):
(hybridized vocal plays)

Speaker 3 (07:14):
The next step will be performers able to use these kinds of augmented voices on stage, manipulating not just the tone of a singer's voice, but its feel, its groove, and eventually also having a machine able to do more, maybe all, of the composing process. I asked Benoît what it's like as a human dealing with the machine.

Speaker 4 (07:36):
The risk is to lose yourself and get bored by all the results that you get, because AI is never tired, you can always generate. And I must say that often the results are not so good. Most of the results are like average. It takes time to get the result that you were expecting, or the result that will

(07:57):
inspire you to finish the song.

Speaker 3 (08:00):
What's your vision of the future, then? At which point do you become obsolete or unnecessary?

Speaker 4 (08:07):
If I can imagine a tool that will generate what you describe, more like Midjourney for images, for example, or DALL-E, I mean, I think it will be great, and maybe it will replace some musicians, or it will be a game changer.

Speaker 3 (08:25):
The ability to type in a text prompt and get
a full song back sounds wild and also pretty dangerous
when you consider those replaced musicians. But let's face it,
a sense of danger is not going to slow down
this tech race, and the finish line may be closer
than we think. Ed Newton Rex is a composer turned
AI f Sconado. He says this year will be a

(08:47):
major milestone towards one hundred percent computer written songs, though
for him, the real value of AI will be in
collaborating with and inspiring humans. He points to his own

(09:07):
work I Stand in the Library, which you can hear
in the background, as an example of that. He wrote
the music, but he got AI to write the lyrics.

Speaker 7 (09:43):
This isn't just about kind of creating a three minute
song having AI do that. Once you have an artificial
entity that's essentially musical.

Speaker 6 (09:50):
There is so much you can do.

Speaker 7 (09:50):
You can ask it for ideas, you can ask it
for inspiration, you can have interplay with it, you can
get it to fill in extra parts in the music
you're making. You can kind of do anything with it
that you could do with a human musician working with you.
And I think that's where it gets really exciting.

Speaker 3 (10:14):
All this enthusiasm for AI has kind of surprised me. Clearly, Benoît and Ed show that there's real human creativity going into AI music. I came into this expecting to find tone-deaf tech bros, not real musicians. But I'm also feeling uneasy about it. Even setting aside the messy ethics
raised by AI that's trained on other artists' work, I'm

(10:37):
starting to get nagging flashbacks to the Disney cartoon Fantasia, which I saw as a kid. Remember the Sorcerer's Apprentice, when Mickey Mouse gets addicted to his productive time-saving tool, creating an army of broomsticks to do his chores? Chaos ensues as control is lost, and imagining an uncontrolled release of AI models and voice clones does not inspire confidence.

(11:02):
I asked Benoît what the dangers of this might be.

Speaker 4 (11:05):
I think that Eminem remembers all the songs that he has done, so he will easily say, no, it's not me. So I don't think that this kind of demo, this kind of process, is really dangerous. But hybridization, as I showed, begins to be dangerous because you can't control.

(11:26):
It's hard to control. If I hadn't told you that I did it with Chet Baker, you wouldn't have recognized Chet Baker. So that becomes a bit dangerous here. And it's dangerous also because you can do it very easily.

Speaker 3 (11:45):
We have to wonder about unintended consequences like misinformation, misuse, and bias too. Last year, a fictional computer generated rapper called FN Meka, complete with green hair and nose rings and teeth grills, sparked an online backlash after his lyrics used racial slurs and trivialized police brutality. Will the toxic content we've seen from AI chatbots like ChatGPT be any

(12:07):
more acceptable if it's set to music? This is fundamentally about humanity versus technology, says musicians' rights advocate Crispin Hunt, once a member of nineties British pop band the Longpigs.

Speaker 8 (12:19):
Paul McCartney singing the Beach Boys is really amusing, or Drake and The Weeknd doing a duet on somebody else is really amusing. I would be totally freaked out if somebody took my voice, and I think we as humans need to get in front of that very quickly, before the concrete sets, before it's set in and before

(12:41):
mistakes get made, or we just give in to this idea that, you know, oh, it's progress, you can't stop progress. But there's a great line by a Polish dissident poet, Stanisław Jerzy Lec, who said: is it progress if a cannibal uses a fork?

Speaker 2 (13:01):
Ouch?

Speaker 3 (13:02):
So maybe there is a thin line between the well-meaning digital composers and the evil fork-wielding cannibals. The next big question is: what is the music industry doing about it? Well, the first response from the record labels has been straight out of the Napster playbook: fight, fight, fight. But this isn't the year two thousand anymore; they're good at it now. When the MP3 landed, record companies

(13:25):
were like a grandparent trying to understand how to work
their new smartphone. Today, they've rebuilt themselves around a small
number of streaming platforms like Spotify. It's a lot easier
to make digital pirates walk the plank, and there's been
a lot of plank walking. Heart on My Sleeve was quickly taken down from social media, unauthorized voice cloning apps are being shut down, the Grammys have banned fully AI

(13:47):
generated music, and Spotify is cracking down on bot activity on its platform. Here's Jeffrey Harleston, general counsel for Universal Music Group, whose artists include Taylor Swift and Ed Sheeran, testifying before the US Senate Judiciary Committee in July.

Speaker 9 (14:03):
AI in the service of artists and creativity can be
a very very good thing. But AI that uses, or
worse yet, appropriates the work of these artists and creators
in their creative expression, their name, their image, their likeness,
their voice without authorization, without consent simply is not.

Speaker 5 (14:29):
A good thing.

Speaker 3 (14:30):
And the music industry has also learned other lessons since the Napster days. It knows that if people really want to use something, death by lawsuit is not going to work, and not all AI music is deep fake. Users of AI website Boomy have created over seventeen million weird sounding electronic tracks, for example. I have trouble getting through just one of them. But still, maybe one day there'll be a hit in there,

(14:52):
and not one that can be shut down by a lawyer.
So here's the next twist. Record labels are done just
playing defense. They now want to go on the offensive
by making some AI music of their own that plays
by their rules.

Speaker 2 (15:07):
Wait, wait, hold it right there, Lionel. This is shaping
up to be quite the story, but we need to
take a quick break and I'll see you on the
other side.

Speaker 3 (15:14):
Sounds good to me.

Speaker 2 (15:20):
We're back with Lionel Laurent, talking about AI and music. So, Lionel, let's hear more about how record labels are fighting back by co-opting AI.

Speaker 3 (15:31):
Sure. It's a story that begins with a young entrepreneur based in Berlin called Oleg Stavitsky. The AI startup he co-founded, Endel, made the music that you're listening to now. And Endel has just signed a landmark partnership

(15:52):
with Universal Music Group, that very anti-deep-fake label we heard from earlier, to make AI music using their huge catalogue of stars. And this all started with Oleg looking for music that would help him concentrate, similar to the work of ambient music pioneer Brian Eno.

Speaker 5 (16:10):
Every time I'm trying to concentrate, I listen to Brian Eno, and then I noticed that a lot of his music, and a lot of ambient music in general, started kind of popping up when all of these functional music playlists and projects like Music for Programming appeared.

Speaker 3 (16:26):
That got Oleg thinking, what if every time you hit play,
the soundscape was different and fit your mood by monitoring
data such as your real time heart rate or the
time of day. That would be like having your own
personal Brian Eno, made by a machine.

Speaker 5 (16:42):
I kind of turned to my co founders and I
was like, well, I guess we're going to have to
build an AI.

Speaker 3 (16:48):
This AI would be trained on a very specific set
of musical building blocks, or stems. Stems can be thought
of as these separate instruments or components that make up
a song. Think of an individual drum beat, a keyboard line,
a vocal melody.

Speaker 2 (17:08):
I've been to the top of Mount Everest, I've.

Speaker 9 (17:12):
Sailed the seven seats, I've shared the stage with all
the best.

Speaker 3 (17:20):
As three separate stems, Endel's AI will then uniquely reassemble
the stems into a new soundscape every time. Personally, it's
not for me, but clearly there is a market for
functional music. It represents an estimated seven to ten percent
of the entire streaming market. Endel's next smart move was
to blend that robotic AI with well known human artists

(17:41):
like James Blake and Grimes. They submitted the stems and Endel did the rest. Basing this all on stems, rather than existing albums or songs, means Endel gets official recognition as co-composer of the real-time soundscape. That's how Oleg says he avoided a fight with the music industry.

Speaker 5 (18:02):
With us, the reaction was always a mix of kind of genuine interest and almost excitement, and people were intrigued, because we never removed the artists from this process. We never said, oh, you know, we just get the stems from James Blake, but then we're
going to produce like a million records and James is

(18:23):
not going to be mentioned anymore.

Speaker 3 (18:26):
But the bigger Endel got, the more competition became unavoidable, because money that would normally be going to the record labels is being diverted to tech startups making ambient music.
The economics of streaming today are all about market share.
Listen to a track on Spotify for more than thirty seconds,
and it counts as a play, whether it's forty seconds
of experimental electronic music, a three minute pop hit, five

(19:00):
minutes of whale song. The more people click on ambient
music rather than say, drake, the more it'll impact the
bottom line. According to Goldman Sachs, the three major labels' market share dropped from around eighty five percent to seventy one percent between twenty seventeen and twenty twenty two,

(19:24):
which is why, when the announcement of that landmark Universal-Endel deal for more record-label-approved AI music landed in May, it looked a lot like: can't beat them, join them.

Speaker 5 (19:36):
You know, they can't just keep ordering takedowns. That is a road to nowhere. They need to have kind of an offensive strategy, and they need to embrace this in some way, shape or form, and it just turned out that Endel was kind of the only company that spent years kind of learning to speak the language and earning the trust of these labels.

Speaker 3 (19:58):
So what does a record-label-backed ambient AI do next? Oleg's dream is to make new AI versions of artists' existing catalogs. So, for example, Ed Sheeran or Taylor Swift might one day offer Endel versions of their albums designed

(20:20):
for sleep, work or exercise. And imagine if in the future,
instead of Taylor's version, we get happy, sad or heavy
metal versions of her albums.

Speaker 5 (20:31):
It essentially expands the universe of the artist. And it's just such a beautiful concept, creating a functional soundscape version of your favorite artist's music.

Speaker 3 (20:41):
It's not just Endel. Other labels are sensing an opportunity. Warner Music's CEO recently praised an AI-enabled duet between Costa Rican musician Pedro Capmany and the generated voice of his late father. I asked Denis Ladegaillerie, head of music group Believe, which owns a DIY music platform called TuneCore,

(21:05):
about this change of heart, some might say doublethink, in the industry.

Speaker 10 (21:10):
AI, whether it's for Believe or generally for the music industry, is an opportunity and not a threat. Why do people think AI is a threat? Generally because they think that the deployment of AI will not be controlled. I have the opposite view on this, which is that AI

(21:30):
deployment will be heavily controlled and everyone is approaching AI
with a very strong sense of responsibility.

Speaker 3 (21:39):
That control means labels can set their own rules of engagement with AI. Denis gives four examples of rules that Believe is currently applying to AI music uploaded to its platform: consent from the artist, control over the output, compensation, and transparency.

Speaker 10 (21:56):
That's how we're approaching the landscape today, and so we are not going to distribute tracks that are one hundred percent created by AI unless they meet some of these criteria.

Speaker 3 (22:09):
Believe operates mainly in the mid-market of artists rather than the superstars, and the hope is that industry-friendly AI can dig up the next generation's bright young things by making it easier to unlock the tunes sitting inside their heads, again while following the rules.

Speaker 10 (22:25):
We think the number of artists that get educated with technology to become better musicians is going to increase, and more people creating better music will find an audience. This is going to help frontline artists elevate and create better music.

Speaker 3 (22:41):
This optimism around control is also why YouTube has created a new set of rules for what it calls responsible AI music. But this all seems to be moving very fast. There's a lot of pressure on labels to be a first mover and sign deals with tech companies. How can we be sure that artists' consent and all that precious data from their life's work isn't being handed over a little too quickly?

(23:04):
And what if the focus on control starts to stifle creativity? What if it trips up artists who are using AI fairly, without breaching copyright? Remember Ed Newton-Rex, who we heard from earlier in the show. In twenty fifteen, he rapped about this exact thing when promoting his startup Jukedeck, citing the copyright constraints faced by YouTubers looking for good

(23:27):
background music.

Speaker 6 (23:28):
Imagine there's a video that you're creating, it's been weeks in the making, and you've painstakingly taken it frame by frame, have got the whole take looking really super great. You start to find the right music. If you've got a song you like, you're not allowed to use it. Copyright means if you choose it for your video, YouTube will just remove it. But this writes stuff that's all unique, it will do it in any style you...

Speaker 3 (23:47):
So, come back, Fake Drake, all is forgiven? And going back to the Sorcerer's Apprentice: what if record label robots become so effective at expanding their catalog of stars that it makes it harder for the new generation of artists to cut through the noise? When does the revolution become stagnation?
It's time to ask if there's an alternative to either

(24:09):
the dystopia of no control or the utopia of total control,
something like a third way that's maybe a little more
punk rock, putting more power into the hands of the artists.
And the best person to ask is Grimes, who we
heard from at the beginning of the show. She is
one famous artist who is actually embracing AI experimentation while

(24:30):
also still trying to set limits, sort of like rapper Chuck D, who during the Napster era said downloadable music was a model to embrace. I asked Grimes, first off,
if she generally felt excited or worried about the current
state of AI and music.

Speaker 1 (24:45):
I think I would say I'm quite both. One of
the main things I'm doing right now is trying to
figure out the ways to make this safe for humans,
not even just existentially, but like socially and emotionally and civilizationally.
But I also think that's totally possible.

Speaker 3 (24:59):
One of her biggest experiments is a new project allowing deep fakes, or songs that clone her voice, which she announced when Heart on My Sleeve went viral. She tweeted in April: feel free to use my voice without penalty. I have no label and no legal bindings. I guess it's not surprising Grimes is into AI; her dark electronic

(25:19):
music already embodies a kind of tech human coexistence. She
cultivates an extremely online sci-fi anime persona, live streams video games, and has sold digital art as NFTs. When I
asked her how she sees AI being used in music,
she compares it to auto tune, that instantly recognizable tool
allowing anyone to hit those high notes.

Speaker 1 (25:38):
I am really pro democratizing the process. I have a pretty interesting, recognizable timbre to my voice, but I have a weird speech thing. Like, I just am not that in tune, and I don't really care about being in tune. And as a producer, Auto-Tune is a good sort of metaphor, because I think we got, especially in the last decade, a lot more interesting and unique topline than we'd gotten previous to that, because

(26:01):
you had all these kind of brains that are not
singer brains, like singing really well. And I think you
especially get magic a lot of the time when you
have someone who doesn't have the fundamentals or wasn't super trained in something, and they just try it for
the first time.

Speaker 3 (26:15):
So the more AI can support creators by lowering the barriers to entry for composition and production, the better. That melody you just wrote on an acoustic guitar could quickly sound like a full band in the studio, without actually needing the band or the studio.

Speaker 1 (26:33):
And I personally know a lot of artists who have really struggled. The process of finding people who are good and then paying them can be really hard, and especially with the complexity of music publishing, that can be really hard.
So if you can just sing a song and then
get like a bunch of different generations of instrumentals behind it,
I think for singers specifically and songwriters, there's a lot
of opportunity there.

Speaker 3 (26:53):
Grimes's inspiration for her own initial experiments with AI came from the idea of training a model on her own work, bottling her entire style into a product, not just one particular sample, effect or tone. She imagined a database of all sorts of different music producers, from Timbaland to underground talent like the late producer Sophie, getting a reliable

(27:15):
income from selling this type of data. And then came the hype and the panic around Heart on My Sleeve, which served as a kind of wake-up call to get the debate around AI going.

Speaker 1 (27:26):
I think it's good for people to push the limits
with what's legal so everyone sees what the technology can do.
I tried to make my voice a bunch of times,
but the technology just hadn't been there.

Speaker 3 (27:36):
Grimes partnered with CreateSafe to launch software called Elf.Tech, which uses AI to replicate her voice. The way it works is this: music creators can upload audio or record directly into the app, then receive a file with the same audio, but with Grimes's voice instead of their own. If the song gets an official release, the royalties are split fifty-fifty. Grimes's manager Daouda Leonard told me

(28:00):
when we spoke that fifty thousand people are using the
product and they've had over one hundred thousand audio generations
uploading their audio or playing with the tool. Of the more than one hundred Grimes AI songs that have been put on streaming platforms, one has over six hundred thousand plays, he says. And if you're wondering what the Grimes clone actually sounds like, here's a track by Benoît Carré, yes, Mister

(28:22):
Daddy's Car himself, called Us Noir, in which Grimes AI replaces his vocals, sung in French.

Speaker 1 (28:27):
(singing in French)

Speaker 3 (28:45):
Compared to how collaborations are normally done, this is way less paperwork, way less hassle, and way more streamlined than usual. Now there isn't just one Grimes, but potentially infinite Grimes, which, in an always-online, twenty-four-seven world of social media and music streaming, is a huge advantage.

(29:10):
This takes fan engagement into a new era. Grimes also reckons the music being created in a more AI-driven future could actually sound more exciting than today's all-you-can-listen-to streaming buffet.

Speaker 1 (29:22):
If generative music is deployed very carefully and safely, you still create an environment where people want to create. You incentivize it. It's like, you want to be a really unique producer so that you have a unique sound that it's worth training a model off of, that is recognizable.
I feel like a lot of times the algorithms, like
the streaming algorithms, really encourage music to sound the same.

(29:42):
This is something that I think would really encourage music
to sound more different. And I think that is like
a net good.

Speaker 3 (29:48):
So far, so good. But Grimes is not a utopian.
She's also aware of the risks. I asked her what
her red lines are, what the downsides are to this.

Speaker 1 (29:57):
I think, especially with the deep fakes, people should have consent. I think that's super important, super essential. Like, even though I'm doing this with my voice, I think there is potentially an argument that deep fakes should be illegal across the board. I don't actually see a long-term utility for deep fakes. Like, I see a lot of potential for political distress and like political unrest and harming

(30:23):
especially women. Is it worth the emotional downsides? I don't know.
That's part of why I'm doing this, is running the experiment.

Speaker 3 (30:28):
Grimes has a point. It's tremendously powerful to imagine artists
rather than tech platforms or record labels, leading the charge
of AI clones for their own benefit, But the issue
of control is still there. What if malicious actors, hackers,
fraudsters use Grimes's voice to sing about hate crimes or
to ruin her image. What if her own work becomes

(30:50):
devalued as a result? What if a lot of bad content gets uploaded? If there's one big challenge hanging over a platform like Elf.Tech, it's probably long-term sustainability: it takes resources and bandwidth to run a platform. So while it makes sense that models would be run at the individual artist level, it should be done with a pragmatic, not idealistic, approach. This is all moving

(31:12):
very fast, and Grimes says we need to slow down
and think of all sorts of ways to minimize AI
harms and maximize benefits.

Speaker 1 (31:19):
I think a lot of people think it's just going to be great. If we just make these things, they're just going to be great, and we're fine. And a lot of people think this is definitely a dystopia, and we're definitely for it, and I think it can truly be a third thing. It just is going to require a lot of working together, and stuff that we have not always displayed. We haven't always been great at that, but

(31:40):
I really truly think there's no reason why we couldn't
be better at that.

Speaker 5 (31:43):
Now, what's your vision of the future?

Speaker 3 (31:46):
How do we consume music? And where is your voice? Is it everywhere?

Speaker 1 (31:52):
Personally? I love that. I always use these
two examples. But I think League of Legends and Harry
Potter have done a very good job of not punishing
fan art. They don't seem to issue takedowns. They let people sell prints of, like, League of Legends characters and stuff, right? There's just, like, an insanely prolific amount of Harry Potter fan art, and it makes

(32:13):
the community richer and super engaged. And Grimes is, like, pretty different from how I am, and I see Grimes as more of a character, as a science fiction project. It's just all these things, and I don't feel bad
about not having ownership of that. I would like it
to not do harmful things. But it's like you look

(32:34):
at, like, Disney. It's like a lot of the best Disney stuff was made after Walt Disney was dead.

Speaker 3 (32:41):
So maybe there is that third way between utopia and dystopia after all. And maybe there is a chance at a more punk-rock, independent approach than what the big companies are offering. But we soon need to talk about some of the real existential risks being posed by AI, and also why Rick Astley could be the key to one of the most burning issues out there: copyright.

Speaker 2 (33:04):
Did you say Rick Astley?

Speaker 3 (33:06):
Yes, Tim, I'm never going to give you up, never
going to let you down. And actually that's the closest
we're ever going to get to a Rickroll.

Speaker 2 (33:14):
Okay, I'm excited to hear more as I've been throughout
this episode, but let's take another quick break and then
pick this back up on the other side. Sure, Okay,
we're back with Bloomberg opinion columnist Lionel Laurent, and we're
reaching the end of our journey into the world of
AI and music. What's next, Lionel?

Speaker 3 (33:40):
So we've talked a lot about the big tectonic shifts
around our music, but we haven't really talked about the
existential threat at the heart of this story. There are
two hundred and thirteen thousand, seven hundred and thirty eight
identified singers and musicians out there, according to the twenty
sixteen US Census, and that is, of course, only talking
about the States. They won't be able to keep up

(34:00):
with a deep-pocketed AI industry that's attracting money and talent.
And even if we keep telling ourselves we'll always value
human connections, the fact is these musicians are already under
pressure from streaming. The top one percent of artists on
Spotify earn ninety percent of the royalties, meaning that for
everybody else, live music has become essential as a way

(34:22):
to make a musical living. But even that is getting
tough as top concert tours from superstars like Taylor Swift
get bigger, pricier, and more dominant. Singer and songwriter Taylor
Swift has a new reputation for boosting the US economy.
It's estimated that her tour could generate over four and
a half billion dollars in consumer spending. More musicians are quitting

(34:47):
the industry, unable to make a living, and it's become
normal to see Grammy nominees working as realtors. Sam Grimley
is one musician who spent years performing with artists including
Tom Jones and Ed Sheeran, but who has since left
the industry.

Speaker 11 (35:02):
Being a touring musician is the best job in the
world for someone who's twenty five years old. But once
you get to a certain stage in life, the sort
of glamour starts to fade and you start to want
slightly different things in life. So I wanted a slightly
more stable existence. When I launched my music career, it

(35:23):
was around the time that Napster was turning the music
industry upside down, and over the next fifteen years roughly
the overall revenue from recorded music absolutely plummeted. So in
a sense, I was lucky that I had gone into
the live music industry. But another way of looking at that

(35:46):
is it became increasingly the only option for musicians to
make a viable living.

Speaker 3 (35:57):
So if there is an existential threat out there, it's
that the combined forces of streaming and AI will break
music's long-running passion principle. This is the force that
keeps new musicians coming up and dreaming that they just
might have a shot at making it big, even if
so many of them don't. If AI music is so

(36:18):
successful that it smothers platforms like Spotify and disincentivizes new
music completely, that would be a disaster.

Speaker 11 (36:25):
I do think AI could replace some aspects of what
musicians currently do. It seems entirely plausible that in the
very near future we could end up with AI session
musicians who are able to replicate the feel or the
sound of various famous musicians, and that really does represent

(36:47):
a serious threat to people for whom session performance is
their livelihood.

Speaker 3 (36:52):
The poetic twist is that Sam is now a lawyer
and he talked to us about copyright, an issue that seems,
let's face it, a little dull, but it's actually absolutely
critical to the AI story. Copyright protects the ability to
profit from art, and it also incentivizes the creation of
new works. Maybe we need to strengthen it or adapt
it to AI-proof future generations.

Speaker 11 (37:15):
Those AI systems are trained on human creations, so there's
a question mark over whether those creators would or could
be adequately credited, compensated, or able to control the output
of those systems.

Speaker 3 (37:32):
For example, imagine an AI trained on the Rolling Stones
that outputs a piece that sounds like but isn't actually satisfaction.
Might the rolling Stones have a case for compensation. Maybe
we need new rights to those musical stems that go
into an AI or an artist's general feel or groove.
And that is where Rick Astley comes in.

Speaker 11 (37:53):
There's a very interesting case being brought in California by
Rick Astley, and he's essentially claiming that imitation of his
voice is an infringement of his rights. If an AI
system is trained to sound like, or trained to have
the feel of, I don't know, Ringo Starr, could Ringo Starr

(38:16):
demand a royalty fee, or demand a license fee, or
prevent the usage of that track?

Speaker 3 (38:23):
To be clear, that Rick Astley case refers to a
human imitating his voice, not AI, and Astley also settled
out of court, but the fundamental legal principle remains the same.
Record labels are actually pushing for a new federal right
of publicity law that would protect artists against the unauthorized
use of their likeness or identity. Another way to AI-proof

(38:44):
copyright is by considering the possibility of some kind
of junior credit to the machine itself. This has been
explicitly ruled out by the recent screenwriters' union deal in Hollywood,
but in music things aren't settled yet. Crispin Hunt, who
we heard from earlier in the show, thinks this might
help us also re evaluate the role of bots and
AI and perhaps allow us to pay their human counterparts more.

Speaker 8 (39:08):
I believe we need to give AI some form of
formalized not quite copyright, but maybe a neighboring right so
that it has a value. Otherwise it will be completely free,
and that will completely undermine the possibility of humans making
a living from music.

Speaker 3 (39:26):
Some of that is getting closer. The website Boomy says
that music produced using its platform belongs to Boomy, and
albums produced by Endel carry its name on the songwriting credits.
If that were also accompanied by better transparency on what
is AI and what is human music, we could be
at the start of a more positive debate about better
rewarding authentic human creativity.

Speaker 8 (39:49):
I think it would be fairly straightforward for the streaming
services to have a button that you can flip where
you listen to only sixty percent human-created music, or
you can set a level. If the robot music
has some kind of neighboring right or copyright, then I
don't think the streaming services can charge as much. I
think they can charge twenty quid a month for organic

(40:09):
human-made music, and three quid a month if you're
going to be listening to robots.

Speaker 3 (40:14):
Crispin's idea could be getting closer to reality. Universal Music's
partnership with Deezer proposes to pay human artists more above
a certain threshold, but more needs to be done. This
could be a good moment to consider a whole new
payment model for streaming, such as the user-centric model,
which would take everybody's individual subscription fees and pay them

(40:36):
out according to what they actually listen to themselves on Spotify.
The current system puts all of the fees into one
pot and distributes them according to market share. So I
think ultimately, even as there is so much to grapple
with when it comes to AI, there's one big positive.
It just might force us to put a new value
on what's authentically human and that might just be the

(40:59):
third way between utopia and dystopia that keeps artists stepping
up to the plate.

Speaker 2 (41:05):
Okay, Lionel, I have to jump in again. As you know,
Crash Course is all about learning something new, and we
don't let anybody get off the show without telling us
what they've learned. So what were your key takeaways?

Speaker 3 (41:17):
So I guess one takeaway is that AI is already here,
so the debate needs to shift to how to use
it safely and also how to change the industry's economics
to put human artists first. I think what isn't clear
is the actual level of demand that exists for AI music.
I mean, Napster made everyone feel like a kid
in a candy store. This feels a little more niche,

(41:38):
like VR or even the metaverse. I think, though, it's
clear that tech is taking over more and more of
music and culture, and that is going to be what
ultimately holds lessons for the rest of the economy.

Speaker 2 (41:52):
All right, Lionel, that's a wrap. Thanks for joining us today.

Speaker 3 (41:55):
Thanks for having me. This was fun.

Speaker 2 (42:00):
Here at Crash Course, we believe collisions can be messy, impressive, challenging, surprising,
and always instructive. In today's Crash Course, I learned that
while AI is turning the music industry on its head,
as it is so many other industries, there are also some
very smart and creative people trying to figure out how
to navigate around some of those problems. What did you learn?

(42:21):
We'd love to hear from you. You can tweet at
the Bloomberg Opinion handle, @opinion, or me, @TimOBrien,
using the hashtag #BloombergCrashCourse. You can also
subscribe to our show wherever you're listening right now, and
please leave us a review. It helps more people find
the show. This episode was produced by Moses Adam, Lionel

(42:41):
Laurent and Anna Mazarakis. Our supervising producer is Magnus Hendrickson,
and we had editing help from Sage Bauman, Jeff Grocott, Mike
Nitze and Christine Vanden Bilart. Blake Maples does our sound
engineering, and our original theme song was composed by Luis Kara.
I'm Tim O'Brien. We will be back next week with
another Crash Course.