
November 17, 2025 · 36 mins

Spill the tea - we want to hear from you!

A world-touring jazz bassist turned educator and AI builder joins us to explore what happens when smart tools meet hard-won craft. We dig into how Futureproof Music School blends a curriculum-aware chatbot with real mentors so producers learn faster, pay less, and still develop their own voice rather than a template sound. From Wembley Arena stories to DAW specifics, John breaks down what large models already understand, where proprietary production knowledge still wins, and why structure matters more than infinite answers.

We take you inside Kadence, a memory-based AI co-pilot that analyses mixes, compares references, and serves targeted, actionable feedback instead of overwhelming students with lists. Think fewer rabbit holes, more progress: clear mix notes, arrangement guidance, and strategic nudges that build week over week. We also get honest about what students actually want from AI right now—help with marketing, release planning, and social consistency—so the music doesn’t drown under admin. The throughline is creativity as curation: your taste decides what ships, even if AI offers a hundred options.

We tackle the big questions too. Can detectors keep up as artefacts vanish? What counts as responsible use when training data is opaque? John argues for consent and compensation, drawing a sharp line between experimentation and high-stakes commercial work while legal frameworks mature. Looking forward, we preview screen-aware and voice-first coaching that can see your DAW and fix problems in context, compressing learning curves without flattening style. If you care about music production, AI ethics, and building a career that sounds like you, this conversation brings clarity, nuance, and practical steps.

Enjoyed the episode? Follow and share with a friend, and leave a quick review to help more producers find the show.

Support the show


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Stephen King (00:00):
Hello everyone! We are having a really, really, really exciting time. We're speaking to so many amazing people. And today, returning is Imnah.
Hi everybody.
Clearly we've been talking to so many incredible people that everything is all just one big AI blur at this point.
It is.
And this is a musical blur.

(00:21):
So who did we speak to today?

Imnah (00:24):
We spoke to John von Seggern, whose name is said exactly the way it looks. And he is the CEO and founder of Futureproof Music School. And we got to have a really interesting talk with him about how AI is kind of enhancing different aspects of his music school. And also about the things that AI can't do when it comes to

(00:46):
music, like take over from musicians, just because there will be a fundamental aspect of uniqueness and creativity in work that's done in any form of the creative arts that just cannot be replaced.

Stephen King (01:02):
And we go through a whole lot of things related to
education as well.
So if you are all ready, we are ready to go.
Here we go.

Imnah (01:38):
Everyone say hello to John von Seggern. He is the CEO and founder of Futureproof Music School. And uh yeah, we're really excited to have you on here, John.

John von Seggern (01:50):
Yeah, thanks.
I'm I'm really glad to be here and talk about AI.

Imnah (01:54):
Amazing. So first off, I think let's tell people a little bit about who you are, your background. And so as we understand, you've got a background in music, in education, and also now in AI. How does that happen to one person?

John von Seggern (02:10):
Well, if you're a musician or an artist and you have a long career, I think you end up going through a lot of iterations and incarnations of what you first started to do. So I was, I am originally an acoustic jazz bass player. That's what I trained to do in school. I went to jazz school in New York.

(02:30):
I then moved to Japan to pursue my early jazz career for several years. And I was playing a lot in Japan, but I realized at some point that jazz musicians don't get paid very much. So then I became open to other kinds of musical employment. And uh as fate would have it, I was hired by one of the biggest

(02:53):
top stars in Asia from Hong Kong. His name is Jacky Cheung, uh Cheung Hok-yau in Cantonese. So I signed on with him for a year and toured all over the world. We played like a hundred concerts, and these were like giant concerts. Like I've played at Wembley Arena and Madison Square Garden and other giant venues with different

(03:16):
Chinese stars. After playing with him for a year, I stayed in Hong Kong for another four or five years, and I ended up uh eventually I'll try to try to make this short, but my career's pretty pretty all over the place. Um I ended up doing a graduate degree in Hong Kong on the

(03:37):
effect of the internet on music, specifically on musical styles, actually. And then I moved back to the US and I ended up working in I moved to LA and then I ended up working in music technology. I've worked for a couple software companies, including the German company Native Instruments, which is very well

(03:59):
known. And then about 15 years ago, I got into education, and since then I have run online programs in electronic music production specifically. And I've been at a few different schools, and last year

(04:20):
I left my last institution and started my own school with a partner.

Imnah (04:26):
Wow, that's fantastic.

John von Seggern (04:29):
Just to say uh throughout my career in education, my greatest interest has been adopting new technologies to help people learn better and faster and easier. And so my interest in AI was just an extension of that. It was just like, oh, this is the next thing we can use to help people learn better. So that's that's how I got into that.

Imnah (04:52):
Perfect. So it actually seems like you've lived a very full life so far. So you've done so many different things. How did Futureproof come about? What was kind of the seed of thought that was in your head at the time?

John von Seggern (05:06):
Sure. Well, um, I'm always trying to think how to do things better and solve problems that come up. And so Futureproof was really the outcome of thinking about my prior program and how it could be better and reach more people. Um, I I was working at a school here in LA called Icon Collective, and my online program was very highly rated

(05:27):
and successful with the students, but it was quite expensive. And so the biggest problems I saw were uh we well, for one thing, having it be so expensive meant that a lot of people couldn't come, and often the best artists and musicians don't have a lot of money, so I felt like some of the best people

(05:48):
couldn't come. And then also, uh, like most schools, we were offering a year and a half set program that everyone would do. So I became very conscious that if you're somebody that already knows quite a bit and is maybe trying to make your career, you're never gonna think about going to a school like that.

(06:09):
That's for people that are kind of starting from the beginning or close to the beginning. And so that was a bummer to me. We weren't attracting the best students and the ones that we really wanted to work with. And so when AI became available to us to use, I mean, kind of in the wake of ChatGPT, as I got into it further and learned

(06:32):
more about it, I realized we could be using this for part of the educational process, and by doing so, we could potentially make it a lot cheaper and bring high-quality education to a lot more people. And so our basic concept is uh we want to meet people where they are and take them where they want to go.
(06:54):
And we're using AI tofacilitate that.
Um AI is only part of thepicture, though.
We believe very strongly.
I believe that, especially inthe arts, you need to learn from
a master artist at some point.
Once you've got kind of thebasic skills under your belt,
you need somebody, you need tolearn from somebody who's done
it and has had to make thosekind of decisions about their

(07:17):
career and their life and their artistry. And so we're trying to use AI more at the lower levels to help people, and then we're also matching them with mentors to help guide them. So it's kind of an AI-human hybrid model, right?

Imnah (07:33):
So they still oh go ahead, Steve.

Stephen King (07:36):
Oh, sorry, I was sorry, I'm not talking over you. We tend to do that, don't we? Um, so I I wanted to talk about the the technology that you're using, because I'm aware of voice recognition and voice synthesis, right, in terms of AI. What what services are you relying on for your music teaching, your music education?

John von Seggern (07:56):
Um, well, I'm using the I'm using the same big foundation models that everybody else is using. Mostly using GPT-5 and Gemini. Uh, we use Gemini specifically because it's the best for musical analysis. And I've built a chatbot that is integrated with our curriculum.

(08:16):
My original model was the Khan Academy AI co-pilot. They call it Khanmigo. It kind of you're studying something and then it pops up and it helps you understand it in a different way, or you can ask more questions, or you can kind of go deeper into it if you want to. So that's what I've built. And it's also got a very deep memory about each person, and it

(08:37):
keeps track of its conversations with you and the music you've submitted and what you need to work on, and it is like resurfacing those things as you go along.
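
For readers curious how the kind of curriculum-integrated co-pilot John describes might be wired together, here is a minimal Python sketch: a per-student memory object plus a prompt that combines tutoring instructions, the current lesson, and that memory. The names, fields, and prompt wording are illustrative assumptions, not Futureproof's actual implementation.

```python
# Minimal sketch of a curriculum-grounded study co-pilot with per-student
# memory, in the spirit of what John describes. All names and the storage
# layout here are illustrative assumptions, not Futureproof's actual code.
from dataclasses import dataclass, field


@dataclass
class StudentMemory:
    """What the assistant remembers about one student between sessions."""
    conversations: list[str] = field(default_factory=list)
    submitted_tracks: list[str] = field(default_factory=list)
    focus_areas: list[str] = field(default_factory=list)  # e.g. "low-end balance"


def build_prompt(question: str, lesson_excerpt: str, memory: StudentMemory) -> str:
    """Combine tutoring instructions, curriculum context, and student history
    into one prompt for whichever foundation model (GPT-5, Gemini, ...) is used."""
    focus = ", ".join(memory.focus_areas) or "nothing recorded yet"
    return (
        "You are a music-production tutor. Prefer Socratic questioning and "
        "guide the student toward the answer rather than handing it over.\n\n"
        f"Current lesson excerpt:\n{lesson_excerpt}\n\n"
        f"Things this student is currently working on: {focus}\n\n"
        f"Student question: {question}"
    )


# Example usage: assemble a prompt for a student partway through the course.
memory = StudentMemory(focus_areas=["sidechain compression", "arrangement pacing"])
print(build_prompt("Why does my drop feel weak?", "Lesson 12: Energy and dynamics", memory))
```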

Stephen King (08:46):
So um what does it look like? So, so for example, I am composing or I am learning to play something on my saxophone, uh, and I record it, and then do I upload it to the to the AI, and then the AI will tell me the tonality or give me some advice on that. Or how does it work?

John von Seggern (09:05):
It can, yeah, it can do exactly that. That's but that's only one of the functions. Um a lot of the times it might just be understanding some concept better, or I I've programmed it to if somebody has a question about something in the curriculum, it has a few different educational strategies it'll try. Like one could be Socratic questioning, then it just keeps asking you questions.

(09:25):
I I don't think it's that useful. Just having a bot that you can ask questions to and it tells you answers is only of limited use in education. You have to your responsibility as an educator is to help guide people to learning and knowledge, and you need to have it structured in a way where the chatbot is part of the overall

(09:48):
experience and not just like an extra thing hanging out that you can ask questions to. It's easier to say that than to do it. It's a long process, so um a lot of it actually the the most important thing to think about is we all have these giant AI models available, but what comes out is totally dependent on

(10:10):
what goes into it. And so it's a combination of the instructions that we're giving it and the knowledge that it has about the student, and then the knowledge that it's drawing from the curriculum that we designed.

Stephen King (10:21):
So, in terms of the knowledge that you've put into this, uh have you created a music uh database? Have you uh is there a model which is uh which you've uh inputted musical sheet music or or even just the the way that things should be played?

John von Seggern (10:43):
I mean well, first of all, you gotta understand we're not teaching people how to play the saxophone, it's mostly about electronic music production. So a lot of what we're teaching is software techniques. If we were trying to teach people to play instruments, that would be a whole different challenge, and we're not equipped to do that. Um But as far as as far as what you said about the knowledge

(11:06):
we've put into it, I mean you have to realize that there's two sides of that. For one thing, these giant foundation models like GPT-5, they're trained on everything already. So it already knows all about music theory and composition and stuff. We don't have to put that in. What we have found though is the the concepts and knowledge

(11:28):
our students really want are the kind of production secrets that, like, my partner is a dubstep DJ and producer. He's a pretty famous guy. Protohype is his artist name. So he knows techniques that the AIs don't know, actually, because there was no place for them to get them from. Those things are in our curriculum and our AI, whatever

(11:51):
is in our curriculum, our AI draws from. We haven't really given it a lot of specialized knowledge about music. That's not really necessary, honestly. I do give it some software manuals about the software that our students most commonly know. So it has like a it's better in some cases that it has a single source of truth rather than just looking things up on the

(12:12):
internet. But for many areas, the models already know everything. So it's more a question of surfacing the right information at the right time.

Imnah (12:21):
Yeah, that's interesting that you put it like that, especially if we go back to think about how you mentioned it's kind of like a human and AI hybrid uh model when it comes to your approach of integrating AI. So is there any aspect of the creation and the production that AI is involved in, um, perhaps from the students' side of

(12:42):
things or maybe even from the faculty side of things?

John von Seggern (12:46):
Um, we're developing some curriculum around generative AI music creation, but in general, we're more interested in using AI to support the students in other ways. I think what our students are most interested in now is how they can use AI to help them more with marketing and business

(13:06):
and social media posting and this kind of thing. Because I think uh for musical creators today, the biggest frustration is there's too many things to do. Writing the music is just part of it. You have to create your public persona and you have to post online all the time, and there's just so much to handle. So we're trying to teach the students how they can use AI to
(13:29):
support themselves in a varietyof ways.
Um most of our students aren'tthat interested in generative AI
music creation at this point,actually.

Imnah (13:41):
That's interesting.
And do you see um perhaps any scope for new innovations kind of changing what that looks like right now?

John von Seggern (13:49):
You mean on the creation side?

Imnah (13:51):
Yeah.

John von Seggern (13:52):
Yeah, I do.
I think that that will be it will become one of the tools that people have to make music. And at at this point, uh I would say AI music creation is widely hated and opposed by a lot of people, but I do expect

(14:12):
that to change.
I think it'll be normalized and everyone will get used to it before long.
I haven't spent that much time.
Go ahead.

Imnah (14:20):
Um, I was gonna ask if you think that there's a potential that AI would altogether just replace musicians. Like one day we wouldn't have Steve sitting on the other side of a call playing the saxophone. We would just have AI Steve.

John von Seggern (14:34):
I don't think that will happen personally, and there's a couple reasons. One reason is the results you get in art are in large part because of the process that you went through. So you could use sam-, I mean, leaving AI out of it, you could always just get samples of saxophone solos for your music

(14:55):
now. But if you had Steve recorded, it would be something special and sound like something that nobody else has. I always encourage our students if they can sing or play any instrument or do anything, they should always try to include that in their music because it gives them a flavor that other people don't have. Um also I think it's important to think, let's say and we're

(15:17):
we're getting there very quickly. Let's say we're in a world where you can generate a pretty good song by hitting a button. Okay. But that raises the question then, okay, if I can make one song in a minute, I could make 10 songs in 10 minutes, or 100 songs in a hundred minutes, and then which songs am I gonna release, or which ones are better than other ones?

(15:38):
Somebody still has to decide that. So even if we are in a world where everyone is generating everything with AI, there is still the human creative vision that you're using to decide which thing do I want to put out in the world and put my name on it. You know what I mean? Even if you didn't do anything with it, I think that's really, really important. And I think electronic musicians, especially, have kind

(16:01):
of been working like that all along because the approach for many artists has always been it's not so much that you hear something in your head and you try to do it, although sometimes that is the case, but much more often you're just experimenting with things in the studio to see what happens. You know, what if I plug this synthesizer backwards into this thing? Wow, that's really cool, and then you make a bunch of sounds

(16:23):
and then you come back and you realize, oh, that little bit would be great to build a track around. So what we believe at Futureproof is ultimately creative vision and taste will become much more important. And um whether people are generating all the music or not,

(16:44):
they're gonna be using tools to help them do things that are more the grunt work side. Like, like if I could have a, and there are things like this already, if I could have a software that helps me mix my track better, I'd like to use that. I might not agree with everything, I might still work with it on my own, but if I could get halfway with AI, I'd

(17:05):
love that. And then I could work more on uh making the actual music, you know.

Stephen King (17:11):
Just if they if someone is using some AI to create some music, we see we get integrity issues, uh whether it's with advertising or whether with film or whether with text. Now, is there any way, do you know, that you could check if someone is using AI-generated music?

(17:32):
Or because if in the electronic music scene everything's electronic anyway, right? So is there a fingerprint that you can detect from this, or is that something that is just not required right now?

John von Seggern (17:44):
Well, that's an interesting question.
And there are, yes, there are detectors, just like there's AI text detectors. I think the music detectors probably work a little bit better actually because uh the generators leave detectable artifacts in the music that a computer can detect. And there's a couple platforms. Uh Deezer is a it's a competitor of Spotify, and they

(18:08):
do have a uh I don't use Deezer, but they they are able to detect all the AI music and show you whether it was made by AI or not. I think the the tricky bit is gonna be I expect more people in the future are gonna be using AI to do part of the music. Maybe they just make the drums and then they do everything else. And I think it's gonna get more and more tricky for people to

(18:29):
distinguish what was made by AI and to what extent. And I also think it will improve. I mean, eventually I don't think the detectors will work. I already think with text, like there's all these, like I'm always reading in education about students who said, I wrote this essay and handed it in, and now my professor says I wrote it with AI and I failed.

(18:50):
You know, and with text, it I mean, sometimes you can tell, and sometimes it's obvious, but a lot of times it isn't. I know how to use AI myself to make text that doesn't sound like ChatGPT.

Stephen King (19:02):
So that is always the case of the false positive,
which is going to dramatically affect a student's life.

John von Seggern (19:10):
Uh I do think sorry, but in music, there is the question of obviously copyright and intellectual property. And so right now, if you were working on a big Hollywood production or something, or actually pretty much any commercial kind of job, like you probably wouldn't want to use any of these AI generators because the copyright situation

(19:34):
is kind of unresolved right now.
I do expect that that will be resolved. The biggest question right now is would it be technically possible for AI generators to determine what the influences were that were drawn on for a particular thing and then pay some kind of royalty model? And that's I suspect, or I highly suspect that's what the

(19:58):
major labels and the AI music companies are negotiating right now. But we'll see how that works out. But I I expect they'll make some kind of deal and money will flow.

Stephen King (20:09):
Money will always unlock the doors, right?
Um you have this fantastic chatbot, Kadence, um, which again, other literature says that these chatbots are really beneficial to students. Uh, how long have you been offering Kadence and have you got feedback? Has there been any uh evaluation on its effectiveness

(20:32):
as a teaching buddy?

John von Seggern (20:34):
Um we're I've been working on it for about a year. We are a relatively new school. We only officially launched in May, so I can't say that I have a lot of data about how much Kadence is helping. I have a lot of data about how much chatbots have helped other educational institutions. Um but also part of the question is uh we're on this

(21:01):
technological curve of advancement with AI, and so as an AI builder, you have to you have to think like what can I build today and in the next six months and in the coming few years? And those will often have different answers because like you mentioned the voice bot before.

(21:22):
I have built a voice version of Kadence that's on our website actually, and it can see your screen and discuss with you. It the the voice technology works great, actually, but the screen sharing technology, which is it's something made by Google, it doesn't quite work well enough for the applications

(21:44):
I want, but it's really close. It can't quite read the fine print on the screen, actually. That's the main problem. But uh any day now, Google will release an update that's like, okay, now this works better. And then that technology will leapfrog a lot of the things I'm doing now because once the AI can see what

(22:04):
you're doing on the screen, then you don't have to explain it anymore, and it can be a lot more helpful, and you don't have to feed data in the back so much because it can just see you and help you.

Stephen King (22:14):
So I excuse me, because I'm not that familiar with electronic uh music production. I can only I do sheet music and classical notation. Um correct me if this question is wrong, and you can hopefully translate it into your own into your own way. But I'm assuming if the the the AI browser, which they've now created, can see what you're writing or what you're

(22:38):
composing, it will then be able to suggest better uh better chords, better, better ways of of uh of writing a particular piece of music. So would there be co-authored music potentially?

Imnah (22:57):
Kind of like Grammarly, but for music.

Stephen King (23:00):
That's a very good explanation.

John von Seggern (23:02):
Yeah, there's a few. Music is different in a few ways. Um it's really hard at this point to process music in real time. You kind of have to make a mix and then give it to it, and then it can and also it needs to, to do a good job of analyzing the music, it needs to have kind of the whole thing. It needs to have the context of what's happening to figure out

(23:23):
what kind of music are you trying to make and what happens in the arrangement and these kind of things. Um but I also think is it really right to say better, or is it I one of the ways that I use AI in non-musical things is for brainstorming and coming up with new ideas because I can ask

(23:45):
it, you know, come up with a hundred new ideas for this, and then I'll just look through them all, and most of them maybe are bad. But number 93, oh, that was great. I never would have thought of that, you know. But I'm still the one picking what to do. It's just like offering me possibilities.

Imnah (24:01):
So do you think that the inclusion of AI kind of makes this process somehow faster? Streamlines it maybe for your students. Maybe think about it, I guess, in terms of being different from your contemporaries. Like, does AI actually give you and your students a boost in that regard?

John von Seggern (24:22):
Um, well, on the education side, it
definitely does.
As far as creating the music, I think potentially it will. But uh I think what producers would like to have is tools that help them with parts of the process that are grunt work or drudgery now. Uh, like for example, we've all been using samples for a lot of

(24:46):
things for years. Until a few years ago, you would have been just like looking through samples, like, no, not this one, not this one, not this one, not this one. And now better tools have been developed that will kind of help you narrow the field. Or, for example, you can find drum and percussion loops where the timing lines up already.

(25:07):
They might be all different styles, but it can see the timing. Any of these would work, and then you're kind of at least you're in the ballpark. So I think producers would like things to help them. I think producers and artists are less interested in something that does the whole work for them because then they have less investment in it and it's less personal and other reasons.

Imnah (25:28):
Yeah, I think I see it similarly as a parallel to even as we talked about content, right? It's completely different when there's a person sitting behind a computer putting their thoughts in um and inputting kind of what turns into an entirely different creative output than what you would find from, say, ChatGPT, if you had then asked it to do the same thing.

(25:50):
It's something about our lived experiences and our biases and even imperfections really um that um make those kinds of things unique. And I feel like that would be quite similar for music.
Would you agree?

John von Seggern (26:04):
Yeah, for sure.
And I I should say too, part of my experience running these programs for years is that if your courses and your curriculum are well designed, you can see the students learn and improve. That was my experience in my last program. So I'd see people come in, didn't really know what they were doing. Gradually, as I heard their music, it would get better in

(26:25):
terms of it would sound cleaner, better organized, more like their references or their idols or whatever. But I became aware that the biggest problem was that um as an artist, it's not that useful to just become the 500th person to sound like your hero.
Nobody really cares about that.

(26:46):
If you want to make an impact as an artist, you've got to be doing something that other people aren't doing. And AI could help or hinder with that, depending on how you're using it. You know, if I just go to AI and I'm like, make a techno dance track, it'll just make some generic thing that doesn't sound special at all.

Stephen King (27:08):
Right?

John von Seggern (27:09):
That that that's the fear that there is, but if I spend all day generating things and taking them apart and trying to find, oh, where was like like I have made quite a few things with I have generated quite a bit of AI music that is quite unique and different than anything I would have been able to make any other way.
And that interests me a lot.

(27:31):
It's still not quite at the level where it sounds good enough for me. That's kind of what's holding me back. But um, I definitely think it could lead to a lot of new creative directions that we haven't tried before.

Stephen King (27:42):
I'm just starting to think in my head, and this is a crazy idea, but we we have the large language models, but surely there could be a large music model, something which is specialized purely for music creators, because music's a different language in its own right, it's got it's got its own phonemes, whatever whatever this thing is that they use. Uh there should be a way of building a transformer, and I'm

(28:05):
just going through some of this technology, building a transformer which could understand the context of what music it is you're trying to play if it isn't already existing, and then you'd be able to build different services. You mentioned drums, so we we would want to have a whole series of uh drums that we might want to be able to apply. Uh there's you know there's different uh different uh tempos

(28:30):
that you you have to keep uh with uh with the chords with the guitar, so you could do different things with that. Um I'm just wondering, is is that something that you've even thought about, or is that something I'm just fantasizing, I'm going crazy about? Just so having a, you know, a Claude, but Claude with music

(28:52):
and only for musicians. And does that sound like something?

John von Seggern (28:55):
Which is I mean, yeah, there are things like that. There's there's a lot of work in that area. The biggest company is Suno AI. I don't know if you've heard of them, but yeah, they have a large music model, that's what it is. It's a music generation model that's been trained on uh a lot of music that they're not willing to say what it was, but are currently negotiating with the major labels to gain the

(29:18):
right to use it. But also, I mean the big foundation models like ChatGPT and Gemini are trained on music. In fact, ChatGPT 5 is not, but GPT-4 was, and Gemini Pro is. And so I have a the way Kadence works is I'll take a student's

(29:39):
mix and we send it to Gemini, and then it makes an incredibly detailed long report about the music, including transcribing the lyrics, which is kind of fun. But then it doesn't just shoot that back at the student because as an educator, I know the way for the way to help people learn things is not to

(29:59):
drop huge amounts of information on them, you have to kind of have a structure where you're feeding the right thing to the person at the right time so that they can learn. I've always told my teachers, you know, when you're giving students feedback on their music, don't tell them 10 things. Just try to tell them a couple things. If you can tell them one thing that they actually remember and then they do, like that's huge.

(30:20):
If you tell them 10 things, they're likely to forget all of them. So Kadence is taking this report on the music, and then it's looking at what is this person trying to do, what has their music sounded like in the past, what are their strengths and weaknesses, and then it'll try to tell you something useful and actionable that you could do.
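
To make that flow concrete, here is a rough Python sketch of the feedback loop described above: a long model-generated analysis of the mix is distilled down to a couple of actionable notes using the student's history, and those notes are remembered so they can resurface later. The function names, fields, and placeholder returns are assumptions for illustration, not Kadence's real code.

```python
# Rough sketch of the feedback loop described above. The model calls are
# stand-ins; all names and fields are hypothetical, not Kadence's actual code.

def analyse_mix(audio_path: str) -> str:
    """Stand-in for sending the student's mix to a multimodal model
    (e.g. Gemini) and getting back a long report on arrangement,
    mix balance, lyrics, and so on."""
    return f"Detailed analysis of {audio_path} (arrangement, balance, lyrics, ...)"


def distil_feedback(report: str, profile: dict, max_notes: int = 2) -> list[str]:
    """Stand-in for a second model call that keeps only the most useful,
    actionable points, weighed against the student's goals, past mixes,
    and known weak spots."""
    candidate_notes = [
        "Tighten the low end: the kick and bass are masking each other.",
        "Automate a filter sweep into the second drop for contrast.",
        "Shorten the intro; the arrangement takes too long to develop.",
    ]
    return candidate_notes[:max_notes]


def give_feedback(audio_path: str, profile: dict) -> list[str]:
    """Analyse the mix, pick one or two notes, and remember them for later."""
    report = analyse_mix(audio_path)
    notes = distil_feedback(report, profile, max_notes=2)
    profile.setdefault("focus_areas", []).extend(notes)  # resurface in later sessions
    return notes


# Example usage with a hypothetical student profile.
student = {"goal": "melodic dubstep EP", "focus_areas": []}
print(give_feedback("my_track_v3.wav", student))
```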

Stephen King (30:38):
That's interesting.
And then I will let you go in a minute. I just say it's beautiful that I'm just again in my imagination, because you've got this record of this their journey. We've just seen the Bruce Springsteen movie that's coming out. Uh you've probably seen it already.

John von Seggern (30:54):
I haven't seen it actually, but I know what
you mean, yeah.

Stephen King (30:56):
You know, the in the future there could well be uh an AI integration into one of these documentaries because you will have that element uh from working with your school anyway. So that was just what I had in mind because I saw it and I intend to go see it later. Imnah, sorry, you can you go ahead.

Imnah (31:13):
I was just gonna ask, um, in thinking about how we're
using AI increasingly, um, how do we kind of avoid this one-size-fits-all music? So, how would creators, how would producers, songwriters use AI in a way that actually supports their personal style rather than take away from it?

(31:34):
And then we all kind of have basically like a fast fashion version of the industry.

John von Seggern (31:41):
Well, I I mean, to be fair, like, yeah,
that will be part of the industry. It already is, honestly. Uh, the kind of the fast fashion model, that's funny. But um I think it you know, we talk about this all the time at the school. I think it comes back also to you use different tools to make music and you get different results and different sounds

(32:04):
from each one. I find when I use Suno AI, even though it has it can make you know a very wide range of music, but it does have all kind of a similar sound to it as though it was all made on the same piano or something like this, which is another reason why I would lean against using it for my own music. I'd rather use some crazy process that nobody ever thought

(32:25):
of before. Um I should say I'm more of an experimental musician myself. And before COVID, I was playing with uh really one of the pioneers of electronic music. His name is Jon Hassell. Unfortunately, he died during COVID. But um I was recording with him for four or five years, and

(32:47):
every weekend we would get together and we would try to do something that we never did before every time. And a lot of the times it wouldn't be something good, but but sometimes it would, and we would find some, wow, that's just really new and cool, and nobody ever did that before. Let's make a song around that. And uh AI may be one of the ways that people do that in the

(33:11):
future, but I don't think it'll ever be the only way.

Stephen King (33:16):
We're coming towards the end now, I think. One question here, which I'm gonna split into two. Uh, it's if you could set one rule for the responsible use of AI in music, or since it's education, what would you say for AI and music education? Uh either or or both. What what rules would you put in place for responsible uh use

(33:36):
of AI integration?

John von Seggern (33:38):
Well, I think the biggest the biggest debate
and question now is about the copyright and intellectual provenance. I don't want to use tools where we're stealing from people or uh where the original creator is not being remunerated in some way, but um I think that that will happen.

(34:00):
I think it's just a matter of time. I don't know how it'll be worked out, but um previously, in a previous incarnation, I was one of the earliest laptop DJs using computer files to DJ. That's what I transitioned into after being a bass player for

(34:20):
my previous career. And it was very similar to now. There was a lot of controversy about copyright of the MP3 files and where are they coming from and who gets paid for them. And it even affected me. Like there were like other DJs didn't like me because I was using a computer. I never thought of that. I was like, wow, this is so cool, we're using the computer,

(34:40):
and then these other guys that spent their whole lives playing vinyl records, they're like, no, that's not fair. But in a couple years, it all got normalized, and before long everybody was using computers, and now you hardly ever see somebody playing records because there's a lot of advantages to using a computer. So I expect the same thing will happen this time. But as far as rules, yeah, I would I would say right now I

(35:03):
wouldn't try to make my album with Suno because they haven't really worked that out yet. And I'd like to see money going back to whoever created the ideas in the first place.

Stephen King (35:14):
That's amazing.
I I think we're just at the end of time here. Imnah, would you like to close us down?
Thank you very much, John.

Imnah (35:21):
Yeah, again, thank you, John.
I feel like that was a really um insightful conversation, actually. I think it would also challenge listeners to think more about the authenticity of an individual that they bring to any creative piece of work, um, especially in music, and how that's going to altogether be different from um using AI to

(35:44):
enhance parts of it and then using AI altogether to just create it from scratch. So, yeah, some great food for thought here. So um thanks for talking to us, John.
Um, and uh yeah.

John von Seggern (35:57):
Yeah, thanks a lot for having us.

Imnah (35:58):
I don't know where else to go from here.

John von Seggern (35:59):
It's fun, fun talking about this stuff with
you.

Stephen King (36:02):
And if everyone listening at home would like to
like, comment, or follow.
Uh, what's the social media or what's the tag they should follow for Futureproof? Futureproof Music?

John von Seggern (36:11):
Uh just look up Futureproof Music School. We're on all the major platforms.

Stephen King (36:15):
Super.
And on that note, you could also support us if you so desire, put the links below. Thank you everyone, and goodbye.