
September 6, 2023 70 mins

What if the key to relaxation and creative flow is just a piece of tailored music? Join us as we sit with composer and designer Jeff Miller to explore this intriguing intersection of music and technology. Jeff enlightens us about his groundbreaking work with the Sync Project and the leading-edge Unwind Music app. This app cleverly utilized the principles of generative music to encourage relaxation and better sleep. Our conversation examines how the app matches music to the listener's mood and gradually adjusts it to achieve a desired state, an approach grounded in the ISO principle of music therapy.

We'll also uncover how Jeff and his colleagues ingeniously used data and natural language processing to create a chatbot for Slack. This unique tool was designed to generate personalized playlists, helping to regulate mood and enhance creative flow. We also examine the meticulous process behind building the Unwind music app, spotlighting the significant roles of a diverse team, including music legends like Peter Gabriel.

Tune in as we conclude our fascinating conversation with Jeff, delving into the power of music in liminal spaces - those fleeting moments between doing and being. Here, Jeff shares how the principles applied in his work can be used to create a relaxing sleep podcast, drawing from his own experiences with meditation to guide listeners into tranquillity. With Jeff's valuable insights and our stimulating talk, you're set to discover the science and artistry behind using music as a powerful tool for relaxation and well-being. So, are you ready to tune in to relaxation? Let's get started!

Learn more about Jeff and his endeavors at www.jmcreative.com.

Listen to us on all our platforms.
www.audiblegenius.com/podcast

Our courses
Building Blocks: www.audiblegenius.com/buildingblocks
Syntorial: www.syntorial.com

Our Social Networks
Facebook: www.facebook.com/AudibleGenius
Instagram: www.instagram.com/audiblegenius/
TikTok: www.tiktok.com/@audiblegenius


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:08):
Welcome to the Audible Genius podcast, where we take a behind-the-scenes look at being a musician.
I'm Joe Hanley, and today I'm talking to Jeff Miller, a composer and designer who's worked extensively with technology that uses music to help people sleep.
We'll be talking about how to create music that connects with a listener and then guides their state of mind down a particular

(00:31):
path.
This is a fascinatingly specific journey into composing, technology and the human psyche.
Let's get started. All right.
So, Jeff, you've done a number of things in your career that I'm really interested in, and one of the first ones I want to talk about was your work with the Sync Project, where you made the Unwind music app, which, simply put, used music to help

(00:55):
people relax, fall asleep, that kind of thing, relieve stress.
Tell me, how did it work?

Speaker 2 (01:05):
Well, it begins with the goal of trying to figure out how to use music effectively to affect people's physiological state, and so there's a whole lot of thinking that went into it.
So we can touch on any aspect of that that you like.
But mechanically speaking, it was built around this concept of

(01:28):
generative music, which I know you have some familiarity with, and I'm sure some people in your audience do as well.
But I'll just give a quick overview.
Generative music was popularized by Brian Eno, and what it describes is music that's sort of ever different

(01:49):
and changing, and it's performed by a system.
So in 1978, Brian Eno did this with tape loops, which is very analog.
He had tape looping around chair legs and all kinds of crazy stuff, a bunch of very mechanical machines doing this work.
So we wanted to do a digital version of that.

(02:12):
Generative music is very common in gaming.
But the other sort of piece of it that we wanted to add was to make it adaptive, and what that means is making it possible for the volume, the rhythm, the sample content and basically a bunch of other parameters to change in response to specific

(02:36):
events or inputs, and in our case we were looking to use sensors, either through a phone or sensors that you would wear, so that we could perform generative music but also respond to what was going on in your body, and so that's the palette that we're working with.
When we created Unwind, which was a really interesting sort of

(03:00):
combination of learning about how all these different musical parameters might affect somebody, and we did a bunch of research around that, which I can talk about, and then building a system that would almost in a sense emulate the work that a

(03:20):
music therapist would do when working with a client, which is rooted in this idea of the ISO principle, which you and I talked about not long ago, but I can go into more detail around that as well.
But in terms of how it worked, we had a system that was able to

(03:42):
ingest a bunch of samples, as opposed to more of a synthesizer or MIDI type of performance.
We used sample-based music.
I composed a bunch of music to work with the system, and then we wrote a rules-based sort of system that would allow us to

(04:03):
define specific levels of energy within a composition and then affect it through these inputs of sensors, or actually even through just a user input on the phone.
We would understand sort of where somebody was and then perform the music over time to take them where we wanted them

(04:24):
to go.
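
To picture the mechanics Jeff describes, here is a minimal Python sketch of an adaptive, rules-driven session: the clip names, energy levels and thresholds are illustrative assumptions, not the actual Unwind system.

```python
import random

# Illustrative clip library: each clip is tagged with the energy level
# (1 = calmest, 3 = most active) in which it is allowed to appear.
CLIPS = {
    1: ["long_drone", "low_pad", "sparse_piano"],
    2: ["mid_pad", "slow_arpeggio", "soft_guitar"],
    3: ["melodic_motif", "light_percussion", "active_arpeggio"],
}

def starting_level(arousal: float) -> int:
    """Map a 0..1 arousal estimate (from a sensor or a slider) to a level."""
    return 3 if arousal > 0.66 else 2 if arousal > 0.33 else 1

def perform(arousal: float, minutes: int = 15) -> None:
    """Meet the listener at their level, then wind down toward level 1."""
    level = starting_level(arousal)
    for minute in range(minutes):
        clip = random.choice(CLIPS[level])   # generative: never identical twice
        print(f"min {minute:02d} | energy level {level} | {clip}")
        if level > 1 and minute >= (minutes // 3) * (4 - level):
            level -= 1                       # step down roughly every third of the session

if __name__ == "__main__":
    perform(arousal=0.8)   # e.g. a fairly agitated listener
```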

Speaker 1 (04:25):
Okay, so let me start at the core of that.
What is the ISO principle, thatmusic therapy principle that
was all built on?

Speaker 2 (04:32):
Sure, yeah. The ISO principle is defined as a technique by which music is matched to the mood of a client.
This is the way music therapists work.
Actually, it's at the core of music therapy.
So you want to match the mood of your client and then, gradually over time, modulate to affect the desired mood state.

(04:56):
So you can imagine an example where somebody comes into the music therapist's session in a very agitated state.
They're not going to play something peaceful and mellow to try to chill them out, they're going to actually try to meet them where they are with something maybe that's a little more active, or maybe has melodic content that,

(05:21):
through their sort of lens of music, might meet the client's mood better.
And then they're going to sort of, again, using their craft, modulate over time to help shift that client into a different headspace, which is just magical if you think about it.
Yeah, that is why music therapy is a thing, because there's a lot of evidence and science behind this stuff.

(05:42):
It's fascinating, absolutely fascinating.
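
As a rough illustration of that match-then-modulate idea (a hypothetical sketch, not music-therapy software), a session plan can be pictured as an interpolation from the client's estimated state toward a target state:

```python
def iso_plan(start_mood: float, target_mood: float, steps: int = 6) -> list[float]:
    """Match the client's mood first, then modulate gradually toward the target."""
    return [start_mood + (target_mood - start_mood) * i / (steps - 1)
            for i in range(steps)]

# An agitated client (0.9) guided toward a calm state (0.1):
for i, mood in enumerate(iso_plan(0.9, 0.1)):
    print(f"segment {i}: choose material with energy around {mood:.2f}")
```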

Speaker 1 (05:46):
It really is.
So, since they're a humantalking to a human, they could
just gauge using their instinct,and then they have maybe one or
a handful of instruments thatthey can pick and choose to
start playing for them, and thenthey can just modify it as they
go.

Speaker 2 (05:59):
Yeah, that's one very hands-on kind of way where
you're performing music in realtime, and often music therapists
will have instruments that theclients can have if they're
capable that they can useusually simple things, rhythmic
instruments or just basicinstruments that anybody could
play.
Sometimes therapists will usepre-recorded music as well.

(06:23):
It really just depends on theclient and on the approach.
But they're still using theirinnate musical craft to gauge
what the client's state is, andso that's what we wanted to do
with Unwind.
We wanted to create in thiscase we did it as an iPhone app
we wanted to create a situationwhere you could put this thing

(06:46):
in the hands of anyone and getenough information about them to
understand where they are inthe moment and then use
generative music to meet themthere and then, over time, take
them somewhere.
In our case, we were mostlytargeting relaxation and sleep.

Speaker 1 (07:04):
So I love this process where you take something
that's currently being done inreal life human beings, human to
human and then try to createthis automated app version of
that experience.

Speaker 2 (07:17):
Well, that's kind of blowing up right now, isn't it?
I can't open my feed without reading something about AI, and so this was more of a machine learning model.
It started with a very logical set of rules and responses, but it was intended to move in the direction where it would get better with time, and, yeah, it is a fascinating thing.

(07:39):
I think it's a polarizing idea in a lot of ways.
I will say that we never set out to replace music therapists.
Our main goal was to see could we create something that would be a helpful adjunct to other types of therapies, in-person therapies, or even an adjunct to medication in some cases,

(08:01):
because, as the Sync Project was kicking off as a startup, the opioid crisis was in the news every day.
It still should be, and it's the kind of thing where the ingredients that we were using to create this system could have been a really meaningful adjunct to other types of therapies.

Speaker 1 (08:25):
Yeah, and I think that's one of the big things
that instead of replacing atherapist, you're giving them an
additional tool.
In particular, I think it'sright because they can do it at
home.
The therapist can't go homewith you, so it's like a
supplement to their work withyou.

Speaker 2 (08:41):
Yeah, and we see this more and more as I've worked
with a variety of healthcareprofessionals over the years
this idea of we have a protocolto serve people in-person, but
boy, we really don't want toleave them hanging when they're
on their own.
People need more access, and sothat's always been the lens

(09:04):
that I've looked at this kind oftechnology through.
If you can make increasedaccess and support the goal,
then there really is nothingpolarizing about it.
There's always going to be aneed or at least as far as I can
see, there's always going to bea need for human contact and
human expertise in therapeuticenvironments.

(09:26):
But people are capable ofself-directing and we do it all
the time.
This was the other insight thatwe had going into this work is
that all of us use music andmovies and TV for mood
regulation.
We do it without even thinkingabout it, yeah, yeah, and so, if

(09:49):
you think about it through thatlens, all media has potential
therapeutic value.
So what can you do to helppeople help themselves?
I like to think of it that way.

Speaker 1 (09:59):
Yeah, it sounds like by creating this very singular experience in this app, you're curating it for them.
Compare that to more of a wide environment, like social media, that can take you in any direction, and not necessarily a good direction.
But this is more narrow, with one specific goal in mind.

Speaker 2 (10:13):
Yeah, well, I think the other application that we created, the Sync Music Bot, was a little bit more about curation, because that was relying on Spotify and streaming platforms to bring music into functional audio playlists, which is a little different.
The stuff we did with Unwind was intentionally original

(10:37):
compositions, novel music, and the idea there is that when you're familiar with a piece of music, you can anticipate where it's going, and this concept of anticipation can be counter to letting go in the moment and allowing yourself to drift off.

(10:59):
So it's a challenge, right, because, on the one hand, you really have to meet people's preferences.
Some people do not want to hear acoustic guitars, some people do not want to hear electronic music, some people must hear acoustic guitars in order to relax.

(11:19):
It's a really interesting thing.
So you have to provide a range, I think, of sort of styles and genres, but by presenting music that is completely novel to that person, and with generative it never plays the same way twice, you're reducing this sort of thing that we tend to do naturally, which is to pay attention to the music when we

(11:40):
learn it.
We learn it quickly.
Most of us can name a familiar song within like three seconds of hearing it, and so if you can reduce that a little bit through novel music, then you have that much more of a chance to let your body and your mind respond to those aspects of the

(12:03):
music that are designed to help you unwind.

Speaker 1 (12:06):
That's interesting.
So if they know the song, likeyou said, they know where
they're going.
It almost sort of activatesmore like the conscious front of
their mind, which is theopposite direction you're trying
to take them in.

Speaker 2 (12:15):
In that case, yeah, yeah, particularly when you're aiming for sleep, you want to be really mindful of that.
It's interesting.
This is all stuff that I've held on to over the years since Sync Project, which we started in 2016.
But these sort of ingredients of mindfulness, the ISO principle,

(12:38):
and then entrainment is the other concept that I carry around with me. Entrainment describes the way that our bodies gradually sync with external rhythms.
So there's also, again, a lot of directional evidence, and studies have been done around this.
But, for example, slowing your breathing rate is shown to help

(13:03):
reduce stress, and your breathing rate actually is constantly adapting to external audio signals and other signals as well.
So in music for rest, I use this concept to inform compositional guidelines in tempo and instrumental and rhythmic density and melodic motifs and repetition, stuff like that.

(13:23):
So with original music developed for generative and adaptive playback, you can keep all this stuff in mind, whereas if you just pull a track off Spotify, you've got to work with whatever that composition is.
That's what the composer created, it's not going to

(13:44):
change.
But playlisting is interesting because you get a flow between tracks that can have a very powerful effect, just a different approach.
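
Entrainment suggests compositional guidelines rather than a formula, but as a hedged sketch you could ease a piece's tempo from a wakeful starting pace toward a resting one. The BPM values and the 20% easing factor below are assumptions for illustration only.

```python
def tempo_curve(start_bpm: float = 80.0, target_bpm: float = 54.0,
                minutes: int = 12) -> list[float]:
    """Ease the tempo toward the target so changes are largest early on
    and become nearly imperceptible toward the end of the piece."""
    bpms, bpm = [], start_bpm
    for _ in range(minutes):
        bpms.append(round(bpm, 1))
        bpm = target_bpm + (bpm - target_bpm) * 0.8   # move 20% closer each minute
    return bpms

print(tempo_curve())   # [80.0, 74.8, 70.6, 67.3, ...]
```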

Speaker 1 (13:54):
Yeah, so that was a Slack bot, right? Like, you wrote an automated bot that would generate playlists for people in Slack for certain purposes.

Speaker 2 (14:03):
Yeah, so that was called, we called it the Sync Music Bot, not a terribly clever name, but that was the notion.
There again, we had a pretty specific goal, which was to try to work with a set of known musical parameters and recommendation edges.
In this case, we integrated with Spotify and we had the

(14:27):
director of research at the time on our advisory board from Spotify, so we were able to get really under the hood of how their recommendation engine works and then, using that model, work within a classification system that was actually, for

(14:47):
the most part, driven by users.
So if you think about playlisting in Spotify, there are millions of playlists out there that users have labeled relaxation or chill out or running.
That was a big one. Focus.
And so what you have is a massive user base of people

(15:08):
giving you this data saying this is what I listen to when I want to focus, this is what I want to listen to when I want to relax, this is what I want to listen to when I'm coding, this is what I want to listen to when I'm working in Adobe or drawing or whatever.
There's all this information, all this data out there.
So we took that body of data and the insight that people do this

(15:29):
naturally, and we created a chatbot that would get even more data from people in terms of what they were trying to do and what kinds of music, which artists, they associated with doing that thing.
So we decided that a great place to do it would be in Slack, where people are working all day long and where a lot of people

(15:53):
are looking to. They keep that open while they're doing their work, and for many of us, work is, especially for creative people, work is about getting into a flow state.
There's something we have in common with people who do a lot of coding.
So we created a chatbot dialogue that is really a

(16:14):
plug-in for Slack and that would interact with both teams and individuals, and through that you could tell the bot what it is you were trying to do.
Are you trying to code, are you trying to relax, are you trying to focus, do you want to have a nap?
And then you'd give it a couple of indications of your preference and it would build for you essentially a functional playlist that would show up in Slack and you could play it

(16:37):
right there, whether it was for a team or for an individual.
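
A hypothetical outline of that kind of exchange is sketched below; the real bot integrated with Slack and Spotify's recommendation engine, which are stubbed out here, and the activity profiles are invented for illustration.

```python
# Assumed target "feel" per activity, loosely in the spirit of the
# user-labeled playlists (focus, chill, running) described above.
ACTIVITY_PROFILES = {
    "code":  {"energy": 0.5, "instrumental": True},
    "relax": {"energy": 0.2, "instrumental": True},
    "run":   {"energy": 0.9, "instrumental": False},
}

def build_playlist(activity: str, seed_artists: list[str], length: int = 8) -> list[str]:
    """Stub: a real system would query a recommendation engine with the
    profile and seed artists; here we only describe the request it would make."""
    profile = ACTIVITY_PROFILES.get(activity, ACTIVITY_PROFILES["relax"])
    return [f"track {i + 1}: similar to {seed_artists[i % len(seed_artists)]}, "
            f"energy ~{profile['energy']}" for i in range(length)]

for line in build_playlist("code", ["Miles Davis", "Daft Punk"], length=4):
    print(line)
```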

Speaker 1 (16:41):
Wow, and was it, like, the algorithm that you guys wrote for it, was it just kind of a matter of feeding it all those playlists that already existed in Spotify?
And then it kind of started to be able to think on its own, in the sense that it could take all the input from that user and what he wanted and pick and choose songs?

Speaker 2 (16:57):
I can't take any credit for any of that because I'm not an engineer or a data scientist, but those folks had it working intelligently out of the gate.
So we weren't just pulling playlists from Spotify, we were adding our own sort of secret sauce to it, which was rooted in

(17:17):
some of those early data insights we were able to get our hands on, but was really more about the dialogue we were having with you in Slack, and it was a very natural-feeling kind of dialogue.
And so the moment you gave us this, the idea of an example input would be, I want to code to Miles Davis and Daft Punk, and you

(17:39):
could give just those two sort of insights, and that would tell us what your range of preference was, and we would, along with that then, knowing what we knew about the recommendation engine under the hood and all of the parameters within that available to us, we would sculpt a functional

(18:01):
playlist for you that met your preferences as well as took you on a sort of a journey.
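
The natural-language side can be pictured with a deliberately tiny parser for requests shaped like the example above; real NLP in the bot would have been far more forgiving, so treat this purely as a sketch.

```python
import re

def parse_request(message: str) -> dict | None:
    """Pull an activity and seed artists out of messages shaped like
    'I want to <activity> to <artist> and <artist>'."""
    m = re.search(r"i want to (\w+) to (.+)", message.lower())
    if not m:
        return None
    artists = [a.strip() for a in re.split(r",| and ", m.group(2)) if a.strip()]
    return {"activity": m.group(1), "artists": artists}

print(parse_request("I want to code to Miles Davis and Daft Punk"))
# {'activity': 'code', 'artists': ['miles davis', 'daft punk']}
```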

Speaker 1 (18:08):
So you could speak to the bot in plain English, like
you could just say, yeah, thisis what I want to do, and it
would be able to parse that andfigure out what to do.

Speaker 2 (18:15):
Yeah, that was our first sort of foray into
approaching natural languageprocessing stuff, which, again
at that time this was around2017, I guess Now chatbots were
kind of a big deal.
So, as a startup, we were justlooking for ways we could
approach our mission, which, atits core, was using music to

(18:38):
help people in positive ways.
In more prescriptive ways,whether it was an iOS app or a
chatbot didn't matter to us.
What mattered was can we getdeeper on this, can we find more
data, Can we get more trutharound this and can we present
it to people in a way that feelsgood and honest and authentic?

Speaker 1 (18:59):
That's cool, is that bot still active.

Speaker 2 (19:02):
Unfortunately, neither of those, neither the Unwind app nor the bot, is still out there, the reason being that Sync Project was acquired in 2018.
And so along with that was our technology, and it's really up to the people who acquired us whether or not that stuff is ever presented again.

Speaker 1 (19:21):
But well, now going back to the Unwind music app.
So you had this idea, you wanted to take the ISO principle of music therapy, you wanted to turn it into an app form, composing music, generative music, all that.
So how do you get, I want to talk about the music part, but how do you get to that? Like, how do you get from the idea to the, OK, here's how we should structure this music?

Speaker 2 (19:40):
Well, it's funny because in my case I really had to lean into my experience as a design professional.
Everything OK over there?

Speaker 1 (19:51):
Yeah, yeah, I dropped my coffee.
It has a lid, thank God.
Glad to hear that.

Speaker 2 (19:58):
Yeah, so, along with this lifelong pursuit of music, I've got a significant career as a design lead in Boston, and I just relied on that experience mostly to figure out how to get from A to B.
It's not intuitive.
You join a startup like the Sync Project with a weird mission,

(20:22):
sort of like a health-adjacent mission.
Everybody in the company is a musician also, and I had to think about all of the practices that I would use to design an app or a website or any kind of piece of multimedia or communication,

(20:42):
and it's really just foundational user experience design principles that come into play there.
You have to first define whoyour audience is, what their
pain is, what you're trying todo, and you have to speak about
it in very human terms.
So for me it was actually superexciting because when we joined
Sync project it was about 10 ofus and we had this crazy idea

(21:05):
for Unwind.
One of the people on staff was a student music therapist.
We had a data scientist, a couple of data scientists, a couple of engineers, a full stack engineer, an iOS engineer.
We had our founder, who was mainly a musician and sort of an entrepreneur, and a lead technologist.

(21:29):
So here we are in a room with this idea that we all have a lot of passion around.
For me it was, and I'm sorry, I should also say, on our advisory board, not only did we have the director of research for Spotify, but we had people like Peter Gabriel and St. Vincent and

(21:49):
Esa-Pekka Salonen, and it's just an incredible cast of people available to us.
I didn't get to spend as much time with those folks as I would have liked to, but for sure I did have one memorable meeting with Peter Gabriel showing up at our offices in downtown Boston, and that was an incredible moment, because I'm able to do

(22:12):
that thing that you do as a designer, where you talk about the audience you're trying to serve, you tell a story about what you want them to experience.
You kind of go deep on what the potential components might be of that system and what that sort of interface for that might feel like, and then you get feedback, and you have to be able to.

(22:36):
As a designer, I think theobligation is to sort of,
regardless of the discipline ofthe person sitting across from
you, your obligation is to taketheir reaction to what you're
describing and incorporate it.
So it's very much amultidisciplinary,
cross-functional approach tothis puzzle of how do you create

(22:57):
something that's ostensiblygoing to augment or, in the
moment anyway, serve as sort ofa proxy for a therapist, and
there are some techniques indesign that lend themselves very
well to this, and if you havean engineer in the room, they're

(23:18):
going to ask specific questionsand then you just try to answer
them together.

Speaker 1 (23:24):
That's fascinating.
So is that what, say, like Peter Gabriel and St. Vincent and Esa-Pekka?
Is that how you worked with them, where you would just talk to them, like, telling them, here's the thing we're making and here's our idea of how to make it, what do you think? Just, with their experience, that would prompt them to say what their opinion would be of it?

Speaker 2 (23:41):
Yeah, and in those cases these are folks who are
working in an advisory way.
They didn't need to get theirhands dirty with the technical
stuff, although a lot ofwillingness to talk there, and
certainly someone like PeterGabriel.
I mean, I'll be honest, andthere's a picture of me on my

(24:03):
website with my arm around himand the whole team is there, and
I'm just kind of blown out inthe moment as I'm there Because
I mean, my God, if anybodyunderstands emotions and music
and it's him.
But yeah, mostly it was justsort of like hey, here's what we
think we're doing, mr Gabriel,and he's like oh, that's

(24:26):
interesting.
Here are some questions I haveand here are some thoughts and
with him I won't quote himbecause I would hate to risk
such a thing, but my takeawayfrom it is that his concerns
were primarily very human andvery much like he was very
interested in bringing it backto what is going to resonate
with a human being, which justtells you what kind of person he

(24:49):
is, I think.

Speaker 1 (24:50):
Yeah, yeah, it explains why his music is so
powerful resonates with people.

Speaker 2 (24:56):
I guess that's where his mind is.

Speaker 1 (24:58):
OK.
So then you go through thesedifferent processes.
You get the strong idea of howyou want to create this music.
So now, like you said, you'reusing samples, you were
composing some music and this isall being incorporated into a
generative engine.
So how does it work exactly?
Is there, say, a library ofsamples, or like chunks of music

(25:19):
you've composed, that it'spiecing together in real time?
Is that?

Speaker 2 (25:22):
Yeah. Well, to answer that question, I might have to back up into some conceptual stuff a little.
So, from a compositional perspective, I'm just working in, I use Reason a lot, I use Pro Tools, just like, you pick your

(25:43):
DAW. I might be working with acoustic instruments, I might be working with electronic instruments, in this case I was working with all of them, but it's really all about intentionality as well as expression.
So you know, I would structure these compositions so that they

(26:08):
could modulate.
So you have to make some assumptions that are somewhat universal when you're doing that.
So when I talk about the ISO principle and meeting people where they are, I had to have a mental model in place where people are listening to this stuff on purpose, because we've

(26:29):
said that we can help you go from a space where maybe you're struggling to fall asleep, which might be as a result of stress or a racing mind, and we're going to help you come to a place where you can rest.
So I have to make someassumptions up front that you
maybe are a little bit moreawake or a little bit more

(26:50):
agitated, or for whatever reason.
You've chosen to do thislistening, and so that's my
starting point.
From a compositionalperspective, I'm coming at it
from the place of.
Let's assume that not everybodyis showing up to this in a
relaxed way, ready to fallasleep.
Some people are going to need alittle more distraction, a

(27:14):
little more engagement, and so here's the basic recipe that I keep in my mind, which is, especially for music for rest, there's this notion of moving from a high density of layers to a minimal density of layers.

(27:36):
Moving from a high beat salience to zero or very low and steady beat salience, moving from easy pattern recognition to no pattern recognition, or chord changes to no chord changes, active melodic motion or broad melodic range to sparse melody

(27:56):
or melodic range.
So a lot of times I would compose things in either a ramp, you could actually look at my Pro Tools or my Reason file and you would see just the layers going down to nothing, from like 10 to zero.
You could see the MIDI data very dense over here and almost

(28:17):
just a bunch of big long drones over here.
Sometimes I would compose them as more like a triangle, going from zero to a pyramid, like from zero to a peak and kind of back down again, and from all of that, from all those different layers, it's a lot of stuff that I think would be familiar to anyone who works with a DAW.

(28:38):
You've got a choice you canexport stems, which is what I
think a lot of generative gameengines use.
Or what we did is we got alittle more atomic than that.
I would get down to sort ofI've got these instrument lanes
and within these instrumentlanes I've got a variety of
motifs that are living in thereand I'm going to slice those up,

(29:01):
and so we had a beat model onthe other end where we could
pull those samples in and tellthem sort of when and where to
show up.
But we would use this conceptof generative music and
adaptability to say, well, if wethink you're sort of somebody
who's at a, let's say, there's alow, medium, high range of

(29:26):
arousal state, if we thinkyou're highly aroused, we might
start off with more energy andthen, over 10 or 15 minutes,
start to wind that down.
If we think that you've sort ofshown up in more of a medium
arousal state, we'll start youthere and bring you down.
So it's an interesting thingwhen you think about meeting

(29:50):
people where they are.
The more aroused you are and byaroused I mean the more sort of
awake you are the less sleepyyou are is another way to say
the same thing.
Or, just in general, are youshowing up with sort of a need

(30:14):
for help. I'm sorry, let me go back on this: your mood or your arousal state.
You could think of it as how you perceive your state of wakefulness, or your stress level, or your awareness of stress, or even mental or physical energy.
If you think of that as a continuum from low to medium to

(30:37):
high, and then you try to match that with this idea of pattern recognition or active chord changes or active melodic motion, that's the craft of it, is thinking compositionally how to take the music that you're making and map it to that continuum, and then slice it up and tell the system when to play

(30:59):
what.
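
That recipe can be pictured as a small table of compositional targets along the low-to-high continuum; the numbers below are illustrative, not the values actually used in Unwind.

```python
# Illustrative compositional targets along the arousal continuum described above.
RECIPE = {
    "high":   {"layers": 10, "beat_salience": 0.8, "chord_changes": True,  "melody": "broad"},
    "medium": {"layers": 5,  "beat_salience": 0.4, "chord_changes": True,  "melody": "moderate"},
    "low":    {"layers": 2,  "beat_salience": 0.0, "chord_changes": False, "melody": "sparse"},
}

def wind_down(start: str = "high"):
    """Yield the stages of a ramp, starting wherever the listener shows up."""
    order = ["high", "medium", "low"]
    for stage in order[order.index(start):]:
        yield stage, RECIPE[stage]

for stage, targets in wind_down("medium"):
    print(stage, targets)
```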

Speaker 1 (31:01):
Now a quick word for our listeners.
Jeff is talking about composing music in the DAW, which is essentially music recording software.
This is how so much music is created nowadays, not just for sleep purposes, but for any reason you can imagine.
If this is something you want to do, I highly recommend our super effective course, Building Blocks.

(31:22):
It'll take you step by step through the process of writing the building blocks of beats, like drum patterns, bass lines, chord progressions and melodies, and it all takes place in an actual online DAW where you create music as you go. And for our loyal podcast listeners, you can get 20% off Building Blocks this month using coupon code SLEEP.

(31:44):
Check it out at audiblegenius.com.
Yeah, that fascinates me on so many levels.
One thing I loved was how you intentionally create music that's active.
Whether it's, like you said, the beat salience or the melodies' density, the chords are changing a lot on purpose, just to get their attention. Kind of the way you have to, like, if I need my kid to stop screaming around the house, I think it's attention

(32:04):
first and then I have to calm them down, and it's that same idea. I think that's the amazing part of it, because, like any of us, I could go on to Spotify or YouTube right now and I can find something mellow that I think will pull me in the right

(32:25):
direction if what I want is to relax.

Speaker 2 (32:29):
But it's amazing to think that there's technology that can actually sort of steer your mood over time, make it easier for you to acclimate.
This is one of the reasons I started the podcast that I have, too, is because a lot of times if you go and look for content

(32:50):
that's going to help you sleep, you'll find a lot of stuff that just immediately starts in that very sleepy space.
You'll find a sleep story or something, and it's like you're supposed to go from 60 miles an hour, like, I've been doing things all day, I'm very busy, my head has just hit the pillow, and suddenly you're saying in a very sleepy voice, follow me into story land.

(33:13):
I can't do that, I'm not readyyet.
Or, like I personally, I have ameditation practice.
I'm very comfortable withmindfulness practice, but when I
get into bed in that moment I'mnot necessarily ready to start

(33:34):
doing a deep breathing exercise.
In fact, in a lot of ways,meditation is an invitation to
be awake.
It's about present momentawareness.
It's not so much about driftingoff.
It's all the things that areintuitive that would help you
relax.
But at the same time, I thinkpeople do appreciate if you can

(33:57):
meet them where they are andthen move towards a space as
opposed to just dropping themright into the water as it were.

Speaker 1 (34:06):
So you compose all these tracks of different levels
of activity, I guess we couldsay, and all those different
elements you mentioned, and youfeed them to the engine and are
you really having to tell theengine, okay, this is active,
this is less active, like almostkind of rate the activity of
each musical chunk, in a sense?

Speaker 2 (34:23):
Yeah, that's where we started, and again, I can't take credit for how the system ultimately performed, and ultimately it did perform quite well, but I started off with something that is composed in a static environment, and then we wrote what I think of as sort of rule sets around it.

(34:44):
These are the ingredients that are allowed in a high energy mode, these are the ingredients that are allowed in a medium energy mode, these are the ingredients allowed in a low energy mode, and that might also include some instructions around tempo.
Some of that might have been baked into the compositions themselves or the samples themselves, but the idea and the

(35:07):
way that it was constructed was that we would first feed it through sort of like this logic-based, rules-based mechanism, and then the machine learning would take over.
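
A minimal sketch of that "rules first, learning later" shape might look like the following, where the allowed ingredients per energy mode are made up and the learned model is just a placeholder:

```python
import random

# Assumed ingredient rules per energy mode (not the real Unwind rule sets).
ALLOWED = {
    "high":   {"percussion", "arpeggio", "melody", "pad", "drone"},
    "medium": {"arpeggio", "melody", "pad", "drone"},
    "low":    {"pad", "drone"},
}

def learned_score(clip_tag: str, mode: str) -> float:
    """Placeholder for the learning layer that would improve with use."""
    return random.random()

def choose_clip(candidates: list[str], mode: str) -> str:
    """Apply the logic-based rules first, then let the (stub) model rank
    whatever the rules still allow."""
    legal = [c for c in candidates if c in ALLOWED[mode]]
    return max(legal, key=lambda c: learned_score(c, mode))

print(choose_clip(["percussion", "pad", "melody", "drone"], mode="low"))
```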

Speaker 1 (35:22):
Got it.
And when you say these are the ingredients, how granular did that get? Like, were you telling the thing, like, hey, when the melody is playing this many notes, or that kind of thing, or is it more like, these clips are?

Speaker 2 (35:33):
Active. Just for sanity's sake, I had a limited number of instrument lanes.
I think probably the busiest one was like 12.
And within any instrument lane I might have had like 10 or 20 different clips, and so.
But they were designed compositionally in such a way

(35:53):
that I knew which clips had more active melodic pieces.
So let's say it's a guitar lane, I knew which one had the sort of busier guitar licks, I knew which one had the more sort of spacey guitar licks, and I could sort of target those not only just as clips but in terms of, like, repetition or frequency or

(36:16):
variety.
Even so, when I'm composingthis stuff, this is the kinds of
things I'm thinking about Likeis this a melodic motif that I
will want to have repeatfrequently?
How many variations of thismotif do I want?
And then it's easy enough afterthat, because of the geniuses I
was working with, to sort oftell the system like this is
what I was thinking, that's cool, yeah.

(36:39):
That's very cool.
I mean, there's no GUI for itat that level.
This was me sitting next to avery wise and experienced data
scientist and a couple ofengineers who, thank God, we all
could talk about music together.
We had that common language.
So, as we're staring at theselines of code, we're also

(37:00):
talking about things like melodyand harmony and intended rhythm
and stuff.
We could do that.
If I hadn't been working withmusicians, I would have been up
a creek.

Speaker 1 (37:11):
Oh yeah.

Speaker 2 (37:11):
Can you imagine, yeah ?

Speaker 1 (37:13):
I would imagine they would need to have both a code
brain and a music brain, so theyknew how to connect them.
Yeah, and then you guided themand how to connect them right.
Is that kind of how it went?

Speaker 2 (37:21):
Absolutely, and it was through the design
methodology, too, that we notonly were able to leverage that
common language but apply it tobuilding this incredible piece
of technology.
But it's interesting In myexperience, I've worked with a

(37:42):
great many engineers anddesigners and technology folks
who are also musicians.
It's just, maybe it's a Bostonthing, I don't know.
I see it everywhere, though.

Speaker 1 (37:52):
Actually a friend of mine who he worked in sales for
a long, long time.
He's like you know what I want?
To become a coder.
Yeah, he lived in New York andhe went to this really good
coding school called FlatironSchool Only like 2% of people
get in.
And after he got in and theowner said to him he's like we
look for people with musicexperience.
We just find there's somethinglike I don't know some about
their brains.
They could just connect thedots like easier or faster and

(38:14):
it's no more malleable in someway.

Speaker 2 (38:17):
Yeah, I mean I think there's there's directional
evidence that linkingneuroplasticity and creativity
and music together.
It's a I don't know.
I don't like to think about ittoo much like it's a superpower.

(38:37):
I like to think that actuallyanybody could be musical,
anybody, given the time, likeany other craft, they could find
their way to it.
I just know that it has helpedme.
I don't think I would have hada successful design career if I
hadn't first spent my 10,000hours as a musician.
I didn't go to school fordesign, but nonetheless I've

(39:01):
been working in that field forover 25 years with fairly
massive teams and some verywell-known established brands.
It's just, and all of that ison the back of me being a
musician and a band leader.

Speaker 1 (39:16):
That's very cool Now, with all these different
instrument lanes, with all thedifferent clips in them, were
they all interchangeable?
Could you say, combine a clipfrom instrument lane one with
any clip from instrument lanethree?
Could they all be swapped inthat sense?

Speaker 2 (39:30):
Yeah, it was very much an exercise in intentional arranging, okay.
So again, getting back to this idea of intentionality, we had to create a system where we could, from a very literal, rules-based approach, say, if we're targeting a medium state,

(39:55):
it was actually a little more granular than that, I'm making it simpler just for the discussion, if we've started you off at level three and we're working towards level one, well, when we get to two, this is what we think the arrangement should be.

(40:15):
And then, within that system, we also had probability.
So it wasn't just a matter of what's the probability of an individual clip in an individual lane, it was what's the probability of these two lanes interacting with each other?
What's the probability of these two lanes interacting with each other?
Because ultimately the machine has to have some freedom to say,

(40:38):
I'm going to render this performance now and I have choices, I have probability on my side.
So, yeah, it's a trip.
It's really wild to approach your music, just to be sitting down with a blank canvas in front of you, like you always do, and then these are the things you're thinking about.

(41:00):
It's no longer static, it's not going to be played the same way every time, and so suddenly that shifts how you think about your intentionality.
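
The probability idea could be sketched like this, with made-up pairwise weights standing in for whatever the real system used:

```python
import itertools
import random

# Hypothetical probabilities that two instrument lanes may sound together.
PAIR_PROB = {
    frozenset({"guitar", "pads"}): 0.9,
    frozenset({"guitar", "percussion"}): 0.3,
    frozenset({"pads", "percussion"}): 0.5,
}

def active_lanes(lanes=("guitar", "pads", "percussion")) -> set:
    """Start with every lane, then let each pair 'roll' for co-occurrence;
    a failed roll silences one lane of the pair at random."""
    active = set(lanes)
    for a, b in itertools.combinations(lanes, 2):
        p = PAIR_PROB.get(frozenset({a, b}), 1.0)
        if a in active and b in active and random.random() > p:
            active.discard(random.choice([a, b]))
    return active

print(active_lanes())   # e.g. {'guitar', 'pads'}
```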

Speaker 1 (41:11):
Yeah, I bet, with that different end goal in mind,
how much of it was just for youwhen you're composing this?
How much for you was it justinstinct, like just feel, versus
how much thought this might beactive enough or this might not
be that kind of thing.

Speaker 2 (41:30):
Well, yeah, I think that's the craft.
That's what a music therapistdoes.
I'm not a music therapist, butI am a student of jazz.
It's an improvisational sort ofinstinct that kicks in.
And here's the thing it's likethere was never any intention to
write the one piece of musicthat was going to work for

(41:51):
everybody forever.
It was sort of a known thing.
We're going to have to composedozens of pieces.
We're going to have to.
Ultimately, what we were tryingto do was build a platform that
many artists could contributeto and over time, people would
use the system and it would getsmarter.
And, more importantly, I thinkit would get smarter in service

(42:16):
of an individual, because yourindividual needs and preferences
are not static either.
They change over time.
For me, it was very muchinstinct-based, and I think I
was along with thinking ofthings like rhythmic density or

(42:40):
how many layers do I have going?
One of the other factors that was really top of mind for me in these compositions was this idea of valence, which is another term that I wasn't even aware of until I started digging under the hood of Spotify to see how their recommendation engine worked. But valence describes the musical

(43:01):
positiveness that a track conveys, so tracks with a high valence sound more happy and cheerful, euphoric, and tracks with a low valence sound more negative, sad, depressed, angry.
So that was really mostly whatI was focused on with these
compositions and I gave myselfsort of like a baseline set of

(43:23):
tempos.
I didn't really want to doanything too up-tempo most of
the time.
It is intended to be relaxingmusic.
Ultimately, dramatic shiftsaren't good to promote sleep.
But with my music I'm typically thinking modally.
So I like to talk about the Lydian mode, because basically it's a major scale, which most people associate with happy

(43:45):
music or resolved-sounding things.
But of course you've got that raised fourth, which to me introduces tension and uncertainty, and it's what I think of as sort of, like, a Lydian, I think of that as sort of a liminal mode, and so I

(44:06):
personally find it's very beautiful, and since the goal is to meet people where they are, I don't shy away from Lydian mode or something more minor-sounding, like Dorian, which also has that sort of liminality built into it with the natural six.
I'm just careful about how much tension I allow for within that,

(44:27):
and I'm mindful about modulating in a direction that reduces harmonic tension over time.
Interestingly too, music with a low valence is actually shown to promote rest in many people.
So this is, I think, fascinating.
Maybe this is why we love to listen to Dark Side of the Moon.

(44:50):
It's not always about happyrainbows and sunshine.
When you're trying to relax,it's very often something rooted
in just minor pentatonic isgoing to have that physiological
effect of chilling people out.
It's remarkable.

Speaker 1 (45:10):
So with those unexpected scale degrees, like the Lydian's raised fourth, everyone's kind of expecting that regular fourth, whether they know theory or not.
They've just heard the major scale a million times.
So that raised fourth is just like, huh.
Is that sort of one way you're kind of trying to fight, and not fight but remove, anticipation?
Those little unexpected things that kind of keep just sort of rocking their boat a little bit?

Speaker 2 (45:32):
I think you're making my point, in that that is the choice we have as musicians.
Right, we have a lot of choices available to us, even in Western music.
We've got 12 notes that can be combined in any kind of way.
But that's where craft comes in, and it's where I think this is the value of really developing yourself as an artist and

(45:56):
developing not only your sort of empathy engine, which is, I think, super important, but your ability to intentionally go for things.
I think about.
This is just what great songwriters do. You listen to

(46:18):
Peter Gabriel's So, or Michael Hedges' Aerial Boundaries, or any soundtrack by Hans Zimmer or Michael Giacchino, whether it's vocal or instrumental.
There's an incredible amount of intentionality in that writing, in the moment.
What is the scene that is happening?
What is the picture I'm trying to create?

(46:39):
Hedges did it with just an acoustic guitar, but it's remarkable.
He has a song called "Because It's There" and you're just completely transported, and I think that is the power of music

(47:00):
is to transport people, just like storytelling.
So those are the choices. Will I use in this composition, will I use the tension of that sharp four to grab your head in the moment and take it out of

(47:21):
the rumination that you might be engaged in?
Sure, yeah, I might do that.
I also might place it in adifferent register as part of a
melody, just to add a sense ofwonder, because it can also do
that.
Right, you take a major 7 chordwith that note floating on top

(47:42):
of it.
I mean, it is magic.
So it's a really interestingcontext, is everything.
But this is why, you know, whenI think of this kind of work, I
think it's very artist driven,it's very art driven and it's
sort of.
But once you've decided, youknow, as an artist, you're going
to, you know, especially if itwas in a context like this where

(48:06):
you're, you know, perhaps beingpresented as something that
might have health and wellnessbenefits.
You know, just be veryintentional, try your hardest to
be open to the idea of thecontext of the person you're
writing for.
They've just stopped their busyday.

(48:26):
You know, for the first time intheir day they've stopped doing
and they're trying to move fromthat doing and thinking space
to just a space of being, andthat those transitions are very
special moments, they're almostsacred.

(48:46):
I had, you know, I think and Iused this word earlier and it
kind of flew by but this, this,that moment between, you know,
trying to sleep and actuallybeing asleep.
You can think of this as aliminal state, which is that

(49:06):
word liminal comes from Latinword liman, which means
threshold, and those are thespaces where change is the only
thing that is actually happening.
You know, it's the transitionbetween what is and what will be
.
And you know we can think ofthose as physical spaces and we
often talk about mental statesas spaces or places.

(49:27):
But, like in the physical world, they're things like, you know,
stairways or hallways ordoorways.
Emotionally it's kind of likelife transitions or milestones,
or you know, things that createuncertainty.
These liminal spaces can feeluncertain.
If you've ever walked down likea long, empty hallway or an

(49:49):
empty, dark stairwell, you know.
You know what I'm saying.
And we don't we don't likeuncertainty and a lot of stress
sort of manifests as a result ofuncertainty.
And so you get to thisemotional liminality when you
are laying down for the firsttime and everything's quiet for
the first time in your day.

(50:10):
And you know there's alsoevidence that liminal states are
linked to creativity and so ourminds can get into this kind of
a hyper-reality where we'retrying to make sense of the
uncertainty in our lives, andsometimes that manifests as art,
like songs and books and poetry, but it can also manifest as
you lying in bed, staring atyour ceiling for seven hours.

(50:31):
You know this transition fromthe waking or doing to the
resting and being, ruminationcan kick in.
And so if you're an artist andyou're approaching this kind of
idea, that's what you're tryingto honor, I think, is that
liminal space.
That's how I think of it anyway.

Speaker 1 (50:51):
Yeah, no, I never heard that word before, but I love it because I myself meditate.
I do a form of meditation called TM, and it's pretty much all about that, getting you, taking you to that and passing you through it.
And so when I sit there and I feel it, you know, back in the day it would have felt like, ah, it was almost unnerving, that feeling, because you're really letting go, like you're making yourself vulnerable and you're just, like, giving into your

(51:13):
physiology, letting it do whatever it wants to do.
But once you, like, really lean into it, it is amazing. Yeah, absolutely.
Yeah, and I'll actually change the subject a bit: your podcast Sleep Fader, which is a podcast all about helping people relax.
And I was listening to an episode last week and I started

(51:34):
to feel that and I didn't evenhave the intention of relaxing
or sleeping I was like I'm goingto listen to this podcast, I'm
interviewing Jeff next week, Iwant to check this out.
I'm sitting at my desk workingand I'm just listening to it and
all of a sudden, I didn't takelong because you started, you
talked a little bit and then youplayed this music.
You're just like, hey, listento this music for a bit, and I'm
just doing that.
And all of a sudden I was like,oh, like I started to feel that

(51:56):
liminal that I get withmeditation Wonderful, it just
did it to me.
I wasn't even, I didn't havethe goal or anything like that.
It just did it to me.
And I noticed in that musicwhat you were talking about.
You were paying attention tohow the tracks were coming and
going.
I think you started with one ortwo things and you added some,
and then again it started totake those tracks away.

Speaker 2 (52:17):
Well, I'm very happy to hear that, and thank you for checking it out.
I mean, it is, the podcast is a manifestation of all this work that I've done over the last seven years.
And not knowing where that work may take me next, a podcast is a wonderful vehicle to sort of bring all

(52:38):
this experience together in a place where I can really be super intentional in any way that I want.
And so, for me, it comes backto those core ingredients.
I'm thinking about mindfulnessand the effect it's had on my
life and my ability to cope withstress, and the fact that, by

(52:58):
definition, mindfulness isawareness that arises through
paying attention on purpose.
So, what am I going to give youto pay attention to on purpose?
Well, I'm going to give you, inthis case, I'm going to give
you music and I'm going to giveyou storytelling.
These are the things that I think have been shown for

(53:22):
eons to help people move from one state of mind to another.
But what I'm going to do isthink about things to like ISO
principle and entrainment, andI'm going to think about all
this stuff we've been talkingabout in terms of choices we
make with music and sound, andI'm going to apply this recipe

(53:43):
to a format that anybody canaccess from anywhere and
hopefully, before they get tothe end of the story, they're
out.

Speaker 1 (53:50):
Yeah, I started so, yeah, it was your introduction
music story and then it was likea calming sound at the end I
think in this case it was waves,you know, like ocean sounds.
And I found myself halfwaythrough the story.
I was paying attention to thestory and then, when I'm
realizing it again, thisreminded me of meditation.
I wasn't, I was just like offand just like so relaxed, just

(54:13):
sort of sitting back in my chair, and it was a couple of minutes
ago.
I was like, oh wait, I'm not,I'm not listening to the story
anymore.
Is that kind of the goal?
Like you want them at somepoint to find themselves just
off in their own world, in asense?

Speaker 2 (54:24):
Absolutely.
It's funny because I've alwayswanted to be a novelist as much
as I wanted to be Eddie VanHalen or Michael Hedges.
I wanted to be Stephen King.
I just I guess I'm an escapistof some sort, but the
storytelling piece of it youknow, as applications like

(54:46):
Headspace and Calm have beenbringing sleep stories into the
mainstream, my observation isthat a lot of these were
self-contained, kind of one-offs, and again they wanted to bring
you very quickly into thatsleepy space.
The sleepy sort of fake sleepyvoice starts immediately.
And this is not to disparageeither of those applications.

(55:10):
I actually use both and I thinkthere's amazing benefits to
using them.
But for me personally, I wasvery interested in this idea of
you know, could I use a podcastformat to help people sleep, to
practice telling a story thatwas more serialized, which is
people love to binge serializedstories.

(55:33):
So I wanted to give peoplesomething to look forward to and
could I write it in such a waywhere it really doesn't matter
if you finish the story or not?
If you listen to episode oneand you don't make it to the end
, can you listen to episode twoand still feel comfortable Like
you haven't missed anything.
So that's been an interestingexercise.
I don't know if I'm succeedingon that front yet.
I'm waiting for people to tellme.

(55:53):
Some people have told me theywill.
Actually, you know, they'lllisten to the first couple of
parts of the podcast and thenthey won't make it to the end of
the story and so the next nightthey'll go back like halfway
and then listen through to thenext episode.
I think these are the things Ican't control and I'm just.
You know, I'm so gratefulanytime I get any kind of

(56:15):
feedback from folks in terms ofthis.
But it is structured veryintentionally as a three act
thing.
Act one is I've got like thesame ambient environment every
time.
I call it the studio lounge.
This is just me talking inalmost like a very natural,
non-sleepy way.
It's conversational.
And then, within that thatfirst act, I will always play a

(56:35):
piece of music that I've created and I'll talk.
I'll sort of try to give you some present-moment-awareness-cultivating mindfulness cues.
That's usually about five or six minutes.
And then act two is going intothe actual story itself, which
is, you know, it's original.

(56:56):
I write it all.
It's episodic, so you know itplays out over, over different
episodes and within the story.
I try to include not mindfulnesspractices, but I think of them
more as sort of mindful momentswhere you know you're in a sense
invited to inhabit what thecharacter is experiencing.

(57:18):
And I write these verypurposefully to give your mind
an opportunity not only tofollow a story but to observe
and appreciate and maybe withinthat find some sense of
gratitude.
You know I try to explorethemes that people can relate to

(57:39):
, and there's always in withinthe story musical interludes too
, which is it's a commonpractice.
You'll hear it in some of thebest public radio podcasts.
It's a great storytellingdevice, right for transitional
moments to use music.
I'm being very deliberate aboutthe music, as we've been

(58:02):
discussing.
So you know from Act One, whereI'll start with something
that's meant to engage and sortof, you know, pull your mind
away from your busy day and anyrumination you might be engaged
with.
I'll use some of those same ingredients in Act Two, but likely I'm going to reduce some of those layers.
I'm going to reduce some of that beat salience that we talked about.
I'm going to reduce them a lot, of course.

(58:23):
And then Act Three, so after the story's done, this is usually about 20 minutes or 25 minutes in.
It's just down to ambient environment.
So wherever the story leaves you, whether it's by a waterfall or in a rainy city or wherever you are, that's where you're going to stay.
And my voice, of course, is modulating over time, so that by

(58:44):
the time you get to the end of the story I've probably slowed down my pace a little bit.
You might not even notice when I stopped talking, and the reason I have a big long, usually five or 10 minutes, of ambient sound at the end is because you don't want to make any abrupt changes with audio, whether it's in somebody's ears or a speaker next to them.
This is something I learned both at Sync Project and when I

(59:09):
worked with Bose on their sleep products.
The notion is a good, steady signal.
Something like rain is a great example, because it's kind of like white noise.
You just let that go without a lot of modulation, without a lot of dynamics, and then just give it a nice long fade-out.
I've reduced the chance of waking you up.
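
The shape of that ending (a steady bed, then a long, smooth fade with no sudden dynamics) can be sketched as a gain curve; the five-minute and three-minute durations are assumptions, not the podcast's actual timings.

```python
import math

STEADY_MIN, FADE_MIN = 5, 3   # assumed durations, for illustration only

def gain_at(t_sec: float) -> float:
    """Unity gain through the steady ambient bed, then a long cosine fade,
    so there is never an abrupt change that could create a waking event."""
    fade_start = STEADY_MIN * 60
    total = (STEADY_MIN + FADE_MIN) * 60
    if t_sec < fade_start:
        return 1.0
    x = min((t_sec - fade_start) / (total - fade_start), 1.0)
    return 0.5 * (1 + math.cos(math.pi * x))

for minute in range(STEADY_MIN + FADE_MIN + 1):   # print the curve once a minute
    print(f"{minute:2d} min  gain {gain_at(minute * 60):.2f}")
```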

(59:33):
Oh, right, yeah, okay, because it's very easy to create a waking event. Like, if I let two or three minutes go by and then suddenly started talking again, I can guarantee it's going to wake you up.
And that happens to me all the time when I'm listening to podcasts myself.
If I forget to hit that little button in the lower right-hand corner that says "at the end of this episode, stop," it'll start

(59:53):
up the next one.
And the next thing, you know, I'm listening to ads or something.
Oh, okay, and that's the other thing too.
Sleep Fader has no ads.
Like, maybe someday I'll be lucky enough to have sponsors, but this thing is brand new and, as you know, the business of podcasting is a, you know, it can go in many directions.

(01:00:13):
For me, this has so far beenpurely an exercise in taking
what I've learned over the lastseven years, trying to put it
together in a package that Ifeel will be effective to really
help people relax, and it'sjust a great creative project
for me.

Speaker 1 (01:00:28):
That's cool.
So the music during the storyis the interludes.
That's almost more of like asubliminal reduction, yeah, like
, because the focus is more onthe story, but the music is
progressing from the introactivity to the most calm, just
wave sounds or something likethat.

Speaker 2 (01:00:44):
Yeah, yeah, and it's.
You know, those same principlesthat we talked about from music
in terms of you know theperceived energy, you know which
could be a combination of youknow layers or just a dynamic
ingredients or just any kind ofactive motion within a track.
You know I applied that samekind of thinking to the sound
design too.
So if you listen to the podcast, you know in that lounge area

(01:01:09):
I'm talking about, you knowthere's some.
There's some birds chirping.
Sometimes there's like veryquiet wind chimes playing.
In the background you mighthear some ocean sounds.
It's generally speaking.
I'm trying to create a somewhatnoticeable environment because,
again, I'm trying to like pullyou out of whatever headspace
might be distracting you fromsleep into something else and

(01:01:32):
within the storytelling you'regoing to hear very little
distracting audio because I'mtrying to modulate in that
direction of relaxation.
So, whether it's with themusical sounds or with the
ambient sounds, I'm verydeliberately creating like a
ramp down.

(01:01:52):
That's the best way I can meeta general audience of people,
sort of where they are, and pullthem in the direction of rest.

Speaker 1 (01:02:00):
Because you don't exactly know where the listener's state is at that point.
But with the Unwind app you were using biometrics to actually measure where they were, like, say, their pulse, their heart rate. Was that the big one?

Speaker 2 (01:02:14):
Yeah, in our prototype, which was actually a web app, we used nothing more than the accelerometer in the phone.
And you can see some of this on my website as well.
But you would be able to just hold the phone in your hand and we would get

(01:02:34):
your heart rate from that and use that as your starting point for what we delivered.
And in the Unwind app itself, we actually had a couple of user inputs too, which were just simple sliders.
You could say: how tired are you, zero to 10? How stressed are you,

(01:02:55):
zero to 10? That type of thing, and we could use that as an indicator as well.
I've always felt that the sensor piece, and paying attention to heart rate variability or respiration, are very powerful ways to inform any kind of audio-based therapeutic.
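One way to picture how those inputs could combine is the rough sketch below, which maps a measured heart rate and the two sliders onto a 0-1 starting energy for the music, in the spirit of meeting the listener where they are. The weights and ranges are assumptions for illustration, not the Unwind app's actual model.

```python
def starting_energy(heart_rate_bpm: float, tired_0_10: float, stressed_0_10: float) -> float:
    """Return a 0-1 target energy for the opening music.

    Higher heart rate or stress -> start more active, then ramp down;
    tiredness pulls the starting point lower.
    """
    # Normalise resting heart rate roughly into 0-1 (assumed 50-100 bpm range).
    hr = min(max((heart_rate_bpm - 50.0) / 50.0, 0.0), 1.0)
    stress = stressed_0_10 / 10.0
    tiredness = tired_0_10 / 10.0
    # Blend sensor and self-report, clamped to the valid range.
    return max(0.0, min(1.0, 0.5 * hr + 0.4 * stress - 0.2 * tiredness + 0.1))

# Example: an alert, fairly stressed listener gets a moderately active start.
print(starting_energy(heart_rate_bpm=82, tired_0_10=3, stressed_0_10=7))
```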

(01:03:17):
But when you're really trying to make a machine smarter, there's no substitute for getting a dialogue going with the person.
Just tell me how you feel, and then, ideally later on, you'd ask them: how did that work out for you?
I really think you need both of these things until you've got a super smart machine.
That's artificial intelligence and machine learning.

(01:03:41):
They don't happen overnight.
You really do have to feed them data to make them smart, and in the case where you're talking about individual preference, it becomes even more important.
So my hope, as these technologies continue to be developed and these types of therapeutics are more widely

(01:04:04):
adopted, is that the folks making them consider the importance of that dialogue.
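A sketch of that before-and-after dialogue as a simple feedback loop might look like this; the user names, session styles, and scoring rule are hypothetical placeholders rather than any shipped system.

```python
from collections import defaultdict

# Running score per user for each style of session they have heard.
preferences: dict[str, dict[str, float]] = defaultdict(lambda: defaultdict(float))

def record_feedback(user: str, style: str, pre_mood: int, post_mood: int) -> None:
    """Reward styles that moved this listener toward calm (0 = tense, 10 = calm)."""
    preferences[user][style] += (post_mood - pre_mood) * 0.1

def pick_style(user: str, available: list[str]) -> str:
    """Choose the style this listener has responded to best so far."""
    return max(available, key=lambda s: preferences[user][s])

# Example: one listener reports a bigger shift toward calm with rain ambience,
# so future sessions lean that way.
record_feedback("anna", "rain ambience", pre_mood=3, post_mood=8)
record_feedback("anna", "piano drones", pre_mood=4, post_mood=6)
print(pick_style("anna", ["rain ambience", "piano drones"]))  # -> rain ambience
```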
People want it anyway.
We discovered that in research that we did as well, which is that if you give people too many choices, that can be paralyzing.
Even though, like I said earlier, we

(01:04:27):
all use music for mood regulation all the time without even thinking about it, the moment you change the context and put it in an app or in a machine next to your bed or something, it's new.
It's still new, this idea of: hey, here's a product or a service, and if you listen to it on purpose, it's going to help you.
People want the dialogue.
They want to actually feel like you're attending to them and

(01:04:51):
recognizing them for who they are.
It feels good, it creates comfort.

Speaker 1 (01:04:55):
Yeah, so the dialogue has two purposes then.
One, it's feeding data, because the biometrics alone aren't going to cut it.
They're part of the story, but the system needs to actually know what you're thinking, and together that does a good job of generating the right kind of music.
And two, it's also just the user experience, making them consciously feel like they're being heard.
Yeah, in a sense.

Speaker 2 (01:05:17):
Yeah.
It's a big part of mindfulness, and possibly TM too.
This idea of intentionality comes back.
You're setting an intention, and so that moment of interacting with an app or a machine sitting on the side of your bed can be a mindful moment that enhances the experience.

(01:05:39):
You've stopped doing the other stuff you were doing and now you're doing this.
I think this is the reason why vinyl never went away.
Really, I think that we ultimately love the intentionality of it.
You can't just hit a button and let it play in the background.
You have to open up the dust jacket and pull the thing out and be careful with it and put it on the turntable, and you've

(01:06:00):
got to make sure that you're not just dropping the needle randomly on the thing.
It's a mindful act.

Speaker 1 (01:06:06):
It makes us feel good.
That's a good analogy, because now I kind of know what you're communicating.
There is that feeling that when you had to actually set the music up, even set the mechanical process in place, you were part of it.
All of a sudden it's not just this thing being passively sent at you from your phone.
It's a thing you are involved in.

Speaker 2 (01:06:24):
Yeah.
If I walk across my studio here and I want to listen to music, I might know what I want to hear before I approach my record collection, but a lot of the time there's a little bit of flipping through the spines, even though I've been looking at them for years, and I have a bunch of new ones in the mix too.
It's that tactile moment of touching the records, looking at the spines, and you're actually thinking

(01:06:47):
about how it might make you feel.
You're thinking: how could this affect me?
As a designer, when I think about these kinds of experiences, I love the notion of engaging that part of a person in the moment.
It's going to take them out of whatever they're worrying about, whatever stresses they have, and it's going to sort of almost

(01:07:08):
force them to think about how they feel and how they want to feel.
If you can do that, then the sensors can make things more comfortable.
If you're trying to create something that's really going to change the way someone's breathing, for example, you want to make sure that you're not doing something that's too fast or too slow.

(01:07:28):
That could be really uncomfortable.
The sensors can help with that, but ultimately, giving the person an opportunity to engage with the moment is what it's all about.
Isn't that what music is?
Absolutely.
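For the breathing example, a small guardrail like the sketch below keeps any guided pace close to what the sensor actually measures; the step size and lower bound are assumptions for illustration, not a clinical recommendation.

```python
def next_breath_target(measured_bpm: float, floor_bpm: float = 6.0,
                       max_step: float = 1.0) -> float:
    """Nudge the guided breathing rate down by at most `max_step` breaths/min."""
    target = measured_bpm - max_step
    return max(target, floor_bpm)

# Example: a listener breathing at 14 breaths/min is guided toward 13, not 6.
print(next_breath_target(14.0))
```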

Speaker 1 (01:07:43):
Well, Jeff, this has been really fascinating.
There are other things that you've done, but what is your website again?

Speaker 2 (01:07:49):
The Sleepfader website is sleepfader.com.
That is where you can find all the information about the podcast and connect to that.
I hope folks will give it a try.
Anybody who does, please know that I will welcome any feedback that you've got.
My personal website is jmcreative.com.
That's just my professional site where I highlight some stories about my work as a designer.

(01:08:11):
In this case, I've focused quite a bit of it on my recent work over the last seven years or so.

Speaker 1 (01:08:17):
Yeah, well, it's all very fascinating.
I recommend your podcast because it really helped me unwind and calm down.
You've found yourself a really interesting niche in the music technology world, and it's fascinating to hear about.

Speaker 2 (01:08:31):
Well, thanks so much.
I'm really grateful for the chance to talk about it with you.
The tools that you've produced have been super helpful in my life as well.
Without Syntorial, I would know nothing about all the knobs and buttons sitting next to me.
Now I know quite a bit, and it helps me do a lot of the sound design that happens in Sleepfader.

(01:08:52):
Super grateful for your work as well.

Speaker 1 (01:08:56):
Yeah, my pleasure.
Thanks so much, Jeff.
Thank you.
Thanks for listening to the Audible Genius podcast.
Now, as you listen to these musician stories, you may find yourself wanting to make your own music, or maybe you already can but feel the need to brush up on fundamentals and fill in some gaps.
Well, I've got some super effective and engaging courses

(01:09:16):
that help aspiring digital musicians find their voice and create music they love.
And these courses are more than just a series of videos.
They have interactive challenges in a music software environment where you actually create music as you go and get real experience.
The first course I recommend is Building Blocks, where you'll learn beat composition and music theory in an online music

(01:09:39):
studio.
Check it out at audiblegenius.com.
We also have Syntorial, an award-winning course on synthesis, where you'll learn how to create your own sounds with a synthesizer.
Check that out at syntorial.com, and both of these courses are designed by yours truly and the team here at Audible Genius.
So if you've ever had a desire to make your own music, I highly

(01:10:01):
encourage you to check them out.
Thanks again for listening, and I'll see you on the next episode.