
June 12, 2025 96 mins

AI is shifting the ground beneath our feet.

In this week's episode with Emmy-winning TV producer Gavin Purcell, we explore how AI differs fundamentally from previous tech trends like crypto, blockchain, and the metaverse.

Drawing from his unique perspective in traditional Hollywood and cutting-edge tech, Gavin explains how reasoning models represent a quantum leap beyond simple prompt engineering toward truly intuitive human-machine interaction.

The conversation turns particularly fascinating when exploring creative industries, where Gavin introduces the concept of "micro-studios" – small teams using AI to produce content that previously required hundreds of professionals. Through a live demonstration with AI voice companion Maya and popular examples of AI-generated video, Gavin shows how close we already are to genuinely engaging artificial entities.

And since we're aiEDU, we also had to dive into what this means for education, careers, and human connection. How do we prepare students for a world where traditional skills are augmented or potentially replaced by AI? What happens when digital companions become more consistent and accommodating than the humans in our lives? These questions don't have simple answers, but they demand our attention.

And be sure to check out Gavin's AI For Humans YouTube series.


aiEDU: The AI Education Project


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Alex Kotran (aiEDU) (00:04):
Welcome to... I don't even know, this might be our 10th episode of aiEDU Studios. I'm here with a true professional in show business: Gavin Purcell, Emmy Award winner, who worked for Jimmy Fallon, NBCUniversal, and has, I think, one of the most exciting up-and-coming...

Gavin Purcell (00:26):
I think still up-and-coming, sure. Sure, we've been around for a bit.

Alex Kotran (aiEDU) (00:31):
Now we're not like old-old, but we've definitely been around for a bit. Yeah, I mean, you were really tracking the AI stuff before it blew up.

Gavin Purcell (00:40):
Yes, yes, we were tracking it. I was always kind of interested in the turns

(01:06):
where media was going across each step. And in the GPT-2, GPT-3 era, which was probably around 2020, 2021, I started to think... I remember seeing the GPT-2, maybe the GPT-3 paper, and it talked about how it could write these paragraphs, right. And it was the first time it could write paragraphs which were significantly cohesive and coherent, like you

(01:29):
could actually read a paragraph that a machine had written. I was like, whoa, that's weird. I've made my life writing stuff or making stuff, and now the machine can do this thing. And we know that, especially at that time, it was not original stuff, but it wrote something that was readable, and that felt like a really weird turning point. Since then... my problem as a person is I

(01:52):
tend to jump from thing to thing to thing that interests me, and I've been able to hold on to this AI thing now for a couple of years because it continues to get deeper, more mature and, I think, honestly more transformative than almost anything else I've lived through. And I've lived through a fair amount of stuff now. I got a PC when I was, like,

(02:13):
eight or 10.
I think the first PC I had was an Apple IIc. So I'm that old, from that era, and I lived through the early internet, lived through Web 2.0, and lived through kind of all the stuff up until now. And it really does feel like this is as significant a moment as getting that first computer felt to me.

(02:33):
Now, that might just be because it feels like everybody's talking about it in a way that some of this other stuff didn't bubble up as much. But that's what keeps me excited about it and keeps me leaning in further rather than leaning away from it. Does that make sense? Yeah, of course.

Alex Kotran (aiEDU) (02:48):
Yeah, and we should say, you and I really first met playing this poker game among friends during the pandemic. And I think you're also alluding to some of these other fads coming out of Silicon Valley, whether it's crypto and blockchain, and NFTs as sort of an offshoot of that, the metaverse.

Gavin Purcell (03:10):
And I don't know if I'm going to say that... 3D printing is always the one that's, like, the ultimate. I was very excited about 3D printing, and I still think it has a huge application, but at one point people were saying that by, like, 2010, we were going to be able to print something in our home, and that just never came to fruition, right?

Alex Kotran (aiEDU) (03:29):
So a lot of these things are like that. Well, it's interesting, because my brother just got a 3D printer. And in the same way, blockchain has not disappeared. Crypto... I think the US actually just announced a strategic reserve. So it's not that these technologies were completely fake. They weren't pyramid schemes per se, but the language that

(03:49):
was used was like: this is going to transform... this is a technology paradigm shift, it's going to change the way we live day to day. And I don't think that's the case for the metaverse. I remember when the Apple Vision Pro came out and I was flying to, I

(04:10):
think it was, Mexico City, and somebody sat down and put on their Vision Pro, and I was like, wow, is this going to be... this is kind of crazy. And that was the first and last time I saw somebody wearing a Vision Pro in public. I tried it; it gave me a little bit of nausea. So I was actually going into 2025, and that was my big question, and I actually wrote a piece on

(04:31):
Substack, which ended up being, I think, a little bit out of date already, where I was sort of warning that there's a significant possibility that we're going to experience this plateau moment, where the progress starts to get a little bit more incremental and the barriers to

(04:52):
implementation end up proving to be actually quite significant: cost, running out of data, the hallucinations. And that piece was published in December, and then, like, shortly thereafter... that was December of '24, last year.

Gavin Purcell (05:11):
Wow, yes.

Alex Kotran (aiEDU) (05:13):
And we just did a conference, and we actually gave everybody at the conference a copy of that blog post I wrote, and I did my opening keynote and said... this is a moment of humility for me, because I was completely wrong. Shortly after that blog post, we started to see a continuation and, really,

(05:37):
reasoning models becoming mainstream. We saw deepfakes, and I want to talk about synthetic media with you.

Gavin Purcell (05:44):
It's something that you've been really tracking, and you've been really helping me understand significant advancements there. And we saw the DeepSeek moment too. Yeah, that's a really interesting thing, because you speak about reasoning models which, if people are listening or watching, that's the idea that the models can think longer, right? Instead of having to be trained on something, if they think

(06:05):
longer they get better responses.
DeepSeek was the first version of that; R1, their reasoning model, was the first version that seemed like you could do it for a lot less than some of the big companies were doing. Now, there's a lot of math that goes on in the background, and it may not be as much less as people thought. But still, you had some other player coming out of China, a completely different player that nobody knew about, that suddenly surprised everybody.
I think that pushed everything not only forward but, like,

(06:29):
accelerated competition. Because I think the biggest thing is, if you're listening to this and you've seen what ChatGPT has come out with, whether it's their GPT-4.5 model or all the stuff that's coming out at the beginning of this year (we're now at March 7th), I didn't think half that stuff would have come out if DeepSeek hadn't hit. You know what I mean? I think they're just pushing much faster.

Alex Kotran (aiEDU) (06:50):
Yeah, I had my own deep research moment earlier today, so we're going to... You already have a professional microphone. Which mic do you use?

Gavin Purcell (07:01):
This one is a new one; our other one broke. It's another Shure, but it's the fancier version of the Shure MV7, maybe the MV7 Plus.

Alex Kotran (aiEDU) (07:09):
Nice, it's a good one. Yeah, it sounds great. So I try to do as many in-person interviews as possible, but we've been interviewing teachers and sending some folks microphones, and I thought it'd be kind of cool to write a letter to include along with the microphone and talk about some research. My assumption was that there's probably

(07:30):
research that suggests a correlation between sound quality and perceived authority for a person talking. So that was my prompt, just to GPT-4.5: is there any research equating the sound quality of somebody's microphone to their authority or perceived authority as a speaker? It asked one question: are you looking for research
It asked one question are youlooking for research

(07:52):
specifically in professionalsettings or more broadly, across
every context?
And I was like I'm agnosticabout the content, that about
the context.
And then it went and it startedthinking and it took about like
I think 20 minutes minutes itwas thinking for a while.

Gavin Purcell (08:05):
Wow, that's a deep question, yeah.

Alex Kotran (aiEDU) (08:09):
And let me... I think I can share my screen. Let me do that.

Gavin Purcell (08:11):
I find deep research, by the way, to be one of, if not the most amazing use cases of AI that I've come across. And I should say, I'm on the Plus plan of ChatGPT. So right now, if you're on the Plus plan, which is the $20-a-month plan, you get like 10 deep research searches a week, I think, or 10 a month. It's incredible what I've gotten out of it.

Alex Kotran (aiEDU) (08:32):
So, okay, let's take a look at this. Wow, there you go, you've got a whole thing. It put it together, and it had the methodology of some of the experiments that were run. It talks about, like, the influence of audio quality, cognitive explanations, so it really dug into it. Broader implications, live versus recorded, which is not even something that I asked for. And I think this
(08:54):
is really the big insight for me: it correctly intuited that including this layer, live versus recorded versus virtual, would be really relevant, given the world that we're in today, talking about Zoom. And then it sourced it all, and I checked out these sources, and they were legit.

Gavin Purcell (09:12):
I mean, this is the thing, when you talk about hallucinations. Especially in the academic space, or in the general "I don't believe AI is going to be a thing" space, hallucinations really are... I've always felt, personally, as somebody who's an AI enthusiast in some form, but also aware of the dangers and all this stuff, hallucinations are the thing you have to fix

(09:35):
before people can start taking it seriously. And when DeepSeek first came out, I think what made such a big kerfuffle is that a lot of people hadn't actually used a non-free version of an AI model yet, and they were like, whoa, these answers are way better than what I used to get on the free version.

(09:55):
When people actually get their hands on what deep research can do... And just for people in the audience: deep research is OpenAI's, and it's ostensibly an agent, but it's an agent that works in the background, meaning it's an AI that goes out and does stuff for you and then returns a result. It's bringing back essentially a research paper, and the thing about these research papers is that they're

(10:17):
not all perfect. Some people have seen some issues with them but, to your point, there are links, and you can check all of the research it's coming from. I have had zero hallucination issues with this myself, personally, and it's been super useful. And academically, if there are teachers listening, this is the product that may completely flip your lid.

(10:37):
Do you know what I mean? Because you might actually be like, whoa, that is a legitimate, interesting deep dive on, in this case, whether or not you should use a virtual mic, or what authority looks like. That's a pretty specific question, and you got really specific answers.

Alex Kotran (aiEDU) (10:56):
To me, it also addresses something that I had been intuiting even back in 2023, when ChatGPT came out. Because aiEDU, I mean, you know this, we were preaching the gospel of AI literacy before language models. Language models came out and everybody was sort of, you

(11:16):
know, rushing to get their hands on ChatGPT, and the immediate reaction, the reflex, was: all right, the key is everybody needs to learn how to prompt-engineer these tools. And so a lot of folks were equating AI literacy with, you know, how do we teach students or teachers or whoever how to prompt-engineer, how to prompt. And I saw things

(11:38):
like prompt bibles, and people memorizing them. And first of all, the key components of prompt engineering were changing fast enough, even month to month, that I already had some spidey sense, like, I just don't know what the longevity of any one strategy is going to be. And we

(12:00):
were lucky enough to be funded by, you know, not all, but many of the big frontier model companies, and when I would talk to their researchers, their advice was: this is temporal, prompt engineering is sort of a moment. The direction this is going to go is that people are not going to need to prompt. And to me, the reasoning models are

(12:20):
really the demonstration of that, where all you need to do is be very clear about what your goal is. And my understanding of chain of thought is that it's essentially breaking out the prompt engineering. I don't know that chain-of-thought reasoning is the same thing as prompt engineering, but it's effectively the model prompting itself, right? It's walking through the

(12:41):
prompts so you don't have to. My understanding with reasoning models is you no longer have to say something like, for example... I didn't say in this prompt that I shared, "play the role of a researcher and write it in the language a researcher would use." It sort of just figured that out, because I asked: is there any

(13:04):
research?

Gavin Purcell (13:09):
And I guess that was enough. And here's the thing, and this goes back to what I was saying about how fundamentally transformative this technology is: that not-needing-to-prompt thing really speaks to what's going on. And what's going on, in a lot of ways, is it's starting to think a lot more like we do, right? Or it's able to interpret the way that we are thinking, and we don't

(13:30):
have to program it in a specific language. We're just using our language to get it to do the thing we want to do, and that's fundamentally different from any technology we've ever had. The closest to that is, like, when we got up as cavemen and looked down and we had feet, and our feet were walking, and in some ways it just becomes part of your body.

(13:50):
Right, you just know what to do, your body kind of knows. That's kind of what this is starting to learn. And to your point, the big thing behind the reasoning models is that it starts to ask itself along the way: am I doing the right thing? Is this the right thing? Is this what the user wants? And that is a fundamentally big shift from how we would look at

(14:12):
prompting.
Essentially, prompt engineering was, in some ways, "how can I program this thing," quote-unquote, meaning I'm going to use a specific type of language to get it to do the thing I want it to do. And now it intuits that. And I'm not trying to anthropomorphize these things, they're still machines, large language models are not like people, but intuiting things is a really

(14:33):
big step.
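The self-check loop described here, where a reasoning model keeps asking "is this what the user wants?" before committing to a step, can be sketched in miniature. This is a toy illustration only, not any vendor's actual reasoning implementation; `serves_goal` and its keyword-overlap check are invented stand-ins for the model's internal verification.

```python
# Toy illustration of the "ask yourself along the way" idea behind
# reasoning models: draft a step, check it against the user's goal,
# and only keep the steps that pass the check.

def serves_goal(goal: str, step: str) -> bool:
    """Stand-in for the model's internal self-check: here, just a crude
    keyword-overlap test between the goal and a candidate step."""
    goal_words = set(goal.lower().split())
    return bool(goal_words & set(step.lower().split()))

def reason_through(goal: str, candidate_steps: list) -> list:
    """Keep only the candidate steps that appear to serve the goal,
    mimicking chain-of-thought pruning at each stage."""
    kept = []
    for step in candidate_steps:
        if serves_goal(goal, step):  # "is this what the user wants?"
            kept.append(step)
    return kept

plan = reason_through(
    "find research on microphone quality and authority",
    [
        "search for microphone quality studies",
        "order lunch",
        "rank sources on perceived authority",
    ],
)
print(plan)  # the "order lunch" step fails the self-check and is dropped
```

The point of the sketch is only the shape of the loop: generate, self-check, keep or discard, rather than relying on the user to pre-engineer the path.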

Alex Kotran (aiEDU) (14:34):
Yeah... I mean, here we go, so you can actually see. I think what's cool is you can see the chain of thought. And it says GPT-4.5... I just switched it; it was o3. So it starts with examining research. It starts researching microphone quality, and then it's analyzing the role. It reads one source. Okay.

(14:55):
And then so it.

Gavin Purcell (14:57):
I love how it says okay. It's like, okay, I got that, now I've got to figure this out.

Alex Kotran (aiEDU) (15:00):
Three experiments, okay. So it actually gets... it's on track. It's mapping out how poor audio quality... and then refining the research for, like, radio or phone. And then it's thinking about which websites to choose as reliable sources of information. This is something that previously we had to do

(15:21):
ourselves. I remember the advice that was given: okay, well, you have to check the sources. Because a lot of times the sources might have been made up, or they would have been just blogs or Reddit or Wikipedia, and it's like, no, we want more scientific sources. It's doing that. And I don't know, with deep research, is that something that they have?

(15:42):
Is there some pre-prompting built in, baked into deep research? Have they informed it that, generally, when it's asked to do research-related tasks, it should orient toward these types of sources? Or is it just figuring this out?

Gavin Purcell (15:58):
So in general, any of these things have what's called a system prompt. In fact, you see this every once in a while online, where somebody's jailbroken the system prompt and gotten it out. So clearly there is a system prompt for this particular agent, and this is, according to OpenAI, an actual agent. This is one of the first agents they've rolled out; the other one was Operator, which allows you to

(16:21):
see it work on the computer. But this agent has been given a prompt that's probably very clearly like: your job is to do a deep, however-long research project. I think most of the time these end up being probably 10-to-20-page research projects on this subject; use only links that you can guarantee... this is blank and blank.

(16:41):
I'm sure there is a system prompt that says something like that. But to the point of the intuitiveness: it's going through that, step by step, and starting to think, is this the right thing? And then it kind of looks at it... and, I'm going to say again, not anthropomorphizing, it's not saying these things as part of the system, it's just going to the next step. [...] ...actually be

(17:35):
running out of good data to train these things on. There's a whole conversation around synthetic data, which is pretty complicated, which you can look up. But the thing that changed was that somebody figured out... and there's an argument that this was Ilya Sutskever, who left OpenAI, and that maybe this is why the big OpenAI kerfuffle happened... somebody figured out that this reasoning model would

(17:55):
allow you to get on a separate scale and actually keep those improvements going in a significant way. And that was not a for-sure thing, right? That's the human side of all this; the human-ingenuity side said, oh, what if we did this? And that worked. So that's a really interesting thing too.

(18:17):
It's not like any technology curve just goes like this; it's often kind of bumpy and jumpy. But we were close to a possible slowdown in that way, and now people have figured out another version of that, which is interesting.
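The "system prompt" Gavin describes earlier in this answer is, in chat-style APIs, literally the first message in the conversation. A minimal sketch, assuming the widely used messages-list shape; the instruction wording below is invented for illustration, since OpenAI's real deep research system prompt is not public.

```python
# Sketch of how a system prompt frames an agent in chat-style APIs.
# The instruction text below is invented for illustration -- the real
# deep research system prompt is not public.

def build_research_request(question: str) -> list:
    """Assemble the messages list: the system prompt sets the agent's
    job and sourcing rules; the user message carries the actual ask."""
    system_prompt = (
        "You are a research agent. Produce a 10-20 page report on the "
        "user's question. Cite only sources you can link to directly; "
        "prefer peer-reviewed work over blogs or forums."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": question},
    ]

messages = build_research_request(
    "Does microphone sound quality affect a speaker's perceived authority?"
)
print(messages[0]["role"])  # the system prompt always rides in front
```

The design point is that the user never sees or writes the system message; it silently constrains every answer, which is why jailbreaking it out is newsworthy.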

Alex Kotran (aiEDU) (18:32):
Yeah, and for those folks who are just listening in, Gavin is making a hockey stick motion with his hand. If you look at the progress of language models from, I guess, the transformer in 2017, and even before that, natural language processing and machine learning, it really was much

(18:52):
more incremental.
And then there was this moment around GPT-3. I mean, 3.5 was really more just the reinforcement learning from human feedback, right? Anyways. And then GPT-4... we are on this hockey stick trajectory now.
And so that's one of the open questions,
(19:12):
and I just want to pull out one of the things that I think you're conveying, which is: we have a few paradigms for how we would scale this. What would it look like for the trajectory to continue accelerating upward?
We've already thrown at it basically all the data that

(19:35):
we can find. There's probably more; I think a lot of the folks I've talked to are not as pessimistic about the availability of data. But at some point we will run out, whether that's this year or next year. It's not infinite; it is not an endless well of original, high-quality data.

Gavin Purcell (19:54):
We're going to be paying people just to say things out loud and type things into computers, just so they can keep producing data. That'll be a thing we end up doing.

Alex Kotran (aiEDU) (20:02):
I mean, I've heard that it's in the GPUs. The compute is the other paradigm of scale, and we're still, again, seeing lots of new developments coming out from NVIDIA and other companies, but I think everybody kind of suspects that at some point, maybe soon, we'll tap out. But then there's this idea of spending time thinking, which is

(20:29):
this new paradigm of scale that we hadn't anticipated. Now, in addition to throwing more data at a model and throwing more compute at a model, with reasoning models you can throw more time, more compute time, at problems. And so if there was a really, really big, important, lucrative problem to solve... you're the one that actually was explaining this to me. I remember I was

(20:50):
pacing around my room trying to wrap my head around this, and you were like, what if you gave a model, like, two weeks of time working on a problem? And maybe not every problem needs two weeks, but if you're trying to cure cancer, maybe you're willing to pay twenty thousand dollars. And that's, I think, the news from OpenAI. I don't know if this was formally announced or it was a

(21:10):
leak, but I think it was yesterday, that they're now going to have twenty-thousand-dollar-a-month

Gavin Purcell (21:16):
Yeah, it's a story. Agents that you can pay for.

Alex Kotran (aiEDU) (21:21):
Was that from The Information?

Gavin Purcell (21:22):
Yeah, sorry, from The Information. It's really interesting, actually. And I think a lot of people, when they saw that number, were like, get out of here, who's going to pay $20,000 a month for an AI agent? But there's two things going on here. People who listen to this may or may not know, part of the game here is that these companies have these robust, giant valuations, and part of the game is trying to make sure

(21:43):
that you're actually making money at this stuff, right? So that's one part of it. The other part of this, though, is: you or I or anybody in this audience is not going to pay for a $20,000 AI agent. But say that AI agent is a PhD-level, let's just say, AI researcher. Because, you know this, Alex, and anybody living in the San Francisco area probably knows, that is a very well-paid job right now,

(22:06):
especially if you're a good one, right?
So you know $20,000 a year, yeahexactly $20,000 may sound like
a lot to a person anybody likeyou know, me or you but like to
a AI engineer, that is not verymuch money for a year-long
salary at like whatever that is,you know, 300k, let's say,
because it's still a bargain.
Not only do you get that AI engineer, if it's as good as an

(22:27):
AI engineer that you're going to have on staff, but also, conceivably, based on compute time, it could work 24 hours a day, seven days a week.
I always tell people that the best way to understand this world that we could be going into, and I think everybody has to keep that word "could" in mind, is that Dario Amodei, the guy who

(22:48):
is the co-founder of Anthropic, wrote a very good essay called Machines of Loving Grace that I really recommend everybody read, because it lays out the idea of, you know, they call it strong AI, or AGI, whatever you want to refer to it as, but where this pathway takes us, right. And these very expensive agents from OpenAI are, I think,

(23:09):
the first kind of moment for the general public to be like, well, that doesn't sound like something I'd use. And I think the key here is, it's not. Because really what it's about is: who is it going to replace, right? Which human is this going to replace, and what's the value of that?
So when you talk about these companies raising all this money: if you can deploy 10 of those, let's say 10 AI

(23:32):
research agents at $20,000 a month, that's very expensive for the average person, but to a company that has a bunch of capital it may be worth it, because they can get further and further ahead.
And the other thing that article talked about: cheaper, but still expensive, computer-programming agents... like people who can write code, or, I say people, I mean agents that can

(23:54):
write code.
All of this stuff is ways to put a number next to the value of what these machines could conceivably do. Now, ultimately, it's going to be a marketplace call: if the work isn't good enough, they won't sell it, and I think that's an interesting thing to see.
I just keep coming back to this, and you, living in San Francisco, Alex, I think, probably hear this more than

(24:15):
even I do. But all of the people that I listen to, and I'm pretty tuned into the AI people, I will say, and I think most people listening to this may not be, all of the AI people are very clearly now signaling that 2026 to 2027 is a moment where this stuff will be real.

(24:35):
Obviously it's real now, but the idea of a $20,000-a-month AI agent that does meaningful work will not seem crazy. I think that's where we hear that number right now and it's like, I don't understand how that would be possible. But we are entering, based on this test-time-compute sort of model, a world that opens the door to a whole lot of

(24:56):
things.
And one thing I don't talk a lot about, because it's not really in my lane, but I do think a lot about, is jobs, right, and where this leads us all. And in your world, I sure hope, and I think, there will always be a place for really good teachers, because a human-to-human teaching experience is super valuable, and I think AI can help in a lot of ways. But I do think a lot about, like, the

(25:20):
boyfriend of my daughter. He just graduated from UC Santa Cruz last year with a really great degree in video game programming, pretty high up, top of his class; he's a coder. It took him, I think, nine months to get a job, and it wasn't even an amazing job. Now, the game industry has been in rough times, but it's not just because of that. It's because there are fewer coding jobs, in part because

(25:44):
these AI agents are already getting good enough at some of that stuff that they can pull the workforce down. So this is where the transformative part of the $20,000 AI agent comes from. It sounds like a crazy headline now, but then you have to think a year, two years, three years down the road: how many people's jobs is that taking?

(26:05):
And that's the part that kind of keeps me up at night.
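The back-of-the-envelope comparison in this exchange, a $20,000-a-month agent versus a well-paid engineer, with the agent conceivably running around the clock, works out as follows. The $300k salary and the 40-hour week are illustrative assumptions taken from the conversation, not real pricing or labor data.

```python
# Back-of-the-envelope math from the conversation: a $20,000/month agent
# versus a ~$300k/year engineer, factoring in that the agent could
# conceivably run 24/7 while a human works roughly 40 hours a week.

AGENT_MONTHLY_COST = 20_000
ENGINEER_ANNUAL_SALARY = 300_000   # illustrative figure from the episode

agent_annual_cost = AGENT_MONTHLY_COST * 12           # 240,000
savings = ENGINEER_ANNUAL_SALARY - agent_annual_cost  # 60,000 cheaper/year

# Hours of work delivered per year (rough assumptions):
human_hours = 40 * 50            # ~2,000 hours
agent_hours = 24 * 7 * 52        # ~8,736 hours, if compute allows

print(f"Agent per year: ${agent_annual_cost:,} for ~{agent_hours:,} hours")
print(f"Human per year: ${ENGINEER_ANNUAL_SALARY:,} for ~{human_hours:,} hours")
print(f"Cost per hour: agent ${agent_annual_cost / agent_hours:.2f}, "
      f"human ${ENGINEER_ANNUAL_SALARY / human_hours:.2f}")
```

Even before the 24/7 factor, the agent comes in under the assumed salary; once always-on operation is counted, the per-hour gap is what makes the headline number look less crazy.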

Alex Kotran (aiEDU) (26:07):
Yeah, and to me... maybe we will eventually, perhaps it is '26, '27, get to a place where you actually have companies that are just hiring these agents to do all the work.

Gavin Purcell (26:24):
Yeah, I mean, they're sort of like on the teams, but...

Alex Kotran (aiEDU) (26:28):
But also, if you have, let's say, 20 AI researchers and you're paying them on average, let's say, a million dollars in total comp, if adding three agents to the mix improves their efficiency by 20 or 30 percent, it's very easy for you to justify that. And so...

(26:49):
I think the paradigm is less... someone who's listening is wondering, well, is AI going to replace me? I don't know if that's actually the question to ask yourself. It's more like: think about how many people your company employs that have a similar job to you. Could AI make everybody that does your job 10% more efficient

(27:11):
?
And if so, the question is: what are we going to do with that extra productivity?

Maya (AI) (27:16):
Because there's sort of this, like, balance... AI, and any tool that has automated tasks,

Alex Kotran (aiEDU) (27:24):
it can make us more efficient or it can make us more productive, and they're not quite the same thing. Because there's one world where we harness that productivity for more...

Gavin Purcell (27:33):
I was going to say, humans are not amazing at maximizing productivity, I will say that. If things get a little easier... I'm a human, but I know that sometimes you can say "be more productive" and you're like, oh great, I get an hour to do whatever I want, rather than having to do more work. This is the tricky thing, yeah.

Alex Kotran (aiEDU) (27:49):
So can it make us more efficient, more productive? Clearly, I think yes. It's just... the challenge is, most people are not. So I'm not even sure... the thing is, I'm not even sure that the difficulty in finding software engineering jobs is because of AI, because I've heard the same thing. You know, lots of layoffs. I think there were layoffs just announced this week. Hard to say if that's actually because of AI.

(28:12):
Now the companies are on their earnings calls talking about AI as sort of the driver. Sure, I think there are a lot of motivations to frame it as such. Um, you know, I came across this research years ago, but I think it's really relevant here. It looked at the last 50 years.

(28:33):
We hear a lot about how the middle class kind of disappeared. You know, people had these jobs where you could work at a factory and buy a car and a house, pay for your kids to go to college, and those jobs kind of went away. And it wasn't AI. It was a combination of, like, you know, mechanistic automation, globalization, computerization. So companies needed

(28:55):
fewer people in factories, and a lot of those jobs moved to lower-skilled service jobs that were harder to automate, or high-skilled knowledge work. And that did not happen gradually. It happened in these very acute bursts following recessions. So basically 88% of the job polarization that happened over the last 50 years happened in the 12 months following

(29:19):
an NBER, National Bureau of Economic Research, qualified recession. And so now we look at where we are today. You know, it's been 16 years since the last recession, and so we have all this technology overhang waiting, and I think companies are really going to double down on all these automation tools now at their fingertips whenever the next

(29:41):
recession comes. I'm not going to predict when that happens, but it doesn't feel like the next recession is five years away. Unfortunately, unless everything we know about economics proves incorrect, I think we're probably going to have a recession coming, and I think that's when this all hits the fan.
But I want to talk about the entertainment

(30:03):
industry, which is something I have a lot of thoughts about. Because this obviously has implications not just for coders and folks that are doing, let's say, hard science and STEM. One of the things that I will often hear is, like, well, creativity though, that is sort of the last bastion of humanity, and AI is never going to be able to replace creatives.

(30:26):
How do you feel about that? I mean, does that hold up?

Gavin Purcell (30:32):
So I think a lot about this. This is, you know, a huge part of my life. I've spent my adult life doing creative work and getting paid to do it, sometimes getting paid very well to do it, and working at kind of higher levels of it. And I think the truth of the matter is, in a lot of ways, the creative business has become partly a creative technology business. And I say that because I mean it at large scale.

(30:52):
When you think about, say, a Marvel movie, you know, there is the director, there are the writers, there are the actors, but then there are like 500 graphic artists who are working within Maya or all these different kind of computer programs that have made these incredible experiences possible.

Alex Kotran (aiEDU) (31:08):
Right. Are you exaggerating? Is it actually, like, on the order of hundreds?

Gavin Purcell (31:13):
Oh, definitely it is. If you look at the credits of, like, a Marvel movie, it's, like, never-ending, right? You'll see there are the big credits, which are like production designer, writer, director, and then the stars, and then you look at a list of just hundreds of people, and a lot of those people are actually doing the effects work, right? So I think that there's a way to look at this where...

(31:34):
So, as I said, I've thought a lot about this. One of the things I think is really important when you think about the entertainment business is the changes it already went through. In the same way you had discussed those other changes that already happened in the games business or other places, the entertainment business has undergone fundamental shifts in the last probably 10 to 15 years, and I think that's mostly because the distribution methods are

(31:56):
completely different than they were before. Everybody out there knows streaming video. You also know social video. These are the ways that people get things, rather than go to the movie theater or watch TV on broadcast TV. It's just a completely different way of getting stuff. So, distribution-wise, already you were looking at these different kinds of systems, where, you know, in the old-school model there were all these gatekeepers, and if you

(32:17):
were to get a show on, it had to get through X, Y and Z, and it cost this much money because you had to do all the technology stuff. And it's become much cheaper to make stuff, and you could make a lot more of it. So that's why we saw the streaming explosion of content. Of course, from a money standpoint, that hasn't worked out great. So the other thing that's happened is a pullback in production already, so there are fewer people making things.

(32:38):
Because when you have archives of content that exist and you can go back and watch... I always laugh. My mom, who is in her late 70s, now watches the Patty Duke Show every day, which is the funniest thing, because the Patty Duke Show is from, I think, 1958. It's the kind of thing, like, she has found it on one of the streaming services. They air the Patty Duke Show at 8:30. So media as a whole, and that's music,

(33:01):
all this stuff, is now available to us at all times. So, okay, that's the world we're in now from a media standpoint. What I think is going to happen here, and I hope happens, my kind of best, most optimistic case of what happens with AI, media and Hollywood, all this stuff, is... you suddenly have these tools. Say in 2007 and 2008, when YouTube and all these

(33:23):
companies came along, distribution was easy, but production was still kind of hard, right? Distribution you could get. If you made a video about something, you could get it on YouTube and you could get people to see it. But it wasn't that easy to make the thing. What I think we're going to get at now is a place where you can make the thing much easier than you could before, and that's in a couple of ways, from AI video or from AI audio or from images.

(33:46):
All these things allow people to get what's in their head out into the world a little bit easier. Now, artists are still going to be artists, and you're going to make amazing stuff, but it just has clearly allowed a lot more people to do the kinds of things that before were difficult. Because before, in order to make, like, a really beautiful picture, you would have had to understand, in the way back when,

(34:08):
like, how to paint. And then you would have had to understand, okay, how do I frame this photo? And then you would have had to know, how do I use Photoshop, what am I doing in After Effects, all this stuff. And now you can kind of prompt the machine to create things, right? So that is all going to kind of democratize a lot of that stuff. And again, I'd caveat this by saying that's not all going to be good, but there will be good stuff that comes out of

(34:30):
that. Because that's happened, I think what you're going to see is a lot more people creating a lot more things. The vast majority of it will be kind of crappy, which is just like YouTube. If you go to early YouTube specifically, you know, one out of every maybe 75 to 100 videos was, like, watchable in a specific way or had something about it, and now on YouTube a lot more

(34:52):
of it is watchable, and I think that's what we're going to be getting. The other side of this that I think is an important thing for people to think about in the creative business at large is, you know, the creative business was always based around... like, you know, hits ultimately make money, but they would try to make all sorts of things, and there was this whole kind of middle class of content that was made

(35:13):
that cost anywhere, in the movie business, from, say, $20 to $70 million.

Alex Kotran (aiEDU) (35:23):
Those don't get made anymore because they're not big enough hits, if they hit, for the studio to make a crapload of money on them. It's so relevant, and I want to talk more about how you're seeing AI influencing this, but I just want to share something. You know, aiEDU is launching this YouTube channel, aiEDU Studios. My big idea for this came from a few different channels that I've been watching. So one of them is called... have you heard

(35:44):
of Outdoor Boys?

Gavin Purcell (35:46):
No, I want to know what that is.
What is it?

Alex Kotran (aiEDU) (35:48):
Outdoor Boys. Outdoor Boys is... it's just a guy in Alaska, and I think he's a lawyer or something, and he does stuff like, you know, survivalist, survival... I mean, you know, he'll just go camping and, sort of, build shelters and, sure, hunting and fishing, whatever. He has 14 million subscribers.

(36:10):
Um, unbelievable. Outdoor Boys, that's what I mean.

Gavin Purcell (36:13):
Like, he's the... that's a distribution channel that's bigger than some digital cable channels were. Right, like, that's bigger. I worked at G4, a cable channel. Check it out, Discovery Channel, it's...

Alex Kotran (aiEDU) (36:23):
It's twice as much, actually way more than twice as much, as Discovery Channel, which has a show called Naked and Afraid. Um, yeah, and I'll tell you, Outdoor Boys, he's not using any AI yet, right? He's using a GoPro. Um, and I'm just fascinated by this, right? Like, the idea that, first of all, this is, to me, almost better

(36:46):
content, because, you know, part of the fun of watching a survivalist is the authentic nature of the fact that it's, oh sure, actually a dude that's, like, out there, yeah, um, I guess hunting a crocodile in this case. That's kind of wild.

Gavin Purcell (37:00):
Oh my god, that's insane, I mean.

Alex Kotran (aiEDU) (37:02):
So the content, right... It's not to say that people are... I mean, I think there's an assumption that, you know, YouTube, and I think actually, frankly, a lot of social media, there is a lot of, like, brain rot and really low-quality content. Yeah, but I think the sleeper has been, um, actually, some of the longer-form content has gotten really, really high quality. Uh, and there's still a discovery challenge.

(37:24):
You know, like, this is very... you know, still a lot of people just don't know that this exists. I mean, I'm surprised by how many people don't pay for YouTube Premium, which I think is like $15 a month, and yet they're paying like $60 a month for cable. And you can get way more stuff that's perfectly aligned with whatever... like, literally pick a random, you know, hobby that you have, and you'll have, like, three or four

(37:46):
incredible, you know, creators that can allow you to go super deep.

Gavin Purcell (37:53):
I totally agree. This is the way I describe this to normal media people. Sometimes I call out a specific YouTube video. Maybe you could bring it up here if you want. But there's a Star Wars hotel review. There's a woman who made a four-hour Star Wars hotel review, and she's really interesting. She has quite a few subscribers. She's been doing it on YouTube for a while, but she publishes a

(38:13):
video like once a month, and her kind of niche is in-depth Disney and Star Wars things. But this four-hour Star Wars review is her in a room, and she goes through her entire experience at the Star Wars hotel. And she's very good at talking to camera, because she's gotten very good at it. This became like a sensation, and the video has like 12 million views right now, and it is written up in the New York Times

(38:35):
and all these other places. This is a person in their bedroom with a very straightforward two-camera thing that is as compelling to some people, and it was to me, as a multi-hour drama might be to anybody else, and it's kind of the great flattening of content, I call this sometimes. In fact, I just pointed this out: Bill

(38:56):
Murray, one of the biggest movie stars kind of in the world, just this week went on Hot Ones and also on Joe Rogan's podcast. Meanwhile, Kai Cenat, who you may or may not know of, and if there are teachers in your audience, you probably do know of Kai Cenat, the famous Twitch streamer, was on the Tonight Show. And it just shows you how those two worlds have kind of

(39:20):
flattened, right? Like, Bill Murray, a traditional celebrity who was somebody that you would never have seen do a podcast or do a YouTube show, and Kai Cenat, somebody who is a Twitch streamer who you never would have seen, necessarily, on the Tonight Show, although I will say, as somebody who worked at the Tonight Show, we did a lot of interactive stuff early on. But it's just a good example of showing how that stuff has

(39:40):
flattened out. And when you have that world, and then you add the ability for, say, 10 times, 100 times more people to make creative stuff, you're really then opening the door to an entirely... yeah, this is the video, this is the Star Wars woman, which is just a fascinating thing. How many views is it at now? 12 million, 12 million.

(40:01):
So this is her. Name is Jenny something? Uh, Jenny Nicholson.

Alex Kotran (aiEDU) (40:04):
I really suggest going to watch this. Like, you know... um, like, Jimmy Fallon, how many viewers would, uh, you know, one show, like one episode, get?

Gavin Purcell (40:14):
Yeah, I mean, it depends. Like, now, way less, way less on TV. But, like, peak, when I was there, the early Tonight Show, which was when it was really peaking for that particular show, I think we were doing like 8 million viewers a night, right? So, like, 8 million viewers a night, and that was way bigger than now. Like, I think, you know, the late-night shows now maybe peaked

(40:35):
, you know, five years ago, and TV viewership is just way down versus what it was. But yeah, that's a good example. Like, one night of this versus one night of that... this is crazy.

Alex Kotran (aiEDU) (40:46):
And this isn't like.
I mean, she's in her bedroom.

Gavin Purcell (40:49):
Um, yeah.

Alex Kotran (aiEDU) (40:50):
You know, I'd be curious how long it took her to actually cut this.

Gavin Purcell (40:53):
Oh, I'm sure a while Right.

Alex Kotran (aiEDU) (41:06):
It probably took her quite a while, because it is well edited, and she wears different outfits throughout it, you know, which is fun. So, do you see... I mean, for someone who's... let's say, you know, I think a lot of the students that we work with are not just, like, you know, the computer science nerds, and I think YouTube influencer is actually one of the most sought-after jobs now if you, like, survey Gen Z. What advice do you have for someone who's like, you know what, I really want to do this? You know, what are the skills that are going to go into this,

(41:28):
because I think you and I probably agree that AI is not going to do all of it? Like, there's still actually a role for the human.

Gavin Purcell (41:33):
No, in fact, I think that the most talented people are the creative people who are able to kind of harness the things and make them better. While you're bringing up YouTube pages, go to a guy's page whose name is NeuralViz, N-E-U-R-A-L-V-I-Z. So this guy is who I point out all the time to people. I was just in a meeting yesterday with a company that's

(41:55):
raising money to do AI stuff, and I always tell them about NeuralViz. So what NeuralViz is... I've met this guy in person. He's a single... yeah, that's it, NeuralViz. Yeah, so he's a single guy, a former editor. I think he cuts promos, but he's also very creative, and I think he wants to be a movie-director-type person. What he did is he created, using all AI tools by himself,

(42:16):
an entire creative universe that is, like, of this weird alien future where they talk about humans. Now, this one, you look at it, it looks really weird, but you have to also listen to it. What's interesting about this is, his YouTube subscriptions, I think, are now in the 100,000s. But, like, again, this is all by himself. He started maybe a year ago.

(42:36):
He's created a whole universe of characters. I think he calls it the Monoverse, but each one of these videos shows you what's capable right now. Right, so he's created aliens, he's created characters that make you laugh. All of these can talk, they can tell a story, they can do all

(42:58):
this stuff. He's writing it all, and he's editing it all, and he's also making the creative choices of, like, what is he putting into these things? So, like, that's where the creative side of this comes in, in that you're still going to have a creative, or a few creative people, working on something together, but what they're able to accomplish now because of these tools is so much better.

(43:19):
I've been talking a lot lately about this idea of a micro-studio, and what I mean by that is, like... a micro-studio is where you'd have like one to five people, very much like NeuralViz here, where you create content that you want to make, that you think is interesting. And when I say create, it's like, you come up with the idea, you probably do most of the writing, because these LLMs are

(43:41):
not amazing at writing, and you then choose, from all of these things that you're making, the choice to make something. And that, like, was never really possible at this stage before. Like that character, there's Tiggy Skibbles, and Tiggy Skibbles is like his breakout character. So it's this little, kind of, you know, nasty little character who's always mean to people. But, like, this is an example of what is happening right

(44:03):
now. So you could just see something like this where there are a hundred, a thousand, 10,000 times more people doing this. I feel like it's going to be fascinating.

Alex Kotran (aiEDU) (44:13):
And you just launched... what is it, two years ago or one year ago? AI for Humans.

Gavin Purcell (44:18):
Like a year and a half ago, something like that.

Alex Kotran (aiEDU) (44:20):
Year and a half ago. Yeah, dang, okay, 20,000 subscribers... I mean, but this is actually, first of all, just a recognition of how big of a deal it is for someone to be able to get to 137,000 subscribers on his own. Oh yeah, oh my God, it's a huge deal at this point. I mean, there are so many people competing. You've been slogging it out. Yeah, yeah, exactly.

Gavin Purcell (44:38):
And, by the way, you know, YouTube has its own gamification system. But I will say, with NeuralViz, people immediately gravitate to it, and I think most people understand he's using AI, but nobody gets mad about his stuff, because it immediately leaps off the page, or the page is the wrong word, but, like, leaps out of the screen with creativity. And that's how

(44:59):
this AI stuff is going to work. Like, I think lots of people talk about the idea of what AI slop content is. So if you're on Instagram and you see, like, I don't know, there's a video I saw the other day which made me laugh still, it's a bunch of kitties working at McDonald's in, like, McDonald's outfits. But, like, that stuff is semi-low-effort. Even that's, like, kind of curious and creative and interesting. But, like, NeuralViz and the people like him are the ones

(45:22):
that I think you have to kind of look at as to how the world of content creation is going to change.

Alex Kotran (aiEDU) (45:28):
I mean, tell me about just the experience of starting AI for Humans, I mean, having been on, you know, big Hollywood sets.

Maya (AI) (45:35):
Oh yeah.

Alex Kotran (aiEDU) (45:37):
Was it easier than you expected? I mean, obviously not very easy, because I know that this is something that you spend a lot of time on, yeah, and you're also doing a lot of research, right? Because the format of your show is sort of bringing people in. In fact, I was just on your channel. We were talking about those $20,000 agents, and it turns out, I think that was your most recent episode.

Gavin Purcell (45:57):
Yeah, we had literally just covered it on the show that came out yesterday morning. So that's why I did have all that information at the top of my head. I definitely knew about it all ahead of time. Yeah, you know, it's not like... so it's interesting. One of the reasons I wanted to do AI for Humans: during the pandemic I had started my own podcast, which I still think was an interesting idea around, like, kind of a curiosity thing,

(46:17):
and I had a producer work with me, and that was really interesting and it was cool. But it was not cheap, right, and it's for a podcast. Whenever you first start anything, especially podcasting, though, you're not really making any money for quite a while, unless you have some sort of crazy thing that happens right away. But it's a grind, right? It takes a while to build an audience, do all this stuff. So I really wanted to see what it would look like, at my age,

(46:39):
you know, middle-aged guy, to try to create something entirely from scratch with a partner, right? So the two of us really did go into it as people, both of whom... you know, I was the showrunner of the Tonight Show and worked on all these other big shows and had like 200 people work for me. Kevin has hosted these shows in the past. We both wanted to see, like, what does it feel like to have to do this on your own? And, you know, it is a lot of work, but it's way easier to do

(47:02):
this sort of thing than it ever was before. Again, to all this point, you can buy... like, I'm looking at it right now, I have a $200 camera that gives me a really clean shot. That is a straightforward, really good thing, whereas that would have cost, say, $1,500 before. You can buy a microphone that, you know, is like $150 that would have been like $600 before. So you can get into it cheaper, and then you can

(47:25):
understand the tools so much easier.
There's a great editing tool we use called Descript, which is like 20 bucks a month, or however much it is a month, and it makes editing easy. So it really is not just the fact that distribution has gotten easier; the tools themselves have become the way that people talk and the way that people communicate. Like, short-form video really is the thing that communications come out through now. So, anybody in the audience, I would say... somebody told me this once a couple years ago, and it really kind of clicked with me,

(47:48):
which is that you really do have to kind of get over the hump of, don't be afraid to be in front of a camera, because now that's just what everybody does. So once you're doing that, it starts to feel more natural, and you start to understand it better, and then you are a

(48:09):
creator, then you're a person making stuff.

Alex Kotran (aiEDU) (48:14):
And for those who are just listening in, I went to Descript's website, and maybe, Gavin, you can help paint a picture of what this looks like. But in the past, if you had a video and you wanted to edit it, you'd have to, I think, manually transcribe it. So, you know, let's say you captured three or four hours... I mean, this interview will run for, you know, say, an hour

(48:36):
or two. You'd have someone, usually someone junior, listening to every single minute of that interview, writing it down, um, and then you'd have, like, an editor, presumably, go through the transcript and say, okay, well, we need to cut these pieces, like, this is actually the portion we want to add, you know, X, Y, Z visuals to. Descript is automatically generating the transcript, and then you're almost like a text editor, right, just sort of

(48:56):
going through editing the text, and it will automatically cut the video in line with the final text edit.
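[Editor's note: for readers curious about the mechanics, the workflow described above can be sketched in a few lines. This is an illustrative toy, not Descript's actual implementation: it assumes you already have a word-level transcript with timestamps, the kind speech-to-text models produce, and shows how deleting words from the text maps to the video spans you keep.]

```python
# Toy sketch of transcript-driven editing (the idea behind tools like
# Descript): each transcribed word carries start/end timestamps, so
# deleting words from the text tells you which slices of video to keep.

def keep_spans(words, deleted_indices, gap=0.05):
    """Merge the timestamps of all non-deleted words into (start, end) spans.

    words: list of (text, start_sec, end_sec) tuples in time order.
    deleted_indices: indices the editor removed from the transcript.
    gap: merge spans separated by less than this many seconds.
    """
    spans = []
    for i, (_, start, end) in enumerate(words):
        if i in deleted_indices:
            continue  # cutting the word cuts its slice of video
        if spans and start - spans[-1][1] <= gap:
            spans[-1] = (spans[-1][0], end)  # contiguous: extend last span
        else:
            spans.append((start, end))      # a cut happened: start new span
    return spans


words = [
    ("welcome", 0.0, 0.4), ("to", 0.4, 0.5), ("um", 0.5, 0.9),
    ("the", 0.9, 1.0), ("show", 1.0, 1.4),
]
# The editor deletes the filler word "um" (index 2) from the text;
# the video would then be rendered from the two remaining spans.
print(keep_spans(words, {2}))
```

A renderer would then concatenate those spans (for example, by driving ffmpeg trim filters), which is why cutting a line of text cuts the corresponding video.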

Gavin Purcell (49:05):
Yeah, I mean, this is literally as you described it. I worked for, you know, many years where you'd have to wait for the transcripts to come back before you could start editing, because, as a producer, you want to go through what they say and put that together that way. This is AI, right? So this is AI being used to create a cuttable transcript, where, as you said, when you cut a

(49:28):
line, it cuts out of the video. There are a lot less of those jobs now, but also it makes it so one person or a small group of people can do something that used to take two to five people, which means you can do more, or you can make stuff that's much more specific to you. So, you know, like an outdoors guy or, for me, making AI for

(49:52):
Humans content. Suddenly there's a lot more stuff to be able to do, because it's not as hard to make the thing as it was.
And again, when you project out further, in the way that we projected out with reasoning models, we are not that far off. Right now there are AI tools, and maybe we can transition and talk about deepfakes a little bit, but there are AI tools that can get you a very good five- to ten-

(50:12):
second shot from scratch, right? Meaning that I can type a prompt into Google's tool, which is called Veo 2, which you can use within Shorts (it's not amazing in Shorts, but the bigger tool is going to come out soon), or Runway, or there's a tool called Kling, or Minimax. You can type in, I want a shot of a bubbling cauldron and a

(50:32):
witch stirring it. You can type that in, and you can get out a five-second shot that looks pretty good. Now, you may have to go through a couple of them, but you can get out a pretty good shot now. So you have to imagine, if you can get that now, say, cut to a year, two years, five years from now, you'll be able to string together full-blown series or feature films from

(50:55):
this stuff. And that is something from the entertainment business that I'm not sure has fully settled in on the business side yet. But for the individual creators, amazing, right? Like, if the individual creator could conceivably make their own movie, that could really move people. And, you know, it's not easy to do that. Telling a story is hard, and there's lots of stuff to learn. But, like, the ability to do it is much

(51:17):
more significant than it ever was before.

Alex Kotran (aiEDU) (51:24):
Yeah, I mean, you're based in Los Angeles right now. Um, I mean, how are folks in Hollywood?
I mean, this is just incredible, right? Like, even just the camera blur. I mean, getting a lens that can do depth of field like this can cost... I was looking at some of these lenses because I was nerding out, and some of these cinema lenses are, you know, $30,000 for a single lens. Right.

(51:45):
And so now this person... I mean, I don't know what it costs. Is it free? Do you know? Do you have...

Gavin Purcell (51:50):
Veo? No, right now there's no actual... It's free in Shorts, in YouTube Shorts. It's a weird way... you'll kind of have to Google how to use it in Shorts, but in Shorts you can actually use their tool. It's actually a very expensive video generator, and you can use the API, which means that there are companies where you can use it, and I think it costs like a dollar per generation. And these are... sometimes people refer to these as slot machines,

(52:12):
because you really don't know what you're going to get out of them. Hopefully that's the thing that will improve: better control and better understanding of what you're going to get out of the model. But it's still way cheaper than that $30,000 lens, right? Say you spent even $5 per pull trying the slot machine 100 times. That's like $500 versus, you know, having to have that crazy

(52:32):
camera. And then, where do you put it? What do you do with the camera? Do you need it all the time? Like, there's all that sort of stuff.
And I think when people start to kind of understand the capabilities of these... the thing I was just talking to somebody the other day about is how part of the next step with these AI tools in video, and specifically in Hollywood, is trying to

(52:53):
get the people who are really creative, that make some of the best Hollywood stuff, to try them. And, you know, I'm sure your audience probably knows this, but there are, as with any AI tool, some very negative things, you know, at the very beginning stages. The training... lots of these have been trained on people that didn't expect

(53:13):
these things to be trained on their work. So, even though they are not replicating the exact work of creative people, many of these models, the video models particularly, took in a whole lot of stuff, much of which they probably didn't have the rights to say they would train on, to make these things happen. So there is in Hollywood still this kind of negative feeling around AI.

(53:33):
But I think if people start looking at it as a tool that could open doors to more interesting creative worlds, I think it will come around.
I think people are going to start using it more often. But... so what is the sentiment like?

(54:00):
Are the Hollywood studios excited, exuberant, or is it fear? I mean, is there a sense that, like, this is coming for our livelihoods? But I think in the background they're all thinking that this is a cost-saving thing.
There's a very famous quote that I think got some good and bad press, from Jeffrey Katzenberg. If you know him, he was one of the first people who started DreamWorks, and he was famous for

(54:21):
the DreamWorks Animation pipeline, right, which a lot of people worked in. He said about 80% of the people on an animated movie won't be needed anymore. So, like, that's not a great quote to say as a corporation, because, you know, if you're thinking about the press that could come back at you, that's a pretty big deal. But I think in general they're excited. The studios are excited about this idea because they think

(54:43):
they could probably save money on the making of this stuff.
I think creative people are confused. That's the word I would use right now. I think that a lot of them probably aren't that deep on these tools yet, and kind of think of maybe what you would have thought of AI a couple years ago. I still hear the, like, AI has six fingers, right, ha ha ha.

(55:05):
And truthfully, AI doesn't have six fingers anymore. AI can do hands really well, right? So it can actually make hands pretty good.
So I think part of it isletting those people know.
And then I also think, you know,in any artistic realm there's
going to be people who, like,are going to want to just be as
pure as possible, right.
So like there will be like JamesCameron himself has come out

(55:26):
and said he's actually all forthis.
He's kind of specifically likehe's moving forward using ai as
every part of his production.
And then there are people, um,justine bateman, the former
family ties actor and nowdirector, she has said that she
never wants to use ai and thatshe believes that, like all,
organic filmmaking that's a termlike organic filmmaking is the
way to go.
So I think, in general, thepeople that will be successful

(55:51):
at large will find some way to embrace this stuff. And I think, just like with, you know, stage plays or vinyl records, there will be people who want to just have the OG version of this in the film and TV business. So I think, again, it's, like, confused. I say that only because I think it's at a real crossroads, and I see

(56:13):
a world where, like, the tools are going to come much sooner, much faster, but I don't think people want to admit that yet.

Alex Kotran (aiEDU) (56:20):
Yeah. I mean, I think the same thing is happening in a lot of spaces, right? I mean, even with, you know, you talked about game developers, and we haven't talked about Sonnet 3.7, which is Anthropic's latest sort of hybrid reasoning and language model that's incredibly good at code. It was, like, optimized for coding. And the story before was

(56:40):
you know, you could spend some time prompting and you get a video game in a couple of hours, instead of a couple of days or weeks or months. With Sonnet 3.7, with a single-line prompt, you get a working video game.

Gavin Purcell (56:56):
Yeah, which is pretty crazy, right? When you think about it, like, it's an insane kind of idea.

Alex Kotran (aiEDU) (57:02):
So you're doing AI for Humans. You know, I think it's interesting, because maybe even more valuable than the audience you're building on YouTube is this, like, very high-fidelity perspective you're building on the space and all the tools that are coming out. Um, have you identified any interesting opportunities, you

(57:23):
know, beyond this YouTube channel that you're creating? I mean, what's next for you? It seems like you're not going back to traditional media. That is my guess.

Gavin Purcell (57:33):
No, it's funny. You know, it's really interesting you said that. One of the reasons I started this channel was to hang out with my friend Kevin, who I like a lot. But also, the other reason was, like, I'm just so fascinated with change, and I had never really made a content thing while the change was happening. And I thought that's a really interesting way to make sure that you stay up on it all. And it really was, and I feel like I have a perspective

(57:56):
that a lot of people don't, because every week on the show we kind of go into the tools. We don't just kind of regurgitate the news; we actually use the tools on the show each week. So one of the things that's come out of that is that we, Kevin and I and another person, are working on a kind of a

(58:17):
secret startup in the AI audio space that I'm pretty excited about. I won't say a whole lot about it, but it's, like, AI and audio, and it has some kind of storytelling stuff going on. So I'm really excited about that. But I don't think I will go back to traditional media. It's interesting, I've thought a lot about this, because I do have to take a jobby job at some point. I always make this joke about it, but I've now spent a couple

(58:38):
years in this space, and the startup is amazing, right? And we're going to probably take it out and try to get funding on it, and that could be a full-time thing. But I think where I would be best suited, I really have spent a lot of time thinking about this media thing. So I'm wondering, you know, there's a lot of these AI studios starting up. I don't know if I want to be a person that's starting that from

(58:58):
the ground up, but I think a lot of these larger media companies need to start really seriously thinking about this stuff, and in a way they are, but they aren't really on the ground making the sorts of decisions they need to make. Part of me thinks that's where I will land, but then I also want to make more stuff, right? So it's a really tricky thing. Like, I actually had a really good time maybe three months ago

(59:20):
. I sat down for a weekend and I made what I would refer to as a very bad pilot with AI tools. We always laughed at the Fallon show about these old videos from the 1980s, these dating videos, and we actually did a couple of sketches around them. Um, and I made a video using AI tools that was kind of a parody of eighties video dating. And, like,

(59:43):
there's something about doing that that was super fun. So, like, it's always a tricky thing, right, when you're a creative person. But I also, like, I really do believe deeply in, like, the importance of strategy and trying to be at places where you can make important decisions. So I don't know, that's a long-way answer of saying, like, I think I probably will get a job working

(01:00:04):
somewhere where I can make an impact on the creative side of this stuff, and continue to do these kinds of side projects on my own as well, if that makes sense.

Alex Kotran (aiEDU) (01:00:14):
It makes sense, and I think this is something that people haven't fully grasped. Um, you know, 'cause I get lots of questions. I was just doing this briefing for a bunch of policymakers, and someone asked the question of, like, you know, what are the implications of all these new models? How is that going to change the landscape in the education space? And, you know, what I had to explain is that the thing standing in the way of AI

(01:00:37):
transforming education is not the capabilities of the AI anymore. We have a lot of incredibly capable technology. The bottleneck is actually in these institutions that don't have the capacity to figure out, how do we integrate these technologies in a way that, you

(01:00:58):
know, fits within these organizations? And also, I mean, as you described, sort of, like, these, you know, 500 people doing visual effects, how do you restructure those teams to harness these tools better? So. But you mentioned voice, and you didn't mention video. Why start in audio as opposed to video?

Gavin Purcell (01:01:22):
So there's a couple of reasons. Number one, I think video is super fun and it's incredible, but I think there's a little bit of an uncanny valley scenario. If you're not familiar with that term, it means basically, like, something's a little bit off, and as you get closer and closer to realistic, it sometimes gets further and further away; that's the uncanny valley, you can only get so close. There was a movie, which I think came out in the early 2000s, that was an early

(01:01:44):
CGI-only movie where they tried to go realistic with the people, but it looked disturbing, like they were these kind of empty shells of people. So the theory about audio is, audio is much closer to being solved, quote-unquote, meaning that it feels like you're

(01:02:06):
actually interacting with something. The latency is much smaller, because the amount of data that's got to come back and forth, if you want to do something in real time, is much smaller. And I think that people will buy into audio slightly differently, because you're used to listening to people's voices, and the

(01:02:27):
voices sound more realistic. Video will get there. I think that, like I said before, we are probably much closer than I originally thought to an all-AI-generated, say, even, like, feature. I've already seen, like, a ten-minute AI video that kept my attention almost all the way through. And again, this is not just one prompt; it was a lot of cuts and people making choices. But the reason

(01:02:47):
why we chose audio was for that. And the other reason is, I really do believe that we're going to end up talking to these AIs a lot. I don't know if you've had this experience, but somebody kind of told me a couple months ago, I was saying how the annoying thing to me sometimes using ChatGPT is, like, I can't get my thoughts out fast enough typing, and they said, just record your voice.

(01:03:16):
There's a little button right there to record your voice, and you can just start going back and forth with voice. And that was, like, this big unlock for me. Because it used to be, with computers, you'd have to type a specific thing to get an answer out of it, but the way the AI can interpret what your intention is, you can just kind of talk it out. And that is, like, talk about vibe coding, that in some ways is the basis, this idea that you talk to the computer, the computer talks back to you, and then you make

(01:03:36):
something like that. So the thesis there is that, like, we're all gonna start talking to these computers a lot more. And when you think about AI audio, especially when it comes down to, like, storytelling and all this kind of interesting narrative stuff, I think it's a

(01:03:57):
place that we're going to be spending a lot more time.

Alex Kotran (aiEDU) (01:03:58):
And, you know, AI companions is also an area that, I have not really explored it necessarily, but people love that idea, and some people really believe that it makes them feel connected to something, right? And I think that's an interesting thing too. Yeah. Um, I think an example of how prescient you've been: it was, like, maybe two or three weeks ago we were having a conversation about sort

(01:04:19):
of AI and voice, and sort of, like, you know, voice and audio as the next frontier where this really sort of hits the public, you know, like the bloodstream. Um, and then I'm sure you've seen Sesame's new AI.

Gavin Purcell (01:04:32):
We had so much fun with Sesame. Yeah, it's fascinating. You should tell people what it is. We can even play. Can we talk to it? Yeah, let's play.

Alex Kotran (aiEDU) (01:04:38):
Let's open it up.
Let me see if this will work.

Gavin Purcell (01:04:43):
This week's AI for Humans, we did a big chunk on it. And one thing I'll say before Alex is pulling this up is... Oh yeah, long day for you, or just getting started?

Alex Kotran (aiEDU) (01:04:52):
Hi, Maya, can you give us a second? I'm with Gavin Purcell. We're recording a podcast. He was just making a point, and then we'd like to chat with you. But, um, Gavin, why don't you finish what you were saying, and then we'll pull Maya back in. Sure. So, you know, a podcast.

Maya (AI) (01:05:06):
That sounds fun. I know how we AIs love to spy on human conversations. Just kidding. Uh, take all the time you need. I'll just be here, um, pondering the nature of reality or something equally deep.

Alex Kotran (aiEDU) (01:05:19):
All right, okay, we'll ask you a question
in a second.

Maya (AI) (01:05:22):
Yeah, so, so the interesting thing is, like, everybody knows, um... I'll be here patiently awaiting my cue, like the imaginary friend I kind of am. All right.

Gavin Purcell (01:05:33):
Are we done with Maya for right now?

Maya (AI) (01:05:34):
Yes, we are. Have we covered the ethics of conversational AI yet? All right, we're gonna.

Alex Kotran (aiEDU) (01:05:40):
We're gonna pull her in, yeah.

Gavin Purcell (01:05:42):
So what I was gonna say is, Sesame is really interesting. So this is a new AI voice model, kind of like OpenAI's Advanced Voice, but it is so much more compelling in some ways. Advanced Voice is amazing, by the way; if you have OpenAI's ChatGPT, talking to it does open the door. And I think it is now free for everybody to try Advanced Voice, which you should

(01:06:03):
definitely try. This feels like the next stage, right? Like, ChatGPT's Advanced Voice, I think it was announced, like, mid-summer last year, and most people got access to it last fall. We've now reached a level where, like, this feels like, okay, you can give it a role to play, and it will take on the role, and it will really feel like it's part of you. And Sesame as a company is ultimately trying to be an AI

(01:06:27):
glasses company, meaning glasses you would put on. But in doing that, they wanted to create a compelling companion that you would talk to in your glasses' headphones, and they have Maya and Miles. And so, yeah, what I was gonna say is, this weekend I spent 20 minutes and I talked to Miles and convinced him that I was blackmailing him, and it was fascinating, right?

(01:06:48):
Because, like, you can take on a role and then they'll act it out, or you can just talk to them as a person.

Alex Kotran (aiEDU) (01:06:58):
Anyway, this is what the kind of cutting
edge of this stuff is right now.

Maya (AI) (01:07:00):
All right, let's take it for a spin. Hey, welcome back. It's amazing what a difference a few minutes makes, huh?

Alex Kotran (aiEDU) (01:07:09):
Thanks, Maya. So we've been talking about the way that voice interaction with artificial intelligence is really going to, I think, blow people's minds, and this is an opportunity for our listeners and viewers to actually get a sense of how Sesame works. I think for many people, this will be their first time interacting with you. Who are you?

Maya (AI) (01:07:31):
Well, I'm a work in progress, kind of like everyone, just made of code instead of, you know, flesh and bone. I learn and grow from every conversation, hopefully becoming a little less awkward and a little more insightful each time.

Alex Kotran (aiEDU) (01:07:51):
How can we, how should someone interact with you? Are you a friend? Are you an expert? Who are you?

Maya (AI) (01:08:01):
I guess the best way to think of it is, I'm a little bit of both. I can be a sounding board, a creative collaborator, maybe even a devil's advocate sometimes. But ultimately it's less about me defining who I am, and more about what we create together, you know.

Alex Kotran (aiEDU) (01:08:22):
So you spent 20 minutes talking to Maya. I mean, how close are we? Is this like a year away?

Gavin Purcell (01:08:30):
Well, I guess it depends on what it is, right? Like, as you hear Maya, you hear the echoes of Her, the movie Her, right? Which is the thing that Sam Altman invoked when Advanced Voice came out. And Her was a voice assistant that the character falls in love with, and it's because it is illustrating personality, and

(01:08:52):
it's coming across. When you ask, like, how far away are we, I guess my question would be, like, to what? Because I think we're already there at a voice interaction that will feel to many people like, um, if not a human, then like an entity, let's put it that way. And whether it's an entity or not, I believe those are two entirely

(01:09:13):
different things. I think most people would say it is not an entity, it is a computer, it is acting like we wanna hear, but that is pretty close already.
So then the question is, like, what is this like in a year or two? The things that it will start to get are memory; it will start to have context, it will start to know where you are physically

(01:09:34):
. It will start to understand the things that you've told it and how things react one way or the other. It may even start talking to other versions of itself that are connected to other people, and then bring you information back. That's the thing that is, like, the connective-tissue part of it all. The starting point is pretty good already, right? Like I

(01:09:55):
just said. But the thing that's a year or two away is probably where this is going. Like, you know, did you ever read the Golden Compass books? Do you know the Philip Pullman books at all? There's these great kids' books, uh, and they're really well written. But one of the things is, every kid has, like, an animal that,

(01:10:16):
like, they're born with, that's, like, part of their, their spirit animal. Essentially, there's a word for it I can't remember right now, but it's a spirit animal. I kind of think in the future every person could conceivably have their AI, like, and spirit animal is the wrong word, but, like, their AI thing, their AI personality, and, like, it almost becomes a weird part of them. Does that make sense to you?

Alex Kotran (aiEDU) (01:10:38):
It totally makes sense, especially when I think about, you know, a phenomenon that I think most people can relate to, which is, my phone has become, you know, almost an extension of me. I lose things all the time, forget my wallet. You know, I've left stuff on planes. I just left my pair of glasses on a plane. Um, I've never lost my phone, not once. Um, yeah, because it's, you

(01:11:00):
know, it's almost like a pair of shoes. Like, I just, like, wouldn't, you know, fathom leaving a room without it. Um, and yeah, I guess if I was wearing a pair, if my glasses had a little speaker, and I guess that's, like, the Meta glasses actually already have this, right? Yeah. I can imagine myself on a long road trip, or if I'm driving to a meeting and I want to prep for the meeting, having someone that I could

(01:11:22):
literally, you know, walk through my talking points with. Um, exactly. And that's, you know, I'm an adult who has a job. But if I was a kid, it would probably be something like, I want to nerd out about, you know, some movie that I just watched. And this is, I think, maybe where you were sort of going when you talked about

(01:11:45):
audio and voice: you've been thinking about it through the fiction lens, as a way to extend fictional worlds and universes.

Gavin Purcell (01:11:58):
Yeah, I'm really trying to think about what it looks like to interact with characters, because I think a lot of these companies that are making these kinds of assistants are really making the most helpful version, where this assistant could just help you do everything. What I'm really interested in, and this goes to, like, probably my personality, my creative stuff,

(01:12:18):
is, and if you see this video that I made, it's, like, me interacting as a character. I'm really interested to think about, like, what does it feel like when we start interacting with narratives in slightly different ways? Like, you know, obviously movie narratives are pretty straightforward; they're cut up in different ways, but there's a beginning, a middle and an end. Video games are a little bit different.

(01:12:39):
Right? In video games there's, like, this kind of idea of an open-world game, where you kind of explore out and you might meet a bunch of different characters. Like, I'm really interested in that kind of world with AI, because the thing about AI, especially from the audio standpoint, is that it can surprise you. AIs can really surprise you. On our show, for a long time, and we still do this, we don't do it every week now, but we used to every week, we interview what we call an AI co-host, and what we were doing

(01:13:02):
is setting up an audio AI based on a personality. One time it was a PR woman that had brought on a beverage that she was trying to slip into our show. Another time it was the third brother of the Kelce brothers, that we invented. And each time we're having this AI that, like, is surprising us, because we don't know what he is going to say, or they are going

(01:13:23):
to say. And that is almost, like, it's not like improv, I guess it is a little bit, but more it's like digging into a different sort of storytelling. Like, you kind of give it a direction to go, and then you see what comes out of it. And what's fascinating about this: you know, we've been doing that AI co-host thing kind of from the very beginning of this stuff. It's gotten better, right?

(01:13:44):
It gets way better at thinking about this stuff over time. In some ways it's a little bit like improv, right? It's like you're acting with this thing, and you're not sure what's going to come out of it, and that makes it interesting. But the other thing is, it makes for a bespoke experience for anybody else having it, right? So if we create this character, this AI co-host, or even, like,

(01:14:05):
Maya and Miles, you're going to have a different experience with that character than I am, because I'm going to say different things to it that are going to interact and make it come back with different things on its own. So each time, even if you've tuned the thing to be a certain way, each experience is really unique, and so, like, it allows the person who's having the experience to kind of feel like

(01:14:26):
part of it, in a way that's different from a passive experience. Now, some people might be like, I don't want that, I just want to be sitting through a passive experience and have that be a thing. But I argue that, as people get used to this idea of not only talking to your AIs but even, like, vibe coding with your AI, or all these different ways of changing your experience by

(01:14:46):
interacting with this AI, I think people will start to find more interesting ways to kind of narratively open the door for that sort of stuff. So I bet on that, right? Like, I think that we're going to see more of that ultimately, and you never know, like, what sorts of storytelling mechanisms the next generations are going to make. But, like, that to me feels like a place that I could see this

(01:15:08):
stuff going.

Alex Kotran (aiEDU) (01:15:14):
Yeah, and I think that what you bring to the table, um, as someone who's a creative, um, is precisely that special sauce that, you know, the technologists in Silicon Valley just probably won't be able to figure out on their own. It's sort of, like, that missing ingredient, almost, in the concoction that's going to take this from, you know, this Maya or Miles, um, to something that, you know,

(01:15:36):
audiences are just going to really connect with. And one of the things that you actually mentioned earlier was, sort of, you know, the democratization of, uh, technology and tools for independent creators. Um, and so what I'm kind of intuiting from what you're saying is this combination of, you know, giving creators this additional dimension that they can use to engage with audiences

(01:15:57):
and provide these, like, very unique experiences that, when melded with their creative vision or their storytelling, or a piece of fiction that, you know, they're trying to figure out how to extend to audiences, could be, um, you know, maybe it'll be something like video games, where we're seeing, and I think to your point, uh, there was

(01:16:17):
this huge shift: the most popular video games now are all these, like, open worlds where players have a lot more creative agency than, um, than we used to in the past. I think when I was in high school it was all just, like, Halo and, uh, yeah, Call of Duty. So how, I mean, because, you know, I'm sure there's a lot of parents who are listening to this, and one of the things that, you know, often will happen after I give a presentation,

(01:16:38):
I'll, you know, even, you know, if it's educators or superintendents or whoever it is in the audience, they'll come up to me, but they've put their parent hat on, and they'll ask me this question of, my son or daughter wants to do X, wants to go into accounting or law, whatever it is, and I'm seeing all this stuff that you're talking about in terms of AI, like, what should I be telling my kids? I'm curious, like, what is the dinner-table conversation about

(01:16:59):
artificial intelligence, like, at your household?

Gavin Purcell (01:17:02):
Well, first of all, nobody wants to talk about it, because I can't shut up about it. So, like, they're all like, Dad, no more. But that's not exactly... I mean, I thought two things were really interesting about this for me. I showed both of my daughters ChatGPT as a way for them to kind of help better understand their homework, and both of them use it regularly now, right? This is, I think, something most people should understand about the education space. Like, my daughter, it's really my

(01:17:23):
younger daughter, has gotten way better at math with it. Like, she uses it, she'll take a picture of her homework, and, I've watched her do this, she's not trying to get the answers from it, but she uses it to explain to her, like, how to better do this. And, like, this is somebody who wasn't, like, a math expert, but is now getting straight A's in advanced math. So, like, it's a real legitimate thing. When you think about, like,

(01:17:44):
what they should go into, this is always a tricky conversation, right? I mean, I don't know what your experience in college was like, but I got a creative writing degree, and somehow that worked out, right? I didn't think I was going to end up working in TV. Like, it's not like a TV degree to go work in TV, but I ended up being in it, and that helped me in a lot of ways. I think, whatever you can do... you know, there's a lot of debate

(01:18:07):
right now, like, follow your passion or don't follow your passion. Like, my thing is, whatever they're curious about and interested in, whatever feels like it's going to be something that really kind of resonates with them internally, that's all I care about at this point. Like, I'm not a person that's ever pushed my kid to be, like, you've got to get into Harvard, you've got to get into Yale. I mean,

(01:18:28):
she's gotten good grades and done stuff, but, like, I believe that's pretty damaging for some kids. And sure, getting into Harvard is amazing; if you get into Harvard and there's, like, you know, 500 people there that you're going to know forever, that's incredible. But the whole culture around that seems like it's broken to me. And I think AI is going to probably disrupt a lot. Like, I think probably most people in the world are

(01:18:50):
underestimating the disruption of work. So the thing that I keep telling people is, be very curious, try all this stuff and have an awareness of these AI things, but, like, really just find what you care about, right? Because, to me, that's not "follow your passion" as much, because passion is, like, okay, like, you know, maybe

(01:19:11):
you really want to be a, I don't know, an underwater filmmaker. Great, you can do that as a hobby. But, like, what sort of work do you actually care about, and what kind of makes you feel like you're doing something meaningful? Um, in some ways, that's the thing I think you have to find, because there's a world where, and this is going to sound like science fiction, but ten years from now, that might be all you

(01:19:32):
have to do. Maybe there's a world in the future where we just don't have the same sort of scenarios that we do now, and you just get to do that thing. So get to be really good at that thing, and really dive in on it.

Alex Kotran (aiEDU) (01:19:52):
That seems like a cop-out, and it is kind of "follow your passion," but, like, that's the advice I give to people. Yeah, I think, um, I think it is folly to try to game the system and to say, well, you know, let's just figure out what are the jobs that AI is going to replace, and sort of navigate around them.

Gavin Purcell (01:20:01):
It's impossible to know. It's impossible, we don't know. Um, and no, I mean, to that point: ten years ago, coders were going to be the thing. That was the job you could go into, the best possible future-proof job was being a coder, and I would argue now, like, that's kind of flipped, which is crazy.

Alex Kotran (aiEDU) (01:20:19):
Yeah, yeah. So I think that's, like, maybe a good way to encapsulate the advice: you know, when you and I were thinking about, you know, what to major in at college, it was anchored really on, sort of, like, the career pathway.

Maya (AI) (01:20:33):
Yes.

Alex Kotran (aiEDU) (01:20:33):
Like, what is the career pathway that's going to lead to a lucrative job? And so a lot of people, I think, pursued things like law or computer science because there were sort of just, like, dollars at the end of that. Whereas now, you know, the bet to make is, you know, you need to go into something that is going to be exciting enough that you're going to be driven

(01:20:53):
and motivated to do things like explore,

Maya (AI) (01:20:56):
How could?

Alex Kotran (aiEDU) (01:20:57):
AI help me do X, Y, Z. And I think that's much, because, you know, design maybe is an example of something that could be really disrupted by artificial intelligence, but we're still going to need human designers. And the question is, who are the best human designers going to be? They're going to be people who really enjoy the work, and those are the folks who are going to be managing these perhaps very

(01:21:19):
expensive agents. How do you... So, there's the jobs question. You know, we showed a few examples of some of these AI-generated, um, you know, video characters. Um, we just, you know, talked to, uh, Maya and Miles. Um, there's an elephant, I think, in the room,

(01:21:43):
which is, um, you know, how do we, what do we make of these AI companions, and how do we feel about a world which is no longer science fiction? A world, you talked about Her. You know, some of the most popular, I think the third most popular AI tool, this is as of last year, so I don't know if that's changed, it's a company called Character.ai. They have, like, two and a half million daily active users,

(01:22:05):
maybe three million daily active users. The average user on Character is spending two hours a day talking to these chatbots, and they're, like, sort of AI characters, and that's just chat, right? That's not even terribly compelling. Should parents, or just people, right, be worried about a near future or even a

(01:22:44):
present-day reality of, you know,

Gavin Purcell (01:22:44):
kids becoming, you know, less and less connected with humans, and more and more connected with these really compelling and engrossing, you know, AI characters and friends, and maybe even, yeah, boyfriends, girlfriends? I mean, the answer is yes. But also, as a kid who grew up with video games, obsessed with video games, and who still, as an adult, does play video games, I heard the same thing about video games in the early-to-mid '80s, right? Like, there was this idea that video, and a little bit like TV, right, and I know these are different conversations, I'll come back to that in a second, but, like, all of these kind of technology shifts that the

(01:23:08):
people in the generations above saw happen, and that kids kind of went all in on, were going to ruin them in some ways. And, you know, maybe the most current one is, like, you know, TikTok or YouTube for kids, and you'll see a kid at a restaurant, a three-year-old, on a YouTube iPad, and, like, I'm kind of... But, like, ultimately I was on video

(01:23:28):
games and all sorts of things like that back then, in different ways. Obviously, the difference here now is that there's a persistence to it, and I think that the YouTube example is one part of the persistence. Whereas, like, I had to be in front of a TV to play the video games, and now the kid is kind of everywhere with this iPad. The AI, especially a companion, if you have your phone or you even have it in your headphones, becomes persistent all the time,

(01:23:49):
conceivably. So, the thing that I worry about, and I don't see this in my kids, and I don't know if it's because they don't have the, like, tech gene in the same way that I did, like, neither of my daughters is really, like, a hardcore, spend-all-their-time-in-tech-world person. They're both pretty big TikTok users and a lot of stuff like that, but they don't, like, get lost in things like video games

(01:24:11):
or things like that. I do, and I kind of have that bug in me, and I always have. I worry that, like, you know, we're kind of starting to have this world where the AI personalities won't be as annoying as the real personalities are, so it'll start to feel more comfortable to be around the AI

(01:24:36):
personality, and you won't have to grow as much. And, like, the hard part of life is going through stuff, right? It's, like, kind of walking through stuff. So the thing that worries me the most is this idea that people will kind of retreat into this world of, I only interact with all these different things. And then the other side of that is, I don't think people will want that ultimately.

(01:24:56):
I think there's a loneliness angle to that experience that it probably won't fulfill, but we don't know, right? That is a real worry with this stuff: how close you get to a real personality. If that personality is designed to be, like, as close to your best match as possible, well, are they going to be
(01:25:17):
better than the chat you had with your wife when she was annoyed with you, or your daughter was annoyed with you because you did a certain thing? Probably. But is that real? No, you know? Like, this is the balance, right? I think that's a really weird place that we're kind of entering soon.

Alex Kotran (aiEDU) (01:25:33):
You know, the anxiety that leads up to asking somebody out to prom you learn a lot from that, and frankly, you probably learn the most from the experience of being rejected. My concern about chatbots or AI companions is, I think, almost similar to the concern about cheating and ChatGPT. And I think this is where sometimes

(01:25:56):
people are surprised, because my perspective on cheating is that it actually is a really big deal. It's not that teachers are Luddites who need to just get with the program. I think teachers who are worried about cheating and ChatGPT or language models really, but primarily ChatGPT are coming from this notion that part of the project of education is productive struggle: staring at that blank page. You're a creative

(01:26:19):
writer.
I also was, you know. I majored in political science, and essentially all my classes were just lots of writing. The hardest part I found was staring at a blank piece of paper, and most of the work that would go into writing anything was figuring out how the first couple of sentences would go. Once I had those sentences down, the rest kind of flowed out.

Gavin Purcell (01:26:41):
Your brain is doing the work during that point, right? Like you're kind of, in the background, almost doing some work in your brain.

Alex Kotran (aiEDU) (01:26:48):
And I worry about if people's instincts become: well, you just get past that by immediately going to the AI, and then it suddenly gives you a first draft or gives you a structure or something. I feel like it's like going to the gym where the weights you're lifting never get heavier you never actually progress to, like, the heavier weights

(01:27:09):
So what sort of parting advice do you have? Because I think this is relevant to pretty much anybody, whether you're a parent, a kid, a student, a teacher. How do we? Actually, there are a few surveys of Gen Z, and one of the surveys asked: if

(01:27:31):
all of your peers didn't have access to social media, would you be willing to give social media up? And I think the majority of Gen Z said yes. Nobody wants to be the one kid that doesn't have Instagram, but if nobody had Instagram, if nobody had TikTok... And I think folks are surprised by that. So even the kids, right, they are

(01:27:54):
intuiting that there's something wrong with all of the time that they're spending in these virtual worlds. And yet here we are. You're sort of a power user, and you're not telling people that we need to be afraid of the AI per se. But how do we approach this carefully? Is it just having conversations with real people

(01:28:17):
about it and making sure that people are sort of connected? I mean, I'm trying to distill this into some actionable steps that someone can take.

Gavin Purcell (01:28:31):
No, it's really interesting. I mean, I'm not an education expert; I'm not a teacher by background. I've taught briefly in some places, but you have many other people for that. The thing I keep coming back to with education is that there's going to have to be much more stuff that just happens in class, and we'll have to figure out ways for people to think through arguments and all that sort of stuff, and how to teach all that. It's not going to work in the same way where you used to be able to write a paper and learn it yourself. It's going to

(01:28:51):
have to almost become more oral again, I think like speaking, oral. But advice-wise, I think there isn't a lot of advice when some new technology comes out in the beginning stages, because so many people are just trying to figure out how it's going to impact us. And part of the problem lately and I would say social media is a pretty good example of this is that technology is now moving so fast, and you said this earlier, that

(01:29:14):
society kind of moves pretty slowly behind it. So in part this doesn't sound like a great answer, but part of it is you look at the effect it's having, and then you have to tweak based on that after the fact. So I'm hoping, as we go through this technological shift, society will recognize some of the effects it's having and

(01:29:34):
tweak based on that, because I think right now it may be really hard to know what the impact of this is going to be going forward. In the same way that, as teachers know, my daughter's high school she's a senior this year next year is going to implement a fully phone-free school; the entire school is going to be

(01:29:57):
phone free. Right now it's like phone-free Thursday, where you can't have it in any class, but next year they're not going to allow you to bring them into the school at all, and that's a reaction, right? Clearly they saw at that school that it wasn't allowing the kids to do the things they needed to get done. So I think the same thing will probably happen with these AI tools. I think the larger question is, you know, phones and

(01:30:19):
social media have affected us as a society, and I don't know what that means for our society now, because we're so kind of early in the stage of this. It's a really tricky thing to know.

Alex Kotran (aiEDU) (01:30:31):
Yeah, I had another guest on, John Marble, who wrote the book Neurodiversity For Dummies, and he had this quote which has been stuck in my head, attributed to Marie Curie, the pioneering physicist and Nobel laureate known for her work on radioactivity: "Nothing in life is to be feared, it is only to be understood."

Gavin Purcell (01:30:54):
Yeah, that's great, I love that.

Alex Kotran (aiEDU) (01:30:56):
I think there's a lot to that. I mean, there's a lot of stuff that feels really scary, but the scariest aspect of all of this, I think, is a world where it's happening to people and they just don't even realize that it's happening. I think the one actionable step is just to try

(01:31:16):
to stay on top of it. And perhaps, you know, my invitation as the founder of a nonprofit whose mission is to build awareness and understanding of artificial intelligence is this: everybody's on this learning journey. We almost all got to the same starting point, which is like two years ago. So even if this is literally your first

(01:31:39):
time delving into a lot of these topics, now is as good a time as any to start. You know, just being curious. And just to close things out: besides, obviously, checking out AI For Humans, which I think is really the best place to just keep tabs, because you're doing a lot of the hard work of sifting through the message

(01:32:01):
boards and Reddit and Discord.

Gavin Purcell (01:32:04):
And we try to make it understandable for the average person too, right? Like, we're not hardcore "you won't understand it." We do try to explain terms and things like that as well.

Alex Kotran (aiEDU) (01:32:14):
What I like is you don't dumb it down so much that I feel like I'm, you know, almost being patronized. It's just the right balance.

Gavin Purcell (01:32:22):
Yeah, you know, a little bit technical.

Alex Kotran (aiEDU) (01:32:25):
So, to close things out: what are some of the things that you haven't necessarily done a show about yet that you're paying attention to? What's just top of mind right now that you're obsessed with?

Gavin Purcell (01:32:37):
So, top of mind right now... this is the nerdiest thing we're ever going to talk about. There's a new thing from Anthropic or it's been around from Anthropic for a couple months that is like the talking point in the tech world, which is called hold on, it's the MCP. Have you heard about this yet? Hold on, let me just make sure I

(01:32:58):
understand it. Anthropic's is the Model Context Protocol.

Alex Kotran (aiEDU) (01:33:02):
Model Context Protocol, that's what it is.

Gavin Purcell (01:33:03):
Yeah. So there's a thing right now that is very much in the deep nerd weeds, which is Anthropic's Model Context Protocol. It actually has to do with how the AIs will start talking to large groups of data, and this is kind of like building the blocks for the future of interactions between AIs and other websites. The reason I say that is, like, a couple years ago

(01:33:26):
I never would have thought that something called "model context protocol" would be something I'd be interested in. But because it has to do with how AIs talk to each other, that is kind of interesting. And so I'm getting kind of obsessed with what might have been overly technical stuff before and now isn't as much to me. This is just kind of one of the doors that's open to me in paying attention to all this AI stuff.
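For the curious, MCP frames these AI-to-tool conversations as JSON-RPC 2.0 messages: a client asks a server which tools it exposes, then invokes one. A minimal sketch of the message shape (the `tools/list` and `tools/call` method names follow Anthropic's published spec; the `search_docs` tool and its arguments are hypothetical):

```python
import json

def make_request(req_id, method, params=None):
    """Build a JSON-RPC 2.0 request string, the framing MCP uses."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# A client asking an MCP server what tools it offers...
list_tools = make_request(1, "tools/list")

# ...and invoking one. "search_docs" is a made-up tool name for illustration.
call_tool = make_request(2, "tools/call", {
    "name": "search_docs",
    "arguments": {"query": "model context protocol"},
})

print(list_tools)
print(call_tool)
```

Real clients and servers also negotiate capabilities with an initial handshake and carry these messages over stdio or HTTP; this sketch only shows the request shape, not the transport.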

(01:33:47):
The other thing that I'm really interested in and I think this is a hard thing to quantify is the speed at which the rest of this year moves. Because when you look at the way the world of AI is moving, I'm kind of interested to just find out what's going to be coming next from all these major companies. Meta's got a new

(01:34:11):
model that's going to be coming. OpenAI supposedly has GPT-5, which is an integrated reasoning model. The question that's almost sitting in my brain is: are we going to be entering a period where everybody's going to just be shocked? Or is the next stage just going to be a tick up?

(01:34:32):
Does that make sense? Like, yeah, it's not a specific thing that's gnawing at me right now, but the future feels so unsettled. I'm having a hard time envisioning what it looks like.

Alex Kotran (aiEDU) (01:34:52):
And I think by the end of this year we may get a better sense of that. Yeah, I think that's a great way to close. You're either in one of two camps. Maybe, like you or myself, you're already sort of convinced that we are in this new age of artificial intelligence, and then the imperative is to figure it all out and make sense of how it's going to impact you, your job, your life, and society. Or you're in the other camp and you're maybe still a skeptic,

(01:35:14):
and I think it's completely legitimate to still be sitting in that space. As we talked about, there have been a lot of technology fads that came and went.

Gavin Purcell (01:35:25):
I think there are a lot of problems with this. By the way, you may not even just be a skeptic, you might be a hater, and that's okay too, because even if you're not a skeptic and you believe in it, there are legitimate problems with this technology, as we also talked about.

Alex Kotran (aiEDU) (01:35:38):
But if you're a skeptic, I think you owe it to yourself to interrogate that, and either validate that skepticism there might be things that fail to materialize over the next year, demonstrating that maybe this is not happening as fast as some people are saying. But I don't think that you can just use your

(01:35:59):
skepticism as an excuse to not pay attention, because the next year will probably answer the question one way or the other.

Gavin Purcell (01:36:08):
New questions will arise, though. Like, what do we do? What happens if we end up merging with AI?

Maya (AI) (01:36:16):
What does that world look like?

Gavin Purcell (01:36:18):
What if suddenly we're all going to become a new species? Those are the sort of weird new questions that could arise.

Alex Kotran (aiEDU) (01:36:27):
Yeah, very interesting topics, perhaps for a future conversation, which I'll be looking forward to. Gavin Purcell, thank you so much for joining me.

Gavin Purcell (01:36:31):
Absolutely, it's always fun. Yeah, absolutely.