Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Michaela (00:00):
And as a filmmaker and a director, having these AI
(00:02):
tools has allowed me to tell stories using the art of animation, which is a very expensive medium.
It's allowed me to still use my high level creative art directors, my high level animators, my high level score composers, sound designers, but empower them to tell a bigger,
(00:23):
longer story for less of a budget, because they are assisted by these AI tools. They're not replaced, but they're assisted.
Brooke (00:31):
Welcome to How I AI, the podcast featuring real people, real stories, and real AI in action. I'm Brooke Gramer, your host and guide on this journey into the real-world impact of artificial intelligence. For over 15 years, I've worked in creative marketing, events, and business strategy, wearing all the hats.
(00:52):
I know the struggle of trying to scale and manage all things without burning out, but here's the game changer: AI. This isn't just a podcast. How I AI is a community, a space where curious minds like you come together and share ideas, and I'll also bring you exclusive discounts and insider resources, because AI isn't just
(01:15):
a trend, it's a shift, and the sooner we embrace it, the more freedom, creativity, and opportunities will unlock. Have you just started exploring AI and feel a bit overwhelmed? Don't worry, I've got you. Jump on a quick start audit call with me so you can walk away with a clear and personalized plan to move forward with more
(01:36):
confidence and ease. Join my community of AI adopters like yourself. Plus, grab my free resources, including the AI Get Started Guide, or try my How I AI companion GPT. It pulls insights from my guest interviews along with global reports, so you can stay ahead of the curve. Follow the link in the description below to get
(01:57):
started.
Storytelling in creative spaces is evolving faster than ever, and today's guest has been right in front of that wave her entire career. Michaela Ternasky-Holland is an Emmy award-winning director whose work blends cutting-edge technology with timeless human stories. From pioneering one of the first films ever created with OpenAI's Sora
(02:19):
to designing interactive installations that reimagine Filipino American history, her projects don't just entertain. They spark dialogue and transformation.
In this episode, Michaela shares how she approaches new technology with intention, why she believes AI should enhance creativity rather than
(02:39):
replace it, and what it takes to craft immersive experiences that stay with people long after they leave the screen. If you're curious about the future of storytelling and how to wield AI without losing the human touch, this conversation will inspire you to think bigger about what's possible.
Alright, let's dive into today's episode. Hello everyone, and
(03:01):
welcome to another episode of How I AI. I'm your host, Brooke Gramer. This week's guest is pioneering the future of storytelling with immersive tech. Welcome, Michaela. So happy to have you.
Michaela (03:15):
Hi, Brooke.
Thank you so much for having me.
I'm so excited to be here.
Brooke (03:19):
Absolutely. And before we get started, please, I'd love for you to share with listeners today a little bit about yourself and how you ended up where you are now.
Michaela (03:29):
That's a great question.
So my origin story is really non-traditional. I actually went to school for journalism, but I dropped out a few semesters in so I could pursue my career dancing and performing on Disney Cruise Line, and then came back to finish my degree and continued to perform at theme parks in Southern California. During this time, I was really in awe of the power of immersive
(03:51):
and interactive storytelling for the audience to have a really beautiful experience, and even a transformative experience. And so I decided to apply some of these immersive, interactive techniques to the format of journalism and nonfiction storytelling. And so I went on to work with Time Magazine, People Magazine, Sports Illustrated.
(04:11):
From there I went on to work in social impact for the United Nations, Games for Change, the Nobel Peace Center. And then even from there I was like, I love social impact, I love journalism. What if we just expand this to full narrative now? And so I did work with Meta, and I did some consulting for some nonprofits around how to tell their stories from a more fictionalized standpoint, and now I have continued that process to lead
(04:36):
me to the use of generative AI. It's been a very natural part of my career. I think I've seen many hype cycles of technology come and go. I've seen use case scenarios that have been really powerful for emerging technology. I've seen use case scenarios that feel a little bit more like, we're just gonna show off the technology, but we're not gonna really think about the audience, or we're not gonna
(04:56):
really think about the impact that we wanna have from a storytelling perspective. And so what I really like to do with my work and my artistic practice is to take these emerging technologies, or to take familiar technologies, and really think about how we can tell a really strong story for the audience, and how we can create a really strong impact or experience for the audience as
(05:17):
well.
Brooke (05:18):
Wow.
I want you to take me back to when you saw that surge and explosiveness into AI and generative AI. When did that shift start? When did you see the industry changing a bit and shifting more into this digital landscape?
Michaela (05:35):
Yeah, I mean, I remember in 2022 having lunch with my uncle, who works sort of in the cutting edge of Silicon Valley doing different types of things, and he's just a big nerd. He's one of the original Silicon Valley, eighties, nineties kind of folk. And he was telling me, like, you know, Michaela, you should really check out what they're doing right now with AI.
(05:56):
It's really powerful. I had done some algorithmic AI, had done some machine learning projects, but nothing that was generative, right? And so he was explaining to me what generative AI was. And I remember taking note of it, being like, that's so fascinating. That's so interesting. Some of the early Stable Diffusion models and what they were doing, he was reading some of the research papers, but it
(06:17):
wasn't until 2023, when my really good colleague and collaborator, Aaron Santiago, premiered a project called Taltamanster at Venice Festival, which actually took your answers and created a script and a 360 world, all in real time, that generative AI really started to come into the art scene for me and my community in the digital
(06:39):
artist community. And so I remember in 2023 thinking, okay, generative AI is here. It's not just this thing that researchers are doing. It's not just this thing that Silicon Valley, you know, nerds are doing; in a way, it's actually being used now for creativity and artistry. And I remember thinking, I won't just use this for the sake of using it.
I think it was around late 2023, 2024 that ChatGPT really came
(07:02):
online. I wasn't just using ChatGPT every day for my life or my personal or my professional tasks. I was really wanting to be specifically intentional and conscientious of how I used the generative AI technology, both in my personal life as well as in my professional life.
Brooke (07:19):
Cool. And so when you started playing around with creativity and artistry and mapping and merging these tools and software systems, what did you start to lean into first? What was your toolbox that you're playing around in?
Michaela (07:34):
Yeah. Great question. So, I mentioned Aaron Santiago before on this podcast. I have to give him his flowers. He and I came together as Filipino American creative technologists and artists, and asked ourselves, what kind of story do we wanna tell? And I remember being very clear with Aaron. I was like, look, I know you're the generative AI guy. I know everybody comes to you for generative AI.
(07:56):
I'm not interested in doing it unless it makes sense for the project we wanna do. And so we really thought about the story of how we wanted to portray the Filipino American experience. And something we kept coming back to was this loss of culture, this loss of connection, from being both Filipino, which was a culture that was colonized by the Spanish, but also being American, and having sort of both these big disconnects.
(08:20):
And we came to this idea of, well, what if we showed Filipinos throughout American history? 'Cause we know Filipinos were here in America through the forties and the fifties and the sixties. What if we also showed what Filipino people could look like pre-contact with Spanish people? And we started saying, okay, cool. This is a really interesting way of talking about speculative past, speculative future, speculative present.
(08:43):
And one good way we could do that is using generative AI as a metaphor for memory, as this metaphor for something that we know exists, but we actually don't have any real quote unquote human, quote unquote viable, quote unquote verified data for. So we can use AI to fill in these cultural sort of gaps that
(09:04):
we have as Filipino Americans, and we decided to pair that with social media videos of Filipino Americans making fun of themselves being Filipino, or talking about being Filipino. And so it became really this dialogue between the present-day zeitgeist of people online reconnecting to some of these generative pieces that we were doing with Stable Diffusion at
(09:25):
the time. So that was really my first big creative foray into generative AI with my collaborator, Aaron Santiago. And we've continued to think about ways, not just that we make quote unquote a film, which is one part of the practice I have, but actually making an interaction, or making an immersiveness, or making something that audiences engage
(09:45):
with, and the ability that generative AI has to make that special.
Brooke (09:51):
Wow. And I know in our initial call you talked a lot about Sora. What were other tools, and how did you bring this to life, exactly? Did you have a team? Were you working by yourself with the creative direction and the storytelling and mapping out the design? Did you have an engineer at hand?
(10:12):
Take me through the process of bringing this to a reality and how you had that come to fruition.
Michaela (10:18):
So it's slightly different for each project. But to give the audience a rundown, I've done three interactive installations that utilize generative AI flows and generative AI technology. I've also made three generative AI-assisted animated films. And so when I'm staffing for either the installations or the
(10:41):
films, it's two very different teams. So with Kapwa, it was really me and Aaron and a score composer. And we really created this gorgeous installation, which was about Filipino American erasure. And we called it Kapwa. And for the piece, Stable Diffusion was the generative AI tool that Aaron and I really used.
(11:02):
For our interactivity, we utilized a depth camera and TouchDesigner, which is a really awesome, almost QLab-like tool for independent installation creators and independent theater makers. And then I did all the editing, from an audio perspective, video editing. Aaron did a lot of the TouchDesigner work.
(11:23):
It wasn't exactly engineering, but he basically worked in TouchDesigner and built the mechanism of the installation. Mm-hmm. And then our biggest gap as collaborators was music. And we wanted it to be original music from a Filipino American score composer. So our third teammate in that was that score composer. Her name's Anna Luisa Patrisco. So that was Kapwa.
(11:43):
Now, every time we scale one of these installations, or every time we do a new installation, it becomes bigger and better. So our second installation was called Morninglight, which was an installation that utilized generative AI to read your tea leaves. And so we worked with CHICHA San Chen, which was a tea brand. We had a black box space that we were able to convert using
(12:04):
gorgeous projections. We had a VFX designer create projections based on tea fields in Taiwan and a mountainous kind of environment. So none of those were generated. The generative was really the tea leaf reading. Then we worked with a score composer for that one as well, who's also a sound designer. And so for the backend of the generative AI tea leaf reading,
(12:25):
we were using ElevenLabs, Hugging Face, and our own custom GPT. So ElevenLabs gave us the ability to have multiple voiceovers be the quote unquote oracle, which is the being that's giving you your reading. So we could have a more female-sounding oracle, a more male-sounding oracle, and then we would also use Hugging Face to copy, basically, what was coming out of the custom GPT as the
(12:47):
astrological tea leaf reading. Then we would use the ElevenLabs voiceover so that Hugging Face could copy the words and the voice, and have you hear the voice say those words. So that's really the high-level backend system.
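The backend described here — a custom GPT writing the reading, a TTS voice speaking it — can be pictured as a simple pipeline. The sketch below is a minimal stand-in, not the installation's actual code: every function name is a hypothetical placeholder, and each stage is stubbed rather than calling the real ElevenLabs or Hugging Face APIs.

```python
# Hypothetical sketch of a tea-leaf-reading pipeline, with each stage stubbed.

def describe_leaves(image_label: str) -> str:
    """Stand-in for the vision step: turn the camera's view of the cup
    into a text description the language model can work with."""
    return f"tea leaves forming a {image_label} pattern at the bottom of the cup"

def generate_reading(leaf_description: str, persona: str) -> str:
    """Stand-in for the custom GPT: produce a short bespoke reading.
    A real system would prompt a language model with tasseography context."""
    return (f"The {leaf_description} suggests a season of change. "
            f"(spoken by the {persona} oracle)")

def synthesize_voice(text: str, voice: str) -> dict:
    """Stand-in for the TTS step (ElevenLabs in the installation):
    pair the generated words with a chosen oracle voice."""
    return {"voice": voice, "words": text}

def tea_leaf_oracle(image_label: str, voice: str = "female") -> dict:
    """Run the whole chain: see the leaves, write the reading, speak it."""
    reading = generate_reading(describe_leaves(image_label), voice)
    return synthesize_voice(reading, voice)
```

In the real installation, each stub would be replaced by an API call, with TouchDesigner driving the visuals and audio playback around it.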
There was also a whole other backend system for the music, because the music was specific to the time of day. It was specific to what was happening in the actual
(13:08):
installation. And all of that was being run visually and audibly via TouchDesigner. So every time we do an installation, it's different. It changes. The tools tend to be very open source and closed source, because we are creating things where certain tools have those quick out-of-the-box mechanisms, and then other tools
(13:30):
we need to build ourselves. There's not really a custom GPT out there that can give bespoke 20-to-30-second tea leaf readings based on each chain and based on tasseography and based on astrology. So we trained our own custom GPT to do that. I think there's something very similar with the film part of that, and I can go into the film part of that, if that's interesting. But the team looks very different from a film side than
(13:52):
it does from an installation side.
Brooke (13:55):
Before we jump into the film side, staying on the interactive exhibits: first of all, I wanna attend one of these. This just sounds so incredible to witness. But my first initial thought, just as someone who has a marketing background and has worked in production and events before: when you have an idea or are crafting these
(14:18):
projects, are you approached by brands with a budget for funding? Are you raising your own money to create these installations? How does that work? Is it always surrounding a specific event? Or walk me through that process.
Michaela (14:30):
Yeah, so Aaron and I work as independent artists. I think both of us are open to commercial work, but we have yet to find a brand that I think wants to do something like this with us. So the first project I mentioned, Kapwa, was funded by Arizona State University through a residency program they had associated with the conference Worlds in Play. And then the second project I mentioned, which was
(14:52):
Morninglight, was a part of an inaugural tea festival. You could think of our client as WSA. Water Street here in New York City is doing cutting-edge technology and art, and is sort of becoming a scene for influencers and tastemakers. And so their resident person, her name is Karen Wong,
(15:13):
curates and creates a lot of the shows at WSA for their nonprofit, which is WSP, Water Street Projects. And so our client was basically Water Street Projects, but instead of being treated like a vendor or a production company, we were technically considered commissioned artists.
(15:34):
So we were artists that were commissioned to do this work alongside this tea vendor. And we were also told that the aspects of our quote unquote tea house that we were making were around immersive design and emerging technology. And so we sort of took, I guess you could call them, the aspects of the RFP, and we created the installation.
The third installation I'm currently working on, it's in
(15:54):
development, is called The Great Debate, and it's all about having your classic famous LLMs live debate as political candidates. And you, the audience, are a debate moderator wanting to hear the sides of the candidates' story. So you get to actually create the candidate, generate the candidate, select the candidate's leanings, and then listen to the three candidates live debate each other in
(16:16):
real time. And that's still a work in progress. But, for example, a project like that right now is just in early-stage development, so I'm currently looking to raise money or to find a funder, to find a brand that would wanna work with me to continue the development of the installation.
Brooke (16:31):
Wow, that's so creative. What a fun industry you're in right now, and I'm surprised you haven't had any major brands reach out to you. I'm sure it's only a matter of time. I know in New York City, they just have really cool product launches and interactive spaces all the time. So I'm really interested to see what happens for you in 2026 and
(16:53):
the future of that industry when it comes to generative AI and interactive exhibitions. I wanna use this as a little bit of a learning moment, in your own words, because maybe someone doesn't even know what the term generative AI means, for starters. And then secondly, what do, in your own words, open source
(17:14):
versus closed source mean? Just to have a little bit of a mini learning moment for someone who might be a little lost at this point in the conversation.
Michaela (17:22):
Sure, of course. So, I have to give credit to my colleague Rachel Joy Victor. I've heard her speak and lecture on AI as a whole, so a lot of what I'm sharing right now is the philosophy and ideology she's putting forward. But basically, AI is nothing new. Even as early as the 1600s, there was this idea of
(17:43):
machines and robots, this idea of technology talking and responding to humans in the way that humans talk and respond to each other. Right? So that is sort of the idea of an artificial intelligence, and what we've seen over the course of the development of AI, right?
With this idea of being able to do, like, the Turing test, which
(18:04):
is: if I ask a machine a question, how will it answer me in a way that I know whether or not it's a machine or a person, right? And so oftentimes computers have failed the Turing test over and over again. Unless you have very specific data that you're training that computer on, and it's able to talk back to you in a way where you're like, oh, that could be a human or an AI, or that answer could have been a human
(18:26):
or an AI. Often the computer was only able to do very specific things that would succeed.
That changed over the course of time, because what used to have to be a very intensive training of the computer, kind of getting it to do exactly what you need it to do to sound more human, they were then able to do by training the computer on massive amounts of
(18:47):
data, because of the internet. Right. And the reality is, they were training computers already in this way, machine learning, but it was that classic issue of, they just didn't have enough data for the computer to be able to differentiate between saying something that sounded intellectually smart versus just saying something that was completely out of context.
(19:07):
Because what computers are really missing is context, right? When they're answering like a human — humans always speak to some sort of contextual language around them. Computers can only speak to the data that they have been trained on, and so because the internet has really opened up and allowed us to train computers on vast amounts of data, these computers can now quote unquote take context and fill in the blank.
(19:29):
So what generative AI really is: it's doing a very similar system as what algorithmic AI is doing, but it's doing it at a much bigger level. So I would say that traditional AI is like, here's my training data; based on my question, find me the right answer. But the answer is an answer I've already trained you on.
(19:51):
For example, if I'm interviewing somebody, and the interview answer is, I grew up in 1928, I might send that to a computer and say, hey, computer, tell me when this person was born, and the computer says she was born in 1928. Right. So it's retrieving the data based on what I'm asking it. That's all natural language processing. Right. That's the ability for it to recognize my question and then
(20:13):
give me an answer.
Where this has really exploded now is that what generative AI does is, instead of just taking big answers that you've trained it on, it's actually taking the context of every single pixel for an image, every single letter for a word or an answer. So you would say, Mary had a little blank.
(20:34):
You would probably say lamb. At one point the AI did not know it would have been lamb. Mary had a little... Mary had a little schoolbook. But because of the internet and the training data, it can now say Mary had a little lamb. Where these generative AIs start to really disintegrate, and I think most people see this, is when there's so much contextual data
(20:55):
it could be anything, and the mathematical formulas this computer has to do to give you the quote unquote right answer. Right. So let's see. Like: Mary was in a classroom. Mary had a little pencil. Mary had a little pen. Mary had a little crayon. So the illusion that this thing is intelligent is because it just can take more contextual data and spit out an answer it
(21:16):
thinks is right.
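The "Mary had a little ___" idea can be shown with a toy next-word predictor that simply counts continuations in a tiny corpus. This is a deliberately crude stand-in: real language models learn probabilities over tokens rather than counting whole words, but the fill-in-the-blank principle is the same.

```python
# Toy next-word prediction: pick the most frequent continuation of a prefix.
from collections import Counter

corpus = [
    "mary had a little lamb",
    "mary had a little lamb",
    "mary had a little lamb",
    "mary had a little schoolbook",
    "mary had a little pencil",
]

def next_word(prefix: str, sentences: list) -> str:
    """Return the word that most often follows `prefix` in the sentences."""
    prefix_words = prefix.split()
    plen = len(prefix_words)
    counts = Counter()
    for sentence in sentences:
        words = sentence.split()
        # Count the word right after the prefix wherever the prefix matches.
        if words[:plen] == prefix_words and len(words) > plen:
            counts[words[plen]] += 1
    return counts.most_common(1)[0][0]

print(next_word("mary had a little", corpus))  # → "lamb"
```

With three "lamb" examples against one each of "schoolbook" and "pencil", "lamb" wins; feed it a corpus where the continuations are evenly split and the "right" answer becomes arbitrary, which is the disintegration described above.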
This is the same issue we see with image generation and video generation, right? If there's not a lot of data around how fingers move, right, or if there's not a lot of data on how people and physics move, then every time it's trying to create a human, or fingers of the human, it's not gonna get it quite right, because it's not looking at the human as an overall anatomical human.
(21:38):
It's literally looking at it pixel by pixel. So if the pixels start to warp, or if the AI starts to hallucinate, it's actually not the AI quote unquote hallucinating, because the AI doesn't know it's hallucinating. It's actually us interpreting the answer as a hallucination. It's just the AI struggling to fill in the context of the data it has.
So generative AI, what I like to say is, it's a big illusion
(22:00):
of intellect, because it's taking word by word, letter by letter, pixel by pixel. A good example of this would be: you pick up your phone and you call your friend, your friend's phone rings, they pick up their phone, you start talking to each other. That is quote unquote intelligent technology, where it's very seamless. It's very easy.
(22:21):
I call, they pick up, they answer. What generative AI is doing, instead of you calling your friend, is what Lord of the Rings did, where it's lighting a fire. And then once that fire's lit, it lights another fire. And once that fire's lit, it lights another fire. And so it's just sending these fire signals to itself over and over and over and over and over again until it has what it thinks
(22:43):
is the right answer. And then your friend's like, oh, I guess Michaela's calling me, 'cause I see her smoke signal, I see her fire signal way off in the distance, so then I need to send a fire signal back to her, over and over.
It's a very inelegant technology. It's very brute-force computing, and this is why generative AI takes so much power, right? So if you look at the backend system: one Google search, for the computer to be able to gather all of that Google search, versus one ChatGPT search. Two very different power
(23:06):
mechanisms. One is like when you give your friend a quick call, and the other is like when you light a fire, and the fire has to get lit in seven different places, one right after the other, until your friend sees the fire on the other side of Manhattan. Right.
So the technology of generative AI is still very much
(23:30):
brute-force computing. Yeah, I know I went really in depth there, computer science-y. But that is also why, when you think about video, and you think about image, or even when you think about words, and you're looking at this thing, you're like, something's not quite right. It's because it's very limited in certain ways. It seems like it does things really well, but it's looking at
(23:51):
something pixel by pixel. It's diffusing the data it's learned and re-fusing it in the way you ask it to. That's why maybe when you ask ChatGPT to make an image, and then you ask it to change the image, but you just ask it to change one thing, it actually changes six things. Because it's not changing one thing based on your context; it's actually diffusing the whole image
(24:13):
it already gave you into noise, and then re-fusing it based on your feedback. So there's multiple ways and areas where generative AI fails. That's often because it's contextual.
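The point about edits re-diffusing the whole image can be illustrated with a toy NumPy sketch. This is a cartoon, not a real diffusion model: a "targeted" edit touches exactly one pixel, while a noise-and-regenerate edit moves every pixel, which is why asking for one change can change six things.

```python
# Cartoon of "edit by re-diffusion": compare a surgical edit to a
# whole-image noise-and-regenerate pass on a tiny fake 8x8 image.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((8, 8))  # pretend this is the image the model already gave you

# Targeted edit: change one pixel, leave everything else alone.
targeted = image.copy()
targeted[0, 0] = 1.0

# Diffusion-style edit: push the whole image toward noise, then blend it
# back toward something image-like. Every pixel moves a little.
noise = rng.random((8, 8))
rediffused = 0.5 * image + 0.5 * noise

changed_targeted = int(np.sum(~np.isclose(targeted, image)))
changed_rediffused = int(np.sum(~np.isclose(rediffused, image)))
print(changed_targeted, changed_rediffused)  # 1 pixel vs essentially all 64
```

The surgical edit reports one changed pixel; the re-diffused version changes essentially every pixel, even though the "request" was the same small change.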
Open source versus closed source data. Basically, open source models versus closed source models. This goes back to this kind of
(24:34):
theory of technology; it goes back to the theory of the internet, right? When the internet first came on the scene, there were a lot of people who believed that the internet should be open source. What does that mean? It means the knowledge of the internet, the knowledge of coding, should be shared amongst everybody. It should be democratized. It should not be paywalled. Then people started creating websites that were paywalled.
(24:54):
They were like, hey, if you want access to these products and these goods, such as software, such as hardware, right, you had to pay. You had to pay to use WordPerfect. You had to pay to use Excel. And supposedly, all of what Excel and WordPerfect is, is just software and coding and data. There's a whole group of people that think that should be open source and you should just have access to it,
(25:16):
as a way to empower everybody to be a technologist.
And then the closed source community basically believes, no, what we do as technologists, or what we do with coding, is a magic power that people should pay for. It should be commoditized. So if we make products that are easy for people to use, like ChatGPT, like Google Drive, then people don't have to make their own, or they don't have to
(25:36):
go to the open source community to learn how to make their own. They can just use our closed source products. We take on all of the labor, we take on all of the design, but we also gain all of the profit. So that's really the differentiating idea of open source versus closed source. And it gets stickier when it comes to AI, 'cause open source models means everyone knows what the training
(25:59):
data is. Everybody has access to what the training data is. Closed source, like a closed door, is a black box. You don't know where the training data's coming from. Some of it ethical, some of it unethical. So these are sort of the ties of open source versus closed source. But it's a debate that expands beyond just AI. It's a debate that's been happening ever since the birth
(26:20):
of modern-day computer science technology. Mm-hmm.
Especially with the birth of the internet. A great example of this would be Wikipedia, which is an open source model of knowledge sharing and knowledge information. People can add to Wikipedia and people can take away from Wikipedia; people can access Wikipedia for free. That's a really great example of open source.
(26:41):
Versus a closed source technology would be something like a university library, where you have to pay to go to that university. You have to pay to be able to check out that library book, because you pay tuition to the university. It's like a closed source system. So it's two very different systems, and both are valid in very specific ways. Right.
(27:02):
There isn't a good versus a bad here. Both are not perfect.
Brooke (27:05):
How I AI is brought to you in partnership with the Collective AI, a space designed to accelerate your learning and AI adoption. I joined the Collective and it's completely catapulted my learning, expanded my network, and showed me what's possible with AI. Whether you're just starting out, or want done-for-you
(27:26):
solutions, the Collective gives you the resources to grow your business with AI. So stay tuned to learn more at the end of this episode, or check my show notes for my exclusive invite link.
Okay. Thank you for diving into that. Those are terms that are thrown out here and there in a lot of these conversations, and I try to accommodate listeners that are both seasoned and new
(27:48):
to AI. So thank you for diving deeper into that. And it's so interesting to hear everybody describe it in different ways, especially with your background. You have a different way, and you also mix storytelling into your educational moments. So I love it. My next question for you is: how do you feel like AI has changed you as a creator and a
(28:11):
leader and a thinker? Maybe you can share about projects before and after. How does your brain rewire and restructure with this technology, versus in the past, when we didn't have these tools at our disposal?
Michaela (28:26):
Hmm. That's a... yeah. Well, okay. So I think I have to break this up into buckets. Okay. 'Cause I have so many buckets. So the first bucket would be my day-to-day personal. I think when it comes to utilizing AI, I try to be very intentional about how and when I use it. I don't just use it for the sake of using it. I still want to
(28:47):
work with an administrative assistant. I still wanna work with people, right? So I'm not so keen on just saying, oh, that AI tool can replace someone on my team, or that AI tool can make my life easier by helping me manage my inbox, right? Like, I'm very conscientious about how I employ it in my day-to-day work.
That being said, I think under the bucket of installation and
(29:10):
exhibition design, I'm very excited by AI's potential to give personalized moments to audience members. Right? I think a lot of what we try to do when we try to make something impactful or immersive or transformative is we're trying to give an audience member the moment where they go, wow. Or a moment where they go, I am in awe, or I'm in shock.
(29:31):
And I think when you have the ability to personalize something, it allows you to cut through some of the more kitschy ideas or some of the catchier elements, whether you do it with AI or not. Like, I've done it with forms, I've done it with surveys, I've done it with design, right? But AI's ability to get right to it, that personalized moment, I think is really exciting for an audience member within the correct story.
Right? So the tasseography example I gave earlier in the interview, with the teacup leaf reading, I think that's a perfect example. No one was giving away their personal data. They were just coming with a very specific tea that had tea leaves at the bottom that were a different color, a different
(30:14):
pattern. And just based on that image that our multimodal installation saw, it was then generating ideas and content. And some people walked away and were like, wow, how does this AI know I'm going through a divorce? Some people were walking away going, wow, this has nothing to do with me. But the fact that they were even talking about it and responding to it, right, especially in this era of people kind of wanting to
(30:37):
be very disconnected or not wanting to engage in conversation — the fact that the installation was getting a reaction from people, or it was making people think, that's kind of the work I like to do with installations. And so that's one example.
And I think the last bucket really is in the export
(30:58):
process, right? When I say that, I mean images and text and video. And as a filmmaker and a director, having these AI tools has allowed me to tell stories using the art of animation, which is a very expensive medium. It's allowed me to still use my high level creative art
(31:18):
directors, my high level animators, my high level score composers, sound designers, but empower them to tell a bigger, longer story for less of a budget, because they are assisted by these AI tools. They're not replaced, but they're assisted. And I think that has been really interesting, because now, as a creator and as a director, we have a very traditional way of
(31:40):
approaching animation, and we have to understand how some of these AI tools change that approach, or evolve that approach, or transform that approach. And that's been a really interesting production-process conversation, and a creative problem-solving
(32:01):
discussion point, for me as a director and a creator.
Brooke (32:07):
I think that's a great way to transition. You shared a lot about your process for these interactive exhibits. Let's shift more into your work with VR and animation and AI-generated filmmaking. Can you share a bit about the projects you've done on that front already? Maybe you can tease what you're working on now.
(32:28):
But next I want to go into your whole process.
Michaela (32:33):
Yeah. Well, I fell in love with the art of animation when I was working with VR for Good on a project, and I realized that showing live action is very powerful, and there is a process in animation, where you're creating the world from scratch, that allows people to suspend their disbelief in a lot of ways. This is especially true when we're talking about issues like
(32:54):
poverty and children, right? I think animation allows us to do that in a really innovative way that opens our imagination, versus creating poverty porn or creating this sense of disconnect of, like, oh, these poor children, or, oh, poverty. And so I fell in love with the art of animation, also from a world-building perspective. And so I pitched Meta this idea of doing an animated series
(33:18):
with my co-creator, Julie Cavalier. We created three volumes of VR animation retelling mythologies, fairytales, and folktales from all over the world. And the first project I worked on with Meta VR for Good went to a lot of festivals, won a lot of awards. Same for the Reimagined series. We showed at Venice, we showed at Tribeca, we showed at South
(33:39):
by Southwest. We were Peabody nominated, we were Webby nominated. We won best directing in XR at the Collision Awards, which is the inaugural motion graphics and animation awards. So we've been recognized in a lot of spaces, and I think a big part of that recognition isn't just the way we're using the technology, it's also the storytelling we're doing and the
(34:00):
script and the characters. And so I was really thinking about this when Tribeca and Sora approached me to do a short film. Last year, I was one of the five filmmakers from the Tribeca Alumni program that was asked to make a film using Sora. This was pre Sora launching to the public. And it posed a big question for me as a creator to be like,
(34:21):
okay, am I ready to jump in to embrace these generative tools? Not just from an installation standpoint, but also from a filmmaking standpoint, because that's another part of my practice. And the way I approached that was the same way I approach any project. I called people and hired them, right? So I brought in two animators. I brought in a score composer who was also a sound designer.
(34:42):
I brought in a voiceover actress. I said, I'm not sure exactly what film we're making yet, 'cause I don't know what this tool can do. But I know that I wanna work with you all, because I want there to be human animation, human hand-drawn graphics, in communication with the generative graphics. And so we ended up creating a paper-craft style of story
(35:03):
called Thank You Mom, that was inspired by my journal entries growing up. And it's a really interesting way for us to say, okay, the way we use the generative tools is not necessarily with the intention to replace people. It's not with the intention to cut people out of the process.
(35:23):
It's with the intention of saying, we're a small indie team with a small indie budget. How can we get the biggest bang for our buck? And we've really taken that idea, or that sense of approach, into every other project I've done with AI as well. So beyond just using Sora, we've used Hedra for lip sync,
(35:48):
we've used ComfyUI for character design, we've used Midjourney, we've used Kling, we've used Veo, we've used ElevenLabs again for voiceover. And anytime we're using these tools, it's really just a way of saying, okay, we only have this much budget, we wanna make this story come to life, and we have this team.
(36:10):
How do we make sure, with these parameters, we make something? And animation is just a very expensive, time-consuming medium, right? It's one thing to go out and shoot a quick live action, have an editor, have a pass at sound design, audio mixing. With animation, you are starting from scratch.
(36:31):
There's no capture of the world you get to start with. It's all created. And I love that about animation. And what AI allows us to do is take things like concept art and plus it up to be animation-ready. It allows us to take still images and animate those still images so that the in-betweening,
(36:51):
as we would call it, is a much faster process, right? That's not to say the animator is not still involved; the animator's still drawing over, in-painting, maybe making adjustments to the acting. But it's just to say, how can we take a process that would normally take two and a half plus years and do it in the course of six months? And that's been really exciting to see, because it allows more stories to be told.
(37:13):
Because the biggest hurdle around animation is just the institutionalization of time, energy, and budget. If you don't have a lot of budget, you can't put in a lot of time and energy, and that's often the difference between an amazing animation and a pretty good animation.
Brooke (37:31):
I love how you touched on time and money saved and really just leaned into the positive lens of AI. It almost seemed like a bit of an oxymoron: how can we use AI to bring people together when a lot of people see AI-generated content as just removing us from humanity and the creator
(37:52):
and the brand, right? It almost feels like a further step in between. But through your lens and your interactive exhibits, you're able to make sure people leave talking and have a conversation. And I know what you're saying. When you're walking down the street, people are less likely to even
(38:13):
talk to you now than they were 10 years ago. And a lot of people blame technology for that. But I love that you're flipping the script and you're using technology to bring people together.
Michaela (38:27):
Well, and I think, too, with the work I'm doing, I really value the idea that if someone walks away and doesn't even know AI was used in the film, or if somebody walks away and no one knows that AI was used for the installation, then I've done my job well. Like, my job is not to make the AI look good.
(38:48):
My job is to make AI and technology disappear into the background so that the story and the audience interactivity and the artwork that we're showing come to the foreground. And that's just my personal approach to it, and I think that is often why the work I do is centered around people coming together and connecting, or centered around people being
(39:10):
entertained. The most recent generative AI project I did is an animated series that is AI-assisted, and it was an independent series. Each episode's between one and a half to two minutes long. It's nine episodes long. It's a completely original IP.
(39:31):
It had little to no marketing budget. We posted it on YouTube, and it has received over 2.2 million views, and it received that level of views within the first month of it launching. And so I think, wow. I'm sure some people could see it was made with AI, but I can't help but think not that many people would've watched all nine of
(39:53):
these episodes if they weren't at least entertained, whether or not they were looking at it because it's AI. Whether or not they were seeing the AI, I think, is beside the point, because the fact is, it's not like we had a hundred thousand views on one of the episodes and then 10,000 views on the other episodes. All the episodes across the board have like two hundred, two hundred twenty, two hundred and two.
(40:15):
One episode has 510. These episodes are getting thousands and hundreds of thousands of views. And so, yeah, I think it comes back to the story you're trying to tell, the art and the craft, you know. The projects we made weren't easier because we made them with AI. Just 'cause we saved time and budget doesn't mean they were easier.
(40:35):
They were still very difficult. They were still very exhausting. We were just able to do it a bit faster and a bit cheaper because we had some of these tools, but just because we used the tools doesn't mean it was easier. That's the thing I always tell people: yeah, I was more burnt out making the generative AI animated series than I was making the VR animated series.
(40:56):
Yeah. There's this huge misconception that, oh, because you're using AI, you can have a cheaper budget, you can have a faster timeline. Creativity is still creativity. Creativity still takes time and energy. And if you came back to me and said, what would you have done differently? I wouldn't have changed the tools we used. I wouldn't have changed what we did. I would've just wanted more time and money.
(41:18):
Our team would've been way less burnt out, and our team would've been way less overdone. But we did it because we knew this was the money we had and this was the time we had to do it. And so, yeah, I think it's hard when the misconception around AI is that the creative process, or the creative form, is now suddenly not as important. That's just not true. If anything, because of AI, it's even more important.
Brooke (41:40):
Do you see that getting better? Do you think that has a lot to do with maybe upskilling and learning new ways of handling the output that you or your team haven't before? Or do you think it'll still be harder to use AI?
Michaela (41:58):
That's a good question. I mean, I don't advocate for it getting better or worse, personally, because, mm-hmm, I, as a creative, am not looking for a magic button. I don't think any creative is looking for a magic button. Like, I talk to engineers and they're like, what do you want the tool to do better? I don't want the tool to do anything creative better, but I would like the tool to talk to other tools so that my
(42:18):
production time or my admin time's a lot faster. Right? Yeah. I would like to not have to download, upload, download, upload one asset to seven different platforms, but if that's what I have to do, that's what I have to do. But I'm not looking for the platform to get better at one or two creative things. I like that sometimes the characters aren't always super consistent.
(42:39):
I like that the environment isn't always consistent. Does that mean we have to work harder? Yes. But that also means we have to problem-solve in a creative way. Like, how do I make the audience believe that that shot of the mountain is the reverse shot of the other mountain, even though they're two totally different mountains? We use color, we use design, we use our thinking caps.
(43:03):
It's the same thing we deal with in animation, right? Like, you could watch an animated film and think, oh, this is all one world. But the reality is that world you're seeing was multiple shots put together to create the illusion of a world. And in the same way, I think from a creative standpoint, it would be cool to see more platforms that utilize 3D renders and 3D
(43:25):
models. 'Cause that's another super expensive area that a lot of people can't make work, because it's time-consuming and hard to learn 3D modeling and 3D tools and game engines. So if there's generative AI models out there that can help make 3D assets, that can help generate 3D worlds or 3D environments, or help suggest game engine debugging, like,
(43:50):
all of that would be really cool to see as an expansion of the toolkit. I think my filmmaking community is just really focused on the 2D film. But because I come from a more expansive community, like, I have my game engine folks, my folks who are constantly working in Unity and Unreal and slaving away in those platforms, and I think if there was some sort of generative platform that
(44:12):
helped make their lives a little bit easier. Again, not to replace the developer, not to replace the engineer, but more just so my friends don't have to spend three days debugging one build. It's the same thing: my animators don't have to spend three days animating one minute. So it's those kinds of technologies. I think this idea of automation, or this idea of expansion of
(44:36):
cognition, which is, again, something my colleague Rachel Joy Victor talks about a lot. I mean, I think these are all the things that I like and prefer. 'Cause the idea of automation isn't always the idea of replacement. And I think we humans connect the two together, but it's not the same. Replacement really comes out of a necessity for
(44:59):
a skill that is no longer needed. And I don't think it's that your skill as a puppeteer, your skill as an animator, is no longer needed. It's that your skill is being supported and automated by a system, but you still need you in there to make that happen. And the differentiator is, like, are you easy to work with?
(45:19):
Like, are you creative and thoughtful about the way you work? Or are you just walking in and doing everything haphazardly anyway? The idea of AI in the creative field is very scary, and that's very valid. But I go back to, well, then we need to make the case for why humans really matter, and that is emotional intelligence, that
(45:43):
is creative expression. And again, this is why I don't advocate for these tools to get, quote unquote, better at any one thing, because that, to me, is then replacement. If you're creating a tool purely to replace somebody, then that is not okay. But if you're creating a tool to help automate somebody's process, that's great.
(46:04):
Like, we've been doing that since the dawn of time. The idea of writing on pen and paper is this idea of automating what our brains already do. Oh, if I can write it down and set it aside, I can always come back to it. So yeah, I have a lot of philosophical thoughts on that, and maybe I just blurbed it all out and it doesn't always make sense. But yeah, I think this is a big differentiator. And upskilling, I
(46:27):
think, is a good word to say, but it's less about upskilling to me and more about just familiarizing yourself with what these things can or can't do. Mm-hmm. 'Cause the technology companies are marketing them to you as the best, most amazing thing.
Brooke (46:44):
Mm-hmm.
Michaela (46:44):
The minute you get
under the hood in any of these
creative platforms, even fromlike, I'm sure other
non-creative industrystandpoints, you quickly see
where these platforms or LLMsfall apart.
Mm-hmm.
And where they're not viablesolutions and where they're not
good.
(47:05):
So it's like, I'm like, youcould upskill but why don't you
just start by educating yourselfso you can advocate for what
these things can or can't do.
So you can advocate for why theystill need you.
Versus like, I think upskillingin the sense that you have to
dedicate your time and energy tototally learn them inside out.
I don't necessarily know ifthat's viable because I don't
know either how much longer allof these currently, like
(47:28):
creative AI platforms are gonnabe sticking around.
They're super expensive to run.
Most of the people that areusing them are using them under
a free subscription becausethey're artists who have like an
alpha license or some sort oflicense that is like a creator
program.
The people who are using themjust for fun aren't necessarily
the people who are buying thehighest level license.
So I can't help but think likethere might not be as many as we
(47:53):
think there will be in 10 to 15years because these closed
source platforms are looking tomake a profit.
And I think there's a sense incommunity, and again, I know I'm
lingering on and on thisquestion, but I do think there
is a sense in the communitiesright now that.
Especially people who are comingup through the ranks have this
(48:15):
deep nostalgia and desire toconnect to analog.
They want to take their photoson a camera where they have to
go get the film developed.
They want to be back in spacestalking to each other and
playing records, right?
They're so digitally fatigued.
So it's a question of like.
(48:35):
Also, if there isn't an audienceto buy into the market of
generative AI, creative tools orbuy into the market from both an
audience and a consumerstandpoint, this is also the end
of creator tools that arepowered by generative ai.
I think there still will be afew, but I don't think there'll
be synonymous with creativity.
I think that's actually a veryfalse way of thinking about
(48:55):
this.
Brooke (48:57):
Thank you for that. It seemed like one very long answer, but your mind went in so many different directions, touching on key points I usually like to ask everybody about, so you did me a favor. You just went through a lot of the questions I typically ask. And
(49:18):
I feel like, as you were describing your personal process and experience, it mirrors so much of what the general workforce is experiencing, right? A lot of people assume it's some sort of light switch, and really it's getting us to that next point where all of these AIs are talking to each other and there's better interconnectivity, which is definitely what I'm excited to see next. And you chatted a bit about tools or platforms or next-level
(49:41):
features that you want to have in your space, and talked about these landscapes and environments that you're hoping will really ease the workload for your team. And it gave a little insight into where your industry is headed in the next year or so. But to wrap up and close out our conversation, I would love
(50:02):
to first hear: what's next for you? What are you working on, if you're able to share about it? I'm sure there's some really creative, exciting things in your pipeline, if you don't mind touching on that.
Michaela (50:16):
Yeah. I can't speak to details right now, but I am trying to continue to expand my practice as a director. I am looking to do more commercial work right now. So I am currently in the midst of being signed by an agency, that's also a production company, that would represent me to do commercial work. I am pitching my own original IP to major studios as a way of
(50:41):
getting my voice as a writer and director in the animation space seen on a larger, more mainstream platform, or in a larger, more mainstream Hollywood sort of way. And I am continuing to consult for really amazing nonprofits and organizations here in New York. One of my favorites is the Museum of the Moving Image. Just to continue to think about not just the emerging technology
(51:03):
programming they're doing, but even just the community programming they're doing, the digital artist community programming they're doing, and just thinking about it as a really beautiful space and institution where we can do the things that people who have a bit more narrow curatorial vision can't necessarily do. Like, the Museum of the Moving Image,
(51:25):
it's such a big, broad idea of how the moving image connects us all, beyond just films. And I'm really excited to continue to collaborate with that team and be a part of that team, and think about ways that mission and that statement and that curatorial vision can embrace not just what people think it embraces, but also a much larger scope of the moving image and a much
(51:47):
larger scope of community.
Brooke (51:49):
Well, I'm looking forward to following along on your journey and to what's next for you. And thank you so much for taking the time to connect and share today. How can listeners reach out to you? What's the best way to connect?
Michaela (52:04):
Yeah, I would say over my website, just my name, michaelaternaskyholland.com. There's a contact form in there. Mm-hmm. There's also a way you can just book time with me directly, just to consult, if you have a project or an institution or even an idea that you would like help germinating or strategizing around, or just want to ask me questions. You can literally get
(52:25):
on my calendar right away to start having those discussions with me. You can also, of course, follow me on Instagram or LinkedIn. Both are just my name as the handle. And if you would like to just stay in touch but not necessarily engage, there's also a newsletter you can subscribe to on my website as well.
Brooke (52:42):
Wonderful. I'll be sure to link all that in the show notes and description as well. Thank you so much, Michaela.
Michaela (52:49):
Thank you, Brooke. Thank you for having me, and I hope you have a wonderful rest of your podcast season. It's great to be here. Thank you so much. I appreciate it.
Brooke (52:57):
Wow, I hope today's episode opened your mind to what's possible with AI. Do you have a cool use case on how you're using AI and wanna share it? DM me. I'd love to hear more and feature you on my next podcast. Until next time, here's to working smarter, not harder. See you on the next episode of How I AI. This episode was made possible in partnership with the
(53:19):
Collective AI, a community designed to help entrepreneurs, creators, and professionals seamlessly integrate AI into their workflows. One of the biggest game changers in my own AI journey was joining this space. It's where I learned, connected, and truly enhanced my understanding of what's possible with AI.
(53:40):
And the best part: they offer multiple membership levels to meet you where you are. Whether you want to DIY your AI learning or work with a personalized AI consultant for your business, The Collective has you covered. Learn more and sign up using my exclusive link in the show notes.