
April 25, 2024 · 23 mins

The rapid advance of AI writing tools, image generators and text-to-video models opens up a world of creative possibilities. It also raises questions about the role of the artist, the nature of creativity, and ethics.

 

Hosts Beth Coleman and Rahul Krishnan dive into these topics with guests Sanja Fidler and Nick Frosst.  

 

About the hosts: 

 

Beth Coleman is an associate professor at U of T Mississauga’s Institute of Communication, Culture, Information and Technology and the Faculty of Information. She is also a research lead on AI policy and praxis at the Schwartz Reisman Institute for Technology and Society. Coleman authored Reality Was Whatever Happened: Octavia Butler AI and Other Possible Worlds using art and generative AI.  

 



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:01):
- From the University of Toronto, I'm Beth Coleman.
- I'm Rahul Krishnan. - This is What Now? AI.
- The New York Times has filed a lawsuit in federal court
against Microsoft and ChatGPT maker OpenAI.
- When the demos of Sora went online,
there were a lot

(00:23):
of people, especially in Hollywood,
who really had sort of a panic about it.
- Movies, music, writing.
Did I leave anything out?
- Video games. - Video games, absolutely.
Video games. So anything that has, like, aspects
of digital production to it

(00:43):
now emergingly has aspects
of AI kind of built into production,
and it both opens things up
for new avenues of creativity
and it also opens things up for new avenues
of lawsuits.
- That's right.
- We've got an amazing lineupof guests for today's episode.

(01:05):
- Nick Frosst is a co-founder of Cohere,
a Toronto-based startup that develops large language models
for enterprise use.
Frosst did his undergraduate degree in computer science
and cognitive science at the University of Toronto,
and was the first employee
of Geoffrey Hinton's Google Brain lab in Toronto.
He is a singer in an indie rock band called Good Kid.

(01:25):
- Sanja Fidler is an associate professor of mathematics
and computational sciences at U of T
and co-founded the Vector Institute.
She's the vice president of AI research at NVIDIA,
a technology company that designs
and manufactures graphics processing units
and drives advances in AI, high-performance computing,

(01:46):
gaming and creative design,
autonomous vehicles and robotics.
And Bloomberg News has called NVIDIA's chips the workhorse
for training AI models.
- Actually, the work on 3D content creation was inspired
by me joining NVIDIA.
So I joined in May 2018,
and they didn't really tell me what to do.

(02:09):
So it was kind of up to me to decide.
And I joined this organisation,
or sub-organization at NVIDIA,
that was working on Omniverse.
So Omniverse is this real-time rendering
and simulation platform that basically allows users
to connect different existing 3D modelling tools
and then work collaboratively on creating 3D worlds.

(02:32):
So you can almost think of it
as, like, a super-advanced 3D Google Doc
or something, where multiple people can edit together.
And so basically rendering and
simulation were real-time.
Everything looked amazing, photo-real.
Those were the early days when they really made huge
advances on this front,

(02:54):
but content was the bottleneck, right?
So we felt that AI could really make a difference
on this front.
- Was that driven primarily by, you know, an
interest in pushing out these ideas for gaming?
Or were there other applications
that you had in mind, even in 2018?
- Yeah, Omniverse was early stage,

(03:15):
but it was always meant for different domains, right?
Gaming is potentially one. Robotics is a huge one.
Jensen loves simulation for robotics, A/V,
but also humanoid robotics and so on.
There's obviously other spaces as well.
Engineering needs digital twins to simulate, right,

(03:36):
architecture. So not primarily gaming, I would say;
there are a lot of really important domains for 3D content.
Now, for example, at the time I was kind of inspired
by gaming as well.
So if you look at high-end games, I don't know,
Grand Theft Auto is a good example, right?

(03:56):
That requires so much content
and so much software to be written.
I think I looked at the Wikipedia article, actually;
it was a thousand people over several years
replicating Los Angeles, writing storyboards
and the entire game, right?
So it's really just these very large studios
that can afford this kind of production.

(04:19):
But there are a lot of creatives out there, right?
So the thinking was, if we can,
if we can democratise this with AI,
if we can really bring AI into the loop in creating this
content, we could unlock a lot more creatives to actually
work on creating games and

(04:40):
similar kinds of content, movies as well.
For robotics,
especially for humanoid robotics
and autonomous driving, which is also where my interest kind
of lies,
actually, simulation,
or let's say rendering this content,
is not just a nice-to-have, it's actually a critical-to-have,
because you need to, at the very least, do testing

(05:04):
of your robots before deployment.
So there it's not so much about creativity,
but about reproducing the real world and doing it at scale.
And you just simply do not have enough artists
to reproduce the earth
and everything that can happen in it.
So AI is absolutely critical in that domain.

(05:25):
- So really the piece
where AI is super useful is in accelerating this process.
So rather than having computer graphics artists,
you know, painfully stitch together these models
of entire cities, you can accelerate that process by
either having the artists say, I want

(05:45):
to use these building blocks in order to create some frame
or sequence, and then fine-tune it according to, you know,
what the specific need actually is.
Is that, like, one of the ways that
you've used AI in this space?
- Yeah, yeah. There's different use cases, right?
So what is a virtual world, right?
A virtual world has different aspects.

(06:05):
One is the scene: buildings, roads, traffic lights, right?
Then there are objects: moving objects in the scene.
And that can be anything from cars, trucks, humans, animals,
garbage bags, right?
There's a huge variety.
And then there is the behaviour and animation.
So behaviour, how is the car gonna drive?

(06:26):
How is the human gonna walk in the scene?
And animation would be how the motion
that the characters are producing,
or the dogs that are walking,
needs to also look realistic.
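Fidler's breakdown of a virtual world into scene, moving objects, behaviour and animation maps naturally onto a layered data model. A minimal, purely illustrative sketch (every class and field name here is invented for this example; it is not Omniverse's actual schema or API):

```python
from dataclasses import dataclass, field

# Illustrative only: a toy layering of the components Fidler lists.

@dataclass
class SceneElement:
    # Static layout: buildings, roads, traffic lights.
    kind: str
    position: tuple

@dataclass
class Agent:
    # Moving objects: cars, trucks, humans, animals, garbage bags...
    kind: str
    position: tuple
    behaviour: str = "idle"  # high-level decision, e.g. "drive", "walk"

@dataclass
class World:
    scene: list = field(default_factory=list)
    agents: list = field(default_factory=list)

    def step(self, dt: float):
        # Behaviour decides intent; a real system would then run an
        # animation layer so the resulting motion looks realistic.
        for a in self.agents:
            if a.behaviour == "drive":
                a.position = (a.position[0] + 1.0 * dt, a.position[1])

world = World(
    scene=[SceneElement("traffic_light", (0, 5))],
    agents=[Agent("car", (0.0, 0.0), behaviour="drive")],
)
world.step(0.1)
print(world.agents[0].position)  # the car has advanced along x
```

The point of the layering is that each piece (scene, objects, behaviour, animation) can be authored, reconstructed, or generated by AI independently.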
And what the AI can do here is different things.
So for example, for robotics,
we really wanna make a digital replica.

(06:49):
Imagine something happens on the road; you wanna kind of have
that exact scenario reproduced.
So you can use something like reconstruction methods
on collected videos,
and then from that you go into a simulation environment.
So AI is doing all the digital twinning, if you will.
For gaming, or even actually for A/V, you do want

(07:11):
to generate that as well.
So generative power, right?
I wanna write some text
and then start doing some sort of advanced editing
of both the objects, maybe scenes,
but then also animation,
what the characters are gonna be doing.
So AI would really be kind of assisting this kind
of creation, digital twin and simulation process.

(07:34):
- A digital twin is a virtual representation
of a physical object, system or process.
It's created using real-time data from sensors, machinery,
or other sources to mimic the physical counterpart.
It is a bridge between the physical and digital world.
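In code terms, a digital twin is essentially a state object kept in sync with a sensor stream, which you can then query or simulate against. A toy sketch under that definition (the class, field names, and threshold are invented for this example, not any real platform's API):

```python
# A toy digital twin: mirror a physical sensor stream into virtual
# state, then answer questions about the physical counterpart.

class MotorTwin:
    def __init__(self, max_temp_c: float):
        self.max_temp_c = max_temp_c
        self.temp_c = None  # unknown until the first sensor reading

    def ingest(self, reading: dict):
        # Real-time data from the physical counterpart keeps the
        # virtual state in sync.
        self.temp_c = reading["temp_c"]

    def overheating(self) -> bool:
        # A query answered on the twin instead of the physical motor.
        return self.temp_c is not None and self.temp_c > self.max_temp_c

twin = MotorTwin(max_temp_c=90.0)
for reading in [{"temp_c": 70.0}, {"temp_c": 95.5}]:  # simulated stream
    twin.ingest(reading)
print(twin.overheating())  # → True
```

The same pattern scales up: Fidler's point is that for robotics and A/V, building and updating these mirrored states at earth scale is exactly what requires AI rather than human artists.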
- So when you talk to artists who, you know, are using some

(07:55):
of these tools, do you find that there are aspects of AI
that they like and that they find help
their creative process,
and do you find that there are aspects that they like less?
And, you know, where in the spectrum of creativity
does AI really help?
- Yeah, so I think, so when,
when artists see just these methods, like text-to-X, right,

(08:20):
text-to-image, text-to-video, I feel that they,
they have pushback,
because, you know, now there is only text that allows you
to control the content.
I do see these models just as foundation models.
You go on the internet, and text and video
and images are the data that is widely available.
And those are the models that you can train.

(08:42):
And then from those models you can actually create,
you know, or fine-tune them, like you were saying,
into models that are more amenable to creatives.
That gives you more adaptability,
creative power and so on.
So right now, I would say we are at a very early stage
in developing this technology.

(09:02):
So it's really, you know, not to be judged
by just text-to-X.
I think artists do want
to have this iterative creative control.
They have some idea in their head, right?
Previously they had all these tools that allowed them
to go from that idea into the final product.
We wanna do the same thing with AI as well.
And these tools are kind of early on, you know,

(09:25):
the quality is just simply not at the level
that an artist would produce.
So sometimes they would
either completely re-texture it
or just take the inspiration
from what the AI generates and completely redo it.
So it's more like, it is giving me
some ideas about what to produce.
We do have an artist on our research team, actually.

(09:48):
So that's super useful,
because she's kind of keeping us honest
and focused on developing the right kind of tools.
- You know, when you think about working with artists
and the artists on your team, there might be a range
of reactions to using automated tools for generation.
Some might feel very protective about the creative IP

(10:10):
that they're generating through their own art,
and they might not want to have it be part
of a training data set,
versus others might be much more open about it
and happy to have it be used widely
because it broadens the reach of
the artistic content that they create.
I'm curious about, you know, how you think about
where the ethical considerations come in
when developing this kind of AI technology?

(10:33):
- Yeah, yeah. I mean, these are super important topics.
There are basically two major considerations.
What you're mentioning is the consideration around
what goes into the model, right?
Training data. And then there's also the consideration about
what's coming out of the model.
So you also wanna have guardrails around, you know,
the outputs of the model.

(10:55):
So actually, even
before joining NVIDIA, this was probably 2016,
2017, in the U of T lab,
we had long discussions around the training data.
So we kind of anticipated that data would be hugely important
and a problem to solve at the same time.
So we actually wanted to create a marketplace

(11:17):
for machine learning data, where people would upload,
possibly annotate,
and then get compensated for the data in some sort
of a revenue-sharing business model.
It turned out that was super hard,
especially if you're an academic at a university.
So it would be super awesome if someone could create that.
I do wanna say that both U of T

(11:38):
and NVIDIA Labs are research labs,
so we are not necessarily focused on creating technology
that makes a penny, right?
It's all about creating, or chasing the art of the possible.
- So what do you like about co-creating with AI?
- It's super experimental.

(11:58):
Like, I didn't know what I was gonna get
and
I like that
there's just discovery.
- Have you played around with those tools
where you can create cartoons in the style
of Picasso?

(12:18):
- I have, and I have to say it's my least favourite version
of any of the gen AI stuff.
So something that's come up with creatives and AI,
and particularly artists, is
they don't want their work just ripped off,
just making something in the style of.
And so when, like, the fake Drake,

(12:41):
the AI Drake track came out,
and one of the things that was difficult
for people is it was a really good track.
I mean, what kind of things did Nick say in your,
in your conversation?
- Well, Nick uses LLMs to get a sense of
how his audience might react to some of the work
that he does, but he fiercely believes

(13:01):
that he should be the one creating it.
[Good Kid singing "Drifting"]
♪ Sorry baby I just don't know ♪
♪ the words ♪
♪ or how to conjugate them ♪
♪ Was it really past tense I heard ♪
♪ Or was it present perfect ♪
- How did you guys all meet and form this band?
And that answer is all in undergrad
at U of T.
- You know, on the one side you do music
and are part of this band that goes on tours.

(13:21):
On the other side,
you're also, you know, heading up Cohere, which is
an LLM company based out of Toronto.
- They, they scratch similar parts of my brain.
[Good Kid singing "Drifting"]
♪ Oh, when you're always out of time ♪
♪ When you say it will be fine ♪
I mean, I find something, like, very satisfying
about working on a song

(13:42):
and finding issues with the structure
or issues with the lyrics,
and, like, thinking about the whole piece together and,
you know, sometimes dropping it completely
and coming back with a new idea.
Like, I find that process satisfying.
In the same way that
working on a new research idea
or these days I spend a lot of time just like
building stuff with LLMs.

(14:03):
So, you know, coming up with a new demo idea or something,
and working through the issues there.
I find those very satisfying, and satisfying in a similar way.
- So do you use one for the other?
So, you know, you, you talked about
building demos with LLMs.
When you write lyrics for your songs, do you, like, ask
the LLMs that you have in-house

(14:23):
about what might be interesting to write about?
- No. People always ask that. [Laughter]
- I don't, I don't do that. - Okay.
And the reason for that is I'm not really looking
to write lyrics faster.
I'm not really looking to optimize
my artistic expression.

(14:44):
It's kind of the process that I like.
That's kind of part of the fun of it.
So I don't really want to say, like, you know,
how I write a new Good Kid song, and be less involved.
I kind of want to be more involved.
[Good Kid singing "Summer"]
♪ Like a hand reaching for a handle ♪
♪ But to find that it was never there ♪
♪ Candidly... ♪

(15:05):
What I do now, however,
is after I've written lyrics,
I often give them to our model
and say, like, analyse this, or tell me what,
you know, write a paragraph describing
what the lyrics mean to you, or something.
And I do that as a way of testing,
like, are my metaphors or analogies

(15:28):
too subtle, or are they too obvious,
or are they too... So sometimes I'll be like,
oh, I really want there to be this,
I really want these lyrics to
kind of bring about this theme.
And if the model doesn't pick up on the theme,
then I'll be like, okay, maybe
it's only me who's seeing that in there.
Maybe I should, like, add in a few more lines
or a few more images to bring that out.
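The loop Frosst describes, writing lyrics, asking a model to describe them, and checking whether the intended theme surfaces, can be sketched in a few lines. Here `describe_lyrics` is a stand-in stub for any real LLM call (no actual API is used, and its canned reply is invented for this example) so the sketch runs offline:

```python
# Sketch of the workflow: give finished lyrics to a model, ask for a
# description, and check whether the intended theme comes through.

def describe_lyrics(lyrics: str) -> str:
    # Placeholder for a real model call. A real version would send a
    # prompt like "Write a paragraph describing what these lyrics mean."
    return "A narrator struggles with distraction and lost time."

def theme_lands(lyrics: str, theme_words: set[str]) -> bool:
    # If none of the intended theme words show up in the model's
    # reading, maybe only the writer sees the theme, and the lyrics
    # need a few more lines or images.
    description = describe_lyrics(lyrics).lower()
    return any(word in description for word in theme_words)

lyrics = "Sorry baby I've been drifting again / Could you repeat the question?"
print(theme_lands(lyrics, {"distraction", "drifting", "time"}))  # → True
```

Note the point of the sketch matches Frosst's framing: the model is a test reader after the writing, not a co-writer during it.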
- So it's sort of an iterative process

(15:50):
- Yeah. - To think about the content
that you create independently.
- Yeah. - But at the same time
wanna reflect on with the help of,
like, an external entity. - Yeah.
- Yeah. I think a lot of the time we,
we especially these days seem to take as a given
that, like, everything has to be optimized,
that everything needs to be done as effectively
or as fast as possible.

(16:11):
And that somehow that optimization is just inevitable.
But that's really not, that's really not the way things
happen a lot of the time.
Like, there's lots of things that we as a civilization
are not trying to do faster.
- And, you know, the flip side of this might be that
there's an entire industry around music

(16:31):
where revenue's a big part of that,
and there might be content creators
who are interested in using LLMs
for various artistic purposes.
- Yeah. - If you think about the idea
that Taylor Swift comes to a town
and the entire town's economy - Yeah.
- changes for the weekend that she's here,
there's clearly, like, a notion of a degree of connection

(16:55):
and how to optimize for that
that's happening within the music business.
- Yeah, absolutely. I mean, music is a business.
There's a whole economy around it.
But there too, I think, it seems
to me increasingly these days, the way we interact
with art is by connecting with the person who made it.
Like, I think about this a lot. Like, last year, you know,

(17:16):
Oppenheimer was one of the biggest movies.
We didn't go see Oppenheimer, we went
to see Christopher Nolan's Oppenheimer.
Like, connecting with the director
who did it was a huge component
of why people were interested in going to see that movie.
The same is true with Taylor Swift.
Like, Taylor Swift is
the biggest star we've seen in a very long time.
And connecting with her story

(17:38):
and, like, you know, who she is as a person, is one
of the reasons why people really enjoy the music
and her performance.
So I imagine machine learning
and neural nets will, like, impact the way we do art
in the same way that, like, you know,
all other technologies have, in the same way
that all musical technologies have.
But I don't think we're gonna get to a point

(17:59):
where we suddenly are not interested in the artist
behind the art.
Like, art exists mostly to allow you to connect
with the artist, and there are some artistic expressions
that will be best achieved through automation.
Like, there are some, really, like,
I've worked on a few art projects
where we, like, automated some parts,
and that was part of the art,
but it's still not removing the artist.

(18:22):
Like, it's still not, you know, moving to a world
where all art is just pumped out of a computer,
and when you listen to it, you can appreciate only it,
and not it through the lens of who made it.
So, yeah, I don't, I don't think we're getting there.
- Right.
So in the context of, you know, ensuring that art

(18:42):
has this human perspective, I think one
of the concerns, at least that's been raised online, is
ownership and creation of music as data.
I'm curious about how you think
of it, since you wear both hats, as someone
who is embedded in the generative modelling technology
world as well as being an artist yourself.
- Yeah. Yeah. So, yeah, that's a very interesting question.

(19:04):
I'll start with, like, a little bit
of background on this.
It's like, my band's music is all DMCA-free.
So Good Kid's music,
if you want to use it for anything, you can.
You can put it in a YouTube video, you can put it in Twitch,
you can put it, like, wherever you want... yeah,
wherever you want. And you wouldn't need to pay us.
So, like, that decision, we're kind of trading, like, you know,

(19:25):
we're trading short-term profits
for a longer-term fan base.
Now, that's a position of privilege
for a band to be in, right?
Like, not a lot of musicians are in a state
where they can say, "Hey, I'm gonna trade some royalties this
year for more people to listen."
So, like, I know that that's, that's not a luxury
that everybody has, but that was a thing

(19:45):
that was successful for the band.
So I, I like that. That works
because there's attribution. - Right.
- So, like, when people listen to us, you know,
in a YouTube video and they like it, they're like, "Oh, hey,
who made that YouTube video?
Who made that music?" Then they go find us.
That wouldn't work if, like, you know, a bunch
of music gets put into a neural net.
- Yeah.
- Now somebody listens to it and they say, "Oh, that song's great."

(20:07):
And the neural net made it and they never
find out about the act.
- Yeah. - So there's definitely,
like, the parallels kind of break there.
- If we were to ask the LLM to, you know, create tokens
that represented music, it might create things
that lie in the convex hull of things
that other people have created.
- Yeah. - Which is both new
in so far as it's not been created by a human before,
but is also in some sense inspired by

(20:29):
things that humans have created.
Like, is this thing just a convex combination of things that
have existed before?
Is that... do you construe that
as art? That's really my question to you.
- When we train neural networks, we take a big dataset.
We then optimise a bunch of parameters in a neural network

(20:49):
to be able to accurately represent the
statistics of that dataset.
We then sample from the model.
Sampling from the model is just a mechanistic process
that can be understood
through the statistics of the original dataset.
To me, like, I think that's a very different process than
what happens when a person is making art.

(21:10):
Like, I've seen our model
and other models create some things
that I thought were very beautiful,
and I have occasionally been, like, moved by the stuff
that I have seen neural nets create.
I thought they were really cool.
I enjoy it in a very different way than when I
see a person do that.
When a person does that, I feel seen, right?

(21:32):
Like, when an artist does something that moves you,
part of the experience is being like, damn,
they're sharing what I'm going through.
You know, like, they're sharing the experience,
and it, you know, it makes you feel connected
and, like, seen as a person.
Which is arguably what we want most of the time.
Like, that's really one of the things we're all
after: just to be, just to be seen and understood.

(21:53):
Like, understanding that you're sharing the human
experience with other people.
I think when you guys asked me to come in
and talk about this, I was thinking a little bit
about what the conversation might be like.
Like, yeah, a central thesis is to remind people
that we make and,
and enjoy art, to connect
with the person making it. Like, we don't just passively
consume art the way we consume, you know, rice.

(22:18):
Like, we, we want to know who made it,
and that's mostly what's enjoyable about it.
- No, that makes a lot of sense.
I mean, I personally think food is a form of art.
- It can be, yeah. No, I chose rice as a, I mean...
- I shouldn't have said...
- I'm just messing.
- There's some delicious rice that I have been moved by
how good it is.

(22:38):
- How will writers, like journalists, adapt to the challenges
and benefits of artificial intelligence?
- 2024 is gonna be a pivotal year for the use
of AI in journalism.
The reason is that we've got the technology,
everyone's curious about it,
and there's a lot of enthusiasm for how it might work
and solve journalistic problems.
The reason that I'm skeptical about it is

(22:59):
that AI cannot report independently.
It can't go walk down the street
and see what kind of day it is
or talk to people about their lives.
And so it doesn't replace the fundamental work
of journalism, which is research.
All of us know that good writing or good podcasting
or good broadcasting comes from having really good,
reliable sources.

(23:19):
And AI loves to fib.
It loves to gloss over holes in research.
It's literally designed to do that,
because at heart it's not a content creation tool,
it's a writing program, and it's jazzy.
So we'll see how that plays out.
But each newsroom that I know
of is currently working out those ethical standards in its
own way right now, in real time.
[Good Kid playing "Drifting"]

(23:41):
♪ Sorry, baby, I've been drifting again ♪
♪ Could you repeat the question? ♪
♪ No, I guess I do not know when ♪
♪ It first became a problem ♪
♪ Sorry that it's all such a mess ♪
♪ The words, they seem to leave me ♪
- Goodbye.