
February 18, 2026 · 35 mins

Cristóbal Valenzuela co-founded Runway to rethink how movies are made, and now his technology is spreading across Hollywood. Cristóbal sits down with Oz to discuss how far AI media tools have come in just the past six years, and why the next leap forward could happen even faster than anyone expects. He also addresses many artists' AI fears by arguing that film has always evolved alongside technological breakthroughs and that AI is simply the next chapter in that long history. And finally, Cristóbal and Oz explore Runway's next frontier after Hollywood, and why video models might be the key to training humanoid robots.

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:15):
Welcome to Tech Stuff. I'm Oz Woloshyn, and this is
the story. A couple of weeks ago, I was at
the Web Summit in Doha, Qatar, and I had the
opportunity to speak with someone who is central to Hollywood's
AI transformation, one of the most controversial and, to some,
frightening topics facing the industry. Cristóbal Valenzuela is the co

(00:37):
founder and CEO of Runway, which is best known for
developing a very powerful AI video generator along with other
tools for film production. They are actually used by professionals
in the industry. Its software has been used in the
Oscar winner Everything Everywhere All at Once, on The Late
Show, by Madonna, and Runway has partnerships with everyone from

(00:59):
Disney, Lionsgate, and AMC to Adobe and Nvidia. At Runway,
Cristóbal is hoping to transform more than just movies.

Speaker 2 (01:08):
We're gonna get to a point where you can generate
anything you want. Literally, you can create media as fast
as you can think of. And when I mean media,
I think most people think about like films and ads
because that's what we imagine media to be. But we
are going deeper than that. These are pixels on a
screen Every screen that you see in the world has pixels,
and they're gonna get to a point where you can

(01:29):
simulate all those pixels, and those are gonna be pixels
that have been coming from our model.

Speaker 1 (01:34):
Cristóbal thinks Runway can improve gaming, education, robotics. In fact,
he's one of the people racing towards a new frontier
of computing called world models. And I've actually met Cristóbal before,
more than six years ago, when Karah and I were
making the Sleepwalkers podcast. It was a very different world

(01:54):
back then, and that's where I want to begin. Take
a listen. Well, Cris, it's great to see you again. Yeah,
I'm not sure how well you remember, you've had a
busy few years, but we met back in twenty eighteen
when I was hosting a podcast called Sleepwalkers, and you
helped my partner Karah clone her voice, and then we

(02:16):
used her voice to see if we could trick her
cousin into thinking it was really her, and we kind
of halfway succeeded. What's changed for you since then?

Speaker 2 (02:24):
I remember. That was a long time ago. A lot
has changed. The world has changed, I've changed, Runway
has changed a lot. I think fundamentally, we've gotten
just way further when it comes to what AI
can do in media. So I guess the fact is, you
guys were early on trying to figure out how you
could use models for like cloning voices or images or videos.

(02:44):
I think the world went from like could it be
possible to like, yeah, it's really possible. It worked really well.

Speaker 1 (02:50):
It's kind of amazing. I remember then, it was like,
I think you needed like ten hours of Karah's voice,
and it was like quite a manual process to make
a digital copy, and the digital copy was good but not
indistinguishable from the real voice. And it's just remarkable to
think that in such a short period, the synthetic media
revolution has just taken off in an extraordinary way.
Did you always believe we would be here?

Speaker 2 (03:12):
Yeah?

Speaker 1 (03:12):
I think we were.

Speaker 2 (03:12):
I always believed we would be here. I think the
only question was like how fast we'd get there. And
to be honest, I think it's been faster than like
our best estimates. Like when we started thinking about this,
the question was would you be able to generate or
create any piece of pixel or imagery or content that
feels right or feels real? And the answer is like absolutely,

(03:37):
Like you can create incredibly compelling media, stories, films, audio,
whatever kind of format you want. And now we're like, ah,
it needs to be faster, you know, it needs to be
like better, and it's like, we're gonna get there, but
it took us five years to get here, and I
think the next five years are going to be condensed
into the next like eight months, you know.

Speaker 1 (03:58):
So we're going to see as much progress in the next
eight months as in the last five years?

Speaker 2 (04:01):
Yeah, I think. I mean, over the last twelve months,
we've seen about the same amount of progress as
we saw over the last four or five years.

Speaker 1 (04:08):
And what will that feel like for the average person? That's
an interesting question.

Speaker 2 (04:12):
So you know, I think I've been thinking a lot
about this, and I've noticed there's like sometimes
two worlds. There's the world of people who are very deep into AI
and the latest models and the things you could do and the
ins and outs, and within that community you're seeing people who
are even like further down the road, you know. And
there's another world, which is people who are completely unaware

(04:35):
of this. Yeah, completely unaware. And I kind of exaggerate,
it's like people who are just discovering like ChatGPT,
you know, and for me, that's creating like an interesting
asymmetry where on the one end, you have these people
living two hundred years into the future now and others
are just like still trying to understand how to like

(04:55):
properly use like a language model. And that, for me,
will continue to exacerbate the asymmetry between those who
are using the latest and are curious about it and
those who are like still trying to adjust. But the
interesting thing here is that if you want to
transition from like the past to the future, there's nothing

(05:16):
preventing you from doing it. Like, it's just curiosity.
It's like a mindset of growth. It's like, do you
want to learn? Like, you can just learn how to
do it. And the most interesting thing, I would say,
that will happen over the next year or so
is that the asymmetry will continue to grow.

Speaker 1 (05:34):
And that's a huge social problem.

Speaker 2 (05:35):
It is. It is, it is, definitely. But it's the
first one where you're not constrained by like your
socioeconomic status or your political position. It's mostly
about, do you want to try this new thing?
And I'll give an example. We work a lot with media companies,
so studios, filmmakers, we work with all of them.
And what are we doing for them? You have this thing,
right, where we're building this

(05:57):
incredibly powerful technology that allows them to do things they
only dreamed of doing at some point, and the way
that you help them understand it is, you need to
show it to them, you need to walk them through it,
you need to kind of understand how they're going to
transition to this totally new way of making things. And
the biggest bottleneck in adoption is not compute. It's not
the capacity of the models, it's not the safety of

(06:19):
the models. It's basically people's apprehensions or kind of expectations
or knowledge of how the technology works. Right, So we
spend a lot of time helping people adjust, helping people
train themselves. Like, if you've made a movie, and the
movie involved fifty different steps with tools that are one

(06:40):
hundred years old, then guess what, you're going to make
the same movie with like four tools, in twenty minutes
and in ten steps, and for you to reorganize
your mind around it, it's just hard. And
that's where we spend a lot of time. We have people
dedicated to just like helping you adjust.

Speaker 1 (06:56):
You raised more than three hundred million dollars earlier this
year at a valuation above three billion dollars. So you're
the only unicorn founder I've known since before you were a
unicorn founder, since you were more or less in
graduate school at NYU. So that must be an interesting,
interesting journey and experience. But was there one moment between when

(07:16):
we first met in twenty eighteen and today where you
suddenly kind of felt the wind at your sails and thought, like,
oh my god, this is really going to work, this
may change the world? Like, what was it?

Speaker 2 (07:26):
I think it's been, it's been a series of, like
I would say, milestones that we've seen in the last
three to four years. We released Gen-1 in twenty
twenty three. So just for like context, Gen-1 was
the first ever publicly, commercially available video model in the world.
It was such a big deal that it was on
the front page of the New York Times back then,
and the quality of it was like bad compared to

(07:49):
today's standards, right. Like, seeing the level of progress within
those two years, there's enough moments where like you see
what people can make that you never thought you could
make yourself, that I'm like, okay, it's working. Like, if
a tool is general enough, you're gonna see things that
you cannot even think about, because you don't
come from that domain, right. So I see musicians using

(08:09):
Runway in ways they couldn't think of before. I see architects.
We saw a lot of architects using Runway in ways I
could never think of before. And so we have all
these very interesting use cases where I'm like, okay, this
is working, it's there.

Speaker 1 (08:22):
There we go. And is the business more of a B
to B business, in terms of the partnerships with Adobe
and Disney and others, or is it more of a
B to C business, where individual artists and creators are
trying to maximize what they can do?

Speaker 2 (08:34):
Yeah, we're working a lot with companies, with enterprises, with businesses.
We sell to almost every studio, all studios. We sell
a lot to brands, to agencies. We sell to
post production teams.

Speaker 1 (08:45):
Madonna, we saw a lot.

Speaker 2 (08:47):
So we work a little with artists themselves as well, yeah,
but we work a lot with the teams behind the artists.
So if you're creating a show like the ones Madonna
puts out in the world, you can imagine there's like hundreds
of people behind the scenes crafting the visuals and the
videos and everything else. And a lot of those teams
use Runway.

Speaker 1 (09:04):
But on the consumer side, I mean, you grew
up in Chile, correct? You wanted to be a filmmaker,
I think, and you didn't have access to all of
the equipment that you wanted. And I have a sense
that for you, the individual user is something which is
important to you. And you've created a film festival, I think, yeah,
partly in service of that, correct?

Speaker 2 (09:23):
Yeah. So one of those barriers I think AI is
breaking down is the definition of like who gets to
make things in the world. Look at Hollywood: like,
a movie takes a couple of million dollars at the minimum.
You have to know a couple of production people, you
have to be within the network and the insider community
of the industry itself. It's really, it's really expensive. And

(09:46):
really powerful general purpose technology like AI allows you to
basically break down that barrier. We've now seen filmmakers in their
houses, like teams of one, two, three people, making films
that five, four, three years ago you would have thought were
made by a team of one hundred or more people
with a budget of a couple million dollars. And that's
just remarkable. Like, a few times in our lifetime we've
just remarkable. Like a few times in our lifetime we've

(10:09):
seen technology advance so rapidly that those barriers are
broken so fast. And so the film
festival that we started four years ago now is a
celebration of those people. It's not about technology, it's not
about the latest and the greatest. It's about the people.
And so we host this festival in New York
and in LA and Paris that showcases the best of

(10:32):
artists experimenting with this new technology. And look, four
years ago in AI is like a lifetime, like one hundred years, right,
things have moved so fast. And so four years
ago it was even before Gen-1. So it was
like a lot of experimental art, a lot of like
people who were just tinkering with this. We got

(10:53):
like a hundred submissions. It's been four years since then.
At the last festival, we got six thousand submissions from
people all over the world. We rented Lincoln Center in
New York City, which is like this beautiful historic venue.
We partnered with like the Tribeca Film Festival, with
the American Cinema Editors. We partner with like so
many industry experts and teams and companies, and now this

(11:16):
year is even bigger; we expect to get tens of
thousands of submissions. And so it's a sign of the times.

Speaker 1 (11:22):
I would say, what are the criteria to submit? Because,
you know, there was an A24 film that
came out about a year ago where Hugh Grant plays
like a demonic character, and there are two Mormon missionaries,
and then he like traps them in the basement. I
can't remember what it's called, but there was a disclaimer
at the end saying no AI was used in the
making of this film. And I was thinking that that
just by definition can't be true, right? Because all of

(11:42):
the tools that people are using to make this film,
in terms of Adobe and stuff, have gen AI in them.
So when you think about a film festival, like, what,
like, surely every film could be eligible?

Speaker 2 (11:52):
Yeah. Look, I think there's a lot of virtue
signaling in the industry as well, where you want to
say something just to like make sure you say
the right things. But like, you're right, you're using AI
whether you like it or not. You're taking out your phone
to take a photo? Guess what, that whole photo is
partially generated. You just can't tell. The sensor, the way
light is adjusted, like the composition, the sharpness, most of

(12:13):
those things are generated.

Speaker 1 (12:14):
For the films that are submitted, do they have to
have a certain amount of AI in their production process,
or what? Can anyone submit a film?

Speaker 2 (12:21):
Anyone, anyone can submit a film. The only requisite
is like, you have to use AI somehow, right? You
have, you have to explain how. I mean, it has to
be like interesting and.

Speaker 1 (12:29):
Creatively, not on the usage of the technology. But it's not,
it's not.

Speaker 2 (12:32):
It's not. So there are judges, and the judges are
experts, and they're filmmakers. We had Gaspar Noé, who's
like a great filmmaker, be one of the judges
last time, last year. We had Harmony Korine, who's like
the director of Kids. Like, so many great filmmakers are
evaluating not the technology; they're really into the story, right?
So they're evaluating, how are you using the technology in the

(12:53):
service of a story? If a story is not good,
it doesn't matter. Like, you can make the
greatest, craziest visuals, but like, if you're not telling
a story, then it's not gonna matter. So at
the end of the day, that's what I'm saying: like,
these are really powerful technologies, and if you know how
to use them, you're gonna get far. I'll give you
an example: the winner of last year's festival. It's
a beautiful film. You should watch it. It's on our website.

Speaker 1 (13:16):
It's a reflection.

Speaker 2 (13:16):
It's like a documentary of sorts about where like AI
is gonna take us. It's kind of a meta
documentary of sorts. It was made by one person during
the course of one year. He was a musician, he
got approached by a production team, he's now making a
feature film. He quit his musician life, and now
he's like fully devoting himself just to film. He has

(13:38):
a team. None of that would have been possible without
AI, without Runway, without this technology first. And
now he's doing what he always dreamed of doing because
of technology. And I found that just amazing.

Speaker 1 (13:50):
On the other hand, there are plenty, plenty, plenty of
artists who will say to you: you're stealing generations of
people's work, you're running us out of business, this is the
greatest enemy to creative people in the whole world. What
do you say to them?

Speaker 2 (14:05):
I say, the first thing is, like, I think you
should, you should deeply understand the tools first.
I think if you haven't yet explored them, you're missing a lot.
Like, you're missing a lot, you're missing a lot of
context and nuance. You're probably like misunderstanding what's happening here.
I've gotten this, like, I've gotten this maybe since we even

(14:25):
met the first time: like, AI is taking creativity away.
I find that trope just so repetitive and like wrong.
Like, it means that people haven't yet experienced what this
technology can do for them, and you're going to figure
it out just by experiencing it. Like, I can, we're
in Doha. I've never been to Doha. I can read
in Doha. I've never been to Doha. I can read

(14:47):
all I want about it. I can like talk to
anyone about it. There's only one way to know what
it means to be there. There's only one way you
could know, which is, like, go there, and you'll know
in the first ten minutes what it means to be
in Doha. This is the same thing. If you
want to criticize AI, if you want to, like, think
about it, if you know how it works and everything,
but you haven't used it, then like, you're missing a
lot in some ways.

Speaker 1 (15:08):
You're also subject to a class action lawsuit. You're part
of the class of AI companies who are being sued
by a graphic illustrator called Sarah Andersen and others. Does
that keep you up at night?

Speaker 2 (15:20):
No. I can't really speak about like ongoing stuff like that.
But I think, again, like, we're trying to engage with
the community to show them how things really work. There's
a lot of misconceptions about what the models actually do.

Speaker 1 (15:31):
So, when people say, look, it takes the whole corpus
of human creativity, learns rules from it, and then applies
those rules, that is a fair explanation? What's unfair is
to say it's stealing individual ideas or segments?
Or how do you characterize it?

Speaker 2 (15:47):
Look, like, think about how these models learn, right?
You need to go to the basics of what they're
actually doing to understand that argument, which is:
models learn from the world. So when you want to study film,
there's one thing you should do, which is go
and watch films. You're gonna watch films and you're going
to understand the language of film. The language of film has certain taxonomies and

(16:08):
like specific ways of cutting things. And there's a pattern
in most of the stories. There's a beginning, a middle,
and an end. There's characters, there's different scenes, right? You
build some sort of a world understanding of this industry
by watching films. An AI model, it's exactly the same.
It watches things and understands the taxonomy of things. Now,
if I ask you to make a film, you're going
if I ask you to make a film, you're going

(16:30):
to take your knowledge of what you've seen, also your
experiences of your life and what you've seen in your
like past, and everything you've seen and everyone you've met,
and you're gonna make something out of it. A model
works pretty much the same way. You show it data,
and the model learns about the data, learns about the
patterns of the data, and uses that to create something. Now,
if I ask you what movie or what part of

(16:50):
your life influenced that particular scene or that particular frame,
you'll be like, it's a mix of things, it's combining
things that I've learned over time, right? The model is
the same. And so arguing that you can, you
know, that you can pinpoint that there was one particular
thing that influenced the model is kind of missing the
point, because there's nothing in particular that influenced the model.

(17:12):
It's the aggregated data of the world that influences the model.

Speaker 1 (17:29):
After the break: why the history of Hollywood is the
history of technology, and why Cristóbal believes that AI fits
right in. Stay with us. Do you think we'll see

(18:02):
people like Matthew McConaughey trademarking their image as a kind
of a new facet of what it is, like?

Speaker 2 (18:09):
I think that's happening already. Yeah, I mean, it makes sense.
I think we sometimes forget, specifically within Hollywood and media
and art, that like the history of Hollywood is the history
of technology; like, the history of filmmaking is the history
of technology. That's because Hollywood was born out of a
technological breakthrough, which is the camera. One hundred and fifty,

(18:29):
one hundred and sixty years ago, the ability for us
to capture light in a photosensitive device wasn't feasible,
and scientists at the time figured out a way of
capturing rays of light on photosensitive materials. That gave
birth to camera obscuras, and that gave birth to cameras,
and that gave birth to moving images, and that gave
birth to moving pictures, and eventually, like, an industry known

(18:52):
as Hollywood was born in the twenties, the nineteen twenties. The
art of film was born out of science. Like, those
two things are intertwined, and every single movie that you
watch today is a technical masterpiece. It uses breakthrough technologies,
like visual effects and CGI and streaming and transmitting,
like, media, and organizing it. Like, you're making something with science.

(19:16):
Film is hugely about technology, and this is an
evolution of the same. Like, there's nothing where we've
defined, where we've set up as a society, that films
should be this and nothing else, because I say so.
It's like, who says so? Like, film was silent in
the twenties, and when audio came around because of a
technological breakthrough, people revolted. Like, people were really against audio movies,

(19:41):
and there we go, I think it was a big
plus to have audio movies. And the same with analog
to digital: like, people were against transitioning to digital
cameras for whatever reason. I think some people might have
strong reasons about it, but I think it's like, digital
cameras are just way more accessible, to
many more people, and many of the greatest films were
born out of that. You can continue on that trend

(20:01):
and you'll realize it's just the same trend and the
same evolution we've always been on.

Speaker 1 (20:05):
You had a vision, and you had a first mover advantage, right?
You've been at this since twenty eighteen, and you mentioned
the twenty twenty three New York Times story about the
first example of, you know, true AI led video generation.
Of course, since then, you know, there are some big
gorillas who have entered the enclosure, so to speak. You know,

(20:25):
they've come in beating their chests: Google and OpenAI.
How do you, how do you compete?

Speaker 2 (20:34):
Yeah. I mean, first of all, I think when we started,
we realized if we're going to be something that's
going to change the world, that's going to be meaningful,
then like these gorillas are going to pop up. Like,
if we're not, then they're not going to be interested enough,
this whole thing is not going to be interesting enough,
and they won't start paying more attention. And I think they are,
which means that, yes, this is like very interesting. It's

(20:55):
a huge opportunity.

Speaker 1 (20:56):
But they have capital, they have engineering talent, and they
both have distribution.

Speaker 2 (21:00):
Sure, yet we're still winning, so we have something, I
guess. We still have better models than them, technically.
So I'm not saying that they're not capable; they're really
strong teams that are like really thinking about the problem,
and I think really good. But there's something around like
speed and taste that it's really hard to get in
a large company. And so what I mean by that
is, I think taste, people think about it as like aesthetics,

(21:21):
you know, like the things you like visually or the
choices that you've made. I think of taste as: within
the spectrum of opportunities and directions that you can take
a product, a roadmap, or a research direction, which one are
you going to choose, and why? And we've chosen things
that I think most people three years, four years ago
wouldn't have chosen, such as, such as investing time in

(21:44):
training video models, or making them real time, right. And
there are things we're working on right now that feel
again like that. And the incentive for a large company
is not to be at the frontier of innovating and
creating, which is a different mindset; it's to just
be on the lookout for the things that are working
and try to like win by copying them, right. So
when you're making a decision, you're always between exploration and exploitation.

(22:09):
So if you have to figure out where you want
to get a coffee tomorrow, you can enter into exploit
or explore mode. Exploit mode means you're going to
go to the coffee shop you always go to, because it's
fast and it's cheap, and you know the prices,
and you're not going to like spend more time figuring
out anything else. You know exactly how to get there.
It's very efficient. Explore mode is, what happens if you

(22:29):
go to the other part of the city you've never
been to and you find a coffee shop that's ten times
better and cheaper, and also you meet like your wife, right? Like,
there's opportunities for things that you're going to get
depending on which choice you want to make, and you
can think about that same exploit, explore mode when it
comes to research. So there's a lot of exploit mode

(22:49):
when it comes to like models today, where people are
trying to make them more efficient, better, and whatever, and
there's a lot of explore mode, which is like, what
are the things you can do that you have not
even thought of before? And I think that has a
cultural element. You have to be disposed as an institution,
as a company, to try to go into explore mode.
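
[Editor's note: the explore/exploit tradeoff Cristóbal describes is a classic framing from decision theory and reinforcement learning. As an illustration only (not Runway's code; all names are invented), his coffee-shop example can be sketched as a minimal epsilon-greedy choice in Python:]

```python
import random

def choose_coffee_shop(known_shops, all_shops, epsilon=0.2, rng=random):
    """Epsilon-greedy choice: usually exploit the best-known option,
    occasionally explore an option we have never tried."""
    if rng.random() < epsilon and len(all_shops) > len(known_shops):
        # Explore: pick a shop we have never rated.
        unknown = [s for s in all_shops if s not in known_shops]
        return rng.choice(unknown)
    # Exploit: go to the shop with the highest known rating.
    return max(known_shops, key=known_shops.get)

known = {"corner_cafe": 4.0, "office_kiosk": 3.2}
everywhere = ["corner_cafe", "office_kiosk", "new_roastery", "far_side_cafe"]
print(choose_coffee_shop(known, everywhere, epsilon=0.0))  # always exploits: corner_cafe
```

[With `epsilon=0.0` the chooser is pure exploit mode; with `epsilon=1.0` it always explores, which is the cultural posture he argues large companies struggle to adopt.]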

Speaker 1 (23:07):
Twenty twenty six has been a year when people have
really started to talk about this idea of world models,
and you have indicated in a couple of different places that,
in a sense, you're building a company around that opportunity. Explain,
explain what it is and what it means to you.

Speaker 2 (23:22):
Sure, yeah. So the first thing you need to understand
is like video models. So how do video models learn?
And this goes back to some of the things we were
just talking about. So video models are these incredibly powerful
systems that can ingest data, learn about the world, and
then make consistent videos out of the learnings of what
they've seen in the world, right? Which is not that

(23:42):
dissimilar to what we do. So if I had a
bottle of water here and I opened the bottle
and dropped some water, you would know
what should happen. You can predict in your head what's
going to happen with the water, right, how it's going
to flow. I'm not sure you can predict, or you can
tell me, the mathematical formula that describes how the water
flows on the table, right? And what we've realized is

(24:03):
that video models basically are doing the same. You can
show them a photo of the water and tell them
to predict what should happen next if the water comes down,
and they're going to do it with such accuracy
that it looks real, so real. Yet the model has
no mathematical, physical knowledge of why that should happen. It
just knows it happens, because it's seen enough things of

(24:23):
the world. And so when you take that idea further,
you start realizing that video models eventually are just really good,
generalizable machines around the world that can create their own
world models. They can understand and create their own predictions
of the world at large. And so if you train
these models bigger and like better, these video models
eventually become world models. They can model the world and

(24:46):
they can understand what actions to take in a world.
But you can also influence those things in the world.
And I think for us, there's a lot of
opportunities for product and growth and creativity that will stem
from models not just like brute forcing reality, but
understanding the world and then making decisions around the world.
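
[Editor's note: to make the "learning physics by watching, not by formula" idea concrete, here is a toy sketch with video frames reduced to symbolic labels. It is purely illustrative and the frame names are invented; real video models learn over pixels with neural networks, not lookup tables.]

```python
from collections import Counter, defaultdict

def train_world_model(sequences):
    """Learn next-frame statistics purely from observed sequences:
    no physics equations, just counting what followed what."""
    transitions = defaultdict(Counter)
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            transitions[prev][nxt] += 1
    return transitions

def predict_next(model, frame):
    """Predict the most frequently observed successor of `frame`."""
    if frame not in model:
        return None  # never seen this situation
    return model[frame].most_common(1)[0][0]

# "Videos" of a tipped bottle, encoded as symbolic frames.
clips = [
    ["bottle_tipped", "water_pouring", "puddle_spreads", "puddle_still"],
    ["bottle_tipped", "water_pouring", "puddle_spreads", "puddle_still"],
]
model = train_world_model(clips)
print(predict_next(model, "water_pouring"))  # → puddle_spreads
```

[The toy model "knows" water pours and spreads only because it has watched it happen, never because it was told the governing equations, which is the point Cristóbal is making about video models at scale.]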

Speaker 1 (25:08):
So today's AI videos tend to kind of lose coherence,
or the laws of the physical world are not always
obeyed in AI videos, right. Is that a fair assessment?

Speaker 2 (25:18):
I think that's like a temporary, I would say,
condition of the current way models are trained. But you
can now train large, long sequence video models that don't
lose consistency. That's possible.

Speaker 1 (25:31):
So these video models are starting to intuit, or to
learn without being taught, the laws of physics.

Speaker 2 (25:37):
Correct.

Speaker 1 (25:38):
Correct. You said AI labs have been very obsessed with
simulating the human mind, but I think it might be
the wrong approach long term. What you want to do
is not simulate how humans work, but how the world works.

Speaker 2 (25:50):
Correct?

Speaker 1 (25:50):
What does that mean?

Speaker 2 (25:52):
So think about language, right? So we're having a conversation.
Language is a human made symbolic system that we've created
on top of reality. Language doesn't describe reality; it describes
a way we can refer to it, in a way, right?
But there are natural observations in the world that might not
have words that we can describe them with. And so one thing
we've done is that we've trained, and I mean the

(26:13):
industry at large has trained, models on the symbolic abstraction
of reality, which is language, right? What we're saying is,
like, you can go deeper than that. You can train
on reality. You don't have to train the model on,
like, what the meaning of things is, or what
the meaning of a particular thing is. You can train
on the substrate of reality, which is just, capture the
world at large. See how physics works, how water moves,

(26:34):
not by telling the model literally, water flows with this
particular flow and density and whatever; you're showing the model video of water.

Speaker 1 (26:41):
Learning visually rather than through words.

Speaker 2 (26:44):
Correct, it's learning through experience. And then what you could
do, technically, and this already works, we're doing it:
say you want to, you want to create a robot,
you want to create a robot that can pick up this
water bottle that I have on the table. The way
you do that today is to show the robot thousands,
if not tens of thousands, of arms picking up the bottle,

(27:07):
and then you train a policy, which is a specific
robotic model that has learned by watching those videos, right.
And in fact, there are, in China,

Speaker 1 (27:16):
I think, a lot of sort of physical locations where
there are hundreds of humans doing actions so that robots
can learn from them. The warehouses.

Speaker 2 (27:22):
They're not kidding about this. There are warehouses with people
with GoPros stuck to their heads, and everything they do
from nine a.m. to five p.m. is fold clothes. So
you go to a table, you stick a camera on
your head, and you start folding clothes, and you fold
it and then you unfold it, and you fold it again.
That's training data for the models to understand how
to fold clothes.

Speaker 1 (27:43):
How does that differ from what you're doing.

Speaker 2 (27:45):
I can synthetically generate all of that, so I don't have to have a human show me how to fold the clothes. I can have an AI model simulate that. I can have a video model create a synthetic version of that video and a million other videos, and we can use that data to train the robot. And here's the interesting thing: you've now cut the time it

(28:07):
takes to train the model, because you don't have to go gather the data. You can just synthetically generate the data. But then once you've created the policy for the robot that can grab the thing, you need to test it in the real world. So the way you test it is you put the policy in the robot and you see how effective it is. You can actually do that testing in the simulation as well. So you're gathering the data, you're

(28:29):
training a policy, the model learns the policy, and then you're testing how effective the policy actually is by deploying it again in the simulation. So it's simulation to simulation.
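(The gather-train-test loop he describes can be sketched as a toy one-dimensional example. The "video model" is stood in for by a seeded random scenario generator and the "trained policy" by a simple proportional controller; both are assumptions for illustration, not how a real world model works.)

```python
import random

# Toy simulation-to-simulation loop: synthetic scenarios stand in for
# gathered data, and the same simulator is reused to test the policy.

def simulate_grasp(bottle_x, policy, steps=10):
    """1-D 'world': the policy must move the gripper onto the bottle."""
    gripper_x = 0.0
    for _ in range(steps):
        gripper_x += policy(bottle_x - gripper_x)
    return abs(gripper_x - bottle_x) < 0.05  # success = close enough

def synthetic_episodes(n, seed=0):
    """Stand-in for a video/world model generating training scenarios."""
    rng = random.Random(seed)
    return [rng.uniform(-1.0, 1.0) for _ in range(n)]

# A 'trained' policy: close half of the remaining error each step.
policy = lambda error: 0.5 * error

# Deploy the policy back into the simulation to measure effectiveness.
episodes = synthetic_episodes(100)
success_rate = sum(simulate_grasp(x, policy) for x in episodes) / len(episodes)
print(f"simulated success rate: {success_rate:.0%}")
```

No physical robot appears anywhere in the loop, which is the cost advantage being claimed: data generation, training, and evaluation all happen inside the simulator.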

Speaker 1 (28:38):
But every policy has to be individually defined by a
human like or is there some extrapolation possible where once
my robot has learned to pick up the bottle of water,
it can also pick up, you know, the microphone.

Speaker 2 (28:51):
That's generalization. So that's a great question. The idea of generality in models is that you want them to have the ability to generalize, right, to take one piece of knowledge and apply it somewhere else. But again, that generalization comes from scale, from training on more data points, and gathering data becomes a bottleneck. That's why people are investing so much in gathering data, or in companies that can help

(29:12):
us create data. Our approach is that you can synthetically generate all of the data. Think about another problem: self-driving cars. If you want cars that can drive in the streets, one of the hardest things to train the models on is collisions, because collisions are very hard to create. I can't just go and stage a collision of a car hitting a person,

(29:32):
because it's technically hard, right? And there's probably not enough data out there that shows me collisions. So wouldn't it be great if you could just simulate a collision and the model could learn what happens in a collision? And that's what we're doing.
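(The rare-event point lends itself to a small sketch: collisions almost never appear in real logs, so a generator, here just a stub, rebalances the training set. The collision rate and counts are invented for illustration.)

```python
import random

# Real driving logs: collisions are vanishingly rare, so a model
# trained only on logs almost never sees one.
def real_driving_logs(n, collision_rate=0.001, seed=1):
    rng = random.Random(seed)
    return ["collision" if rng.random() < collision_rate else "normal"
            for _ in range(n)]

# Stand-in for a world model rendering crash scenarios on demand.
def generate_synthetic_collisions(n):
    return ["collision"] * n

logs = real_driving_logs(10_000)                       # only a handful of collisions
dataset = logs + generate_synthetic_collisions(5_000)  # rebalanced training set
share = dataset.count("collision") / len(dataset)
print(f"collision share after augmentation: {share:.1%}")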

Speaker 1 (29:46):
You said, soon everyone will have access to their own
world simulator. This will be the most important technological development
of our time. That sounds to me bigger than innovation
in Hollywood. Yeah. Absolutely.

Speaker 2 (30:01):
I mean it's a general purpose like technology being able
to simulate how a car moves, how robot works, how
entertainment works, how games are played. We have done experiments
where you can simulate computer interfaces. So when you log
in your computer where you see the screen, I can
generate that screen and it can create synthetically, like on

(30:21):
the fly, real times like interactions for you on that screen.
You're basically creating simulation systems, and simulating is way cheaper
than deploying things in real time in the world.

Speaker 1 (30:32):
And what is the biggest limitation on world model is
it chips? Is it models themselves?

Speaker 2 (30:37):
The most its energy to supply chain is how you
scale the model is to do what they need to
do at scale. It's moving atoms from one place to
the other. I think the technical foundations are there, it's
now how fast and how good can you scale those
foundations to do what they can do? And so that's
where we're focus on right now.

Speaker 1 (30:54):
And you released your first world model at Runway in December. Correct,
what does it do that other world models don't do?

Speaker 2 (31:01):
So we took our latest video model four point five
and we're training in such a way that the model
can understand actions and you can interact with the model,
and the model has a world understanding, right, And so
we've basically released three main kind of core applications of it.
One is robotics, which is the one I was selling
about think about it is physical AI. This is the
first ever world model that has commercially been available that

(31:24):
does this that allows you to take data from our
robotics for example company or av car and create more
synthetic data for that. The bottomneg on AI and physically
AI is data and gathering data in the real world
is very expensive. It's very hard. It's very time consuming.
So unless we figure out something, it's very hard to
scale that. So that's one. The other one is what

(31:47):
we call agent or real time agents. So real time
agents are a layer of our world models that can
function in real time. So what we do is imagine
you're logging into a computer. You click one button and
there's an avatar that comes and that speaks with you,
and it's similar to how we're speaking now. It's a
human or an animated character. It looks incredible real I

(32:09):
responds to you in real time and he knows everything
you're watching and everything you're seeing. So think about I
don't know and I want to learn physics. I'm a student,
I want to learn math, I want to learn whatever
I want to learn. I can now have a personalized
studor just for me. It's one thing that education has
proven way and over and time and time again, is
that like the ratio of students to professor really matters,

(32:30):
and if everyone in the world can have one professor,
everyone's education will be so much better. This is literally
what you can do right now. And the third one,
which is like where we started, is we've deployed the
models in media and so entertainment, how do you create
end to win films or stories that are completely generated?

Speaker 1 (32:49):
And that's like the end to end means a single prompt.

Speaker 2 (32:51):
Correct more than a single prompt. But like ideally, yes,
you want to have a bit more fautomation and how
you make it?

Speaker 1 (32:58):
Because I was going to ask you, how do you
The first two things you describe sound quite far from
the core of what you're famous for, which is helping
in the creation of media. So how do you, I mean,
how do you balance all these different.

Speaker 2 (33:10):
Yeah, so this is where you go back to, Like,
remember the art, the history of art is extra of technology.
So the history of filmmaking was the history of scientists
in the nineteen beginning of the nineteen hundreds experimenting with
this idea of cameras. Right. Eventually the camera became known
as cinema and our new art form was born out

(33:32):
of it. But cameras these days are way more in
our life involved than just like filmmaking. You have cameras
pretty much everywhere. There's a camera in all the satellites
that are in space. There's comras in our cars, there's
commers in our houses, there's comers in our cell phones.
There's covers everywhere in the world. So the view of
the cameras a scientific breakthrough that was only using storytelling

(33:53):
has evolved to become a general purpose machine or a
tool that allows us to do way more. I think
of the work where doing not way in a versimilar way.
Many times describe runways a new kind of camera, and
when I say a camera, runways were a new kind
of camera because we're stimulating reality. The first applicable use
case was art. You went and you use this new camera,

(34:13):
and the same when they used the first camera to
make movies, to make stories, to make art. And then
you start realizing that the camera has way more applications
than just like filmmaking. It has ways of improving scientific progress.
We should totally be there, like we have the models
to allow us to do that. Why not, Chris, thank
you of course, appreciate your.

Speaker 1 (34:32):
Time on test stuff.

Speaker 2 (34:33):
We should do this again on former years to see
how much we progress.

Speaker 1 (34:36):
I don't want to wait that long. Yeah, that's very
so much.

Speaker 2 (34:38):
Thank you.

Speaker 1 (34:56):
That's if it takes stuff this week, I'm as velocian.
This episod SO was produced by Eliza Dennis and Melissa Slaughter.
It was executive produced by Me, Karen Price, Julian Nutter,
and Kate Osborne for Kaleidoscope and Katrina nor velfa iHeart Podcasts.
The engineer is Matt Stillo and Jack Insley mixed this episode.
Kyle Murdoch wrote our theme song. Please do rate, review,

(35:19):
and reach out to us at tech Stuff podcast at
gmail dot com. We love hearing from you.

TechStuff News

Advertise With Us

Follow Us On

Hosts And Creators

Oz Woloshyn

Oz Woloshyn

Karah Preiss

Karah Preiss

Show Links

AboutStoreRSS

Popular Podcasts

Dateline NBC

Dateline NBC

Current and classic episodes, featuring compelling true-crime mysteries, powerful documentaries and in-depth investigations. Follow now to get the latest episodes of Dateline NBC completely free, or subscribe to Dateline Premium for ad-free listening and exclusive bonus content: DatelinePremium.com

Betrayal Season 5

Betrayal Season 5

Saskia Inwood woke up one morning, knowing her life would never be the same. The night before, she learned the unimaginable – that the husband she knew in the light of day was a different person after dark. This season unpacks Saskia’s discovery of her husband’s secret life and her fight to bring him to justice. Along the way, we expose a crime that is just coming to light. This is also a story about the myth of the “perfect victim:” who gets believed, who gets doubted, and why. We follow Saskia as she works to reclaim her body, her voice, and her life. If you would like to reach out to the Betrayal Team, email us at betrayalpod@gmail.com. Follow us on Instagram @betrayalpod and @glasspodcasts. Please join our Substack for additional exclusive content, curated book recommendations, and community discussions. Sign up FREE by clicking this link Beyond Betrayal Substack. Join our community dedicated to truth, resilience, and healing. Your voice matters! Be a part of our Betrayal journey on Substack.

Music, radio and podcasts, all free. Listen online or download the iHeart App.

Connect

© 2026 iHeartMedia, Inc.