Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
I thought a lot about what I would feel about
the world my future kids would grow up in.
Like, my highest order bit is to not destroy the
world with AI. Like, it doesn't matter how good everything
else is if we do that.
Speaker 2 (00:12):
Today, I want to take you to the home of
Sam Altman. He's the CEO of OpenAI. Now, when
I traveled there, I was struck by how green everything was.
There were rolling hills and birds chirping in the background,
and as we were driving in there were cows grazing
by the road. And I paint this picture because I
think it's a really interesting backdrop for a conversation on
(00:35):
the tech revolution. And before I interviewed Sam, who is
a founder that I've interviewed for the last fifteen years,
my team sat down with me. When I told people I was
going to come interview him, there was such
a visceral reaction. Either people were like, that's so amazing, that's
so cool, or man, like, that is so terrifying. Tell
(00:58):
him not to do this, and tell him not to
do that. Everyone cares. Everyone seems to care about artificial intelligence,
and there's a good reason why, and that's because it's
going to impact every single one of us. It's going
to impact your children, it's going to impact your parents,
it's going to impact you, it's going to impact everyone.
And I think the reason for all of that is
because this company that he's the CEO of is now
one of the most important companies in the world, and
(01:20):
the technology that's built at OpenAI will impact every
facet of our lives. And there's a lot of fear
around that. There's a lot of hope, but it's really
this uncomfortable period that we're in, and I think Sam
Altman very much represents what happens when we're going through
a technological revolution and there are a lot of questions
because it feels really high stakes. The now of this
(01:44):
feels really important because I want to go see the
person that I've known for fifteen years and look at
who that person was and how that person is going
to stand in this moment. Sam, how long have we
known each other?
Speaker 1 (02:00):
Almost twenty years.
Speaker 2 (02:01):
We've known each other a really long time. So I'm
excited to sit down with you at this moment because
I sat down with you throughout your career and I've
seen all sorts of iterations of it. I would argue
this is the most important seat you have sat in.
For sure, the stakes feel really high. We are in
this moment where we're wondering if this technological innovation is
(02:22):
going to be incredible for all of us or incredible
for some of us. Do you feel that?
Speaker 1 (02:29):
Yeah? I mean we are clearly entering the phase where
AI is going to be one of the high order
bits of how our future society gets shaped, and I
think people feel all of the excitement, fear, anxiety, hope,
nervousness all at once.
Speaker 2 (02:47):
I'd love to go back to when I met you,
like in twenty ten. This is long before the release
of ChatGPT and AI's integration into the world. I
remember I was just a cub reporter and I was
obsessed with this thing called startups that people weren't really
talking too much about. It wasn't normal to get into
a stranger's car that's Uber, by the way, or sleep
(03:10):
in a stranger's home that's Airbnb, And it was just
this really interesting moment where the iPhone had come out,
the app store had launched, and you could have this
idea and you could code it into the hands of
millions of people. It didn't feel like tech feels today.
It felt anti-establishment, it felt a little bit punk rock,
(03:30):
and so I just want to ground this. And I
met you at that moment, right, I met you on
a random bench to put you on camera for the
first time. You had a geolocation app called Loopt,
it was a moment in time. You didn't have a
massive company, didn't have pr people. It was just like,
(03:52):
you know, you, I assume like a handful of folks
building out this thing that you thought could be the future.
Tell me how you would describe yourself then and that
moment in time before we get to this moment now,
which is just so incredibly different.
Speaker 1 (04:08):
The world had really changed. You could make mobile apps,
a whole new kind of startup potential had unleashed itself.
And it did feel kind of like that. Punk rock is
a nice way of saying it, but, like, chaotic, unorganized, unprofessional,
like most of us didn't really know what we were doing.
And you know, now it's been more systematized, and startups
(04:32):
feel very different and they raise a ton of money
right away and they grow really fast, and there's like
a playbook that's been figured out. But it almost felt
like that was, like, the pre-paradigm stage, and it
felt really fun, kind of felt like we all didn't
know what we were doing, but clearly something was working.
So I look back on that phase with like a
very pleasant nostalgia.
Speaker 2 (04:52):
Yeah.
Speaker 1 (04:52):
Also, the stakes felt so low. That was nice too.
Speaker 2 (04:54):
I think that was something I was thinking about when
I was sitting here thinking about coming to meet you today.
Like, I've viewed you at many points in your career,
and it just feels like, I mean, the stakes are so
high now. Let's go back: the last time
I interviewed you was twenty twenty. You were the CEO of
OpenAI at the time, but it's before you guys
(05:15):
had launched ChatGPT. You said something to me that
I think was really interesting to hear back now that
I want to play for you really quick.
Speaker 1 (05:25):
I may be wrong, but what I actually believe is
we're going to create this technology that is more transformative
than any technology humans have ever created, and almost no
one is paying attention or talking about it. I really
genuinely believe that this is going to change the world
in unrecognizable ways. It's going to make what we talked
about happening earlier with the iPhone look like a warm up.
(05:45):
And we got to talk about that.
Speaker 2 (05:48):
Okay, I think you were right.
Speaker 1 (05:51):
I mean, yeah, it was pretty accurate.
Speaker 2 (05:52):
I think that was pretty accurate. So let's fast forward.
Speaker 1 (05:54):
The thing that's amazing is, like, just how... that's six
years. That was six years ago. That's not very long. Yeah,
and, like, so, at the time, I think
that probably sounded like a crazy thing to say. That
was, like, pre-GPT. I mean maybe we had GPT-3.
It depends when in the year that was. But yeah,
a lot has happened very quickly.
Speaker 2 (06:14):
Yeah, I mean, and I think that's what you talk
about with the rate of acceleration, and which is why
you talk about the stakes. We talk about the stakes
being so high. So like now it's twenty twenty six.
OpenAI is reportedly valued at, like, a trillion dollars. You're
now for-profit, no longer a nonprofit, and you're building
one of the most powerful technologies in human history, and
(06:35):
the company is incredibly valuable. So you talked a little
bit before about where we are and AI you said something.
You said that based off the current trajectory, it's possible
we're only two years away from a world with more
cognitive capacity inside of data
centers than outside of them. I mean, tell me the
(06:56):
implications of that.
Speaker 1 (06:57):
Yeah, it's easy to say, oh, we're going to have
superintelligence here. But
the reason I try to say things like that is
I think if you just say we're gonna have AGI soon,
people are like, Okay, I don't really know what that
means how to think about it. If you say, like,
the AI is going to do more of the total
thinking than people are, that kind of like gets across
(07:20):
the magnitude of what's happening in a way that to me,
at least, I feel differently than just saying like AGI
is coming soon. In what sense? I'm not, like, a
long-term jobs doomer. I think we'll figure out new
things to do. I think we always do. But in
the short term, I think it may. Again, we don't
(07:41):
know it could work in all these different ways. I
think it may, and likely will, be quite disruptive to
a lot of the current jobs if more of the
intellectual horsepower in the world is GPUs. And I think, like,
right now, we've got to start talking about the principles
for how we want to design an economy that works
for everybody, if that's going to happen. Now, there's tremendous
upside too. If you have that world, you can
(08:03):
also imagine curing diseases extremely quickly. The coolest
meeting I had this week was a guy who was
not an expert who used ChatGPT to make a custom
mRNA vaccine for his dog's cancer. Whoa. And it's
just incredible.
Speaker 2 (08:18):
That was crazy. And you have meetings like that every single week?
Speaker 1 (08:22):
For whatever reason, this one hit me in just hearing
him tell the story. He flew in from Australia and
like what he went through for this dog and what
he was able to do, like not just figuring out
how to make the vaccine, but developing the relationships with
the professors that had to like do the work for
him once he told them, like here's the sequence and
here's what to do, and this the way he was
(08:44):
able to use ChatGPT full stack to, like, do what
would have taken a full research institute and save his dog.
And now he's trying to figure out how to do this
for other people's dogs. It's so incredible. So there's, you know, abundance;
you can do all of these things. Assuming we get
the new version of the world right in a way
(09:07):
that this works for everybody and it's not just like
a small number of people have the access and the resources
to do this.
Speaker 2 (09:12):
Well, you know what I also hear, where I'm like, wow,
the guy cured his dog's cancer, that's amazing. And then
I'm like, wait, if he could do that, like, couldn't
someone just build a pathogen? Right? Like, what about building
out viruses? So, as always, you know,
I imagine that's something you guys think a lot
about, especially when it comes to these open
source models.
Speaker 1 (09:31):
Yeah, our thinking around safety has evolved quite a bit,
given the way that ecosystem has developed. We still think
everything we used to think is important. You know, you
still have to build. You have to align these models
to get them to behave in a way you want.
You have to build many levels of safety defense. But
as you said, there's this new thing going on now,
which is the world has to be resilient to lots
(09:57):
of AI from lots of different people, and not
all of it will have the same safety restrictions. So if
someone does try to build a bad pathogen, we should do everything
we can to prevent them from building or releasing it.
But we also need the ability to defend against it,
have rapid response treatments and vaccines, have the ability to
detect pandemics earlier. This is a particular area that I'm
(10:17):
nervous about, and the reason we've started talking about this
whole concept of resilience today, at least in my own head.
The thing that first got me thinking that we needed
to evolve our approach was this pandemic point in particular.
Speaker 2 (10:31):
When you say resilience, like, what does that mean in practice?
Speaker 1 (10:35):
Like, the ability for society to rapidly defend against new threats.
Speaker 2 (10:39):
And how do we do that?
Speaker 1 (10:41):
Well, it depends on what the threat is. But for
this example, it's not enough to just say, like, hey,
three frontier labs in the world, don't let any of
your models be used to develop a new pathogen. In
a few more years, when some open source model is
capable of it and a terrorist group or someone else
does that, what are you doing today, frontier labs that have an
(11:02):
advantage, to help us be ready for that moment?
Speaker 2 (11:05):
Right right, And there can be like a collective effort.
Speaker 1 (11:08):
I think it has to be at different levels too. You
know, I mentioned vaccines, earlier response. I honestly hoped coming out
of COVID the world was going to be more ready
for this. And I won't say we learned nothing, but
we did not make the progress I was hoping for.
Speaker 2 (11:24):
I want to talk about open AI today and what
your biggest priorities are.
Speaker 1 (11:30):
So Codex has really been amazing for us all to
watch and use, and it is the first time that
I feel like I have gotten to use the future
since ChatGPT launched. In what sense? I'll say the positive
sense and the negative one. The positive sense is I
(11:51):
any idea I can come up with, any piece of
software I want, I can have built, you know by
the time I wake up the next morning. And I
use this for fun stuff, for serious stuff. I'm like
very interested in what it means to mostly automate or
heavily augment my job. And, you know, rather than
(12:12):
like asking someone else to build stuff for me, the
fact that I can build these tools myself is amazing.
That's like, Wow, this is what it's going to be
like when people get to use a huge amount of
compute and anything they want they can create, Like this
is the sci fi future. This is awesome. And then
the negative I had with Codex was I had this
list of side projects that I wanted to build and
for the last, you know, few years, OpenAI has
(12:33):
been so busy that things only ever got added to it.
I never did any of them, I never built any
of them. And I finished the list. I ran out
of ideas. I ran out of side project ideas. And
the models are not yet good enough, or at least
I haven't figured out how to use them to help me
come up with more ideas.
Speaker 2 (12:50):
Well, you could use your brain, yeah, but.
Speaker 1 (12:52):
I had like used my brain for this list. And
so then I had this like weird thing one Friday
a few weekends ago where I was like, I'm going
to go to bed tonight without Codex because I don't
have, like, another idea of a thing to build. That
was, like, a strange feeling. But it's really amazing what
is happening with how good Codex has gotten. I mean,
like, we just watched the usage of this expand
and with the next model, I'm sure it'll explode upwards again.
(13:16):
The fact that people can now build such sophisticated things.
It's totally changed our workflow. It's changed the workflow of
many companies. It's really changed what individuals can do and
you know, we've talked about these one-person startups. That
may feel like the punk rock era again; that's going
to be fun to explore. And then on the superintelligence front, historically,
(13:36):
we've talked about two major kinds of priorities that we're
going to work towards. One is the automated researcher, where
we have systems that can do scientific research, including AI research,
on their own. And the other, which touches on this
kind of startup thing, is you know, automated companies, like
(13:57):
not literally fully automated companies, but where you can so
accelerate a company because the AI can do not just
coding work, but huge amounts of what it takes to
run and operate a company. The world can have much
more stuff and we can create you know, way better
products and services, and it'll be accessible to way more
people to do that. But then a thing that we
(14:18):
didn't talk as much about, not because we weren't excited
about it, we just weren't sure how it was going
to manifest itself, was when you take that level of
capability and give it to an individual and just say,
you know, help make my life better. OpenClaw really,
I think, showed us a path there. And, I
(14:40):
hate the phrase super app, but the combined personal agent
that we're building. The dream that I have is I
get to a point where there's like enough robustness and
trust I have in the system where I just say, hey,
go look around my computer, go use the web for me,
go read my messages, and, you know, you can just
sort of, like, listen to my meetings and,
(15:01):
you know, just intermediate my interactions for me, and
just start doing useful stuff for me. I don't have
to think, I don't have to ask you questions. I've
run out of stuff on my side project list, but
you can know everything about my life, start suggesting more
things I should build, and then, like, help me do them.
So I'm really excited for that.
Speaker 2 (15:19):
It's interesting. You know, there is this concept that floats
around of could the next billion dollar company be created
with, like, a solo entrepreneur and AI agents?
Speaker 1 (15:29):
Do I believe that has happened? I promised I would
not share details until he's ready to announce it, but
I believe that has happened.
Speaker 2 (15:36):
Can you give us any details?
Speaker 1 (15:40):
It is a legitimate, single person billion dollar company as
far as I can tell. I've not, like, reviewed the financials,
I mean, but I think it's just happened.
Speaker 2 (15:50):
I think folks listening to that might think, well,
AI can be so unapproachable. But the idea
that we are entering a world where you don't have
to have Silicon Valley contacts, you don't have to
be in this club, to build out a
company and have a valuation. And I think OpenClaw is
(16:10):
a really interesting example. For folks who don't know what
OpenClaw is, it's kind of this, like, AI agent
that just does anything, which of course had lots of
safety issues, I would say. Like, if you're going to
give it access to everything, this is the most Silicon
Valley thing. I just see everybody doing this and I'm like,
what could go wrong? Like, you do have to be
very careful about the security of this. But the
(16:32):
founder ended up joining the company, and so I don't
know if this was, like, an acqui-hire or whatever you
want to call it. But he built this with AI agents,
and I imagine.
Speaker 1 (16:43):
He was like one of the top users of Codex
of all time, so he built this whole thing with
Codex and it was just like unbelievably productive in a
way that no single person could have been.
Speaker 2 (16:52):
Is he one of the early examples, do you think, of
someone who was able to exit, sell, or have
a company of...
Speaker 1 (17:00):
A successful company with one person.
Speaker 2 (17:03):
Does that surprise you at all?
Speaker 3 (17:05):
No?
Speaker 1 (17:05):
And I think it's awesome. I mean the individual empowerment
that happens, as you were just saying, when you don't
have to be part of the Silicon Valley network, you
don't have to know all these people you can hire
and these VCs that you can convince to give you money,
you can just like just have AI build it for you.
I think that's awesome. And I think we're seeing the
same thing now in science and a lot of other
fields too. It's not just startups.
Speaker 2 (17:26):
Yeah, I'd love to talk about, as we're talking about
the company and the focus of it, what are
you doing and what are you not doing? And so
apparently you're not doing Sora anymore. You guys have made
the decision to shut it down. So take me to
the room, Sam. It's, like, three months ago. OpenAI
signs a deal with Disney. This is, like, a landmark
deal in so many ways. Disney's going to license
(17:47):
two hundred characters: Pixar, Marvel, Star Wars. Everybody has an
opinion on this. Bob Iger, who led the deal, he said this,
not me: we're bringing together Disney's iconic stories and characters
with OpenAI's groundbreaking technology. They're going to invest a
billion dollars. Fast forward three months later, OpenAI shuts
down Sora. What's the note under the note here?
Speaker 1 (18:09):
We have a few times in our history realized something
really important is working or about to work so well
that we have to stop a bunch of other projects.
In fact, this was the original thing happening with GPT three.
We had a whole portfolio of bets at the time.
A lot of them were working well. We shut down
many projects that were working well, like robotics, which we mentioned,
(18:31):
so that we could concentrate our compute, our researchers, our
effort into this thing that we said, Okay, there's a
very important thing happening. I did not expect, three or
six months ago, to be at this point we're at
now, where something very big and important is about to
happen again with this next generation of models and the
agents they can power. But I love Sora, I
(18:55):
love generated videos. And I love our partnership with Disney
and we're working hard with them to find a world
where they can still do something amazing and we can
help with that. But we need to concentrate our compute
and our product capacity into these next generation of automated
(19:18):
researchers and companies.
Speaker 2 (19:19):
And the note under the note is it's about compute.
Speaker 1 (19:22):
It's always about compute.
Speaker 2 (19:24):
God to be a fly on the wall for that conversation,
did you call Bob Iger and tell him personally?
Speaker 1 (19:30):
Of course?
Speaker 2 (19:30):
Yeah, how was that?
Speaker 1 (19:32):
Disney is an amazing company all around. And, like,
the very first thing that the new Disney CEO, Josh,
said to me... I felt terrible, and I was, okay,
you know, it's like, I get it. But it's always super
sad to disappoint a partner or users or a
(19:54):
team, all of whom are doing incredible work. And,
I mean, there are, like, many hard parts about being
a CEO that you don't get sympathy for, and I understand that.
But one of them is like you have to make
a lot of very tough resourcing calls and a lot
of good things get caught up in that because they're
(20:14):
not the most important thing.
Speaker 2 (20:28):
I want to go back to February twenty seventh. The
Trump administration had a deal with Anthropic. They're renegotiating the deal.
Anthropic's CEO said he's not going to allow Claude to
be used for autonomous weapons or to surveil American citizens.
There was a big public back and forth. People who
didn't even really know about AI were seeing Anthropic on
the front pages of the paper. They were given a
(20:49):
five pm deadline to comply. They didn't agree. The Trump
administration accused them of being traitors, labeled their tech
a supply chain risk, and ordered federal agencies to stop
using Anthropic. So hours later, OpenAI announces a deal with
the Department of War. Was that a mistake?
Speaker 1 (21:08):
I'll separate what we did and why from the way
that we rolled it out. We had decided not to
work with the government on classified networks. It's not that
it's not important. It's just that Anthropic had gone hard
in that direction, and I think they're, you know, reasonably
safety-minded, and were doing good work there, and we
(21:29):
were busy with other things. I do think it's very
important that our government and our military in particular have
access to advanced AI models. I do not think that
they will be able to carry out their mission for
national security without that. I also think that we talked
about kind of the changes happening to society earlier. One
(21:51):
of the most important questions the world will have to
answer in the next year is: are AI companies or
our governments more powerful? And I think it's very important
that the governments are more powerful. Like, the future of
the world, and the decisions about the most important elements
of national security should be made through a democratically elected process,
(22:15):
and by the people that have been appointed as part of
that process, not me and not the CEO of some
other lab. However, there are areas where the law hasn't
caught up yet. This is a new technology and it's
going to take some time to legislate it, and during
that time period, I think it is reasonable for the
companies to say, hey, we understand something about this technology.
It is not ready yet for autonomous weapons. The rules
(22:36):
about protecting against domestic surveillance were well thought through for
a world without super powerful AI and are not yet
adapted for that. So in areas like that, I think
it's reasonable to say, hey, we just need to go
slower on this, which we did. Like, that was part
of our contract as it's written; that is part of
our safety stack. We share the same principles.
(23:00):
Anthropic is right on those principles, and as I've heard,
they like very nearly got a deal done like ours
with those principles. In my dream world, they would have
just kept working with the government. I don't think it
works for our industry to say, hey, this is the
most powerful technology humanity has ever built. It is going
to be the high order bit in geopolitics. It is
(23:22):
going to be the greatest cyber weapon the world has
ever built. It is going to you know, be the
determinant of future wars and protection. And we are not
giving it to you. We are not going to let
you have it. You know, we're going to make a decision.
We're going to make the decisions here. If I were
the government, or if I were you know, a citizen
voting for representatives of my government, I would say, like, that's
(23:42):
really not okay. And I thought, and I still think,
that had we not gotten involved with the government, things
could have gone fairly off the rails pretty quickly, where
the government was saying, like, you know... they were making
DPA threats. They were making some stronger statements than that.
So we needed to say, hey, we're not going to leave you.
We, the industry, can't leave the US government without something here.
(24:04):
We will help, as long as we can have something that
respects our red lines. I wish we'd announced it very differently.
I think, like my goal was to lower the temperature.
It clearly had the opposite effect. I think I like
learned something about how people feel about the government, how
much people want to tell like a competitive story in
the industry. So I'm sure we'll do better next time.
But on the merits of it, like, the government needs support
from US companies to carry out the mission of national security,
from US companies to carry out the mission and national security,
and we need to answer that call.
Speaker 2 (24:29):
That all sounds good, but isn't it also, the devil's in the details, right?
You know, if we look at how people feel, we're
in a contentious time. You know, there's increased domestic scrutiny
over ICE, the war in Iran. So what do you
say to folks who say, okay, you have your red lines,
but you, Sam, like, what exactly is your red line?
(24:52):
And how will you, knowing that things can go awry,
knowing that these technologies can be used in these ways
that you might not be able to control. What is
your process for knowing when we're too close to the cliff?
Speaker 1 (25:06):
Yeah?
Speaker 2 (25:07):
And also like, can you enforce your red lines?
Speaker 1 (25:10):
Yeah. I mean, one question that people ask is, if
the government nationalized the AI companies, could you enforce any
red lines at all? And I'll be honest, at
that point, probably not. You know, you can
imagine a world of that happening where, no, we couldn't
anymore as an independent company. And certainly in our interactions
with the government, they have been really great about saying, hey,
(25:31):
you know, we understand these issues, we'll do it
contractually, you can build systems there. Now, again, I don't know,
nor does anybody else know. How weird the next few
years are going to get. I hope the governments don't
nationalize the AI labs.
Speaker 2 (25:46):
You don't think that's a possibility, though, do you?
Speaker 1 (25:49):
I won't say it's not a possibility. I don't
think it's...
Speaker 2 (25:53):
Have you had conversations about that?
Speaker 1 (25:54):
I don't think it's likely, but I will
say that if, if we want that not
to happen, then I think we'd better find a
way to work with the government. If we're not doing that,
it would seem more likely. I've said before, and
I still really believe, that in a well-run society,
developing AI would have been a government project. The
(26:16):
Apollo program was a government project.
Even the Eisenhower highway system was a government project. But
it doesn't feel like we're in a time like that anymore.
It doesn't feel like the government could effectively do this,
and I think it does need to happen.
Speaker 2 (26:29):
You know, there's this assumption that federal agencies operate within
the law, but history shows that it can be fluid.
The surveillance programs exposed by Edward Snowden, essentially, were
authorized by secret courts and considered legal at the time,
but they were later ruled unlawful and reformed once they
became public. And I think that's why I go back
to your red lines, because a lot of these things
(26:50):
sound great in theory, but when we look at the
current reality of the world that we're living in, how
do you define these red lines and what happens if
you can't hold to them?
Speaker 1 (27:00):
So, two things there. One, that was also one of
my biggest takeaways: there are a lot of people,
I don't know if it's a representative sample or just
a lot of people online, but there's at
least a group of a lot of people online who
really don't trust the government to follow the law, and
that feels like a very bad sign for our democracy.
I mostly do. I mean, I realize they're not perfect
(27:22):
and some things are going to get screwed up, and
I think we have a system of checks and balances,
but I mostly trust it. As you said, this is
like a time where people are looking at things happening
and saying, I just, you know, I don't feel good about it.
And maybe this is such a uniquely bad time that,
like, a company's patriotic duty is not to support, not to
(27:42):
work with the government. I totally disagree on that. I
think again, given what I see on the horizon, if
we don't help the government with national security and it's
not just wars in the traditional sense, if we don't help
them with, you know, defending the cyber infrastructure of the US,
if we don't help them with the biodefense we were
talking about earlier, I think it's really... I think we
have to work with the government, but the intensity of
(28:05):
the current mood of mistrust, I was miscalibrated on. And
I understand something more there now.
Speaker 2 (28:10):
A federal judge just ordered a preliminary injunction against the
Pentagon's actions on Anthropic, calling the administration's actions what appears
to be a First Amendment retaliation. So I'm curious for
your response on that.
Speaker 1 (28:22):
We said all the way through, publicly, privately, loudly,
that we thought the government doing anything like that, or the threat
of the DPA against Anthropic, was really bad.
We were trying to help provide an off ramp there.
I think that's like a very bad thing.
Speaker 2 (28:38):
What would be your message to the folks at Anthropic
and the government on it?
Speaker 1 (28:42):
Stop this stuff, stop the escalation on both
sides, and find a way to work together.
Speaker 2 (28:46):
I was thinking a lot about our last conversation in
twenty twenty, and I was struck by a story you
told me that, you know, it's kind of
a personal story, but I think it's... (Go for it.)
You talked about how you were one of the only
openly gay students at your high school, growing up in Missouri,
and how you essentially gave a speech in front of
oh yeah, in front of an assembly to talk about it,
(29:10):
to put it out there so other people would start
talking about it, whether they liked you or not you said,
or whether they you know, they felt this way or not.
And I think there is something interesting about this moment
and the anger and the interest, because I
feel like you're under the bright lights again and people
are asking you what you stand for, and honestly, if
(29:33):
I'm being totally honest, as someone who's known you for
fifteen years, I want to know the same thing.
Speaker 4 (29:39):
You know.
Speaker 2 (29:39):
This is a moment where things aren't normal. All these
things that tech folks say, of, like, okay, we can
do this and this, but this playbook is different. Yeah,
this is different. And so what do you stand for?
Speaker 1 (29:50):
I think this technology has to be democratized. I can
see a world where this is either something that is
concentrated from a perspective of wealth, but also power over
the future in the hands of a small number of
companies who want to make the decisions, and no matter
how much they care about all of us, the idea
(30:11):
that we would like trade off our agency and our
collective will over the future to a few companies doesn't
sit right with me. I think we need to empower
people with the technology and have our governments decide how
society is going to evolve, what the economic system looks like,
and people need to have this technology. And I think
(30:32):
one of the most important things that we ever came
up with was this idea of iterative deployment. We're going
to put the technology out in the world early and
often and let people figure out what works for them
and what doesn't and get a feel for it, and
let society decide what should happen. Now, the criticism of
that I'd give is we started doing that a little
over three years ago. Society has not made a lot
in the way of the decisions I was hoping for,
(30:54):
but people at least do have a sense for what's coming.
So that's that theme of democratization is a big one.
Empowerment, I think, goes closely along with that. We
talked about a bunch of examples where people have been
really empowered with this technology. Yes, we should all collectively
decide what the guardrails should be, and we should have
(31:14):
democratic governments have a lot of power, and we should strengthen
that where we can. We should also, and I
think this is like a deeply held American value that
has worked very well. We should really empower individual people,
you know, like we should set some broad rules and
then let people go figure out how to cure cancer
for their dog, or figure out how to make a
one person startup, or figure out how to set up
(31:36):
OpenClaw to, like, automate something amazing in their life.
And this is uncomfortable for people because individuals are going
to have so much power. But I think the democratization
and empowerment pieces are linked importantly. Abundance is something that
I care a lot about. If I look at my
career and the stuff I've been interested in and kind
(31:57):
of what I believe about what it takes to have
a better world, and I could pick only one word,
I pick abundance. I think you want to just create
huge amounts of resources, opportunity, the ability for people to
have like an amazing life and to sort of express
it in all kinds of different ways.
(32:18):
And that's AI, it's energy, it'll be things like robots
in the future. It's certainly making sure that we build
enough compute to make all of this happen, but that's
a place that I really want to drive towards. Safety
has evolved, or been added to. But that's another thing
that I'd name. The trade-off of all of this
(32:39):
individual empowerment is that one person could do a great
deal of harm, and so building up guardrails in the
world to prevent that, where we can feel okay
about this, is critical. I named a bunch of principles.
There's more I'd like to name, you know, like fairly
sharing all of the upside and the voice and the
(33:00):
agency and the decision-making of this technology. That
one seems like one that would be very difficult to
ever change my mind on. Yeah. But other things, like
maybe the trade off between safety and agency. There'll be
periods where we have to say, ah, this technology went
in a different way than we thought. We really love
empowering people, but not at the risk of destroying the
(33:20):
whole world. So we're going to have to do something
differently for a while. So we will wrestle with these
principles all the time, and we, I would say, with
some certainty, will have to make changes as the technology progresses.
And I don't think people like hearing that, but it is
the truth.
Speaker 2 (33:34):
A man named Blasio runs the MC Government Institute. He asked
a question, and I think it's an important question, because
to be a founder now is really different than it
was back when we started talking about this, when
it felt a little more anti-establishment and chaotic. Now
there's kind of like a different playbook. And so he
had a question about how you navigate it.
Speaker 3 (33:55):
Okay, many of us became founders or builders because we
wanted to make something useful. It used to be about a
fanatical focus on the product, building something great and getting
it into people's hands. But the job of a founder
seems to have fundamentally changed over the last few years. Now,
especially in AI, being a founder might mean having to
(34:16):
raise sovereign capital, or stand next to heads of state
at a public meeting, or endorse political statements. It might
mean making decisions that reshape labor markets and affect billions
of lives. They're the decisions that used to be the
domain of governments and required a very different kind of
preparation than building a product. So Sam, as a quintessential founder,
(34:41):
I'm curious, is this what it means to be a
founder now? And if so, who are founders accountable to?
When they're making those kinds of decisions, how are we
preparing them to hold that responsibility?
Speaker 1 (34:53):
I don't think that's what it means for most founders.
I think, you know, if you're one of these one
person or small-team companies building with AI, it
feels the closest to that twenty ten period we were
talking about that it's ever felt since then. My own experience, unfortunately,
does feel more like a politician's. In what sense? I
have to do those things. I have to go, like,
(35:14):
negotiate deals with governments and fly around and you know,
talk to world leaders and work on these deals for
land and power for data center expansion that are
the kind of deals governments used to do. It's
not my natural environment. (Say more.) It was what you
were saying earlier. Like, my whole life has
(35:34):
been around kind of the garage version of founders.
I've worn a lot more suits in the last couple
of years than I had in my whole life put
together before.
Speaker 2 (35:42):
And it also feels like the stakes are much higher.
You're in very different rooms than you used to be.
What have you learned about that? Because has it been an
easy process to kind of lean into them?
Speaker 1 (35:54):
No, no, no, not at all. This sounds obvious, But
the difference in skills and temperament and worldview between the
average politician and the average founder, I kind of knew
they were crazy different. They're more different than I could
have ever imagined. A lesson I feel like I've learned
repeatedly in life, at higher and higher levels of power
(36:15):
and status and whatever, is that everyone thinks, I always
have thought, there's like some adult in the room. You
always get to some final boss. There's someone with a plan,
there's someone who knows exactly what they're doing. And the
leaders of the world are also you know, uncertain and
insecure and trying their best but don't have the answers,
and, like, maybe this, maybe that, kind of going to
make a call while I'm tired either way.
Speaker 2 (36:36):
Tired either way, doing kind of a hard pivot to
something I think we both care about, but I think
is probably like the most high stakes thing I could
think of is being a parent in this era of
(36:58):
AI. And I think I said
this to you before, like, we're both raising young boys,
but in a sense you're raising my son too. Like
the technology that you build will be integrated into every
facet of the life of my son, Charlie.
Speaker 1 (37:14):
I hope you'll let him use it.
Speaker 2 (37:17):
Yet? I am absolutely not letting him use it. When are you letting
your son use it?
Speaker 1 (37:20):
Not for a while.
Speaker 2 (37:21):
Do you have an idea?
Speaker 1 (37:22):
You know, people ask me all the time, like, oh,
now that you have a kid, or are going to have
more kids, like, do you feel more responsibility about how
you don't destroy the world with AI?
Speaker 2 (37:31):
Do you?
Speaker 1 (37:32):
No. Like, my highest order bit is to not destroy
the world with AI. Like, it doesn't matter how good
everything else is if we do that. So I knew
I was going to have kids, and I knew that
I thought a lot about what I would feel about
the world my future kids would grow up in.
And, you know, I used to write my kid
a letter every night. The lawyers eventually convinced me to stop,
(37:54):
but it was a nice thing to do, just
because when I would put him to bed, I would
just sort of get home late, like, for his bedtime,
and I'd, you know, I'd be rocking him, and you
need something to talk to your kid about. So I would
tell him about my day and what was going on.
And you know, obviously a five-month-old, four-month-old
baby's not going to understand these things. So I
was like, you know, I'll write them down. I'll give
them to him later because it was sort of an
interesting time. But an interesting thing about writing those letters
(38:17):
would be, like, you can't really hide anything. Like, you
will be the most honest version of yourself, and of the
decisions you're making, in the things you write to your kid.
So I really loved doing it. It was like my every
Sunday night ritual, about, like, here was the hard stuff
that came this week, and here's what I decided and
why, and, you know, here's what I'm worried about. But
the, like, the mindset of writing that for your kid
to read in, yeah, you know, fifteen or twenty years is
(38:38):
very interesting. But no, on the, like, not destroying
the world thing, I was always very set on that. A
thing that has changed hugely is how I feel about
algorithmic feeds and iPads in small children's hands and stuff
like that. And when I watch kids just a little
bit older than mine, that you cannot take the
(39:00):
iPad away from, watching the sort of, like, short, you
know, like, whatever they're watching, that I feel very strongly about.
So I don't know when I would let him talk
to AI, but I'd rather be on the late end
of what's reasonable, there, not the early end. Of course,
I think it's, like, great, and he'll grow up in
a world where computers are smarter than him and do
(39:20):
anything he wants. But, you know, I want him to, like,
play in the dirt for now.
Speaker 2 (39:25):
Well, I mean, what's interesting is you talk about these
algorithmic feeds. I think a lot about this as a
mom too. Like, oh, these tech guys are just going
to create all this stuff and then send their children
to school with, like, no iPads, right? And so I
think about AI and how powerful
this product is. I mean, it is highly personalized, it's
always on, it's always there. You have people who are
(39:48):
becoming increasingly delusional by talking to AI all day. We've
had issues of AI psychosis, people ending their lives because
of AI companions. I'm curious, Like, you don't want AI
to become the next social media. That's terrifying, but we're
in a world where that could be a possibility without
the correct guardrails.
Speaker 1 (40:07):
Just as we learn more about the relationship people are
going to have with AI, we should make it easy
for them to succeed, which means making it easier for them
to have a healthy relationship, and then give adults a
lot of freedom. But kids, teenagers, I think they will
need a lot of guardrails around AI. This is
such a powerful and such a new thing. You
mentioned how we're going to send our kids
to school. There is a version of the future here
(40:30):
that I'm very excited about. These new schools are popping
up where you have, you know, like maybe a couple
of hours a day of intense personalized one on one
tutoring with AI and then you kind of explore project
based in the school sets up whatever you want to
work on. That seems great. That's the kind of thing
where I'm like, I can totally imagine that, But you
can also imagine a lot of worlds where it goes wrong.
Speaker 2 (40:51):
And in what world do you see AI being a
huge value add to the way our children grow up?
Speaker 1 (40:58):
So I think there's, like, two stories you can tell
about AI and education right now. One is you can
say everybody's using it to cheat, they're not really using
it to learn. And, you know, schools are making their
kids, like, write essays by hand because they have not
been able to figure out how to teach a curriculum
where people can use AI. And the other is you
can say, well, the world has been through transitions like
(41:18):
this before. We've freaked out about calculators, we've freaked out
about Google, we've freaked out about computers. And it takes
a little while. It's only been three years, but we
do figure out how to teach people to think and
achieve at a higher level and to work with the
tools that they will have as an adult in society.
And you know, I definitely meet kids and teachers who
are like, okay, fine. I can't, like,
(41:42):
evaluate an essay as well anymore. And I think
the kids aren't learning to write in the same way,
but probably adults won't write essays in the same way
in the future either. And what really matters is do
you teach the students to think and create and figure out,
you know, stretch their brain and, like,
explore ideas. And then I see what some people are
doing where like students are building entire new complex pieces
(42:03):
of software and worlds, and you know, figuring out very
difficult new ideas, much more difficult to figure out than
anything I had to figure out in school, and I
assume my kids when they finish high school will be
like wildly smarter and more capable and have like developed
their brain much more than I did.
Speaker 2 (42:21):
I can't stop thinking as a mom about this idea
of friction, right, Like this is a weird thing to
say to a tech person, but like humanity is messy,
like vulnerability is weird, intimacy is awkward. It's all a mess,
but it's in that mess that we find purpose and
we build strength. And AI systems we now interact with
(42:44):
are so incredible in so many ways, and they are
a frictionless experience. And so I wonder thinking about my
son growing up, like will there be shortcuts? And if
we're not careful, will there not be the friction for
him to develop into the person that he could be?
Speaker 1 (43:07):
Ollie and I talk about this a lot. (Ollie's
your husband.) Yeah, I would maybe call it adversity, not friction,
but maybe friction is a better word.
Speaker 2 (43:16):
Adversity to one person, friction to the other.
Speaker 1 (43:19):
But like, so much of what I think I learned
was in the messiness, in the difficulty, actually even in
the boredom, Like you and I were kind of the
last generation of kids that grew up bored. Yeah, and
at the time I hated it. Looking back, I think
it was just like super valuable in all of these
(43:40):
strange ways.
Speaker 2 (43:41):
I grew up in the suburbs, hanging out in mall
parking lots and stuff like that. Looking back on it, I'm like,
God help me.
Speaker 1 (43:46):
But we didn't have, you know, phones.
Speaker 2 (43:48):
But there was a certain, not to be nostalgic with
the person leading the tech revolution of AI, but, like,
there was a magic to that. And I also think it
made me who I am.
Speaker 1 (43:58):
Totally. So I don't want to, like, hang on to the
past too much, but I do want to find that,
like figure out how we don't throw out the good
parts of that and what it's going to take to
create an environment for kids to grow up in that
still has some of that. I think, my intuition
is, it's very important.
Speaker 2 (44:18):
I want to play a little sound from doctor Becky.
Do you know who doctor Becky is? Okay, Well, doctor
Becky said something on this. I just interviewed her like
last week, and she had a thought on this.
Speaker 5 (44:28):
If I know one thing about tweens and teens growing
up in an AI age, they're gonna have a lot
of tricky situations, a lot of messy situations. I said
to my kid recently, my fourteen year old, you know,
if you were walking to a town and you wanted
to get there and all of a sudden there's a
shortcut, do you take it? He was like, yeah.
Is this a trick question? I was like, yeah, I
(44:49):
would too.
Speaker 5 (44:50):
Good.
Okay, you're growing up at a time when there is
this shortcut for every academic, emotional thing you
ever go through.
Speaker 5 (45:03):
Like, that is so hard, because this shortcut isn't like the shortcut to the town.
It is a shortcut. But the way it will add
up over time is, and these are things we've talked about,
it's going to be harder to think about things yourself.
It's going to be harder to tolerate people not getting
it right. It's going to, over time, maybe take away from
the things you're trying to build to function in the world.
But it is a shortcut. I'm not going to pretend
it's not.
Speaker 1 (45:25):
And so, what a hard thing. I don't agree with that.
I mean, I agree with the worry, but here's the reason
I don't think that's what's going to happen: if you
were a kid from twenty thirty and you only had
to compete with kids from twenty twenty, yeah, you would
take the shortcut and you would do great, and you
(45:45):
would never really learn to think or struggle, and you
would just hugely outperform them. But the world we live
in is a competitive world, and I don't think that's
going to stop even if you did a lot of redistribution.
You know, people, we have a deep desire to excel
and be competitive and gain status and be useful to others.
And it's a multiplayer game, and all of the other
(46:07):
kids from twenty thirty are gonna have the same tools,
and it will push us, like the tools will raise
what we can do, but also they will raise expectations
even more.
Speaker 2 (46:16):
Okay, I take your multiplayer game and I go back
to I know we don't want to keep going back,
but I go back to my childhood. I didn't have,
I would say, sometimes the happiest childhood. Like, my parents got
divorced when I was, like, I think I was, like,
eleven or something, and I just remember I had this journal,
similar to how you talk about writing to your son.
I had this journal that I would write in. I'll, you know,
(46:38):
paint the picture. I'm, you know, a
Jewish girl at a largely Christian conservative school, growing up,
parents getting divorced, all sorts of stuff happening that I think,
are you know, not unique to the teenage experience. And
I wrote so much of it in my journal. And
I thought a lot about this because I did a
story on Character.AI and a chatbot that was incredibly
unhealthy in an emotional way for a fourteen-year-old
(46:59):
who was interacting with it and didn't have the correct guardrails.
And I think about my journal. What if my journal
talked back? What if it was the therapist I didn't
have at the time? What if it was the guy
that wouldn't look at me in school, right? What if
it was the parental figure? Would that have helped me?
Would that have been a level up in
the multiplayer game that we talk about, or would
(47:22):
that have taken me further away from my reality and
the hardships that I think gave me a certain amount
of empathy that enables me to do what
I do in my career and live the life I have,
And would it have made me more isolated and more lonely.
I don't know the answer, but I worry that it
could have taken away the special sauce of messiness and
(47:45):
adversity, as you and Ollie like to talk about, that
makes me human. And so I just wonder, as someone
who wants to democratize artificial intelligence and thinks a
lot about these things, how do you make sure that
you don't democratize a shortcut that sacrifices the things in
us that are especially human.
Speaker 1 (48:04):
I'm happy not to have to be a teenager again,
but I do remember the difficulty and the pain.
And I also remember just the absolute self-centeredness and
how I thought, like, my struggles in life were
the only thing that mattered in the world, you know,
(48:24):
and also how I kind of
only cared about my peers and whatever thing was happening
to me. And I don't think there's any... well, I
loved video games, and we could spend a lot of
time playing them, but I don't think there's anything on
a computer that could have kept me from caring about
(48:45):
my peers and the world, and, you know, like the
sort of normal teenage experience. I think it would have
helped me if I could write to something that would
talk back, but I don't think it would have been.
That's there's a lot of things I do worry about
for people who are going to grow up. This technology
that one seems like it should be should be pretty
positive to me.
Speaker 2 (49:05):
I think with the correct guardrails, if it's talking back
and, you know, not being so affirmative that
it just goes along. Because think about how intoxicating that is, if
you're a young person or any person and you have
a very powerful technology that's speaking back to you,
that goes where you want it to go, that sees you. Like,
think about this.
Speaker 1 (49:25):
Sycophantic AI models are incredibly dangerous. But AI models that
are kind of, like, thoughtful, caring listeners that will push
back as necessary, I think that could be a great thing.
Speaker 2 (49:40):
But that's in the product, right? That's not something we control
on the outside, right? Like, that's in the product of
how these things are built. So take me into OpenAI,
into these conversations behind the scenes of you know, there
are these dials and you get to push them. I
remember when ChatGPT, when 4o, the model that was,
you know, really affirmative, was discontinued. There were
(50:02):
people online that were almost treating it like it was
a death. That is a powerful signal to someone like you.
Speaker 1 (50:09):
Absolutely, I think that points to not just the danger
of a model that is too affirmative, but also how
I mean, there were these really heartbreaking messages people would
send about stopping 4o, which is, hey, this is the
only thing in my life that has ever been a
positive voice.
Speaker 2 (50:30):
What do you think when you get those messages?
Speaker 1 (50:33):
I mean, it's heartbreaking. Obviously you really feel the weight
of getting this right. Where it's easy to say, hey,
we don't want a model that's too affirmative because it
can, you know, have
negative mental effects on people. But then you hear somebody
else say, like, actually, I never had any confidence. I
had parents who told me I was terrible. I you know,
didn't have any friends in school. And because I've had
(50:55):
this model, sure maybe it's like a little too positive,
but it's given me the confidence and I've gone out
and got a job and found a girlfriend and like
this has been the most important thing in my life
and I'm doing great, but don't take it away. So
it's like.
Speaker 2 (51:08):
It's almost this fine line because there are all of
these stories and it's like, push this far and it
can help you get offline. Push this far and you
begin to lose that capacity for other humans. And I
think that's a really interesting moment. I think this is
probably one of the most important questions that we have
to ask our tech leaders as they build out the
future that our children will grow up in. What have
(51:31):
you decided not to do?
Speaker 1 (51:32):
I mean, the specific thing we decided not to do is,
although we think the models have extreme power to be
a positive force in people's lives, we don't know how
to balance a model that can be pushed too far
in that direction with the concerns of I mean, I
(51:53):
think you said it, like, kind of pushing people into
a psychotic episode. And so we've made a decision, which
is, we know we're keeping something from people. And I'm not saying
we shouldn't have warm models. Of course we should, but
there are a lot of people who would like four
row back, and we've decided, when looking at the full
balance of upsides and risks, we can't quite offer that responsibly.
(52:17):
There are a lot of people who would like us
to have less restrictions on what our models can do
from a bio perspective. Probably more people could, you know, save
their dogs' lives with custom mRNA vaccines if
we made that a little easier and turned up the
power on the model there. That would come with a
big risk of novel pandemics, and we've decided it's not
worth the trade-off now. Eventually, I think these decisions
(52:40):
will be made by society. You know, it's like not
up to the CEO of a company that makes airplanes
what the safety regulations are. But for now we kind
of got to do it.
Speaker 2 (53:02):
We've obviously talked a lot about the fears, but there
is so much excitement about what this technology can do.
OpenAI has a foundation that's focused on humanity's biggest problems.
Where do you see this being good for humans in
a very specific way.
Speaker 1 (53:17):
My most exciting meeting of last week was with a
physicist who's using one of our latest internal systems. He said,
you know, my mind has been completely blown; no
matter what happens, we are going to make decades' worth of
theoretical physics progress in the next couple of years. And
you know, some people will say, okay, who cares, you
(53:38):
can write a better equation. But I am a believer
that scientific progress is one of the most important, maybe
the most important, things we can do to make the world
better and better. Obviously, we can do things like develop
new cures for diseases, but there's so much more: material science,
new ways to produce clean, cheap, and abundant energy,
such that if we are really on the precipice
(54:01):
of being able to use AI to dramatically accelerate science,
then I think we can solve some of the biggest
problems in the world. And we now have one of
the most, maybe the most, well-funded foundations in the world,
and the ability to use that amount of capital with
the technology to go create a bunch of new scientific
(54:22):
understanding for the world. I hope that'll be one of
our greatest contributions ever.
Speaker 2 (54:26):
A big moment happened recently, which is a California jury
found Meta and Google liable in damages for
harmful and addictive platforms. It's the first time that social media
has been treated as a defective product from a design perspective.
I'm curious what the outcome of this case means for
AI companies like OpenAI.
Speaker 1 (54:45):
I mean, I think society is going to decide that
creators of AI products bear a tremendous amount of responsibility
for the products we put out in the world. But
I kind of thought that before this case, and I
haven't reviewed it enough to say if there's, like, a
specific thing that will apply to us.
Speaker 2 (55:01):
Yeah, I think so much of the fear around AI
and who has the power is grounded in, you know,
people worried that their jobs might not exist,
their livelihood might not exist. I'd love to play a
question from a mom who lives in Rhode Island. A
big part of this show, and what I want to
do, is be able to give people like that access
(55:23):
to people like you to ask real questions. So I
want to play that.
Speaker 6 (55:27):
Hi, I'm the mom of two middle schoolers in Rhode Island,
and my kids are kind of coming around to the
idea of having a backup plan for their careers. You know,
like most kids, they have aspirational careers of being professional
sports players and you know, fighter jet pilots. And we've
been chatting about other options for careers and I'm curious
(55:51):
to see what careers you might recommend that would be
AI proof. My husband and I have been talking a
lot about skilled trades. How you know, as a homeowner
it's difficult to find skilled trades people to come and
help you out with electrical problems.
Speaker 2 (56:07):
With heating and cooling options.
Speaker 6 (56:09):
Maybe that's a great path for kids these days, and
more traditional, like, great career paths in engineering might take
a backseat to these skilled trades. Obviously, kids will do
what they, you know, gravitate toward, and we encourage that.
But if you have thoughts on AI proof careers, I'd
(56:31):
be interested to hear them.
Speaker 1 (56:33):
Thanks so much. One thing I will point out, though,
in terms of things that I think are truly AI proof,
is I don't think anyone cares about the AI version
of Laurie Siegel. I don't think anyone wants to watch
an AI come interview people like me. We are obsessed
with other people. I don't think anyone cares. You know,
(56:53):
she talked about how her kids want to be sports stars
and fighter jet pilots. Maybe fighter jet pilots, society is
happy to say, like, yeah, put the AI in there,
that's, you know, safer and better. And okay, the
sports stars: are we really gonna watch, like, eleven robots
play sports against each other? Some people will.
Speaker 2 (57:14):
I mean a lot of that stuff goes viral. I
saw, like, a robot sparring or dancing.
Speaker 1 (57:18):
It has novelty. But, like, are we gonna get caught
up in a sports league of just robots, without the
personal interest? Are we gonna read AI-written novels and
not have an author whose life story we could, like,
understand, and what went into it? I would bet there's a huge
cluster there: people care about people, and in
the future people will care even more about people. But, like,
(57:39):
this is so deep in evolutionary biology that those things
are, like, all quite AI proof. Someone said something to
me recently; at the time it seemed not that profound,
but it's more insightful the more I've thought about it:
in a world with true abundance, there's still
one limited thing, which is human attention. And we care
(58:03):
a lot about human attention. So there's a whole category
of stuff there; on the specifics, I'd go into it
if we had more time. But, like, yes, clearly there
will be a huge amount of short-term
demand for electricians and skilled tradespeople. Eventually, I think
the robots will do some of those things. But, you know,
all of these things are just sort of like periods
of time. If you just look at how much jobs
have shifted over every last decade, that will keep going.
Speaker 2 (58:25):
You talk about human attention, and so much of the
last decade of tech was really a business model
that preyed on human attention. I'm curious how
you think about the lessons of the last decade, because
now you're a for-profit company, and there is this
idea that eyeballs on screens, more people using the product,
(58:45):
go go go, is what makes
the company successful. So how do you not fall into the
same business model trap that, I would say, defined the
last decade?
Speaker 1 (58:55):
We decided to just totally shut down Sora. We were
thinking about other versions of keeping it before the compute
crunch came. Like, we were talking about putting it into
the ChatGPT app, really focusing on generation and creativity.
But one thing that we had realized is that to
succeed with it, as the product was currently conceptualized,
(59:17):
sort of this way where you could watch a lot of
videos, it would have put a series of incentives on
us and would have led to a bunch of decisions
to win that we just didn't want to make. So
there was some version of that, and there still is
some version of letting people generate video, that I think
is awesome. But to compete in short-form video feeds, and
where that's going to go in a world of AI, was
not something I wanted to have
Speaker 2 (59:38):
To go do. Right. It's an interesting decision of what
you don't do, and what that business model could demand.
Speaker 1 (59:45):
And there can be other versions of this, like with chat:
we could make a version of ChatGPT that was
very addicting to talk to. Now, that would have a bunch
of complications too.
Speaker 2 (59:55):
I want to play another voice memo, from a
guy named Joe.
Speaker 1 (01:00:00):
The clips are a fun little touch, I know.
Speaker 2 (01:00:02):
Let's play Joseph.
Speaker 4 (01:00:03):
Hey, my name is Joseph. I'm a recent college grad
trying to work in film in New York City, and
I have a couple of questions for you. Number one,
what do you dream about? Do you ever have recurring
nightmares about AI? Do you often have dreams about AI?
Number two, if you were in my shoes as like
a young person trying to find themselves in a career
(01:00:26):
or just trying to like live in this world, what
do you think is important to read and learn about
right now? And what skills are most useful? And then finally,
what specific social causes do you care about? Thank you
so much.
Speaker 2 (01:00:41):
That's a lot of them, so you can pick a couple,
or pick your favorite one. Thanks, Joseph.
Speaker 1 (01:00:45):
I'll do them all fast. Until my early thirties,
I had this recurring kind of bad dream.
Right when I dropped out of college to start my company,
in that period, I was always, in my dream, still in school,
(01:01:06):
and I was always missing something, and I was always,
like, you know, late for an exam or forgot
about a class. And this went on, like, deep
into OpenAI time, and it was by far the most frequent
dream of my life, and it was so vivid.
It was a little different every time, but it was
like a kind of crazy, most intense dream. So
I don't know what to make of that. What would I
(01:01:28):
study? I think the answer is just: get really
good at using AI tools, more than any specific thing
to study. And then the kinds of skills that I
expect to matter in the future, and I think these
are learnable: resilience, adaptability, kind of like calmness, being
comfortable with a lot of change. Social causes: I am
(01:01:48):
very interested in this question of what the new version
of the social contract looks like, you know, I don't
think it's enough to say, hey, this job transition is
going to be really hard, but there's going to be
a great time on the other side. I think we
can say that, and then we have to say, and
here's our proposal for a new tax system and a
new way to think about collective ownership that makes sense.
Speaker 2 (01:02:05):
What would you propose?
Speaker 1 (01:02:07):
I am interested in ideas where everybody is an owner
in the magic of capitalism. I think the thing
that happened with these new Trump accounts for kids, that
direction of stuff I really like. So, a world where
we kind of preserve as much of what's worked
about capitalism over the last couple of centuries as we can,
(01:02:30):
but we say, hey, everybody, just by virtue of being
a citizen, is going to own some, and we're going
to have, like, a tax system that enables that. I
feel very excited about that. I think as long as
people can afford a good life and the price of admission,
they will outperform expectations.
Speaker 2 (01:02:46):
Being in front of you, and having spent so much
time on this over the last four years: we discovered there was
this site on the internet, right, where a man, who
was anonymous for seven years prior, had posted hyperrealistic, sexually explicit
deepfakes of women using artificial intelligence. It
was horrific, looking at, like, deepfake porn and the
(01:03:08):
way that these impacted women. And so much of that,
I think, you know, comes back to agency and control.
I talked to so many women, and it's not just women,
all sorts of victims of deepfake pornography,
and they felt like they lost their sense of agency.
They had no control. The laws hadn't
caught up, so they went to the police, who said,
(01:03:29):
we don't even know what you're talking about. I had
a woman talk about wanting to end her life, and
so the human impact of this is so incredibly visceral.
And I think about this through the lens of, well,
whose job is this, right? You know? And granted, I
say this with the caveat that OpenAI has gotten in
front of a lot of this stuff; we're not seeing
a lot of it from you. These were open-source models. But I
(01:03:50):
think that you're an important voice to be able to
weigh in on this type of thing.
Speaker 1 (01:03:55):
I mean, that one seems like such a, that's not
a hard decision to make, right? Like, it's so clear
that we're not going to allow our models to be used
for that. Open-source models? Yes, people are going to
use a lot of that. I would kind of go
back to the principles that I outlined earlier. You can
say a lot of bad things about the republic, but
(01:04:15):
I think it has done a good job of representing
the will of the citizens and protecting against the tyranny
of the majority in that process. I think the Constitution
remains one of the most amazing governing documents the world
has ever produced, and I believe in it as a
governance system. So like with other technologies that have changed
(01:04:40):
the world, my hope is that our systems, our governments,
figure out how to run the process to decide how
to make these trade-offs about, say, freedom and protection
and what's in the best interest of society. Now, what I
think you might say, and I would agree with, is
(01:05:02):
maybe that worked really well in a different time. Our
leaders didn't do a great job with the Internet or
with social media, whatever, and that is fair, which is
part of why we've been trying to talk about this
more and earlier.
Speaker 2 (01:05:15):
But if I could push on something specific: so many
of the victims I've talked to finally had recourse
when there were state laws. You know, there has yet to
be a federal law; that takes a long time. I
know OpenAI has looked at and supported curbing
state legislation in the name of innovation and having
to move fast. So I'm curious if you can explain this.
Speaker 1 (01:05:37):
Maybe one we disagree on. I don't think state
legislation will be helpful here. I would much rather see
the system urgently try to get a federal framework right now.
Maybe we try that for a couple of years and
we can't, and we say, hey, it's just taking too long
and, you know, we need something. But I
think we need one stance on this. I mean, really,
(01:06:02):
there's an even trickier thing, which is I think we need a
global stance on a lot of this. A lot of
these things are going to affect the whole world. We've
talked about things like bioterror, but certainly what happens in
one state will affect a lot of other states.
Speaker 2 (01:06:13):
That's in theory. But in reality, sometimes the change happens
quicker at the state level.
Speaker 1 (01:06:18):
I think even if it does, it doesn't mean that
it will have the impact on the industry or the
technology that you would hope for. You know, if we take
an area like social media, where I think we'd both agree
that government didn't really do its job, public pressure did
a lot. Yeah. And the companies, you know, I think,
like most people that work at these companies, probably want
to do the right thing too. And the companies, not
(01:06:38):
every decision I would have made the same way, but
I think they made a lot of good ones. So
it's not like we only have governments to rely on here.
Speaker 2 (01:06:48):
I want to do a quick lightning round. What will
be the most valuable human skill set that's AI proof?
Speaker 1 (01:06:51):
Caring about other people.
Speaker 2 (01:06:55):
OpenAI's biggest opportunity and biggest missed opportunity?
Speaker 1 (01:06:58):
Biggest opportunity in front of us is this sort of
cluster of automating researchers and companies, and the super personal
assistant for people's lives. Biggest missed opportunity is we passed
on a compute deal once that we shouldn't have passed on.
A big one. Like what? I can't say exactly what,
but there was, like, a very large compute deal we
could have taken.
Speaker 2 (01:07:18):
Okay, how likely is OpenAI to acquire an
entertainment company?
Speaker 1 (01:07:22):
Not a top consideration right now.
Speaker 2 (01:07:24):
How likely is OpenAI to go public this year?
Speaker 1 (01:07:26):
Could happen.
Speaker 2 (01:07:27):
No?
Speaker 1 (01:07:27):
Yeah, I don't know.
Speaker 2 (01:07:28):
How likely is it that you could one day be
replaced as CEO by an AI?
Speaker 1 (01:07:34):
On the skills of my job? Very likely. On
whether the world will want a human responsible for
the decisions of a company like OpenAI? I think
the world will demand that.
Speaker 2 (01:07:45):
Do you say please and thank you to ChatGPT?
Speaker 1 (01:07:47):
I do?
Speaker 2 (01:07:49):
Why? Force of habit? Not because you're expecting, if it
all goes bad, you want to be, like, on the
other side? Okay, got it. When did you last speak to
Elon Musk, and what was the conversation?
Speaker 1 (01:07:59):
Elon and I? It was just some emojis back and forth.
Speaker 2 (01:08:03):
Speaking of which, have you seen the fruit AI videos online?
Speaker 1 (01:08:05):
No?
Speaker 2 (01:08:06):
Oh, they represent AI slop at its finest. When will
we see AI wearables go mainstream?
Speaker 1 (01:08:13):
Two to four years. The reason I give such
a broad range is, right now, if you're talking to
someone that's got, like, a forward-facing camera on their
glasses, it's incredibly off-putting. Yeah, maybe that gets more
comfortable quickly, or maybe we just figure out a very
different kind of wearable.
Speaker 2 (01:08:32):
Could you give any sense of how you believe that
will look? Is it like a pin people are wearing?
Speaker 1 (01:08:37):
A pin doesn't feel quite right to me. I mean, I'd
say it more as a general thing, not about wearables:
I hope that the whole concept of products in the
age of AI fades into the background. Like, the ideal
AI assistant I want, there's almost no product at all.
It's an extremely smart model that's got full access to
all the context in my life, that I can just trust.
(01:08:58):
I can say one thing to it and it's going to happen.
It'll be proactive with me only exactly when I want.
And so when you think about, like, software products in
that sense, and also wearables, they could be quite subtle
or quite restrained. Interesting.
Speaker 2 (01:09:11):
The craziest thing coming down the pipeline, when it comes
to innovation, that you don't think we're talking about enough?
Speaker 1 (01:09:15):
Uh, probably automated researchers. Like, we know we're talking
about it some, but if we weighted every conversation
we had in this room today relative to its importance
to the future, we probably would have talked way more
about research, automated research, and way less about everything else.
Speaker 2 (01:09:32):
So you're saying one of the most important conversations we're
not having is about automated research. Like what? Make
the case for me now.
Speaker 1 (01:09:39):
Like, ten years of the whole world's scientific progress in
one year, and then one hundred years of progress in
one year. The transformation that will bring to quality of life,
to the economy, to new risks, like, there's just
no analog for this.
Speaker 2 (01:09:52):
Do you think that's a possibility.
Speaker 1 (01:09:54):
I think it's a possibility. Yeah.
Speaker 2 (01:09:55):
What has to happen for it to happen.
Speaker 1 (01:09:58):
The models need to get smarter, and eventually
we need to figure out how to connect physical labs
to them. Things like that.
Speaker 2 (01:10:04):
I want to play one thing you said, and then
we'll wrap up.
Speaker 1 (01:10:06):
I'm excited for this.
Speaker 2 (01:10:07):
What do you think is the single most important ethical
question we need to ask ourselves when it comes to
the future of, like, us, tech, and humans?
Speaker 1 (01:10:14):
A world where we're going to have computers that can
think like humans, what is the society we want to design?
This question of what do we want the role of
humans to be in the world, and how do we
make sure the world is good for humans in the
broadest sense. I think that is the biggest ethical question
(01:10:36):
of our lifetime. Do you agree? I have nothing
to add or change to that.
Speaker 2 (01:10:40):
How do you think we're doing on answering that question?
Speaker 1 (01:10:43):
Better than, much better than, I feared we would
be doing. You know, obviously not as well as I
hope, but I'm always, like, a little too optimistic.
Speaker 2 (01:10:54):
I had a lot of time with Sam, but I
have to say I'm still left with so many questions.
But maybe that's the point of it. Well, I want
to live in the version of the world that Sam
is hoping to build: one where we all have a shot,
where AI equals abundance, and technology just fades into the
backdrop of our lives, working for us and not against us.
(01:11:16):
I don't think that's the full picture of our current reality,
because over the last few years I've spent quite a bit
of time talking to women who were impacted by deepfakes,
and folks worried about losing their jobs and their livelihoods.
And yet, I actually think there's so much to be
excited about: medical advancements, scientific breakthroughs. We're just in the
first inning of this. But I keep coming back to
(01:11:37):
this idea that for so many years, the divide between
Silicon Valley's optimism and the reality of the world was
its superpower. But now, as we navigate an increasingly fractured world,
my hope is that we can actually find a way
to bridge that gap, that we can ground the promise
of tomorrow's innovation with the realities of what's happening today.
(01:11:58):
I keep going back to Sam's prediction in twenty twenty,
the last time we did an interview. He told me
at that time that AI would be more transformative than
any technology in human history. And he said, no one's
talking about this, no one's paying attention to this. He
was one of the early people to talk about not
just the upside but also the potential risk. Now we're
(01:12:19):
not just talking about it, we're actually living it. And
my hope is that we all get the opportunity to
participate in conversations about the future and how our world
is shaped. The interview with Sam might be over, but
my hope is that this is just the beginning of
the conversation. I'm going to dig into these questions, these themes,
these concerns, the excitement, and I'd love to ask you
(01:12:39):
folks who are listening and watching, how do you feel
about AI? What do you want me to cover? What
thread should I pull on? Who should I talk to?
I will make sure that people like Sam and the
folks who are building the future also hear your thoughts.
Mostly Human is a production of iHeart Podcasts and mostly
human Media. It's produced and edited by Laurie Siegel, Lauren Hanson,
(01:13:03):
and Nicole Bouchet. Sound design and mixing by Derek Clements.
Additional production help from Abouzafar. Special thanks to Mark Weinhaus.
Find us on all socials at mostly human Media. You
can also watch mostly Human on our YouTube page. If
you want to get in touch, email us at hello
at mostlyhuman dot com. And if you like what you hear,
please rate and review the show and share it with
(01:13:23):
your friends. See you next week.