Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
Hi everyone, I'm Emily Chang and this is The Circuit.
Mark Zuckerberg has fired his latest salvo in the AI wars,
and he thinks it's a big one. Of course, you
all know him as the face of Facebook, in the
spotlight and in the hot seat. For twenty years, he's
been on the offensive, buying Instagram, WhatsApp, and Oculus, and
the defensive, battling a never ending cascade of criticisms about
(00:24):
the social, political, and business impacts of his expanding empire. Lately, though,
Zuckerberg seems to be having something of a Zuckaissance,
or a Zuck renaissance. He's rocking a different vibe: gold chains,
fuzzy jackets, martial arts on a barge in Lake Tahoe.
Zuckerberg appears to be reinventing himself and his company. Meta's
(00:48):
latest push is a major play to open source AI
in contrast to closed source competitors like OpenAI and Google.
Zuckerberg just unveiled his company's newest class of AI models,
Llama three point one, by the way, that stands for
Large Language Model Meta AI, and he believes this approach
to AI will have a profound impact on the progress
(01:09):
of tech, business, and maybe the world. I took a
trip to Meta headquarters in Menlo Park, California to meet
Zuck and hear about the AI future he says is
the path forward. We also talked about what's next for
social media, what's at stake in the US presidential election,
and his famous side quests. Joining me now, Meta CEO
(01:32):
and founder Mark Zuckerberg.
Speaker 2 (01:35):
Good to see you again.
Speaker 3 (01:36):
Thank you for having us.
Speaker 2 (01:37):
I'm excited to do this.
Speaker 4 (01:38):
I thought about sending my avatar to meet you, but
I decided this was too good to pass up.
Speaker 3 (01:43):
So we're gonna walk slowly.
Speaker 2 (01:45):
Okay, you want to set the pace. I will set that.
Speaker 3 (01:47):
I will try it. Sometimes I walk too.
Speaker 2 (01:49):
Fast, so you will. This is your thing.
Speaker 4 (01:51):
So so in the future, will we be meeting as avatars?
And will that feel totally normal?
Speaker 2 (01:58):
I think probably some of the time. Yeah. I mean,
you know, I think you're going to be.
Speaker 5 (02:03):
You're gonna have meetings where instead of having to use
zoom and look at a screen, you're gonna just have
the people you're talking to as kind of three D avatars,
just on your couch or around the table. And that's
going to be pretty good. I mean we'll also be
able to probably have AI coworkers who will be embodied
as avatars sitting around the table with us too. So
(02:24):
that's something that you're not gonna be able to do physically.
Speaker 2 (02:26):
But I don't know. I like it because I mean
I'm a.
Speaker 5 (02:29):
Big believer in like kind of physical presence and interaction,
and I mean I just think that like the ability
to actually be in a physical space, like around a
table or at a couch and kind of have the
person there, even if they're not physically there is a
lot better than just looking at a screen.
Speaker 4 (02:48):
How much of your day to day is meetings versus
product reviews versus like big strategic thinking?
Speaker 5 (02:53):
It really depends on a day to day basis. I
don't know that there's like a normal day, the ideal
day I get to work on all the long
term stuff. But I mean, in this role, you sometimes
get sucked into stuff that is not.
Speaker 2 (03:06):
What you want to be focused on within a given day. Yeah.
Speaker 5 (03:09):
So yeah, but I view and part of this is like,
all right, how many days am I actually focused on
the things that I want to be building stuff for
the long term?
Speaker 3 (03:18):
How much are you still coding?
Speaker 2 (03:19):
Oh? I don't code for work at all anymore. I
do it for fun.
Speaker 5 (03:23):
Yeah, I mean probably most of the coding that I
do these days is with my daughters, teaching them.
Speaker 2 (03:28):
They love it.
Speaker 5 (03:29):
My daughter Aggie, she basically uses code as a medium
for art and just to create some things. She calls it
code art, just like telling stories and different things by
like making, I don't know, it's just coding in Scratch.
So I spent a bunch of time doing that with
the kids, and that's my coding these days.
Speaker 4 (03:48):
So I found an article in the Harvard Crimson going
back to two thousand and three where you were talking
about open source.
Speaker 3 (03:56):
Okay, like over twenty years.
Speaker 4 (03:57):
Ago, really, so you've been thinking about this a really long time.
Speaker 5 (04:01):
Well, yeah, it's a big part of the tech industry.
I mean you wouldn't have been able to build the
early version of Facebook without that. I mean all the
stuff that we used, just the early versions of MySQL
and PHP and Apache and Linux, I mean all
that stuff. It's like, there's just no way I would
have been able to build the first version without that
whole stack.
Speaker 3 (04:21):
So it's like super critical.
Speaker 2 (04:23):
Oh yeah, yeah, I was a student.
Speaker 5 (04:25):
I didn't have access to a lot of capital I
couldn't have gotten like a really expensive proprietary Unix system
or something. So I mean there's the whole kind of
hacker mentality. You just take the code, use it for
the thing that you need to. It's more cost efficient,
and I mean that's how you can start something like
this in a dorm room.
Speaker 4 (04:44):
So speaking of that, some people see you as an
unlikely champion of open source today. You're laughing.
Speaker 2 (04:50):
Well, I don't know why. I mean, I actually think, well,
I get it, but yeah.
Speaker 3 (04:55):
You understand, you understand the word unlikely.
Speaker 5 (04:58):
Yeah. Well, we at Meta have actually been pretty big proponents
of open source for a while, and it maybe is
a little bit of an accident of history that you know,
we got started after companies like Google and built up infrastructure,
but it was never a proprietary advantage for us because
(05:18):
we grew up after them. So instead of kind of
hoarding it and saying, Okay, this is going to be
something that we keep as an advantage, because it actually
wasn't an advantage relative to Google and other companies.
Speaker 2 (05:28):
It was just we were just catching up, like all right,
let's create.
Speaker 5 (05:30):
An open ecosystem and make it so we can benefit
from the innovation that other people bring and.
Speaker 2 (05:37):
It's been a really good formula for us.
Speaker 5 (05:39):
So dating back to how we design servers, how we
design our data centers, we have this whole open compute
project that kind of standardized a lot of the infrastructure
for the industry. It was really good for us because
now that all these other companies are using the same
stuff as us, the supply chains got more built out
around it, which meant that it was cheaper for us.
So we've saved billions of dollars by kind of having
it out there. And that's going to happen with AI too.
(06:01):
So you can look at our history, there's all these
different projects that we've open sourced, and I think that
the positive experience that we've had is one of the
reasons why we're just a little more willing to put
ourselves out there and open source AI models and generally
believe that we're going to get the positive benefits from
the ecosystem on that too.
Speaker 4 (06:18):
You're really putting a stake in the ground by open
sourcing your AI in this attempt to build the AI
rails for the future. How much of this is a
strategic way to control or own the next technology wave.
Speaker 2 (06:33):
It's a few things.
Speaker 5 (06:35):
One is, if you just think about, like our kind
of psychology and strategy on this, A lot of how
we've grown up over the last ten or fifteen years
was building our apps through phone platforms that our competitors controlled.
And you know, there's all these analyses that we've done
where we would be like a lot more profitable, our
business would be bigger if we hadn't gotten all these
(06:56):
random taxes or rules that the mobile platforms.
Speaker 2 (06:59):
Had put on us.
Speaker 5 (07:00):
But honestly, that's not the big thing that bothered me.
It was how it limited our creativity to build the
best things that we could imagine. It's somewhat soul crushing
to like go build something that you think is going
to be good and then just get told by Apple
that you can't ship it because they want to put
us in a box because they view us as competitive.
So you know, we're a big enough company now that
(07:21):
one of the things that I've resolved is that for
the next generation of technology, I want us to build
and have more control over the next set of platforms
that we're going to build. So I think AI is
a critical one, and I think augmented and virtual reality
is another critical one. But the thing is that
these platforms are not just, it's not such a thing
that you can go build in a lab.
Speaker 2 (07:41):
It's an ecosystem.
Speaker 5 (07:42):
So we can build the best AI model, but over time,
like Linux, the way that it becomes valuable is that
there's going to be a whole ecosystem of companies that
integrate with it and build out all these different capabilities,
And if we were to keep that to ourselves, we
actually wouldn't benefit.
Speaker 2 (07:57):
So it's kind of the way that we.
Speaker 5 (07:59):
Can control our own destiny on this and make sure
that we have access to leading AI is by building
it and having it become an industry standard. So it's
somewhat counterintuitive, and I think a lot of people would
look at that and say, Okay, why don't you just
if you build this thing, why don't you just keep
it for yourselves? But it actually gets stronger by being
able to share it and have the ecosystem around it.
Speaker 4 (08:18):
How much of an opportunity here do you see to
take Apple out.
Speaker 3 (08:22):
Of the middle?
Speaker 5 (08:22):
Interestingly, Apple is not really the biggest player in AI,
or at least in major AI foundation models, so I'm sure
that they'll do great stuff with their Apple Intelligence and
the on device stuff, But I was more using that
as an illustrative example that really kind of shaped me
and I think has shaped the company over the last
ten or fifteen years of one of the struggles that
we've had building apps in their ecosystem. I mean, they're
(08:44):
just not a neutral player in that right. I mean
we're a competitor to them, and we have to deliver
our services through a competitor, and that's a very difficult
situation to be in. So going forward, I'm not sure
that they are going to be the biggest challenge that
we have, but I think the lessons from that. If
AI is going to be as important in the future
as platforms are, then I just don't want to be
in the position where we're accessing AI through I don't
(09:05):
know whether it's Google or you know, whoever the other
company is that also may be eventually down the road
a competitor to us. It's just, I think it's important
if we want to build services that help people connect
in all the ways that we want to do. It's
just a thing where we're a technology company. We need
to be able to build stuff not just at the
app layer, but like all the way down, and it's
worth it to us to make these massive investments to
(09:27):
do that.
Speaker 4 (09:27):
You're continuing to improve Meta AI across all of your products,
but also as a standalone chatbot. Why should we use
Meta AI over ChatGPT?
Speaker 5 (09:36):
Well, there's a bunch of things where it's better, And
I mean one is there are all these tools for
producing content.
Speaker 2 (09:42):
You know.
Speaker 5 (09:42):
One of the things that we're rolling out soon is
the ability to just like imagine stuff.
Speaker 2 (09:46):
You're typing something in real time. I do this with
my daughters all the.
Speaker 5 (09:49):
Time, and as you're typing and entering the query, it's
just generating the images as you enter the keystrokes.
Speaker 2 (09:56):
It's just really cool.
Speaker 5 (09:58):
The other is just that it's kind of integrated
into the experiences that people use. So you can add
Meta AI to your chats with your friends in WhatsApp or
Messenger or Instagram, so it can be there in group threads.
I think that's really neat. But look, at the end
of the day, I think the most important product for
an AI assistant is going to be how smart it is.
(10:19):
And the Llama models that we're building are some of
the most advanced in the world, and I think that
that's why people want to use them that and obviously
it's you know, we try to build our services and
make them free. So a lot of other companies they
take their best models and they charge for them, and
we're going to try to take our best models and
make them free so that way as many people as
possible around the world can use them. And it's early,
(10:41):
but it's basically working. My goal for the Meta AI launch,
which I mean it's only really a few months old
at this point, was to, by the end of the year,
have Meta AI be the most used AI assistant in the world.
Speaker 2 (10:54):
And I think we're basically on track for that.
Speaker 5 (10:55):
I mean, at the end of this year, yeah, and I think
we're going to be there before the end of the year.
But there are already hundreds of millions of people who
are using it. We're not even rolled out in all
the languages yet. We have the big launch coming this
week that's just like much better models across the board,
so for people who are using that AI, it's just going to
get smarter this week automatically. You don't have to pay
for it. It's just going to keep on getting better
and better.
Speaker 4 (11:15):
So you're releasing Llama three point one, this family of
models big and small, including the biggest open source model
ever, four hundred and five billion parameters.
Speaker 3 (11:23):
Yeah, yeah, what does that jump unlock?
Speaker 5 (11:26):
In layman's terms, the bigger the model, the more intelligence
can be encoded in it, but also the more expensive
it is to operate and run. Right, So you don't
always just want to use the biggest model. You want
to use the most sophisticated model for what.
Speaker 2 (11:40):
You're trying to do.
Speaker 5 (11:41):
So if you're trying to run it on a phone
where there's limited compute, you actually want a much smaller model,
maybe like a two billion parameter model or three or
something like that.
Speaker 2 (11:48):
People want to run stuff on.
Speaker 5 (11:50):
Their own laptops, and you can run like a seventy
billion parameter model on a laptop. The four hundred and
five billion parameter model, the really big, very sophisticated
model that we're shipping, is basically competitive with all the
state of the art models. People can run it directly
if they want, But I actually think the main thing
that people are going to do, especially because it's open source,
(12:11):
is use it as a teacher to train smaller models
that they use in different applications. We've actually done this ourselves,
so we have smaller models, which are actually the main ones
that we use in our products because again they're more
cost effective to run. And as soon as we finish
training the four hundred and five billion parameter model, we
used it to now train a better version of the
small models. And I think one of the really powerful
(12:32):
things is that if you just think about like all
the startups out there, or all the enterprises or even
governments that are trying to do different things, they probably
all need to at some level build custom models for
what they're doing. And it's really hard to do that
with closed systems out there, whether that's OpenAI or Gemini,
Google's thing or whatever.
Speaker 2 (12:52):
But with open source.
Speaker 5 (12:53):
That's really easy because you have the weights, so you
can basically use the model to distill and train whatever
size model you want. It gets to a pretty core part
of our philosophy, which is we don't believe there's gonna be
like one AI to rule them all. Our vision is
that there's going to be millions or just billions of
different models out there, and I think that's really what
the Llama three point one, four hundred and five billion is
(13:14):
going to allow. It's just going to be this teacher
that allows so many different organizations to create their own
models rather than having to rely on the kind of
off the shelf ones that the other guys are selling.
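A minimal sketch of the teacher-student distillation idea described above, assuming PyTorch and two Hugging Face-style causal language models; the temperature, loss, and training loop are illustrative assumptions, not Meta's published recipe:

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, temperature=2.0):
        # Soften both distributions, then push the student toward the teacher.
        s = F.log_softmax(student_logits / temperature, dim=-1)
        t = F.softmax(teacher_logits / temperature, dim=-1)
        # KL divergence, scaled by T^2 so gradient magnitudes stay comparable.
        return F.kl_div(s, t, reduction="batchmean") * temperature ** 2

    def train_step(student, teacher, input_ids, optimizer):
        # The big "teacher" (a 405B-class model in his example) is frozen;
        # only the small "student" is updated.
        with torch.no_grad():
            teacher_logits = teacher(input_ids).logits
        student_logits = student(input_ids).logits
        loss = distillation_loss(student_logits, teacher_logits)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

In practice a student is usually also trained on the ordinary next-token loss alongside the distillation term; this sketch isolates the teacher-student part being described.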
Speaker 3 (13:24):
So not one god, but many, Is that the way
to think about it?
Speaker 5 (13:27):
Well, I don't think they're gonna be gods, but I
mean I think there's this weird semi-theological
thing. I think you're referring to this
conversation that I had on Instagram recently, where I do
think that to some degree, if you're like an organization,
you think you're going to create like this one super
intelligence that does have this feel to me of like
people trying to create a god. And that's just I
(13:49):
find that both the wrong way to look at it,
but just also very unappealing. I would rather create a world,
or help people create a world where there can just
be like a lot of different people and services
that you interact with. So, you know, one of the
things that I'm excited about is making it so that,
like you know, there's almost two hundred million creators on
our platforms, they all are trying to build their community.
(14:12):
People want to interact with them, there aren't enough hours
in the day, Like, I want to make it so
that every single one of them can easily train like
an AI version of themselves.
Speaker 2 (14:20):
That they can make it what they want.
Speaker 5 (14:22):
So it's almost like a kind of artistic artifact that
they're putting out there for their community that allows their
community to interact with them, but also gives them control
over how that interaction happens.
Speaker 2 (14:31):
And I think that that's going to be great, and
there's going to.
Speaker 5 (14:33):
Be millions there are, you know, eventually hundreds of millions
of those simple businesses. There's hundreds of millions of small
businesses on our platforms, and I want to make it
so that any small business can just very easily pull
in all their information from social media and their business
catalog and get an agent that can help their customers
with customer support and can help them with sales and
(14:54):
can recommend new products. And I think that just like
today every business has a website and an email address
and social media accounts, I think in the future every
business is going to have an AI business agent too,
and I think that's going to be great. So I
don't think that there's just like one AI use case,
which is what we're trying to do. I think what
we're trying to help enable the whole community to do
is create all these different AIs for all these things
(15:16):
that people want to do. And that's kind of how
I think this ends up being a good thing for
the world.
Speaker 4 (15:20):
Some people don't see your AI as so open. So how
open are we talking?
Speaker 2 (15:25):
Really?
Speaker 4 (15:25):
Will you show us how the model was built or
what data you've trained it on?
Speaker 5 (15:29):
The model has open weights, so you can take the
weights and you can modify them, and you can see
how it's built, and you can see the architecture and
all that. I'm not aware of anyone else doing more
open work than this. The data question, I think is
a kind of sensitive and important one. I mean, there
is this question in science where people want to be
able to reproduce science experiments, and I think some people
would like to be able to do that here. The
(15:50):
reality is, you know, it also costs, for the Llama
three model, at least hundreds of millions of dollars of compute,
and going forward it's going to be billions and many
billions of dollars of compute. So is it reasonable, or
is anyone actually going to be able to reproduce it?
Speaker 2 (16:02):
I'm not sure.
Speaker 5 (16:03):
I don't think that that's like as important of a
thing to push on. And for data, I think that
even though it's open, we are designing this also
for ourselves and we want to make sure that this
works well. And we do work with different organizations to
license data in different ways, and some.
Speaker 2 (16:19):
Of those sources are proprietary.
Speaker 5 (16:21):
Even if the data is public, you know, you don't
necessarily have the kind of right out of the gate
to be able to train on it. You sometimes have
to go make deals, and so we can't just go
ship all the data even if it's public data that
we've used to train on. And sometimes it's not even
clear that it's the right thing to disclose or tell
(16:41):
people what data we've trained on. So yeah, I mean
I get the point on that. You know that you
can always take what you're doing and make it more
open in an even more extreme way. But look, I mean,
I think what we're doing is state of the art
in kind of open source AI models. I think
for people who want to use this to build stuff,
I think that they're pretty happy with.
Speaker 2 (16:59):
What we do.
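For what "open weights" means in practice: anyone can download the checkpoint and run, inspect, or fine-tune it locally. A minimal sketch assuming the Hugging Face transformers library; the repository ID is an assumption about where a Llama 3.1 checkpoint is published, and access is gated behind accepting Meta's license:

    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Assumed repo ID for an 8B-class Llama 3.1 checkpoint (license-gated).
    model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    # Having the weights locally is what lets you modify or distill them,
    # which is the openness being discussed here.
    inputs = tokenizer("Open source AI matters because", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=40)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))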
Speaker 4 (17:00):
You are using Facebook and Instagram data, right, as
public data, as I understand it, and
does that give you an advantage versus other models,
and should those users get something in return for their
data being used.
Speaker 5 (17:14):
I actually think I'm not sure how much it's an
advantage because a lot of the public data on those
services we allow to be indexed in search engines. So
I think Google and others actually have the ability to
use a lot of.
Speaker 2 (17:24):
That data too. In some way.
Speaker 5 (17:26):
If we didn't use the data, we might actually be
at a disadvantage because if it were in like Google Search,
and we somehow couldn't use it because we were saying, Okay,
we're not going to use data from our services even
if it's public, but Google can use it because it's
not from their services.
Speaker 2 (17:39):
It like kind of doesn't make sense.
Speaker 5 (17:41):
But I do think that overall, there's code and technology
that you develop, there's compute to train the models on,
and there's what your data mix is, and those are
probably some of the biggest pieces that contribute to the
end quality of the model. So we try to innovate
on all of those, and we want to build the
best thing because it's the foundation that we use for
our products. But we also want to build the best
(18:02):
thing because we want it to become the open source
standard that then other people adopt and then it makes
it even better, which helps us serve the people that
we're trying to build for as well.
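For a sense of scale on the compute pillar mentioned here, a back-of-envelope estimate using the common 6 × parameters × tokens approximation for training FLOPs; every number below (token count, sustained throughput, price per GPU-hour) is an illustrative assumption, not a Meta figure:

    # Rough training-compute estimate for a 405B-parameter model.
    params = 405e9                              # model parameters
    tokens = 15e12                              # assumed training tokens
    train_flops = 6 * params * tokens           # ~3.6e25 FLOPs

    gpu_flops = 4e14                            # assumed sustained FLOP/s per GPU
    gpu_hours = train_flops / gpu_flops / 3600  # ~2.5e7 GPU-hours
    cost_usd = gpu_hours * 2.0                  # assumed $2 per GPU-hour

    print(f"{train_flops:.2e} FLOPs, {gpu_hours:.2e} GPU-hours, ~${cost_usd / 1e6:.0f}M")

Even with these generous assumptions the raw GPU time lands in the tens of millions of dollars, and real runs carry large overheads, which is consistent with the "hundreds of millions of dollars of compute" cited earlier in the conversation.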
Speaker 4 (18:11):
So let's get into the broader strategy and how this
is all going to work, Like, will we have AI
generated influencers with AI generated captions and avatars talking to avatars?
Speaker 2 (18:22):
Yeah, I think we'll have all of it.
Speaker 4 (18:23):
What? And like, how do you want to create the
first AI generated social network?
Speaker 5 (18:30):
Well, I think that that will be part of this,
but it's not going to be the only thing. I mean,
people come to our services because they want to
connect with people. But you know, actually, one of the
most interesting use cases that people have for Meta AI today,
it's like in the top four use cases is role
playing difficult social interactions that they're going to have.
Speaker 2 (18:46):
So whether it's in a.
Speaker 5 (18:47):
Professional context like okay, you want to ask your manager
for a raise, or like I'm having this hard conversation
with my girlfriend or my friend.
Speaker 3 (18:54):
Could have role played this conversation with you today.
Speaker 5 (18:57):
I mean, like, hopefully it's not stressful for you.
But yeah, I mean I think that that's like
a clear social tool where there's no judgment. Right,
the AI isn't sitting there, there's no like social repercussion
for what you ask it.
Speaker 4 (19:12):
But what evidence do you have that people want to
live in this virtual world and socialize with avatars or
that it's actually good for us?
Speaker 5 (19:19):
Well, I think that people want to connect with each other.
I actually think like all the other stuff is generally noise,
but all the technology allows you to do that in
a better and better way. Right when I got started
with Facebook, it was mostly text, and then
there's been this whole kind of like technological evolution where
we got smartphones so that we could primarily be taking photos,
(19:40):
and then the mobile networks got good, so then you
could be kind of sharing video, and video is a lot
richer, and consuming video is a better experience. So I just
don't think that's the end of the line. I think
it gets more immersive. And that's one of the reasons
why I'm so convinced that AR glasses, by the time
you just get to like normal stylish glasses where you
can have a sense of presence.
Speaker 2 (19:58):
I mean, I think within.
Speaker 5 (20:00):
Five years we're going to be at a point where
we could be having this conversation and like I could
just be like a full kind of three D avatar here,
and it would feel like a much more realistic sense
of presence than just talking to each other over a
Zoom screen or.
Speaker 2 (20:13):
Something like that. That's what it's about, though.
Speaker 5 (20:15):
I think people want to connect, and we're basically building
all the different technology that allows people to do that.
Speaker 4 (20:22):
You renamed your company Meta. You're still pouring billions of
dollars into the metaverse.
Speaker 3 (20:26):
Are we as far along as you thought we'd be
a few years down the line? Are there any lessons
in the urgency of the pivot?
Speaker 2 (20:34):
Well?
Speaker 5 (20:35):
A lot of the reason why I did that is
because I think we were getting pigeonholed as just this
like social media app company, right, And that was like
so ingrained, right.
Speaker 2 (20:45):
So we were called Facebook.
Speaker 5 (20:46):
Facebook is one of our apps, right, and the others
are just as important now. And I've never thought about
the company as a social media app company. I mean,
we didn't start as that. We weren't, you know,
there weren't apps when we got started. We were a
technology company that exists to help people connect and kind
to build the future of human connection. And I thought
that the rebrand around meta both was healthy because.
Speaker 2 (21:11):
We just have more products at this point than just
Facebook. And two.
Speaker 5 (21:14):
It kind of re anchored people's thinking about the company
in terms of, oh, this is a company that's developing
kind of longer term technology around people connecting, which is
really how I think about what we should be doing.
And when you're trying to push something in a direction,
it's helpful for people both inside the company and outside
to think about it in terms of what you're actually
trying to do. So I've been very happy with how
that's gone. The metaverse thing was always going to be
(21:36):
a very long term thing. I think some things have
gone better than I thought. Some have gone slower. The glasses,
I think is probably the best example of something that
is going better. I would have thought a few years
ago that AR glasses wouldn't really be a mainstream thing
until we got kind of these full holographic displays, like
I think the type of stuff that you've been able
(21:57):
to see. It's been one of the most positive surprises
that the collaboration that we have with Ray-Ban, it's
going extremely well, and I think part of it is
they're stylish, they're good glasses. Part of it is that
it's a great form factor for AI. We didn't know
that AI was going to be a thing when we
started working on that project, or I mean, we thought
it was going to be a thing like ten years
from now. But if you asked me five years ago,
(22:18):
I would have guessed that AR would come before AI,
not the other direction. So some of this is just
about kind of setting yourselves up to ride the different
waves when they come in.
Speaker 4 (22:26):
I've gotten to try out a bunch of your future
facing technology here, with Quest, with the Ray-Ban Metas,
with Orion. How big an opportunity do you see
to own your own operating system? And down the line,
do you see this reshuffling the power relationships in
the tech industry?
Speaker 5 (22:42):
Now, I don't always think about things primarily from a business perspective.
I mean, I like building stuff. What I've found is
that if you build good things, then eventually you're
able to build successful products and businesses around it, even
if it takes a while. And one of my lessons
from the last ten years is just building apps that
we ship through our competitors' mobile platforms. I just think
(23:05):
we're going to be freed up to be more creative
and build better stuff if we can control more of
the core technology. So I mean, look, we're not going
to be the only kind of operating system in augmented
and virtual reality. There will be others. We're not going
to be the only AI system. There will be others.
But I don't necessarily think about it as trying to
shuffle the industry. I just want to be able to
(23:25):
build good things, and I think that like not being
dependent on competitors that are trying to put you in
a box, is an important part of doing that.
Speaker 2 (23:33):
But it's also a luxury, right.
Speaker 5 (23:34):
It's like when we were
a startup, you don't necessarily have the resources to
go invest tens of billions of dollars in building out
these technologies. But you know, now we're here and we
have a successful business and we can do this innovation.
And it's just one of the things that I want
to do. I don't want us to just like rest
on our laurels and maximize our profits in the near term.
I want to pour all the profits and success that
(23:56):
we have into building the next generation of things, to
do it in an open way. I think one analogy
that people misread from history is there have always been
multiple operating systems. Right, with PCs, it was Windows and Apple,
and Windows was the open one back then, and it won,
right So I think that a lot of people now
(24:17):
have this massive recency bias and they think you just
have the iPhone Android analogy in your head and you're like, oh, like,
well Apple won that. It's like, yeah, they won this
generation of computing. But it's not always that way. For PCs,
the open one was actually the primary platform. And part
of my goal is to make it so that for
the next generation of platforms, whether it's AR glasses or
(24:38):
mixed reality headsets or AI systems, it's not just that
there should be an open platform. I think the open
platform can actually be the best one, and I think
that that's kind of a cool thing for the industry.
Speaker 4 (24:50):
Sequoia calls AI the six hundred billion dollar question. There's
all this investment in chips and the infrastructure and the data centers,
but when does it start paying off?
Speaker 3 (24:58):
Like is it a bubble? And well, if not, like,
when do you start seeing the money?
Speaker 5 (25:04):
I think bubbles are interesting because a lot of the
bubbles ended up being things that were very valuable over time,
and it's just more of a question of timing, like
you're asking, right. Even the dot com bubble, it's like
there's all this fiber laid and it ended up being
super valuable, but it just wasn't as valuable as quickly
as people thought.
Speaker 2 (25:20):
So is that going to happen here? I don't know.
Speaker 5 (25:22):
I mean, it's hard to predict what's going to happen
in the next few years. I think AI is going
to be very fundamental. If the products were able to
grow massively over the next few years, which I think
there's a very good chance of, then I'd much rather
overinvest and play for that outcome than just try
to say, Okay, maybe we'll save some money by developing
it a little more slowly. I think that there's a
(25:44):
meaningful chance that a lot of the companies are overbuilding
now and that you look back and you're like, oh,
we maybe all spent some number of billions of dollars
more than we had to. But on the flip side,
I actually think all the companies that are investing are
making a rational decision, because the downside of being behind is
that you're out of position for the most important technology
for the next ten to fifteen years, whereas if you overinvest,
(26:07):
then you're probably just losing some amount of money that's
for these companies generally an affordable amount of money that
they can lose for something that's just a really important prize.
Speaker 4 (26:17):
So with this economy, does the year of efficiency continue
into the AI era? Are we talking years of efficiency
or has the belt loosened?
Speaker 5 (26:26):
Well, I'm really glad that we did all the efficiency
push because I think that that basically created enough capital
for us to go invest in massive amounts of infrastructure.
And I think going forward, I would guess that most
of the investment that we make is going to be
building out AI compute rather than massively growing the number of.
Speaker 2 (26:45):
People at the company.
Speaker 5 (26:46):
We are growing, We're going to hire more people, but
I think at this point the biggest part of the
new investment that we're making is in building these kind
of giant AI superclusters to train the future AIs.
Speaker 2 (26:59):
So it was.
Speaker 5 (27:00):
An interesting thing for like the first almost twenty years
of the company that it was just like growing quickly
in people every year, and I think it's pretty healthy
to focus on efficiency around that. But yeah, I'm definitely
glad that we kind of gave ourselves the flexibility.
Speaker 2 (27:13):
To build this.
Speaker 4 (27:14):
You said your goal is getting to artificial general intelligence
or AGI. How do you define AGI and do you
get there first?
Speaker 2 (27:22):
That's a good question.
Speaker 5 (27:23):
I'm not sure we can answer the second question about
who gets there first, but actually maybe we'll start there.
I do think open source is gaining ground pretty quickly. If
you look at Llama two last year, we were like
a whole generation behind the frontier, and for Llama three,
we're basically competitive with the state of the art models,
Llama three and the Llama three
Speaker 2 (27:42):
Point one release that we're putting out now.
Speaker 5 (27:44):
We're basically already starting to work on Llama four, and
our goal is to completely close the gap with all
the others on that. So I don't know, I mean,
do we get to AGI first? I mean, I think
that there will probably be some breakthroughs between now and then.
It's hard to just predict in a straight line, but
we're certainly putting together a world class effort and we're
focused on it.
Speaker 2 (28:01):
Then you get to the more complicated question, which is
like what is it.
Speaker 5 (28:04):
I don't know that there's one specific definition for this
because I think intelligence is multivariate, right, It's not like
there's one number that is your intelligence. I think of
AGI as intelligence that has different kinds of capabilities, right. So
the first set of models could reason over text, and
then you added in the ability to do photos, and
(28:26):
now you're adding in the ability to do videos, to
understand videos and produce videos, and then you're making
it work well with audio. I think the ability to
reason and be able to produce three D worlds and
three D content is going to be really important. I
care a lot, and our company cares a lot about
people interacting with each other. So like when you think
about the human brain, there's like whole parts of it
(28:47):
that are just focused on basically like reading people's expressions.
It's like if you move your eyebrow a millimeter, it
means something. It's like I could pick that up, whereas
if something moves over there in the corner a millimeter,
I'm not going to notice it. There's probably a specific
aspect of intelligence or modality which is like reading people's
faces and emotions, and that's something that I care about,
so I think we'll probably try to build that in
at some point. So I don't know if it's just
(29:09):
like increasing the amount of knowledge or some like IQ score.
It's kind of layering in all these different things, which
is why it's a little bit hard to say who
gets there first, because I actually think over time, different
companies might optimize for different things.
Speaker 4 (29:23):
I know you've always been fascinated by China and you
learned to speak Mandarin. What do you know about
where China is on AI and AGI?
Speaker 5 (29:33):
I don't personally know a ton. I know on open
source their leading models. You know, Llama three point one
is well ahead of that. The kind of leading Chinese
model before was right at the level of the previous
seventy billion parameter model. But there's been no Chinese version
that's anywhere near the frontier, like a four hundred billion
(29:53):
parameter model, and the Llama three point one seventy billion
is ahead of where they are.
Speaker 2 (29:57):
So I feel good about that. I don't know.
Speaker 5 (30:00):
I think that's going to be one of the interesting
questions over time. You know, geopolitically, when you think about
the rivalry, there's this question which is how should the
US approach kind of AI competition with China. And there's
one strain of thought which is like, okay, well, we
need to like lock it all down and I just
happen to think that that's really wrong because the US
thrives on open and decentralized innovation. I mean, that's the
(30:23):
way our economy works. That's like how we build awesome stuff.
So I think that locking everything down would hamstring us
and make us more likely to not be the leaders.
And the other question is if we do lead, what's
the chance that we're actually going to prevent them from
being able to steal it anyway.
Speaker 2 (30:39):
I mean, it fits on like a thumb.
Speaker 5 (30:41):
Drive or so it's I just think that that's not
a realistic way to approach it. I personally think that
the right configuration of this is open source.
Speaker 2 (30:50):
Which I think is going to be the leading ecosystem.
Speaker 5 (30:52):
It will be very robust, it will eventually be available
to everyone in the world, including China. But I think
the leading companies should work with the US government and
make sure that our national defense and things like that
have sort of a perpetual first mover advantage on the
leading technology in the world.
Speaker 4 (31:08):
What do you say to the skeptics who think, you know,
this could be exploited by our rivals, criminals could get
their hands on it, Like, how do you think about
those risks.
Speaker 5 (31:18):
Well, I think that there are different kinds of risks,
so you have to take them one by one. The
risk of China or a very large sophisticated state that
has a lot of resources, I think is different from
like an individual criminal who might just try to do
something bad. I think to neutralize a criminal, I mean, look,
you want to do a lot of testing up front.
We do all this work before we release every single
(31:40):
one of the models to make sure that we understand
what the risks are and that we mitigate them as
much as possible.
Speaker 2 (31:45):
At the end of the day.
Speaker 5 (31:46):
The way that we've approached this with social media, and
the way that I think society approaches this overall is
you have kind of better AIS or more resources to
go fight people who have less resources. So there are
all these sophisticated folks who try to do bad stuff
on the social networks, but we invest like billions of
dollars a year and have these various sophisticated systems.
Speaker 2 (32:05):
So I do think that will play out with AI.
Speaker 5 (32:07):
Too, which is that more sophisticated AIS with more compute
will be able to generally check less sophisticated folks, whether
they're just kind of everyday criminals or whatever trying to
use something but they have less compute.
Speaker 3 (32:19):
So we win the AI wars this way.
Speaker 5 (32:21):
Well, I think we're checking less sophisticated actors with less
compute that will work most of the time. Now that
doesn't mean it automatically works for everything. I think a
lot of attention needs to be paid to it. So
it's a thing that we need to focus on. But
I do think that that's how society has been stable
for hundreds of years, is you have larger forces, whether
it's police or armies or whatever, checking kind of smaller
(32:42):
groups that are trying to create harm. How you deal
with the geopolitical rivals is a more complicated question, but
I think there's the question of what can you
hope to achieve. If you're trying to say, okay, should
the US try to be five or ten years ahead
of China. I just don't know if that's a reasonable goal,
because I think China has a lot of resources and
(33:03):
a lot of great scientists, and they're great at espionage,
right it's like all the stuff. So I'm not sure
if you can maintain that. But what I do think
is a reasonable goal is maintaining a perpetual six month
to eight month lead by making sure that the American
companies and the American folks working on this continue
producing the best AI systems, and then having a direct
(33:25):
effort to try to make it so that those efforts
are integrating with the National Security establishment, so that way,
the US government has a kind of perpetual six.
Speaker 2 (33:34):
Month, eight month advantage on what everyone else in the
world gets.
Speaker 5 (33:37):
And at some level six months may not seem like
a lot, but part of how I think about it
is like you're using an iPhone, you know all the competitors,
and it's like, is the iPhone from three years ago
better than the best Samsung phone today?
Speaker 2 (33:50):
No way.
Speaker 5 (33:51):
But you know, the fact that Apple has just generally,
in the eyes of most consumers, been a little bit
ahead at each point has meant that over fifteen years
or however long, in almost twenty years, they've just had
this big compounding lead, And I think if the US
can maintain that advantage over time, that's just a very
big advantage.
Speaker 2 (34:08):
So that's my philosophy.
Speaker 5 (34:09):
I think it's what I think is the most reasonable
thing to shoot for, and that optimizes for making sure
that Americans continue to lead in this.
Speaker 4 (34:19):
Well, look, you're not an AI doomer, but some really
influential people are. Like, how can you be so sure
there won't be a robocalypse, or that AI will lead
us to human extinction?
Speaker 5 (34:29):
Well, I don't think you can ever be one hundred
percent sure. There are a lot of bad things that
could happen. Climate change is terrible, nuclear war is terrible.
But I generally think you can manage this in a
responsible way where you just maximize the chance of the
net prosperity that's created in society being really good, and where.
Speaker 2 (34:50):
You can manage the risk.
Speaker 5 (34:52):
And it's like I think that for the AI risks,
especially like the existential type stuff that you're talking about.
I just personally think that we will have more of
a sense of the different capabilities of these systems with
each model that we build, and they're getting better, right,
So Llama three is better than Llama two, Llama four
will be better than Llama three. And this is why
we do all the safety testing up front. We study safety.
(35:14):
We try to have as much of a sense of
here are the different capabilities that would be bad. Right,
It's like we don't want it to be able to
lie or self-replicate, or like, is it intentionally
deceiving people, or stuff like that. And I just don't know that
we're seeing those things yet. So I think there are
different questions that happen once you start getting to a
level of intelligence that's a lot greater than where we
(35:36):
are now. But there is this myth that I think
people kind of anthropomorphize and assume that intelligence is going
to take the form of something that's conscious or sentient
or physical.
Speaker 2 (35:51):
And I actually think in some ways.
Speaker 5 (35:52):
This has been one of the most surprising things over
the last few years, is you can have something that
is pretty reasonably intelligent that is actually completely separate from
a consciousness. I do think that there's this thing through
the history of science where people keep thinking that they're
special or that they're like the center of the universe
in different ways. And I think people are very special
(36:16):
and like life and kind of our consciousness and the
connections we make. I think all that stuff really matters
in like a deep way. But I'm not sure if
the specific, like, intelligence or productivity is actually the thing
that makes our lives meaningful. It's obviously one of
the defining characteristics of being a person.
Speaker 2 (36:36):
But I don't know.
Speaker 5 (36:37):
I think it's like as much of it as like
the love and connection and camaraderie that you feel with
other people. And I don't know, I think that that
stuff exists, even if you could like completely separate out
intelligence from kind of the rest of the human experience.
Speaker 4 (36:51):
Yeah, there are obviously a lot of players, you know,
trying to do, you know, some similar things. What do
you make of Sam Altman's leadership? Do you trust him
and OpenAI to get to AGI responsibly?
Speaker 2 (37:04):
I know Sam pretty well.
Speaker 5 (37:05):
He's actually on the board of the Biohub
at the Chan Zuckerberg Initiative with me and Priscilla. So
we kind of sit in there and we grill all
the scientists and push them on using AI in more
effective ways. And that's kind of a fun way that
we get to work together. I mean, look, he caught
a bunch of stuff early on what was going to
scale with large language models that a lot of other
(37:27):
people had written off, and I think he deserves a
lot of credit for how that organization has developed, also
having gotten a lot of public scrutiny. Myself, I think
it's like, look, when you're going through it for the
first time, you don't handle it as perfectly as
you would like. But I think he's handling it very
gracefully and is generally doing, I think he's doing better
(37:50):
than I did, and I.
Speaker 2 (37:52):
Respect him for that.
Speaker 5 (37:53):
Over the long term, there's just the question of which
model ends up being the right one. And it's a somewhat
ironic thing to have an organization that's named OpenAI
but is sort of the leader in building closed AI models.
And it's not necessarily bad, but it's kind of a
little funny. But I'm not like such an absolutist on
open source that I think anything closed is bad. I mean,
(38:14):
at Meta we do a lot of open source stuff.
We do a lot of closed source stuff, right, So
it's like I do both. I think I get the
value of it. I would imagine that OpenAI will
continue being an important company for a while to come,
But personally I am more optimistic about a more positive
AI future where open source is the industry standard.
Speaker 2 (38:32):
And that's that's just my view.
Speaker 4 (38:33):
I want to talk about the twenty twenty four presidential election. Okay,
Facebook has been a flashpoint in many elections around the world,
and you personally have been called out, most recently by
former President Trump.
Speaker 3 (38:47):
This is a big election. What do you think is
at stake?
Speaker 5 (38:52):
Well, I mean, look, it's obviously a very important,
and it'll be a historic, election. And look, I mean
the main thing that I hear from people is that
they actually want to see less political content on our
services because they come to our services to connect with people.
So you know, that's what we're going to do, where
we give people control over this, but we're generally trying
(39:13):
to recommend less political content. So I think you're going
to see our services play less of a role in
this election than they have in the past. And personally,
I'm also planning on not playing a significant role in
the election. I've done some stuff personally in the past.
I'm not planning on doing that this time. And that includes,
you know, not endorsing either of the candidates. Now, like
(39:34):
I mean, there's obviously a lot of crazy stuff going
on in the world. I mean the historic events over
the last, like, over the weekend. And I mean, on a
personal note, seeing Donald Trump
get up after getting shot in the face and pump
his fist in the air with the American flag is
one of the most badass things I've
Speaker 2 (39:53):
Ever seen in my life.
Speaker 5 (39:55):
At some level as an American, it's like hard to
not get kind of emotional about that spirit and that fight,
and I think that that's why a lot of people
like the guy. But look, I mean, we're living in
a pretty crazy time, and I view our role here
is to make it so everyone can express their views
on this stuff, but we're going to try to manage
(40:16):
it so that way the politics doesn't drown out the
human connection and the community, which is I think the
main thing that people come to our services for. And
you know, we're not always going to get that right,
but that's what we're going to try to do, and
I think that that's probably the best role that.
Speaker 2 (40:30):
We can play.
Speaker 4 (40:32):
President Biden signed a bill to ban TikTok into law,
but whether it happens is an open question. The reason former President
Trump has given to not ban TikTok is
that it would cede the market
Speaker 3 (40:44):
To you. What do you make of that logic?
Speaker 2 (40:47):
I don't know.
Speaker 5 (40:48):
The national security question around whether TikTok should be allowed
is obviously, you know, above my pay grade, something
that the folks in our government and Congress need to
go figure out.
Speaker 2 (40:58):
From what I see on a day to day basis,
I think the competition is focusing.
Speaker 5 (41:02):
It's good. I like competing with different companies. I think
we're doing pretty well here. We're gaining market share. So
I don't know they'll go do what they need to do,
but I think, you know, we're going to be
fine and we're going to continue doing well in this
space either way, right, So.
Speaker 3 (41:19):
Any thoughts on whether or not it should be.
Speaker 2 (41:20):
Banned, I really think that's above my pay grade.
Speaker 4 (41:23):
So, looking at the broader social media ecosystem, these
little apps, between TikTok and Snap and Instagram and Threads
and Facebook, how do you see the social media ecosystem
changing in an AI world?
Speaker 3 (41:36):
Do new players emerge? Is there consolidation? How do you
win the battle for the young people?
Speaker 2 (41:42):
And I think it's going to be all of this stuff.
Speaker 5 (41:44):
There's definitely going to be an opportunity for people to
build new apps. I mean, I think we've seen that
it's very competitive. TikTok grew from being very small to
now having more than a billion people.
Speaker 2 (41:55):
I'm sure there will be more apps. That's
also part of
Speaker 5 (41:58):
The reason why we want to innovate and push on
that is that we also want to integrate that stuff
into our experiences, right. We want to make it so
that creators can create more interesting content, that there are
new ways for people to interact with the.
Speaker 2 (42:09):
People that they care about.
Speaker 5 (42:10):
AI has already been so important for just recommending people
great content on these platforms. I think that will continue,
but also there will be another wave where AI now
is helping people not just get good recommendations, but also
create new content. So I'm pretty optimistic about that. I
hope it's not just going to be like one new
format like video or photos. I think that there's going
(42:32):
to be I would guess dozens or hundreds of new
types of content formats, and some will be made by startups,
and there'll be new apps built around them. Some hopefully
we will pioneer and popularize. But it's going to be a
very dynamic space.
Speaker 4 (42:46):
We're facing a crisis in mental health, especially among teenagers.
Speaker 3 (42:51):
The Surgeon General is now calling for a warning.
Speaker 4 (42:53):
Label on social media, saying that it's partially to blame.
With everything that you know
Speaker 3 (42:58):
now, does he have a point? How are you thinking about this?
Speaker 2 (43:03):
Yeah? So I guess I come at this from a
few directions.
Speaker 5 (43:08):
One is that there's clearly an issue with mental health
in the country, So I think that's like a really
important thing, and for kids and teens it's especially important.
Speaker 2 (43:18):
And I think the focus on that is right.
Speaker 5 (43:21):
You know, I have three young girls and being a
parent is hard, and you just want to make sure
that they have like good lives. And from that perspective,
what I aim for us to do is build our
services in a way that's aligned with parents, giving them
the controls that they need to basically oversee how the
services work for their kids. I mean, I think different
(43:42):
families are going to have different rules for how they
want this stuff to work, and there's probably not a
one size fits all thing that's right. But I think
we have a role in making sure that we study
the stuff, understand what's good what's not broadly, but then
for individual families, giving the parents the.
Speaker 2 (43:58):
Controls that they need.
Speaker 5 (44:00):
What the data says today is a little bit different
from what the basic meme is that's out there. I
think a lot of people kind of act as if
there's been this proven connection between these and I just
don't think that the science supports that. Today It's obviously
something that will continue to be studied over time. There's nuances.
I think social media is different from phones overall. It
might be that phones have an issue, right if you're
(44:21):
getting notified for something or it's buzzing you and it's
like preventing you from sleeping. I mean that is different
from social media. But I think there's all these other
issues too. But I just haven't seen it. Of all the
most rigorous studies that have been published, including in the
most prestigious scientific journals, the link between this on kind
of a causal basis has not been well established. Now
(44:42):
that doesn't mean that there isn't work for us to do.
I mean, we want to make sure we do a
good job on this, but I think for whether it's
the Surgeon General or other folks, kind of jumping to
a conclusion that the science doesn't yet support is not
the most helpful thing. But look, I mean there are
clearly issues here across society, and we want to make sure
we're part of the solution to that.
Speaker 4 (45:02):
Facebook, and you have been blamed for a lot of things,
whether you like it or not, whether you agree or not,
why should we trust you with AI?
Speaker 5 (45:11):
Well, that's a loaded question. We have gotten blamed for
lot of things, and I mean, look, I take our
role in all this stuff seriously, and I think we've
tried to handle all this as well as possible. I'm
not sure that it's all been fair, but look, I
mean, I'd like to think that we're an
important and relevant company. So I think the scrutiny is
generally healthy. I mean for AI, I mean look compared
(45:32):
to the other companies. I think doing this in an
open way actually increases the chance that it ends up
being done safely. If you look at the history of
open source, open source software counterintuitively ends up being more secure.
Speaker 2 (45:43):
At the beginning of the open source movement, a.
Speaker 5 (45:45):
Lot of people thought, hey, it's not going to be
secure because anyone can just see where the bugs are.
It's like, yeah, well, anyone can see where the bugs
are, and then you fix them. Whereas in closed systems
you have these holes that are just out there that
like governments or hackers or whatever end up exploiting for a long
time because there's less transparency on them.
Speaker 2 (46:00):
So I think that that's.
Speaker 5 (46:02):
Going to be one of the defining things around open
source is that anyone can scrutinize the work, and because
of that, I think it just puts a lot of
pressure to make sure that the quality of the work
that you're doing gets better really quickly.
Speaker 2 (46:15):
And so I guess that's just a big difference.
Speaker 5 (46:18):
In our approach here compared to the approach that others
are taking, and I think that people should take confidence
in that, and look, when we get stuff wrong, then
we're going to get called out on it, and I
think it'll just generally create these systems that are more
hardened and more secure and safer. But I know that
there's the whole debate around whether it's going to be safe
to do open source AI. My personal view is that
(46:41):
open source AI is going to be safer than closed
for the exact same reasons that open source historically has
been safer and more secure than closed source software.
Speaker 4 (46:50):
Let's talk about regulation, because the shape of AI
regulation is going to be an issue around the world
for the next several years. What should this regulation look like?
Do you support something like an independent agency that would
oversee AI like we have for nuclear energy.
Speaker 5 (47:08):
I think it's pretty early in thinking through this stuff,
and I haven't seen any proposal yet that I look
at and I'm like, oh, that's exactly what we should do.
I think there's a time question for this, right. I
think in a lot of things, if you regulate it
too late, there could be too much harm. If you
regulate it too early, you could stop innovation. I do
think we're seeing this issue in Europe now, where they
just like put in place a ton of regulations and
(47:30):
a lot of companies are just not launching stuff there.
So I mean that's an issue. So I think there's
clearly going to need to be this interaction between governments
and companies to go oversee the work.
Speaker 2 (47:42):
I'm not sure what the right model is.
Speaker 5 (47:44):
The thing that I've focused on most is the viability
of open source, because I obviously believe in this deeply
and it's both kind of our strategy and approach.
Speaker 2 (47:55):
But I think it's the best.
Speaker 5 (47:56):
Chance for creating an AI future that is positive, where
the prosperity that is created can be shared by the
most people, and that we end up with the safest outcome.
Speaker 2 (48:07):
I think that open source will do that.
Speaker 5 (48:09):
But there's a big intellectual debate around this, and I
think that there's a bunch of people who think, okay, well,
open source is inherently maybe a little less controllable, because
there aren't just a small number of companies that
you can use as a choke point when you want to go
get them to do something, right. So it is
a little more forward leaning: okay, we're going to
try to enable this kind of decentralized innovation, and that's
(48:32):
what has typically worked on the Internet, and I think
it's what gives us the best shot, and the scrutiny around
that I think is the thing that will be the
most likely to produce a safe outcome. But you know,
there's a real debate around this today, like,
should that be the way that it goes? Now, I
think that open source is gaining popularity every day, and
I think the Llama three point one release, with the
four oh five billion parameter model, is basically going to
(48:54):
be kind of this teacher model, where all these
different companies or institutions, universities, academics are going to use
it to train different models. I think it's just going
to keep on gaining more and more popularity, and I
would guess it will eventually become sort of the industry
standard for how this stuff works. But I just think we
need to be careful about not doing things that are
going to prevent what is actually the best outcome over time,
(49:15):
just because it's maybe a little less deterministic or
something like that.
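[Editor's note: for readers unfamiliar with the "teacher model" idea referenced above, the sketch below shows the standard knowledge-distillation loss, where a smaller student model is trained to match a large teacher's softened output distribution. This is a generic PyTorch illustration, not Meta's actual Llama 3.1 training code; the model and batch names in the usage comment are hypothetical.]

```python
# Minimal sketch of teacher-student distillation (Hinton-style),
# the pattern described for using a large model like Llama 3.1 405B
# to train smaller models.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions."""
    t = temperature
    # Soften both distributions; a higher temperature exposes the
    # teacher's relative preferences among non-top tokens.
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    # Scale by t^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * (t * t)

# Hypothetical usage: run the frozen teacher for target logits,
# then update only the student.
# with torch.no_grad():
#     teacher_logits = teacher_model(batch)
# loss = distillation_loss(student_model(batch), teacher_logits)
# loss.backward()
```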
Speaker 3 (49:20):
You've been at this now for twenty years. Yeah,
it's been a journey.
Speaker 4 (49:28):
How would you describe your leadership style before versus now?
Speaker 2 (49:34):
I don't know, I probably need to think more about that.
I mean, the roles have changed so much at the company.
Speaker 5 (49:39):
And when I started early on, I really knew nothing
about building a company. I was a kid, and I
was an engineer, and I coded the first version
myself and did most of the coding for maybe
the first couple of years of the company. And
then we got all these amazing people, right. So, I mean, Sheryl,
I often joke that she raised me as a manager
(50:02):
like a parent, and I think that's really true, right?
It's like, I mean, literally, I think it's hard
to overstate how little I knew about running a company.
And she's just a really special person who has played
this hugely important role in the history of the company,
training me and so many of the other
leaders of the company. And then you know this funny
thing happened over the last fifteen years where there were
(50:24):
all these kids who just kind of grew up and
have had this context and these relationships with each other,
and you know, now I think we're all a little
bit more sophisticated, hopefully about managing something like this well
and building stuff for the longer term and doing it
more responsibly. And I don't know, there's like a lot
of lessons that you learn along the way, you know,
(50:45):
when I talk to other founders or CEOs, they ask
how I built the team here, and that I think is
really the key to how we get
everything done. I mean, there's always a lot of attention
placed on like the CEO or the top person, but
it's never one person, right, It's always a group of people,
and I don't know, it's hard to replace some of that.
Speaker 2 (51:02):
I think a lot of this is like you.
Speaker 5 (51:03):
Just grow up and you learn together, and you bond together,
and it's like a really kind of interesting and special group.
Speaker 4 (51:10):
Looking forward, will the next Mark Zuckerberg start the next
Facebook, or will AI do it for him?
Speaker 2 (51:16):
Yeah?
Speaker 5 (51:17):
Well, I think we use technology tools for stuff. I
think one of the things that's going to be interesting
in the future is, you know, now to build Meta
and the services that we have, we have tens of
thousands of people working at this company. And one of
the things that I think is going to be really
powerful in the future is the next entrepreneur, you know,
sitting in their dorm room or high school or whatever
it is, is going to now have all these tools
(51:39):
to be able to have the productivity of like a
large company, but maybe it's just them doing it, or
them and like a small.
Speaker 2 (51:46):
Group of friends working together.
Speaker 5 (51:47):
Part of why I was able to start this is
because there's the saying in science that you stand on
the shoulders of giants, and I think that's true with
open source and technology too. And you know, like, I
couldn't have built this all myself. I was able to build it
because there's all this other technology that I could build on,
and I just think that that's going to keep getting
better and better.
Speaker 2 (52:05):
So, you know, it's like I watch my kids.
Speaker 5 (52:07):
Be creative, and the stuff that they can do today
I wouldn't have been able to do when I
was a kid, because it just didn't exist. So I
think we're going to live in a profoundly more creative
future where more people can have the capacity to do
like pretty amazing things than has been the case in
the past.
Speaker 3 (52:23):
What about Elon? You're kind of like the anti Elon
now and it's working for you. Where do you see
your differences?
Speaker 2 (52:30):
I don't know.
Speaker 5 (52:33):
I know him less well, to be honest, I think
he's obviously an amazing entrepreneur and has done a lot
of really great stuff, you know. I mean, it takes
some courage to go out there and speak your mind
as much as he does. I don't think like I
would be comfortable doing that as much. Maybe I'm just
like a somewhat more reserved person, but I feel like
all my career people are just telling me like, oh,
(52:53):
go out like be more yourself. And so there is
something that I think you have to admire about someone
who does that and maybe even takes it to an extreme.
Speaker 2 (53:00):
But I don't know what that version of me would be.
Speaker 3 (53:03):
Like, We're going to go.
Speaker 4 (53:04):
Visit you in Tahoe and follow you on your side quests.
Speaker 2 (53:08):
And you're going to surf too? Well, I'm going to try.
Speaker 3 (53:11):
I'm going to try.
Speaker 5 (53:12):
You're going to teach me, right? I think that's
what I was told. But I'm not really... I don't
know if I'm a good teacher.
Speaker 4 (53:17):
But either way, you're always one-upping yourself. And I'm
sure there's a metaphor in there for the company. You also,
you know, you're doing all these things to make us
more virtual, but you also love so many things about
the whole world. How do you wrestle with those two ambitions?
Speaker 2 (53:32):
Well, I don't know that they're at odds.
Speaker 5 (53:34):
I just think that people are very physical beings, and
I think that we sort of mythologize intelligence. I don't know,
there's like a bunch of people in the tech community
who think like, oh, we'll just like separate out our
consciousness and intelligence and like upload it to the cloud.
And I'm like, that just sounds ridiculous to me. I
mean to me, it's like part of what makes you
a person is you like are active and you have energy.
Speaker 2 (53:55):
We're not just minds.
Speaker 5 (53:58):
Like I think the energy and like the love and
all those things are probably as foundational to what makes
you a person. And so I don't know, I think
my life has gotten a lot better. You know, in
the early days of the company, I just didn't have
time to do anything else. We were like always about
to die, and, you know, being
a startup is really stressful, right, and obviously there's stress
(54:21):
now too, but it's just a little more managed
and we have more good leaders at the company
and all that. And my life has just gotten so
much better since I took the time to go make
sure that I can go do physical things all the time,
and I think it's made me a better person. I
injured my knee last year fighting, and I thought, you know,
Priscilla was gonna give me a hard time about it.
(54:42):
Like I kind of thought she was just gonna be like,
you're an idiot, why are you fighting? You're running
this company, you shouldn't be doing this. But
she actually was like, you know, I know it's a
long recovery, but when you're done, you better go fight again,
just because you're a much better person
when you're going and doing all this physical stuff.
And it's just kind of like a family value.
(55:03):
I mean, we do it. I mean I do it.
Speaker 2 (55:06):
Priscilla does it with me. She actually, she hits pads.
That's what I was referring to. Tell us, does she surf with you?
Speaker 5 (55:11):
But you know, when she's hitting pads, you can
hear it from like down the street. So I mean,
she's... I think maybe one day we'll talk her into fighting.
But she's quite good at surfing.
And we teach the kids too, and it's just like
a fun thing that we do together. And I don't know,
I really believe in that. I think that that, like,
it just makes you like a better person.
Speaker 3 (55:30):
Tell me about the necklace.
Speaker 5 (55:32):
Oh, this, this is something that I worked with a
designer to make. It has engraved on it the prayer
that I sing to my daughters every night when I
put them to bed. It's a Jewish prayer called the Mi
Shebeirach, and it's basically a prayer for health and courage,
(55:52):
and it says, may we have the courage to make
our lives a blessing. And I just think that's...
And I've sung it to them basically every night of
their lives since.
Speaker 2 (56:02):
They were born.
Speaker 5 (56:03):
And, unless I'm out traveling or something,
I try to be around for bedtime. That's kind of
my thing when I hang out with the kids.
And yeah, I don't know, it's just, it's meaningful for me.
Speaker 2 (56:16):
In our family.
Speaker 3 (56:16):
That's beautiful. Thank you. Thank you for sharing your time
with us.
Speaker 4 (56:20):
Thank you for letting us into your home, and you know,
taking the time to explain all of.
Speaker 3 (56:25):
This, like, I think it's really important. It helps us
to get to know you. Cool.
Speaker 1 (56:32):
Thanks so much for listening to this edition of the Circuit,
and please watch our video episode with Mark Zuckerberg on
Bloomberg Originals. I visit his retreat in Tahoe, we hang
with his wife Priscilla, and yes, we go wake surfing.
Speaker 3 (56:44):
You'll see how that works out. I'm Emily Chang.
Speaker 1 (56:47):
You can follow me on Twitter and Instagram at Emily
Chang TV, and watch new episodes of the Circuit on
Bloomberg Television, streaming on the Bloomberg app or on YouTube.
And check out other Bloomberg podcasts on Apple Podcasts, the
iHeartMedia app, or wherever you listen to your shows, and
let us know what you think by leaving a review.
Those extra reviews make a big difference. I'm your host
(57:09):
and executive producer. Our showrunner is Lauren Ellis, our editor
is Alison Casey.
Speaker 3 (57:14):
Catch you next time.