Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Bloomberg Audio Studios, podcasts, radio news.
Speaker 2 (00:09):
I was always a curious kid. I wasn't curious about nightclubs or other things. I was an avid lover of science.
Speaker 1 (00:20):
Fei-Fei Li: tech CEO, Stanford professor, and godmother of AI.
Are you conscious of the power that you have?
Speaker 2 (00:31):
I'm conscious of my responsibility. I understand I'm one of the people who brought this technology to our world. The agency is not the machines'. It's ours.
Speaker 1 (00:45):
From Bloomberg Weekend, this is the Mishal Husain Show. I'm Mishal Husain. Have you ever thought about where AI came from? The origin story of tools like ChatGPT, Claude, or Gemini,
which are part of many people's everyday lives. I don't
(01:08):
mean the principle of training large language models on data,
but the long, often lonely work of the scientists who
believed years ago that you could teach machines to be intelligent. Well,
today's guest has lived that story because she was at
the forefront of unlocking this world for us all, and
(01:31):
she has a deeply resonant personal story. She was fifteen
when she came to the US from China with her parents.
She had little English, life was hard, money incredibly tight, but she excelled at school and set herself audacious problems to solve. Her name is Fei-Fei Li, and by
(01:53):
the end of our conversation, I was asking myself: why do most people know the name Sam Altman and not hers? She's the one whose breakthrough was critical. No wonder they call her the godmother of AI.
Speaker 1 (02:08):
We ended up talking about everything from her own childhood to, yes, whether
AI will do away with us humans, and what we
should teach the kids of today. Her answer on that
might just surprise you, but I hope you enjoy our
conversation as much as I did. Doctor Fei-Fei Li,
(02:30):
thank you for coming on the show. I'd love to
start with this remarkable period for your industry. It's three
years since ChatGPT was released to the public, and
since then there have been new models, new apps, huge
amounts of investment flowing towards the industry. How does this
moment feel to you?
Speaker 2 (02:50):
Well, first of all, Mishal, thank you for inviting me
to this show. I'm very excited. It's a good question
because AI is not new to me. I've been in
this field for twenty five years. I live and breathe
it every day since the beginning of my career. Yet
this moment is still as daunting and almost surreal to
(03:13):
me in terms of its massive, profound impact. This is
a civilizational technology, and it's surreal personally to me because
I'm part of the group of scientists that made this happen,
and clearly I did not expect it to be this massive.
Speaker 1 (03:34):
When was the moment it changed? Because I know you've
talked about the years when it was like an AI
winter and what you're describing now, how the moment feels
extraordinary even to you. Is it because of the pace
of developments or is it because the world has woken
up to it and therefore turned the spotlight on people
like you.
Speaker 2 (03:52):
I think it's intertwined, right. But for me, to define this as a civilizational technology is not about spotlight. It's not even about how powerful it is; it's about how many people it impacts. Everyone's life, work, wellbeing, and future
(04:13):
will somehow be touched by or impacted by AI.
Speaker 1 (04:18):
In bad ways as well as good.
Speaker 2 (04:20):
Well, technology is a double-edged sword.
Speaker 1 (04:22):
Right?
Speaker 2 (04:22):
Yes, I think both ways. Because since the dawn of human civilization, we create tools, we call them technologies, and these tools are meant in general for doing good things. But along the way we might intentionally use them in the wrong way, or we might have unintended consequences.
Speaker 1 (04:43):
And strands of this I know will emerge in different
ways over the course of this conversation. But you said
the word power, and I'm struck by the fact that
the power of this technology is in the hands of
a very small number of companies, most of them American.
How does that sit with you?
Speaker 2 (05:03):
You're right, the major tech companies, through their massively reaching products, are impacting our global society the most. I would
personally like to see this technology being much more democratized.
I would like to see that, no matter who builds or
(05:24):
holds the profound impact of this technology, they do it in a responsible way. And I also do believe every individual
in this era should feel they have the agency to
impact this technology.
Speaker 1 (05:39):
We'll talk a bit more in a moment, I think, about the democratization and how that might be achieved. But you,
of course, in terms of companies in this field, you're
part of that because you are a tech CEO as
well as an academic. In fact, I think your very young company, little more than a year old, is reportedly already worth a billion dollars.
Speaker 2 (05:57):
Yes, I'm co-founder and CEO of World Labs, and we are
building the next frontier of AI, which is spatial intelligence,
which people don't hear too much about today because we're
all about large language models. Yet I believe spatial intelligence is as critical as it is complementary to language intelligence.
Speaker 1 (06:20):
Realizing this idea of virtual worlds, which again we'll dig into.
But before we do that, I want you to take
us back again to the fact that you've seen the
whole trajectory of this industry. You've been in it for
twenty five years. I know that your first academic love
was physics. Yes, what was it in the life or
work of the physicists you most admire that made you
(06:41):
think beyond that particular field.
Speaker 2 (06:43):
I grew up in, it's not a small town, but a less well-known city in China, and I come from a small family, so you could say life was small. My childhood was in the eighties, fairly simple and isolated. You're an only child? I was, and my family in general is
(07:08):
just small. Physics is almost the opposite. It's vast, it's audacious,
the imagination is unbounded. You look up in the sky, you can ponder the beginning of the universe. You look at a piece of a snowflake, you can zoom
(07:32):
into the molecular structure of matter. You think about time, you think about magnetic fields, you think about nuclear physics. It takes my imagination to places that you can never be in this world. And what really fundamentally fascinates me to this day about physics is not being afraid of asking the boldest, most audacious questions about our physical world, our
(07:56):
universe, where we come from.
Speaker 1 (07:58):
But your audacious question, I think, was about what is intelligence?
Speaker 2 (08:02):
Yes, And I think it's rooted in my love for
physics and my training in physics that I look at
each physicist I admire, look at their audacious question, right
from Newton to Maxwell, to Schrodinger to i Instell, my
(08:23):
favorite physicist, and I wanted to find my own audacious question.
And somewhere in the middle of college, my audacious question
shifted from physical matters to intelligence.
What is it? How does it come about? And most fascinatingly, how do
we build intelligent machines? And that became my quest, my North Star.
Speaker 1 (08:48):
And that's a quantum leap, because from machines that were doing calculations and computations, you're really talking about machines that learn, and that are constantly learning.
Speaker 2 (08:58):
I like that you used the physics pun, the quantum leap.
Speaker 1 (09:03):
Crossing over into ordinary parlance. But I think, didn't you have this light-bulb moment as you were thinking about intelligence? You realized that a critical part of that is our ability to recognize objects, the fact that around us right now there are multiple objects and we know what they are.
Speaker 2 (09:19):
The ability that humans have to recognize objects in the world is foundational. So I decided that was where my first love would start. My PhD dissertation was to build machine algorithms to recognize as many objects as possible.
Speaker 1 (09:37):
I think the key question is how, right? How do you teach the machines? And what I found really interesting about your background is that you were reading very widely, and there are two key breakthroughs that made your ultimate breakthrough possible. And one is that you started thinking and learning about what psychologists and linguists were saying, that it's
(09:59):
really relevant to your field.
Speaker 2 (10:01):
Well, that's the beauty of doing science at the forefront, because no one knows how to do it. As an AI scientist, you look for possible answers, and it's pretty natural to look at the human brain and human mind and try to understand, or be inspired by, what humans can do. And one of the inspirations I got
(10:23):
in my early days of studying, or trying to unlock, this visual intelligence problem was to look at how our visual semantic space is structured. There are so many tens of thousands and millions of objects in the world. How are they organized? Are they organized by alphabet, or are they
(10:44):
organized by size or colors?
Speaker 1 (10:47):
And you're asking that because you have to understand how our brains organize things in order to have something to teach the computers.
Speaker 2 (10:54):
That's one way to think about it. And I came upon this linguistic work called WordNet. WordNet is a way to organize semantic concepts, not visual, just semantic, or words, in a particular taxonomy.
Speaker 1 (11:11):
Give me an example.
Speaker 2 (11:12):
An example is: in a dictionary, an apple and an appliance are very close together, but in real life, an apple and a pear are much closer to each other, and an apple and a pear both belong to fruits, whereas an appliance belongs to a whole different family of objects.
(11:34):
And I came upon this taxonomy called WordNet that describes this organization of objects, and I made a connection in my head. Two things were very clear. One is that this could be the way visual concepts are organized, because apples and pears are much more connected
(11:55):
than apples and washing machines, for example. But also, even more important, is the scale. If you look at the number of objects described by language, you realize how vast it is, how big it is. And that was
(12:16):
a particular epiphany for me. We as intelligent animals, as humans, we experience the world with massive amounts of data, and we need to endow machines with that ability.
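[For readers who want to see the taxonomy concretely: a minimal sketch using NLTK's WordNet interface. This is illustrative, not the tooling of the original research; the specific synset names below are assumptions chosen to mirror the apple/pear/appliance example.]

```python
# Minimal sketch of the WordNet taxonomy idea described above, via NLTK.
# Assumes: pip install nltk (the WordNet data is fetched on first run).
import nltk
nltk.download("wordnet", quiet=True)
from nltk.corpus import wordnet as wn

apple = wn.synset("apple.n.01")          # fruit sense of "apple"
pear = wn.synset("pear.n.01")            # fruit sense of "pear"
appliance = wn.synset("appliance.n.01")  # household device

# Path similarity: higher means the synsets sit closer in the taxonomy.
print("apple vs pear:      ", apple.path_similarity(pear))
print("apple vs appliance: ", apple.path_similarity(appliance))

# Walk up the hypernym chain to see which "family" an apple belongs to.
print(" -> ".join(s.name() for s in apple.hypernym_paths()[0]))
```

[Apple and pear score noticeably closer to each other than apple and appliance, mirroring the fruit-versus-appliance structure described above.]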
Speaker 1 (12:28):
Hence the big data sets needed. And actually, it's worth noting that at that time, I think this is the early part of the century, two thousand and six, did the idea of big data even exist?
Speaker 2 (12:40):
No. Big data, this phrase, did not even exist. The kind of scientific data sets we were playing with were tiny. For example, in images, most of the graduate students in my era were playing with data sets of four or six or at most twenty object classes. Each
(13:04):
class had a couple of hundred examples at most. That's how tiny it was. Whereas fast forward three years later, after we created ImageNet, it had twenty-two thousand classes of objects and fifteen million annotated images.
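[For a sense of how that scale is handled today: the widely used ILSVRC-2012 subset of ImageNet covers 1,000 of the roughly 22,000 synsets and can be loaded with torchvision. A sketch, assuming the official archives have already been placed in ./imagenet, since torchvision does not download them for you:]

```python
# Sketch: loading the ILSVRC-2012 subset of ImageNet with torchvision.
# Assumes the official ILSVRC2012 tar archives already sit under ./imagenet.
from torchvision import datasets

val = datasets.ImageNet(root="./imagenet", split="val")
print(len(val.classes))  # 1,000 classes in this subset; the full ImageNet
                         # described here spans ~22,000 synsets
print(len(val))          # 50,000 annotated validation images
```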
Speaker 1 (13:23):
And ImageNet was a huge breakthrough. It's the reason
that you've been called the godmother of AI. I'd love
to understand more about what it is about you that
enabled you to make these connections and see things that
other scientists didn't. And one that immediately comes to mind
is that you acquired English as a second language after
(13:44):
you moved to the United States. Is there something in
that that led to what you're describing.
Speaker 2 (13:51):
Mishal, that's a great question. I've been asked this question, and my true answer is: I don't know. Human creativity is
still such a mystery. People talk about AI doing everything, and I disagree. There's so much about the mystery of the human mind that we don't know. So I can only conjecture that a combination of my interests and my experiences
(14:14):
led to this, including the curiosity and the hunger for audacious problem definitions. I find myself not afraid of asking crazy questions in science, not afraid of seeking solutions that are out of the box. And maybe my appreciation for the
(14:37):
linguistic link with vision might be accentuated by my own journey of learning a different language. But I don't know the answer.
Speaker 1 (14:46):
What was it like coming to the United States as
a teenager, because I think that's a particularly difficult stage
in life to move and to have to make new friends,
even if you weren't battling a language barrier.
Speaker 2 (15:00):
That was hard, I think, being an immigrant. And I came to this country when I was fifteen, to Parsippany, New Jersey. None of us, my parents nor I,
spoke much English at all. I was young enough to
learn more quickly. It was very hard for my parents,
(15:23):
and we were not financially well off at all. My parents were doing cashier jobs and I was doing Chinese restaurant jobs. Eventually, by the time I entered college, my mom's health was also not so good. So my family
and I decided we have to run this little dry
cleaner shop in New Jersey to make some money to survive.
(15:46):
And you got involved yourself? I joked that I was the CEO. I ran the dry cleaner shop for seven years. From what age? I was eighteen, nineteen, to the middle of my graduate school, so all the way through college and graduate school. Seven years. Even when you were at a distance, you were running your parents' dry cleaning business? Yes, because I
(16:08):
was the one who speaks English, so I took all the customer phone calls, I dealt with the building, I dealt with inspections for the business. What did it teach you? Resilience. As a scientist, you have to be resilient,
because science is a nonlinear journey. Nobody has one conjecture and all the solutions in front of them. You have
(16:30):
to go through such challenges to arrive at an answer. And as an immigrant, you learn to be resilient.
Speaker 1 (16:39):
Were your parents pushing you? Because they clearly wanted a better life for you; that's why they left China for the United States. So how much of this was coming
from them and how much from your own sense of
responsibility to them.
Speaker 2 (16:53):
To their credit, they did not push me much. They're not tiger parents, in today's language. I think part of it is they were just trying to survive, to be honest. And especially my mom: she is an intellectual at heart, she loves reading, but with the combination of a tough, survival-
(17:16):
driven immigrant life plus her health issues, she was not pushing me at all. As a teenager, I kind of
had no choice. I either have to make it or
not make it, and the stakes are pretty high. So
I was pretty self motivated. I was just curious. I
was always a curious kid, and my curiosity had an outlet,
(17:40):
which was science, and that really grounded me. I wasn't
curious about nightclubs or other things. I was an avid
lover of science.
Speaker 1 (17:50):
But I also understand you were an avid reader, that from your early childhood you were putting away the children's books, turning towards the classics, the grown-up books.
Speaker 2 (18:00):
My mom liked reading. You're talking about when I was nine, ten. At that time, she probably did have more influence. She thought I was precocious enough to just read some grown-up books.
Speaker 1 (18:19):
You also had a teacher who was really important in your life. Tell me about him.
Speaker 2 (18:23):
I excelled in math, and I liked math, and I befriended the math teacher, mister Bob Sabella. We became friends through a mutual love of science fiction reading.
Speaker 1 (18:41):
Were you still reading in Chinese?
Speaker 2 (18:43):
At that time, I was reading in Chinese. Eventually I started reading in English. He was a remarkable person, because he probably saw in me this desire to learn, so he went out of his way to create opportunities for me to continue to excel in math. I remember I placed out
(19:06):
of the highest math curriculum, and so there were no more courses for me. He would use his lunch hours to create a one-on-one class for me.
Now that I'm grown up, I know he was not
paid extra. It was really out of a teacher's love
and sense of responsibility. And he really became such an
(19:32):
important person in my life.
Speaker 1 (19:33):
Is he still alive, mister Sabella?
Speaker 2 (19:36):
Mister Sabella passed away when I was an assistant professor at Stanford, but his family, his two sons, his wife, and I... I think they are my New Jersey family at this point.
Speaker 1 (19:45):
You used the word love about what he did for you, and I wonder if he and his family also introduced you to American society, with your first friends and an entrance to the whole world of America beyond your school.
Speaker 2 (20:00):
Absolutely. They introduced me to the quintessential middle-class American family. They live in a suburban house, they have two lovely kids. Of course they're all married and have the grandchildren generation now, but it was a great window for me
generation now, but it was a great window for me
to know the society, to be grounded, to have friends
(20:23):
and have a teacher who cared.
Speaker 1 (20:46):
Do you think you could have had the career that you have had in China? Because now there are significant advances happening in AI in China.
Speaker 2 (20:57):
I don't think I'm able to answer this question, because I think life is so serendipitous, right? The journey would be very different, and there's no way we could have simulated all possibilities. But what is timeless, or what is invariant, for anyone is the sense of curiosity, the pursuit
(21:19):
of north stars. So if I were there, I would still be doing AI somehow, I believe.
Speaker 1 (21:27):
Do you still feel connected to China?
Speaker 2 (21:30):
It's part of my heritage. I feel very lucky that my career has been in American higher education and in Silicon Valley, in and out of industry, being in tech. The combination of all these ingredients is very global. The environment my family is in right now,
(21:51):
which is Stanford, San Francisco, Silicon Valley, is very international, so I feel very connected. And the discipline, AI, is so horizontal, it touches people everywhere, that I do feel much more like a global citizen at this point.
Speaker 1 (22:09):
And of course it is a global industry. But there are some really striking advances in China, not least the number of patents, the number of AI papers coming out, the DeepSeek moment earlier this year. As
you look ahead in this century, do you think China
will catch up with the US in the way that
it has in other fields like manufacturing.
Speaker 2 (22:28):
I do think China is a powerhouse in AI. In this moment, most people would recognize the two leading countries in AI are China and the US. The excitement, the energy, and also, frankly, the ambition of many regions and many countries in the world of wanting to have a role
(22:50):
in AI, wanting to catch up or even come out ahead in certain areas of AI, that is pretty universal.
Speaker 1 (22:58):
And your own next frontier: spatial intelligence. Tell me what you mean when you use the term spatial intelligence. What are you working on right now?
Speaker 2 (23:07):
Yeah. So, spatial intelligence is the ability for AI, or frankly any intelligence, to understand, perceive, reason, interact, and also create spaces, worlds. It comes from a continuation of visual intelligence.
(23:29):
The first half of my career, around the ImageNet time, was trying to solve the fundamental problem of understanding what we are seeing, and that was a very important problem. But it's not enough, because that's a very passive act. It's just receiving information and being able to understand: this is a cup, this is a beautiful lady,
(23:52):
this is a microphone. But if you look at evolution, if you look at human intelligence, perception is profoundly linked to action. We see because we move; we move, therefore we need to see better. And how you create that connection has a lot to do with space, because you need
(24:14):
to see the 3D space. You need to understand
how things move. You need to understand when I touch
this cup, how do I mathematically organize my fingers so
that it creates a space that would allow me to
grab this cup. All this intricacy is centered around this
(24:36):
capability of spatial intelligence.
Speaker 1 (24:38):
And I've looked on your website and I've seen the preview that you've released, Marble. You can see essentially a virtual world which one goes into, and from room to room, one door opens and you go from one place to another. But I'm not sure how you use it. Is it essentially, to you, a tool for training AI,
(25:00):
a different way to train AI, rather than, for example, Meta saying this is the metaverse, this is a world you can go into and spend time in as a human being?
Speaker 2 (25:10):
Right. So let's just be clear with the definition. Marble is a frontier model; it's a frontier world model. What it does is not just let you see and go into a world. What's really remarkable is it generates a 3D world from a simple prompt. A prompt could be,
(25:32):
give me a modern-looking kitchen. Or the prompt could be, here's a picture of a modern-looking kitchen; make it a 3D world. The ability to create a 3D world is a fundamental ability. It's fundamental to humans, and it's, I hope, one day fundamental to AI.
(25:52):
If you are a designer or architect, you can use this 3D world to ideate, to design. If you are a game developer, you can use it to obtain these 3D worlds so that you can design games. If you want to do robotics simulation, these worlds will
(26:13):
become very useful as training data for robots. If you want to create immersive educational experiences in AR/VR, this model would help you to do that.
Speaker 1 (26:25):
Interesting. I'm imagining girls in Afghanistan; maybe you could do virtual classrooms in a very challenged place.
Speaker 2 (26:32):
Yes. Or I'm imagining, for example, how do you explain to an eight-year-old what a cell is? One day, we'll create a world that's inside of a cell, and then the student can walk into that cell and understand the nucleus, the enzymes, the membranes. So you can use this
(26:54):
for so many possibilities. So that's your next frontier. Can we be time-conscious? Yours is a very big, complex industry, but there are some immediate pressing issues, and I wonder if I could put a selection of those to you for you to give us an instinctive, or even a nutshell, response on how you see them. For example, and you'll
(27:16):
have heard this many times before: number one, is AI going to destroy jobs, large numbers of jobs? Technologies do change the landscape of labor. A technology as impactful and profound as AI will have a profound impact on jobs.
Speaker 1 (27:35):
Which is happening. There are customer support roles that are going to go because of AI.
Speaker 2 (27:43):
Right. And software engineering, contact centers, analyst jobs. So it will; there's no question about it.
Speaker 1 (27:50):
It's not going to create as many jobs in its place, is it? I wonder if that worries you.
Speaker 2 (27:55):
The jury is still out. Every time humanity has created a more advanced technology, for example the steam engine, electricity, PCs, cars, we have gone through difficult times, but we also have gone through a real re-landscaping of jobs. So only talking
(28:17):
about whether the number is bigger or smaller doesn't do it justice. We need to look at this in a much more nuanced way and really think about how to respond to this change. There's the individual-level responsibility: you've got to learn, you've got to upskill yourself. But there is also company-level responsibility. There's also societal-level responsibility. So this is a big question.
Speaker 1 (28:41):
Number two is an even bigger question. If you put on those headphones, I want to play you a voice and a view on the existential question about whether human beings are going to be replaced by AI. And this is a voice you will know: Professor Geoffrey Hinton, whose work has overlapped with yours and who is a Nobel laureate as well.
Speaker 3 (29:01):
When AI gets super intelligent, it might just replace us.
How do we prevent it taking over? Even if all
the countries collaborate, what do you do? And I think
at present, all the big companies and governments have the
wrong model. Their basic model is I'm the CEO, and
this super intelligent AI is the extremely smart executive assistant.
(29:23):
I'm the boss. It's not going to be like that
when it's smarter than us and more powerful than us.
Speaker 1 (29:30):
What do you think of that? Because Professor Hinton thinks there's a ten to twenty percent chance that AI leads to human extinction.
Speaker 2 (29:38):
So, first of all, Professor Hinton, or I call him Geoff because I have known him for twenty-five years, since I was first a graduate student. He's someone I admire, and I've studied his technical papers. But on this thing about replacing the human race, I actually do respectfully disagree. Not in the
(29:58):
sense that it will never happen; it is in the sense that if the human race gets into real trouble, in my opinion, that's a result of humans doing wrong things, not machines doing wrong things.
Speaker 1 (30:16):
But the very practical point that he put in that clip is where he says: how do we prevent the superintelligent creation taking over at the point that it becomes more intelligent than us? We have no model for that. Now, if that creation that is more intelligent than us says, turn off human beings' life support, or do something else
(30:39):
that is existential, how would we stop it?
Speaker 2 (30:43):
So, I think this question has made an assumption, which is: from today, we don't have such a machine, a superintelligent machine, yet. We still have a distance, we still have a journey to take from today to that day. And my question is, why would humanity as a whole allow this to happen? Where is our collective responsibility? Where's
(31:06):
our governance or regulation?
Speaker 1 (31:08):
Which is why I wonder, then: do you think there is a way to make sure that there is an upper limit to superintelligence?
Speaker 2 (31:14):
I think there is a way to make sure there is responsible development and usage of technology.
Speaker 1 (31:21):
Internationally agreed, like at the government level? Is it a treaty? Is it just companies agreeing?
Speaker 2 (31:30):
You're right. The field is so nascent that we don't yet have the level of international treaties. We don't yet have the level of global consensus. I think we have global awareness. And I do want to say that we shouldn't call out only the one possible consequence, or
(31:51):
only the negative consequences, of AI. This technology is powerful. It might have negative consequences, but it also has a ton of benevolent applications for humanity. We need to look at this holistically.
Speaker 1 (32:07):
Do you get frustrated by some of the questions? Because I know you talk to politicians, people with political power, a lot. You've done that in the US, you've done that in the UK and in France and elsewhere. What's the most common question they ask you that you find frustrating?
Speaker 2 (32:21):
I wouldn't use the word frustrating. I would use the word concerned, because I think our public discourse on AI needs to move beyond the very simple question of what do we do when the machine overlord is here. So I don't get frustrated; I get concerned if the only way
(32:45):
to ask this question is a very simple binary: do you want it or not want it? Another question I get asked a lot, possibly more than this question, is the question from parents worldwide. Parents ask me: AI is coming. How do I advise my kids? What's the future of my kids? What should they do? Should they study computer science?
(33:06):
Are they going to have jobs? And so on.
Speaker 1 (33:08):
Answer it, then. People listening to this are probably thinking exactly the same. What do you say?
Speaker 2 (33:13):
I say that AI is a very powerful technology, and I'm a mother. The most important thing we should do for our kids is to empower them as humans, with agency, with dignity, with the desire to learn, and with the
(33:37):
timeless values of humanity. Be an honest person, be a hard-working person, be creative. Worry about what they'll study? Worry is not the right word. Be totally informed, and understand that your children's future is going to be lived in the world of AI technology, and depending on their interests,
(34:01):
their passion, their personality, their circumstances, prepare them for that future. Worry doesn't solve the problem.
Speaker 1 (34:10):
I've got another industry question, which is about the huge sums of money that are flowing into, again, not that many companies, like yours, and whether this might be a bubble, whether this might be like the dot-com bubble, where it turns out that some of these companies are overvalued.
Speaker 2 (34:29):
First of all, my company is still a startup. When we're talking about huge amounts of money, we really look at the big tech companies. AI is still a nascent technology, and from a development point of view, there's still a lot to be developed. The science is very hard. It takes a lot to make scientific and technological breakthroughs. This
(34:52):
is why resourcing these efforts is still important. The other side of this is the market. Are we going to see the payoff from the market? By and large, I do believe that the applications of AI are so massive, whether we're talking about software engineering, creativity, healthcare,
(35:14):
education, or financial services, that I think we're going to continue to see an expansion of the market for AI. I look at it as: there are so many human needs, both in terms of wellbeing as well as in terms of productivity, that can be helped by AI as
(35:37):
an assistant, as a collaborator. And on that part, I do believe strongly that this is an expanding market.
Speaker 1 (35:46):
But what does it cost in terms of power, and therefore energy, and therefore climate? There's a prominent AI entrepreneur you probably know, Jerry Kaplan, who has said that we could be heading for a new ecological disaster because of the amount of energy consumed by the vast data centers that we're going to need in growing numbers.
Speaker 2 (36:08):
This is an interesting question. I do think that in order to train large models, we're seeing more and more need for power, or energy. But nobody says these data centers must be powered by fossil fuel, for example. Innovation on the energy side will be part of this
(36:29):
innovation cycle, right?
Speaker 1 (36:31):
I think it's just that the amount of power they need is so enormous, it's hard to see it coming from renewable energy alone.
Speaker 2 (36:38):
I think right now this is true, but I also know that, for example, when I visit the Middle East, there's a lot of effort in building renewable energy for big data centers. I do think that countries that need to build these big data centers need to also examine their
(36:58):
energy policy and industry. This is an opportunity for us to invest in and develop more renewable energy.
Speaker 1 (37:07):
What worries you about your industry? Because you're painting a very positive picture, and you've been at the forefront of this and you see much more potential, so I understand where you're coming from.
Speaker 2 (37:16):
Well, I'm not a tech utopian, nor am I a dystopian. I think I'm actually in the boring middle. The boring middle wants to apply a much more pragmatic and scientific lens to this. So what worries me? Of course, any tool in the hands of the wrong
(37:38):
mindset or wrong intention would worry me. Since the dawn of human civilization, fire was such a critical invention for our species, yet using fire to harm people is massively bad. So the wrong use of AI worries me. The wrong way to communicate with the public worries me,
(38:00):
because I do feel there's a lot of anxiety in different parts of the world. The one worry I have is our teachers. These people, and my own personal experience tells me this, are the backbone of our society. They are so critical for educating our future generation. Are we having
(38:21):
the right communication with them? Are we bringing them along? Are our teachers using AI tools to superpower their profession, and helping our children to use AI, instead of feeling left out and frustrated? We should be
(38:41):
embracing them. We should be helping them. And it would be a concern if we don't do the right thing for our teachers, for our nurses, for our doctors, for many parts of our society.
Speaker 1 (38:53):
This is the Bloomberg Weekend Interview. And so we're always
interested in people's lives as well as their work. And
I realize that your life, the life that you're living today,
is so different from the way that you grew up, working in your parents' dry cleaner, doing a lot of laundry at home.
(39:10):
So are you conscious of the power that you have as a leader in this industry, of our time and of the future?
Speaker 2 (39:20):
I'm conscious of my responsibility. I understand I'm one of the people who brought this technology to our world. I understand that I'm so privileged to be working at one of the best universities in the world, educating tomorrow's
(39:40):
leaders and doing cutting-edge research. I'm conscious of myself being an entrepreneur and the CEO of one of the most exciting startups in the gen-AI world. So everything I do has a consequence, and that's a responsibility I shoulder, and I take that very seriously, because this
(40:04):
is what I keep telling people in the age of AI: the agency should be within humans. The agency is not the machines'. It's ours. My agency is to create exciting technology and to use it responsibly.
Speaker 1 (40:23):
And the humans in your life, your children: what do you not let them do with AI, or with their devices, or on the internet?
Speaker 2 (40:33):
It's the timeless advice of: don't do stupid things with your tools. You have to think about why you're using the tool and how you're using it. It could be as simple as, don't be lazy just because there is AI. If you want to understand math, maybe large language models
(40:56):
can give you the answer, but that is not the way to learn. Ask the questions. You could use the AI tool to prompt a good question so that you learn because of it. So don't be lazy is one of them. The other side is, don't use it to do bad things. For example, integrity of information: fake images, fake voices, fake texts. These are issues of AI as
(41:21):
well as of our social-media-driven communication in our society.
Speaker 1 (41:26):
I think you're sort of calling for old-fashioned values amid this world full of new developments and challenges we couldn't have imagined even three years ago. Yeah, you can call it old-fashioned, you can call it timeless. As an educator, as a mom, I think there are some human values that are timeless, and we need to recognize that. Finally, your reading: from the child who was
(41:50):
already reading the classics at an early age, nowadays what do you read when you're kicking back at the weekend, if that ever happens?
Speaker 2 (41:58):
Yeah, honestly, I read a lot of technical papers these days. It's kind of sad. I also read to my kids. I have to say that my favorite book these days is Harry Potter, because it's such a great book, and I read it as bedtime reading to my kids.
Speaker 1 (42:19):
Well, that series is going to keep you going for quite a long time. Yes, depending on...
Speaker 2 (42:23):
Yeah.
Speaker 1 (42:23):
So you're not taking away the children's books in their lives, then?
Speaker 2 (42:26):
No, because they live in a totally different time. They actually are exposed to information in a completely different way from my time. There was barely TV in the nineteen-nineties or eighties in China. Now they have the entire internet. I give them a Kindle in Chinese. Do you speak
(42:47):
to them in Chinese? My Chinese is not great, but I do speak to them in Chinese, and their father speaks to them in Italian.
Speaker 1 (42:56):
Okay, citizens of the world, as you've become as well. Doctor Fei-Fei Li, thank you so much. Thank you, Mishal. And that's the Mishal Husain Show for this week. Do subscribe if you haven't already; that way you'll know as soon as we have a new episode. Thank you for the comments and the ratings, and don't forget you can email
(43:18):
us; the address is Mishal Show at Bloomberg dot net. If you want to see the written versions of these conversations with my notes, they're at Bloomberg dot com slash weekend, and you can watch them on YouTube and Bloomberg TV.
The show's producers are Jessica Beck and Chris Martleu. Guest
(43:39):
booking is by Dave Warren, social media by Alex Morgan. Our sound engineer this week was Richard Ward. Video editing was by Laura Francis. Our executive producer is Louisa Lewis. At Bloomberg Weekend, Brendan Francis Newnam is editorial director of audio and special projects, and our executive editor is Catherine Bell.
(44:02):
Our music is by Bart Warshaw. And we'd also like to thank Alana Susnow, Sommer Saadi, and Sage Bauman. And thank you for listening. Until next weekend, goodbye.