Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:01):
I would love to see a governance model where the upstream scientific discovery, the research, is encouraged because that's the innovation engine of our society.
But by the time this technology is closer and closer in the hands of consumers, we do need to put guard rails around it to
(00:22):
ensure it doesn't cause too much harm.
That's artificial intelligence pioneer Dr. Fei-Fei Li explaining how to regulate AI without stifling innovation.
I'm Margaret Hoover. This is the FIRING LINE podcast.
She knows more than anyone else alive exactly how to make AI
(00:45):
work for us. And that means all of us.
Known as the Godmother of AI, Dr. Fei-Fei Li received a lifetime achievement award at this year's Webby Awards.
Artificial intelligence must benefit humanity.
As co-founder of Stanford University's Human-Centered AI Institute, Li has been a leading advocate for the ethical
(01:06):
development of AI. Every technology is a double-edged sword. I think focusing on the
hyperbole and driving policies through that lens is not very
constructive to our society. She sees vast potential for
applications of technology in healthcare, the economy and
more. AI can help drug discovery.
(01:26):
AI can help make our patients safer.
AI can help our government to be more efficient.
There's a lot AI could do to make life and work better.
But is it safe to put AI in the hands of our children?
We teach our kids to use fire. Think about the day you teach
them how to turn on the stove, right?
It's kind of frightening. They have to learn both the
(01:48):
utility and the harm of fire. The same thing with AI.
Dr. Fei-Fei Li, welcome to FIRING LINE.
Thank you, Margaret. I'm excited.
We are now one decade into the artificial intelligence
revolution and I want to know what you would say right now.
(02:11):
How intelligent is artificial intelligence?
What a great question. It's very intelligent.
But on the other hand, that word is so loaded, right?
It absolutely can think in a way that Alan Turing wanted machines to think. But can it think like humans?
(02:34):
I don't think so yet. It's rapidly advancing, and some parts of AI, artificial intelligence, are very advanced, like some of the language intelligence, but some parts are nowhere compared to humans, like emotional intelligence or
(02:55):
uniquely human creativity and all that.
You have written that despite its name, there is nothing quote
artificial about this technology.
It is made by humans, intended to behave like humans, and
affects humans. In what sense is it not
artificial? The human impact, the human
interaction, it's it's influenceon our world, even in our human
(03:22):
lives. For me, those are not
artificial. The new Pope, Pope Leo XIV, is not necessarily optimistic about the prospects of AI to contribute to humanity.
He revealed that his papal name was inspired by Pope Leo XIII, who led the Catholic Church through the disruption wrought by the Industrial Revolution. And he recently said, in our own
(03:44):
day, the Church offers to everyone the treasury of her
social teaching in response to another industrial revolution and to the developments in the field of artificial intelligence that pose new challenges for the defense of human dignity, justice and labour.
Does the new Pope have a point? Absolutely.
I totally agree with him in the sense that no matter what
(04:08):
technology can do, human dignity is absolutely central to our
civilization. And this is the point I'm trying
to make. This is the reason I co-founded the Stanford Human-Centered AI Institute, putting humans in the center, because I think technology is always a double
(04:30):
edged sword. No technology will by itself do
good or bad. It can be used in both ways.
You were born in Beijing. You were raised in Chengdu.
You came to the United States as a teenager with your parents
after the protests and massacre at Tiananmen Square, and later
received a full scholarship to Princeton University, which then
(04:52):
led you on a storied career very young, in which you pioneered
breakthrough research, which ultimately garnered you the
title the godmother of artificial intelligence.
At one point when your mother was asking you what you do, she
also said, what else can AI do to help people?
(05:14):
This, you write, altered your career to focus on how AI can help people. I want to point out, though, that
you do have a pretty idealistic outlook on AI, and it seems as though you are convinced that the technology can be human-centered in order to advance the human condition.
(05:37):
How can AI enhance the human condition?
AI is a tool, and I do believe humanity invents tools, by and large, with the intention to make life better, make work better.
Most tools are invented with that intention, and so is AI.
(05:59):
The conversation with my mother is in a greater context.
About 12 years ago, as AI was taking off, I was thinking about
what's my responsibility as an AI scientist, as the generation
that brought this technology to humanity.
And it really became very important for me that I do have
(06:22):
a responsibility beyond just writing code and creating computer science technology, but really to do good with this
technology. And I think, for example, AI can
help drug discovery. AI can help make our patients
safer. AI can map out biodiversity.
(06:43):
AI can help us discover new materials.
AI can help social scientists sift through and learn from
enormous amounts of data to understand how economies work.
AI can help our government to be more efficient.
There's a lot AI could do to make life and work better.
(07:04):
In your book The Worlds I See, you wrote about an experience in which you first were confronted with ethical questions, when you attempted to apply AI in a hospital-based setting.
So I wonder if you could explain what happened, and how the experience of trying to apply the science in the hospital
(07:26):
turned your career down a course in which you are beginning to think about the broader civilizational implications of AI's application.
Happy to talk about that. So that was about 12 years ago.
My field in AI is called computer vision, where,
(07:47):
simply put, we make computers see as well as humans.
And then, because of my experience in the medical healthcare setting with my mother over the past decades, I'm very familiar with the healthcare setup and I wanted to
(08:07):
use AI to help. So a Stanford Medical School
professor, his name is Dr. Arnold Milstein, and I started collaborating on how we could use smart cameras to help keep our patients safer, both in hospital rooms and possibly at home.
And the idea is that we don't have enough caretakers or nurses
(08:32):
to keep an eye on our patients 24/7, right?
What if they fall? What if they have a change of
critical medical condition? What if they need help?
If we have smart sensors around the hospital room or at home, they can glean insights from the situation and call
(08:56):
for help sometimes on behalf of the patient.
So one project we were doing with Stanford Hospital was about
hand hygiene, because it turns out hospital-acquired infection is one of the top killers of our patients in hospitals.
And germs do get carried by health workers
(09:22):
or doctors, nurses, from room to room, patient to patient.
So there are actually protocols about hand washing.
But in order to have smart sensors understand what's going on and remind doctors and nurses about hand washing, we need to put smart sensors around, you know, the hospital room or the washing station
(09:45):
and so on. So we were very excited to do
this project, to collaborate, but it was also the moment I started understanding that the multiple stakeholders in the situation have multiple perspectives. For the patients and patient
family, they want to be safe. For the nurses and doctors, they
(10:07):
also want patients to be safe, but they also don't want to be
surveilled. They also want to have their
privacy respected. And as this feedback came in, I
began to learn the challenges of technology in a very messy human setting. It cannot just be naive
(10:27):
thinking: oh, I believe this technology can do good.
Therefore that's the answer. We need to begin talking to
multiple stakeholders, understanding their concerns, and try to preserve the good intention and the good utility,
but also work with them to resolve some of the issues.
(10:50):
As you mentioned, you helped found and are a co-director of the Stanford Institute for Human-Centered AI. And you wrote in the New York
Times in 2018 that you worried that quote enthusiasm for AI is
preventing us from reckoning with its looming effects on
society. So if we want to play a positive
role in tomorrow's world, we must be guided by human
(11:11):
concerns. And you have divided these human concerns in AI into three categories: dignity, agency, and community.
What does dignity mean in the context of a human-centered AI? Dignity is essential to each one
of us. It's essential to human beings.
(11:32):
It's the self-respect that we all need.
We need that in our daily lives, from ourselves, but also from the people we interact with.
And now machines are getting smarter and smarter.
We work with machines and we need our dignity
(11:55):
to be respected by the machines.
So what's an example?
Here's an example, and I talked about this.
This is one of the most striking examples I experienced with my mother, who was recovering from a surgery, and the doctors prescribed her to use a spirometer.
(12:17):
So they were asking her to breathe out of it, and the doctor issued, like, an hourly exercise, but for
a patient who just came out of surgery, it's not just lung
exercise. They're in pain, they are
emotionally distressed and there's a lot going on.
And I was the dutiful daughter who was using my cell phone as
(12:41):
a clock and reminding my mom to do that exercise every single hour. And I was counting.
I was doing everything I could. I was almost imagining, if I was not there, a robot could do that, or a computer program could do that. It can totally do that.
But at some point that just totally stressed out my mom,
(13:02):
and my heavy-handed approach of almost taking away her agency, her dignity, and just almost treating her like a machine that has to be on the clock doing the exercise, and she just actually rebelled. She's like, I can't do this.
Even if you tell me this is good for me, my dignity at that point
(13:25):
requires me to take control of my own action and my own healing. And I think that was such a
learning moment for me that whether it's a human, in that
case me or a machine, what we have to do is to respect
people's dignity and agency in many situations.
(13:49):
We cannot forget about that when we make powerful and smart
machines. You've written that the founding
ideals of this country, however imperfectly they've been
practiced in the centuries since, seemed as wise a foundation as
any on which to build the futureof technology, the dignity of
the individual, the intrinsic value of representation, and the
belief that human endeavors are best when guided by many rather
(14:13):
than few. What is the tenet of community involvement in the development of AI?
I think community means many
voices. It means multiple stakeholders.
This technology is a civilizational technology.
It changes every industry, it changes many aspects of life and
(14:37):
many people in the community andmany different communities are
going to be impacted. When I say the word impact is
very neutral, it could be positively impacted or or
adversarially impacted. I think it's important that we
or engage and and try to make a difference.
When you say community, do you refer to the various
(14:59):
stakeholders from the private sector to the public sector to
the government, to the universities, or are you
referring to even different aspects of our, you know, civil life?
I think everybody. I mean all kinds: yes, industry, academia, public sector.
I also mean artists, musicians, you know, legal workers,
(15:23):
manufacturing workers, you know, white collar, blue collar.
I mean all of them. You have been concerned about
bias in artificial intelligence, and you even mentioned in your book incidents of AI mislabeling Black Americans or Black people as gorillas. Studies have shown self-driving
(15:43):
cars are less likely to detect darker-skinned pedestrians.
Some AI image generators have produced imagery that is explicitly racist or sexist. You co-founded AI4ALL
to help with diversity in the field.
How can greater diversity and inclusion improve AI outputs?
(16:06):
So first of all, AI4ALL is an education organization. I believe that this technology is so important that the more people participate, the better.
I love that in the AI4ALL program, which is a K-12 education program, there are high school students
(16:27):
who love dancing, but they join AI4ALL to learn about how dancing and music and AI can work together.
There are students who love journalism, and they joined AI4ALL to learn about the power of this technology, how it can help the future of
(16:48):
journalism. And this is what I believe.
I think that a technology as horizontal as AI will empower
and change so many different kinds of jobs as well as just
activities in our lives. So the more people are aware of
(17:09):
this, the more people are embracing it, the more that
people feel they have a way to participate in it, the better it
is for our society.
You say these are devices that have human inputs, and those human inputs then produce human outputs. And one of the things you even
mentioned in a congressional testimony was that there are not
(17:30):
very many women involved in AI. There are not many people of
color involved in AI. To what extent does that impact
the algorithms? Does that impact the outputs?
Does that impact society at large if it's not reflective on
the front end of society at large?
Yeah, you're totally right. You know, look, AI is a technical system. And when we design this
(17:53):
technical system, maybe it is for chat or for creators or for, you know, better movie recommendations, whatever it is, every step of the way people are involved.
You know, some work is curating data sets or labeling data or designing algorithms. All of this, every step, people are
(18:17):
involved.
So when we invite more people with different backgrounds, their insights, their knowledge, their emotional understanding of the downstream application will impact the input of the system.
It seems to me that we're in a moment right now where there is
an intense pressure to eliminate diversity, equity, inclusion
(18:41):
programs, to not think so much about the inherent diversity of
the groups that we're participating in or perhaps even
the technologies that we're engaging with on some level.
Are you swimming upstream as you think about these inputs?
I think we all want a better world in our own ways.
(19:05):
We all want a world that more people get benefits from, whether it's technology or a better economy.
So I think I really believe in these kinds of common-sense values
that we want more people to benefit, we want more people to
be involved. And I also really believe in
(19:28):
education and involving people early in the education of technology. And of course, implementation:
how do we translate these beliefs into implementation?
And I still believe we need to involve as many people as
possible. I still believe that students from all backgrounds, whether they're from rural communities,
(19:53):
inner cities, artists, immigrants, yes,
girls, art lovers, you know, future journalists, future
lawyers, future doctors, they all should be learning AI and
have a say in this technology. You talk about the importance of
community collaboration, you've talked about the various
(20:14):
stakeholders as part of a human-centered AI approach.
Yet the majority of investment in AI is coming from the private
sector. The advances in AI seem to be
happening mostly in the private sector and within industry.
And I've heard you say that quote, AI is too important to be
(20:35):
owned by private industry alone. How do you address that?
Yeah, Margaret, I'm actually concerned about this.
On one hand, I absolutely take a lot of pride, especially in America, that our private industry is so vibrant in developing wonderful AI technologies and they are
(20:56):
translating that into products that help people.
On the other hand, this vibrancy that we see today from the private sector is a result of a very healthy ecosystem in the past decades, where the federal government, public sector, academia, and private sector worked together
(21:20):
to grow this technology.
So for me, the ecosystem is almost like a really healthy
relay race where the public sector and academia take the first baton and do the basic science research.
As we run more and more advanced, we pass that to industry, and eventually everybody in society
(21:42):
benefits. And yet the model now is the
opposite. The beginning investment and the beginning research isn't always happening at the university.
What's happening is that universities have been so drained of resources. You know, the chips are not in universities, the data are very rarely available in
(22:03):
universities, and a lot of talent is going only to industry, and we're not getting enough resources back into academia. This is where I get worried
because training, a lot of good training is done in
universities. Even if you look at today's big
(22:24):
tech companies, most of their talent comes from, you know, academic programs that provided computer science education, PhD programs, master's programs, and we still need that.
What is the risk of having it
all be in industry and not be shared by either government or
(22:45):
universities? And that, of course it it's
worth mentioning, of course, universities have been drained
of resources. Perhaps you know, the other
actor here is government investment.
Yes. How important is the
government's role in investing in AI?
The government's role in investing in basic science is
fundamental to our country and to our society because in
(23:08):
academia the kind of curiosity-driven research produces public good, and public good is in the form of knowledge expansion, scientific discovery, as well as talent.
When students come to the universities and study under the best researchers, get into labs, go to lectures, they
(23:33):
can glean the latest knowledge. This is a fundamentally critical
thing for our society. Do you think that's at risk in
this environment? I've been saying this for, gosh, almost 10 years.
I'm seeing the draining of the resources, you know, starting
(23:56):
quite a few years ago, and I continue to be worried.
I continue to advocate for a balanced ecosystem.
Again, I'm very excited about what the private sector is doing, but I'm equally excited that my colleagues at Stanford are discovering cures for cancer, are uncovering how the brain
(24:18):
works, are listening to whales and understanding how they talk to each other and migrate across the ocean.
This is important knowledge and scientific discovery that we
continue to need. You said earlier this year,
quote, it's essential that we govern on the basis of science
(24:40):
and not science fiction. Yes. Can you give me an example of
the wrong way to go about governance in AI?
An example of the wrong way is starting with hyperbole.
Hyperbole that this technology would end humanity, or that this
(25:00):
technology is only utopia, that there's nothing that can go wrong with AI. These two things hardly exist for any technology.
Even when humans discovered fire, it could be deadly.
It's true, but it also has changed the way we live and eat
(25:21):
in the early days, to become stronger.
So every technology is a double edged sword.
I think focusing on the hyperbole and driving policies
through that lens is not very constructive to our society.
So you've written that AI governance should, quote, ensure
its benevolent usage to guard against harmful outcomes.
(25:45):
In practice, how do you advise policy makers to do that?
Yeah, this is a topic we talk a lot about at the Stanford Human-Centered AI Institute. I really think a pragmatic approach that focuses on applications and ensuring
(26:06):
guardrails for the safe delivery of this technology is a good starting point.
For example, in medicine, right,
we have the FDA, a regulatory framework.
Is it perfect? No, but it does a lot of the guardrailing to keep our consumers safe, and as AI becomes
(26:29):
more and more impactful in the area of food or drugs, we need to update the FDA to answer to the new changes in these applications.
On the other hand, transportation is an even better example. You know,
(26:50):
clearly now we're getting closer and closer to self-driving cars, and we need the regulatory framework to be updated so that we can understand the guardrails, the accountability. But just because there is
potential harm doesn't mean we should stop creating cars.
(27:12):
Think about 100 years ago: there were fatal accidents, more fatal accidents than now, using cars.
But instead of shutting down GM or Ford, we created seatbelts and speed limits. So a good regulatory framework helps to keep the utility of the technology safe, but also
(27:35):
continues to encourage innovation.
It's a hard balance to strike. It is.
It is a hard balance.
So how do we do it? Is it legislatively done? Is it, I mean, you had a recommendation for the state of California. I mean, practically, how can we implement sort of that perfect balance?
(27:56):
I think we begin with education and dialogue.
There's so much to be done in AI education.
Not just in classrooms in elite universities; it's everywhere, especially in public circles. I think public education about AI is severely lacking.
This is when I get very worried,
(28:17):
when the hyperbolic voices get amplified the most and the
public only hears about the extremes.
And a lot of education and dialogue needs to be done
between the tech world and the policy world.
This is why my institute goes to Washington, DC, and talks to
(28:37):
policymakers and lawmakers across the aisle.
This is too important to be a political topic.
And through this education and dialogue (oh, we also have a congressional boot camp at Stanford to provide information),
I think that would be a really important beginning.
(28:58):
And then keep the technologists and experts at the table as the
policy is being made.
Yeah, you say sci-fi, science fiction, is not the way to create AI governance.
Is there a science fiction movie that has gotten AI right?
(29:18):
You know, I'm so busy I don't even watch movies.
But OK, sure. Big Hero 6, I think that's the Disney movie.
Baymax, the big white friendly robot that is there as a friend, as a support.
It doesn't take over anybody. And I, at least...
(29:42):
You see, this is a more realistic portrayal.
I don't know if it's realistic. But it's what we should strive for.
It's heartwarming and it's constructive. This is what we hope for, yes.
This is what we hope for.
The original version of this program, Firing Line, which aired from the 1960s through the 1990s as the Internet was emerging, had dealt with the new
(30:04):
technology of the time. Listen to this paean to the Internet on the original program in 1996 by John Perry Barlow, who was a poet and an essayist, and he called himself a cyber-libertarian.
I come to you from cyberspace,
and that sounds to you like a ridiculous thing to say.
I mean, I must be some kind of cyberspace cadet, but I'm
(30:26):
telling you that there is a social space that includes the
entire geographical area of the planet Earth and a fairly large
and vast and rapidly growing percentage of the Earth's
population. And there is a culture in there.
And there is a way of understanding ideas and the exchange of ideas in the free market of ideas.
(30:47):
And those folks are not vulnerable to the excesses of
the United States Congress. We are free and sovereign from
whatever the United States Congress may wish to impose on the rest of the human race.
As we look back at that era, it seems like one of the mistakes
(31:08):
policy makers made was that they didn't anticipate or have a
mechanism for dealing with the real risks that would develop
from the Internet, from social media, from threats to privacy
and even threats to democracy. What lessons can we take from
that era as we enter this age of artificial intelligence when it
(31:32):
comes to a regulatory framework and governance?
Yeah, it's kind of stunning to revisit that. This is again every technology, every powerful tool is a double-edged sword.
It is, you know, it is great to be hopeful, to want to use
(31:54):
technology for good, to come from that right place.
But we need to know that any technology can harm people and we cannot be naive about that. You called me an idealist earlier.
I think I'm a pragmatist. You know, I also see that we
absolutely need to take into account the potential harm of
(32:16):
this technology.
Which is perhaps why you focused on ensuring that we create a human-centered AI, as a counterbalance to where human nature could take it.
Yeah, and this is also why I
believe in dialogues. I think that we especially, you
(32:37):
know, I myself am a computer science expert.
I'm an AI expert, but there are policy experts, there are healthcare experts, there are legal experts.
We need to come together to make this work.
The vice president of the United States, JD Vance, warned at an
international AI summit earlier this year that excessive
regulation could stifle the AI industry.
(32:59):
The AI future is not going to be won by hand-wringing about
safety. It will be won by building.
As the government goes about crafting AI policies, how should
we think about the balance between innovation and safety?
This goes back to what I believe, which is that a
(33:25):
pragmatic approach that understands the power and constructive utility of this technology is important, but also understanding the potential downstream harm or unintended consequences is also important.
So I would love to see a
(33:47):
governance model where the upstream scientific discovery, the research, is encouraged because that's the innovation engine of our society.
But by the time this technology is closer and closer in the hands of consumers and users and small businesses, we do need
(34:09):
to put guard rails around it to ensure it doesn't cause too much harm.
Earlier this year, the Chinese
startup DeepSeek unveiled a chatbot that has outperformed
models that were developed here in the United States at a much
lower cost. And the breakthrough, of course,
triggered concerns from policy makers in Washington that China
(34:33):
could outpace the United States in AI development.
You're a scientist. I know you're not a policymaker necessarily, but you're a scientist.
How do you look at the race and the geopolitical competition between the United States and its foreign adversaries when it comes to artificial
intelligence? So, Margaret, I do travel to
(34:55):
many parts of the world. I was in Europe, I was in Asia,
Singapore. Every government around the
world now cares about AI. It's not just the US versus China or just two countries. This technology is so important that every government, every country is doing everything they can to
(35:20):
ensure that they stay in this, whether you call it a race, competition, development. It's very important.
Does it matter where these advances are made?
It matters what values we care about.
This is why I continue to come back to human-centered AI. I love
(35:40):
this line: there are no independent machine values.
Machine values are human values. If we are a society that believes in, we talk about, dignity, agency, and liberty, then we know we need to create technology that doesn't harm
these values. Sam Altman, who's the CEO of
(36:02):
OpenAI, told senators that the future, quote, can be almost unimaginably bright, but only if we take concrete steps to ensure that an American-led version of AI built on democratic values
like freedom and transparency prevails over an authoritarian
one. Do you agree with that?
I absolutely believe that democratic values are very
(36:25):
important. The Chinese military has already
reportedly started to integrate DeepSeek into non-combat tasks.
From a national security perspective, does the US
military need to focus more of its resources on AI development?
I think the defense industry in general should absolutely work
(36:47):
on AI. You know, I'm learning from my colleagues.
The defense industry in the US, for example, is not only, you know, the industry that keeps our country safe, but it's also one of the biggest employers of American citizens. It also takes care of veterans
(37:08):
and their families.
There is a lot of medical usage; it also, you know, helps in disaster relief and all that.
So I absolutely think the defense industry needs to continue to develop advanced technology applications using AI.
With the world's most advanced militaries deploying this all-powerful
(37:31):
tool, of course it won't always be human-centered in its priorities.
What concerns you the most?
Great question. I honestly have a lot of concerns seeing AI. If you focus on national security, of course I'm worried about AI harming people, right?
(37:56):
Nobody wants harm. Nobody wants war.
Nobody wants families to be taken apart.
And you know, I was a physics student when I was at Princeton.
You were inspired by Einstein. Exactly.
So we have seen that technology can become harmful for
(38:19):
people in warfare, and obviously I don't want to see that.
In the meantime, I'm also concerned about AI leaving people behind in socioeconomic well-being.
This is why I care so much about AI4ALL and just human-centered AI. Because as jobs change, we also
(38:40):
will face the impact of AI. I also would love to see that humanity's creativity, these things so dear to humans,
continue to thrive in this technological era.
So there's a lot.
There's a lot to be worried about.
Listen, here's one more thing to worry about.
(39:01):
You're also a mother. You have small children.
I do, too. Google recently announced that it would make its Gemini chatbot available to children.
Now, it includes many safeguards, and Google has still warned parents that children may encounter, you know, information or content that parents don't want them to see.
But is AI ready to be placed in the hands of children?
(39:25):
In general, I think anyone who is a learner, and our kids learn from the beginning of their lives, should use AI as a tool. I do believe that.
And how do you prevent the, you know, the loads of reporting that students are over-relying on chatbots to
(39:47):
complete their papers, to cheat on their homework? That's the thing.
And the criticism is that the chatbots and the AI actually stifle the process of independent thinking and the development of critical thinking.
If that happens, it's the
failure of education, not the failure of the students.
I believe that if we teach responsible tool use, students
(40:11):
will be superpowered by AI, by calculators, by computers.
Think about it. Is it like the calculator?
It's more like the calculator, right? But...
In the way that we integrated the calculator, the scientific calculator, which we did, which we did.
Which we did successfully. I would say we should, and this comes back to human-centered AI. It's our generation's
(40:32):
responsibility to integrate AI into learning.
Not just K-12 learning, but lifelong learning.
You know, as mothers, we teach our kids to use fire.
Think about the day you teach them how to turn on the stove,
right? It's kind of frightening.
And then the next thing they want to do is to make an
omelette. But we still have to teach them.
(40:54):
They have to learn both the utility and the harm of fire.
The same thing with AI.
So I really think it's not constructive to just focus on students are cheating, students are cheating, if we don't teach them well, if we're not creating a learning environment where they know how to use constructive
(41:15):
tools.
I think we should absolutely incorporate AI into kids' learning, into classrooms, into teaching, into upskilling, reskilling, continuous education.
This is a useful tool for us.
Final question: In an essay on artificial intelligence from 2018, Henry Kissinger wrote, the
(41:36):
most difficult yet important question about the world in
which we are headed is this. What will become of human
consciousness if its own explanatory power is surpassed
by AI and societies are no longer able to interpret the
world they inhabit in terms that are meaningful to them?
I think the question is, for all that we stand to gain from AI,
(42:01):
are we in danger of losing something fundamental about our
humanity? Great question.
This comes to the word agency.
If we give up, not just to AI, if we give up to authoritarianism, if we give up to the Internet in a harmful way, we would lose our
(42:30):
agency. And AI is the same.
I don't think we should give up our agency.
Fei-Fei Li, thank you for joining me on FIRING LINE.
Thank you, Margaret.