Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:14):
Welcome to a special bonus edition of The Middle podcast.
I'm Jeremy Hobson, and we've done a few live shows
on the radio where we're getting so many great calls
and great questions that we could easily go on for
quite a while longer, but we can't because the way
The Middle works on the radio is we have to
start at a specific time, we have to take breaks
at a specific time, we have to end at a
(00:34):
specific time because the show is on over four hundred
and twenty public radio stations, and that's just the way
it works. So we did a show a couple weeks
ago that was one of those occasions. It was about
artificial intelligence, which is becoming a bigger and bigger part
of our lives every day, whether we like it or not,
and we were asking for your questions about AI with
two guests, one of whom is here to answer some more.
(00:57):
I'm delighted to welcome back to this podcast extra episode
of The Middle, Vilas Dhar, president of the Patrick J.
McGovern Foundation, which is trying to make sure that AI
is being used for good. Vilas, welcome back. Thank you
so much for joining us.
Speaker 2 (01:10):
Jeremy, thanks for having me. This is so much fun.
Speaker 1 (01:13):
And if I had a nickel for every time I read
the word Skynet in our email inbox after that show.
But anyway, there were a lot of people with a
lot of questions and comments. So first of all, before
we get into the extra questions that people had, did
anything surprise you about what our listeners were asking?
Speaker 2 (01:30):
You know, it didn't surprise me, but it's something I
hear all the time, even for those of us
who are in AI. There's one way of thinking about
this that's about techno optimism. Everything is going to be
great and don't worry about it. It's not the school
of thought I come from, but sometimes I reflect on
how much public fear and concern there is about these technologies.
(01:50):
Some of it's tied to what AI is and will be,
and some of it isn't, Jeremy. Some of it is
just a lot of public narrative about all the things
we should fear, and I think one of the
most worthwhile things we can do is not to try
to confront those directly, but maybe make sure that we
all understand where we are at this point in time
on AI and where we're going, and realize that some
of these narratives are hurting more than they're helping.
Speaker 1 (02:12):
It was interesting that we got one call during the
show where somebody said, are we being too negative about this?
Is there a risk in even talking about the risk
of AI?
Speaker 2 (02:22):
It's a fair question. Look, again, I start from common
sense in these conversations that technology is nothing more than
an expression of our values, and we should be able
to talk about the risks and fears. But if we
don't balance it with an idea of what we're trying
to solve for, we're just spinning in circles.
Speaker 1 (02:40):
One thing that has happened since we did our show
is that the chairman of the Chinese company Alibaba
said he sees signs of an AI bubble in the
United States because of all the investment that's being made here.
Speaker 3 (02:52):
Do you think that that's a concern?
Speaker 2 (02:54):
You know, the funny thing, Jeremy, is just today there
was an article in the MIT Tech Review that talked
about the AI bubble in China. One of the things
that's happened is they have invested so heavily in data
centers and AI infrastructure, and they don't have the commercial
demand to meet all of the things they've built. So
AI tech bubbles will happen. I think we might be
in one now, and I bet you that we'll have
(03:16):
five more of them over the next ten years. To me,
the focus is on what are the core principles and values.
What are the ways we're thinking about policy in the
space that's more than just responding to whatever hype cycle
we're in at the moment? How do we think about five,
ten and twenty years in this debate?
Speaker 1 (03:34):
All right, well, let's get to some of our listener questions,
because they're not all doom and gloom.
Speaker 3 (03:38):
Here is one of them.
Speaker 4 (03:40):
My name is Josh, I'm calling from Denver. My question
would be, how do you guarantee that everyone will participate
in ethical use of AI? And what protections are in
place when that ethical code is broken?
Speaker 3 (04:00):
What do you think?
Speaker 2 (04:01):
So I have a starting point to this journey that
I think is really important, which is that ethical AI
doesn't exist. There's no such thing. And over
the last few years we finally moved away from that term.
Ethics is a human function; it's a product of human decisions. Now,
one of the real questions that we have today is
who's going to be responsible for building ethical decisions into
(04:23):
how we make these technologies. And there's an easy and
incorrect answer, which is that the people who build these
tools should be responsible for their ethical use. But to me,
that's too restrictive of a starting point. We can't just
fix this by building ethics classes into university computer science curriculums.
That's just not how the world works, because ethics is
a product of where and how these tools are used,
(04:45):
what they're built for, who they're supposed to benefit. So
the question that this listener asked is exactly the right one.
How do we make sure that ethics is something that
we hold as a society, and that means a few
different things. It's certainly the technologists who build it;
they have an integral role. It's also policymakers, and it's consumers,
and it's creators who build content that has nothing to
(05:07):
do with AI but is being used in these platforms:
writers and artists and authors. And what we need to build, Jeremy,
that we don't have today is a convening space that
lets all these people come together and actually say, what's
our shared vision of what ethical use of AI looks like?
And we need to build that fast. We're seeing the
seeds of it.
Speaker 1 (05:26):
Now, how does that kind of a thing get built?
Who builds it?
Speaker 2 (05:30):
The first thing I think that would be really helpful
is for us to see some action by the government,
whether it's Congress or the White House, to say
this is a priority for us to figure out at
a whole of society level. I've spent today actually on
the Hill meeting with legislators from across the country, and
I'll tell you this is one of those uniquely bipartisan
issues because everybody is concerned about the same topic. So
(05:52):
the first thing we need is legislative action that
says we need some sort of federal regulation of these tools.
The second, and you'll hear my bias in this,
because I lead a civil society institution, is to have
civil society step in in a meaningful way: universities, nonprofits, governance
think tanks, to step in and say we're going to
start building the public infrastructure for a discourse about the
(06:13):
ethics of AI. I spend a lot of my time,
very unusually for a philanthropic CEO, in small towns and
communities across America, asking people what they're concerned about. And
when we open up space for those discussions, I learn
just as much in those conversations as I do out
in Silicon Valley talking to somebody who's building a tool.
We need to take that and lift it up. And
(06:34):
some of the places where I'm seeing it happen are
great op-eds, great creators and public culture people
like Fran Drescher, who you might remember from The Nanny, who came
out to lead a public conversation about the use of
screenwriting and IP and how AI will intersect with how
we make movies, and artists and musicians who are using it.
We need to make sure that there's a pattern of
these kinds of questions being asked in the public sphere.
Speaker 1 (06:58):
Well, that's why we do this show, so that we
can let everybody into that conversation. Let's get to another comment.
This came in online at Listen to the Middle dot com.
Sylvie in Atlanta, Georgia writes: AI uses a tremendous amount
of water. Will AI ever realize it is competing with
humans for water?
Speaker 3 (07:14):
And what will it do? You know?
Speaker 1 (07:16):
We heard a lot during the show about some of
the environmental consequences of AI, But what about this question
from Sylvie?
Speaker 2 (07:23):
I love the question and I love the way it's framed.
Will AI ever realize that it's competing with us? Honestly,
I hope not, because I don't want AI to realize anything.
I don't want AI out there thinking on its own. Instead,
what we need to make sure is that we build
AI that's conscious of this. I'll give you an example.
Just last September, I worked with one of the world's
leading AI artists, a fellow named Refik Anadol. And if
you haven't seen his work, it is absolutely stunning. It
is beautiful. It was at MoMA, and we actually
showed this work at the United Nations. But here's why
this particular piece that we built in collaboration was so important.
It has three elements, and I'll be very quick. One
is ethically sourcing one hundred million images of underwater environments
from around the world. These are corals and natural seascapes.
(08:07):
We brought them together with the consent of communities, and
then we trained a new model. You might think of
it like a GPT, but we trained it only using
power from renewable sources and only trained when the grid
wasn't overwhelmed. Now, as you know, electricity is a direct
connection to water, right? So if we could build AI
tools that are conscious of how we're using water, we
could make it a part of the design process, and
(08:28):
we could show that you can actually build AI
that protects water resources at scale. So those are two elements
of this: ethical data, and use of renewable energy and sustainable
water resources. And the third, the product of this,
was this beautiful engagement for people with the visual ideas
of what our natural seascapes look like that actually caused
people to walk by and say, wait, this is what
(08:49):
it could look like. Why aren't we doing more to
conserve it? The takeaway from this is the connection between
AI water use and a broader response to climate change
and sustainability of our natural resources. We shouldn't just focus
on AI. We should focus on why we can't actually
have a public conversation about building renewable and sustainable power
that actually protects our natural water resources.
Speaker 1 (09:12):
But the way that you answer that question, it makes
it sound like you think, oh, well, AI would never
do something that we don't want it to do. AI
is never going to be mean to us and take
that road. As long as we build it the right way,
it'll never figure anything out and start being evil.
Speaker 2 (09:27):
Well, you're taking me down a different rabbit hole, and
I love it, because where you've taken me to is
where we've heard some of the most sensational and crazy
talk coming out of Silicon Valley. This is Skynet all
over again, right? What we're worried about is the specter
of an AI that's going to sit there and say,
I need to realize that I'm doing something that is
in opposition to humanity. And as soon as you start
(09:47):
at that point, you immediately get to what happens when
AI decides it's more important than we are. But
everything I've just said is still in the realm of
science fiction. There's no credible science today that talks about
whether AI will have agency, or consciousness or an identity
all of its own. We should be thinking about it,
we should be planning for that as a potential future.
(10:09):
But I'm going to say it very clearly here, there
is no scientific evidence or consensus that we are anywhere
on a path to building AI that's going to have
its own agency, its own sense of purpose, that's going
to be oppositional to humans.
Speaker 1 (10:24):
Let's get to another email comment. This is from David
in Minnesota. He writes, how could AI advance quantum computing
and vice versa. And as you answer that question, you
can be as condescending as you want, because I don't
know anything about quantum computing.
Speaker 2 (10:38):
Never condescending, I hope. But quantum computing is fascinating, and honestly,
I have a bunch of degrees, and even to this day,
I'm still trying to figure out how quantum works, as
is everybody on the planet. But let me kind of
give a little bit of a brief explainer, or 'splainer,
as we like to call it. So quantum computing. You
know it's in the news a lot. What it essentially
is is a totally different way of thinking about computing
(10:59):
entirely. And instead of being limited by what we've always
thought computers would let us do, now there's this brand
new set of things that quantum will let us do.
Some of them are really esoteric. They're things like, well,
we could break all cryptography that we've ever had, and
there's no such thing as secure communication. Maybe some of
the more interesting things are in foundational science. Quantum computing
could unlock how we do drug discovery and understand proteins
(11:23):
and biologic functions. There's two areas where quantum and AI
are intersecting in really amazing ways. The first is AI,
as it does with so many other disciplines, is changing
the speed of scientific discovery and how we create quantum computing.
It lets us test new ideas and virtual environments. It
lets us design quantum chips faster so we don't actually
(11:45):
have to build them. AI becomes a tool that supports
our scientific discovery. On the flip side, we don't
really have any answers here, and Jeremy, I'll tell you I
have no answers in this space yet. But some speculation:
quantum computing might change how quickly AI works. And coming
back to our earlier questions, it might mean that
the way we think of AI today, which is take a box, throw a lot
(12:07):
of data into it and somehow it's able to reason,
would actually fundamentally change because of the nature of quantum computing.
It might mean that we don't need to train a
model using enough power to power a small city for
six months. We might have a totally power efficient way
to do it. This is all speculation, but in the
next ten to twenty years this will be a really
fun conversation.
Speaker 3 (12:25):
For us to have. All right, let's get to another caller.
Speaker 4 (12:29):
Listen, Hi, my name is Mary. I'm calling from Atlanta, Georgia.
I'm wondering, for people utilizing ChatGPT to write legal
documents or documents in school systems, how
that's going to affect FERPA and
other things like that.
Speaker 1 (12:45):
She mentions FERPA, which is the Family Educational Rights and
Privacy Act of 1974. This caller is worried about privacy.
Speaker 3 (12:51):
What's your response.
Speaker 2 (12:53):
Let me start with the absurdity of AI and legal documents.
I love this thought experiment. I was a lawyer for
a period of time, Jeremy, and I went and talked
to legal professionals. I explained this to them. Think about
a world that's maybe five years away, where your lawyers
use a GPT product to write their briefs. A judge
has their clerks use GPT to summarize the briefs. The
(13:14):
judge makes a decision, then has GPT write their decision down,
and then that goes into the academic literature and legal
scholars use GPT to understand it. At that point, what
are humans really contributing to the process? Well, it's going to
force lawyers to actually justify what they do in a
way that maybe is going to be really good for
the rest of us. So that's a little side comment
on the legal profession at large. AI is going to
(13:36):
have a transformative effect on it. But the privacy question
is really important because remember that the way it works
today is that all the GPTs, the AI products we use,
they're started and owned by a very small handful of companies.
There's a couple of open source ones like DeepSeek, but
really it's often OpenAI or Google or Microsoft. And when
those tools are used to evaluate your records and potentially
(13:57):
be used by a lawyer to write something, remember that's
your personal data that's being transmitted to that company's servers
that sometimes is then being used by them to train
their models, and we have no visibility or control over that.
So this privacy question that's being asked by this listener
is really important. In the United States, as you well know,
we have no structural constraint around privacy regulation. We don't
(14:20):
have a single national law, and one of the most
important things that has to happen in the next five
years is a way for us to be able to
tangle with these questions. If my private data is going
to an AI system, what are my rights to privacy,
and how do we keep companies from using that material,
that data, that information for their own good?
For now, there's not a good answer, and I hate
to say it, which is why if you're contracting
(14:42):
a lawyer, you need to be very careful to tell them
exactly how you're willing to let them use AI on
your data.
Speaker 1 (14:48):
Yeah, I have to say I'm not mad about it
because it gets me through the airport faster. But it
is kind of amazing that I don't know what I signed.
But now I just look into a camera. I don't
even give the TSA agent my ID, and all of a
sudden I'm through, and it's like, oh, okay, I guess
we're done with that level of privacy.
Speaker 3 (15:04):
You've got my eyes.
Speaker 1 (15:05):
Now you've figured out how I look, and you're willing
to trust that just to let me through security.
Speaker 2 (15:11):
Look, I travel all the time, and I'm with you,
And this is the real challenge that our listeners need
to pay attention to. For the last thirty years, this
has happened to us over and over again. We get
a terms and conditions, we click, I accept, We rarely
read it, and in that moment, we are giving up
some of our privacy rights and we get some value
back for it. It's been easy when that was in
order to go online and read a news article or
(15:32):
go on a dating site. But when it gets to really
fundamental information like our biometrics, our digital identity, now is
it time for all of us to wake up and say, hey,
wait a minute, I don't want to do that passive
consent anymore. I want to actually know how you're going
to use my data. And again it comes back to
the baseline. We got to call our legislators and tell
them what we care about and tell them that we
want privacy regulation.
Speaker 1 (15:53):
I'm speaking with Vilas Dhar, who's the president of the
Patrick J. McGovern Foundation, and you're listening to a special
edition of The Middle podcast.
Speaker 3 (16:00):
We will be right back. This is The Middle. I'm
(16:21):
Jeremy Hobson.
Speaker 1 (16:22):
I am talking with Patrick J. McGovern Foundation president Vilas
Dhar on this podcast extra episode to answer more of
your questions about artificial intelligence. And let's get to another voicemail.
Speaker 5 (16:33):
Hi, my name is Jaye Martin calling from Mount Pulaski, Illinois,
and I was kind of curious. You all talked a little
bit about how ChatGPT is bringing up kind of
the bottom level of the entry level workers. That's not
exactly screwing over the middle class, but the average worker
might be having a more difficult time. When you all talked
about bringing the bottom up, it sounded like it was just
(16:56):
a negative, but I don't know if there's some positive
in there too.
Speaker 1 (16:59):
So it sounds like Jaye is talking about the fact
that ChatGPT and other generative AIs are sort of
making entry level work a little bit easier, while at
the same time potentially disenfranchising people whose jobs are now
potentially being streamlined or made obsolete because artificial intelligence can
just do it for them. So is there a way
to make it so that AI can benefit people's professional
(17:21):
lives and not just take their jobs at all levels?
Speaker 2 (17:25):
Jeremy, I have a very good friend named Jamie Merisotis
who leads the Lumina Foundation, and years ago, even before
ChatGPT, he wrote a book called Human Work.
And in that book he identifies all of
the different parts of our jobs and makes some speculations
about how AI might automate them. And he has some
very clear insights in that book. One of them is
he says, you know, there's a lot of our jobs
(17:46):
that are mundane, banal kind of tasks that you just
kind of check the box on. But in every job,
and it doesn't really matter whether it's entry level or
super senior, there is something that actually indexes us
to human creativity, to human empathy, to something that we do
as a part of our social connection to each other that,
as far as we know, AI systems are never going
(18:06):
to be able to replace. Let's take that proposition and
try to answer this question. Even at an entry level job,
there's gonna be something there that AI probably isn't going
to do as well as a human. So the choice
in front of us is are we actually going to
try to protect those tasks? Are we going to make
sure that we leave open opportunities for people to do them,
or are we okay with going to a society where
(18:26):
we say we don't care. There's a great meme going
around lately of somebody pulling up to a fast food
restaurant where there's an AI order taker at the drive-through,
and they just kind of mess with it a
little bit. And you know what happens is, it might
be funny for the first order, but at some point
you realize that our lives are made up of hundreds
of social connection points we have with people. Sometimes they're
(18:47):
deeply frustrating or annoying, like when you call the customer
service at your cable company. But we're social creatures and
a lot of what we do is engage in that way.
So I think the answer is, can we actually find
ways that even in those entry level roles, those first jobs,
we're prioritizing human work, and we're training people to do
those things so effectively that maybe it actually makes the
(19:08):
world a better place, a more fun and enjoyable, happier
place for all of us.
Speaker 1 (19:13):
But if I just think about an example of that,
let's think about, you know, a job that has definitely
been replaced in many areas by machines, which is the
checkout at the supermarket. What is the entry level worker's
contribution that they can make?
Speaker 3 (19:32):
I mean, they are faster.
Speaker 1 (19:33):
If I go and ring up everything myself, I'm not
going to move as fast as the person who knows
that, you know, the heirloom tomatoes are this code,
and they've moved quickly. But what is the other benefit of
that kind of thing? Can you see a human element that
is better than the machine in that way?
Speaker 2 (19:50):
But this is the perfect example of the question of
just because we could doesn't mean we should. I've been
in those stores from one of the big tech companies
where you walk in and you wave a credit
card and there's not a human to be seen, and
you walk out, and it's novel and fun, Jeremy. And look,
I like technical things. I'm like, this is great. But
if you give me my choice, I don't really want
to go to that store. I want to go and
(20:11):
see somebody and say, how's your day going right, and
have a chat about the weather. The point is,
a lot of these tools are going to give
you more productivity and efficiency. They're going to drive profit
margins for big employers. But when did you and I
get to make a choice and say, you know what,
We're happy with the fact that you got rid of
all the tellers in the line? I actually like talking
to the tellers in the line. I'd rather go to
(20:31):
the store and talk to them. The inertia of the
moment is that all of this is going to get
automated and we're all going to have to go along
with it. But I don't believe that. I believe
in the political power of people coming together and saying we
want to advocate for a different choice. There's going to
come a time pretty shortly when it's not just your teller,
it's your nurse, it's your pharmacist, it's everybody who provides
(20:52):
service in your life, and at some point people have
to stand up and say just because we can doesn't
mean that's what we want.
Speaker 1 (21:00):
Let's get to another email comment. This is a very
interesting one from Fred in Pennsylvania. He says, are you concerned,
in an ethical sense, not for the rights of a
human being, but that inalienable rights aren't being offered to
what is being built as an autonomous being? Do you
think the same abuses levied upon humans by corporations historically
will be more prevalent, if not easier to inflict, due
(21:21):
to a lack of oversight into the rights of AI itself.
Speaker 2 (21:25):
I'm going to give you a controversial answer, Jeremy. I've
done a few things in my life, and you shared
some of this. I was a human rights lawyer for
a period of time. I think it's pretty easy for
me to say to you, non controversially that human rights
are human rights, and that's not even AI. Let's just
stop it at humans and not talk about corporations as
having human rights either. We are down a weird and
(21:46):
winding path at the moment, but I don't care what
we build, as autonomous as they might be. Human rights
are human rights, and that's where we should stop the conversation.
If we're going to build tools that somehow are intended
to increase human welfare, then that should be a part
of the conversation: that the things we create are
intended to help us. Now, I may regret this when
(22:07):
our robot overlords come knocking in fifty years, but
for the moment, I'm pretty confident that actually we should
live in a world where we prioritize human interests.
Speaker 1 (22:19):
When you're on the AI version of Meet the Press
and they say, Vilas Dhar, you said in twenty
twenty five that we don't have any rights?
Speaker 2 (22:25):
Okay: Senator, I have no recollection of the events in question.
Speaker 1 (22:29):
Right, let's get to another listener comment. It's something I
hear a lot about when it comes to the tangible
benefits of AI. Tony in Grand Ledge, Michigan writes: I've
heard that many doctors are either retiring or planning to
retire in the next few years, and that US medical
schools are not graduating new doctors fast enough to replace them.
What role do you see for AI in the medical
(22:50):
field in the next ten to twenty years.
Speaker 2 (22:52):
You know, I had a chance to go out to
Stanford Medical School and I met with a high school
classmate of mine, a woman named Sarah Midendor, who leads
the emergency residency program out there. She's an exceptional doctor, a
caring and empathetic human, and I got to spend almost
a day with residents, students, and practitioners in medicine. I
asked them the same question, and they said, you know,
(23:12):
we're really excited for all the ways that AI will
be used in medicine, and we had lots of use cases.
I'm happy to share those with you. But at the
end of every one of those conversations we got back
to the same point, which is, we don't see a
world in which AI is going to replace a doctor.
The medical profession isn't just about technical knowledge. It's not about
being able to do diagnosis after diagnosis. It's about supporting
(23:34):
somebody through some of the most vulnerable points in their life.
So the question this listener is asking is the key one,
which is, what's happening here? Why aren't people going into medicine?
I don't think that has anything to do with technology.
That's an indictment of our healthcare system, of the ways
we built a system that's so unjust and inequitable that
it has reduced the prestige and status of medical practitioners. Sometimes
(23:55):
this sounds like an easy dodge, and I apologize, I'm not
trying to dodge the question, but I'll just say this
isn't a question of technology displacing medical professionals. It's about whether
we center our social values, about whether we honor and
respect what these people who give up so much of
their lives to serve us do, and whether we can
turn that profession back into something that people aspire to
do and build a pathway where they can do it
(24:15):
without a lot of financial and social harm attached
to it.
Speaker 1 (24:20):
But let me just push you on the issue of
like what AI could do. I think about somebody that
has dementia or Alzheimer's. Could AI play a role in
eldercare in the future.
Speaker 2 (24:32):
As I said, I love talking about this, and if
you ever want, we can have a whole new conversation where we just
talk about AI in healthcare. But let me give you three
examples I think are amazing. The first is basic things
like when you have a chronic condition, adhering to a
clinical plan that a doctor gives you is really hard.
You might see your medical professional once every three months
or six months, but in between there's a lot of
things they've given you to do as a checklist, and
(24:53):
sometimes when you're facing that mental degradation or other things,
it's very hard. AI can be an amazing partner in
that work, because they'll be with you twenty four seven.
They can help you identify behaviors and practices you're
doing that are problematic, and they can also just remind
you to make sure you take your pills. The idea
of a care companion who can help you adhere to
the medical plan your doctor's giving you is amazing. There's
(25:15):
a second category of things around diagnostics, right? Medical radiologists
are great at what they do. AI can make them
even better so that they can do earlier scans, they
can identify issues earlier, and they can make sure that
people are in better care. And the last thing around
medical care that's super important has nothing to do with
the actual delivery of care. It has to do with
how inefficient our health system is. All of the back
(25:36):
end work that goes on in billing and negotiating, and
how insurance companies negotiate and decide whether or not to
pay out claims. These are things that are perfect for
AI efficiency. And if we could cut a lot of
costs out of that dead weight that hangs over our
medical system, we could get our providers to spend more
time with their patients and deliver better care.
Speaker 1 (25:56):
All right, I want to get to one final caller here.
It's kind of a goofy one, but maybe we'll have some
fun with it. This comes to us from Ariel in Boulder, Colorado.
Speaker 6 (26:05):
Consider AI for managing workers, and let's do Star Trek.
We have Kirk, Spock, and McCoy, each providing their unique
personality and their unique skill set. Now DOGE would think
all I need is efficiency, so it would be Spock, Spock, Spock.
But your average Trekkie would say that would not work
(26:26):
because each provided their unique solution to their problems in
every episode. So AI may be biased towards something according
to the person who's managing it, and therefore would fail.
Speaker 1 (26:41):
All right. So I think at the core of that,
I mean, besides the Star Trek stuff, the caller mentioned DOGE,
which is the Department of Government Efficiency, Elon Musk's department,
and they're trying to cut spending in the name of efficiency.
It's tying into a lot of fears people have over
AI, that efficiency and expediency are prioritized at the expense of
(27:03):
the collaborative spirit. What do you say
to the caller and this fear that AI excludes the
human element from the work that it's trying to support.
Speaker 2 (27:14):
I don't know why you're not letting me talk about
Star Trek, Jeremy. I just want to spend time on it.
Speaker 3 (27:17):
Can you do that too? If you want to, go
for it as Spock.
Speaker 2 (27:20):
But let's answer the actual, the important question
that you've asked. You know, there's a lot of metaphors
out there that people want to use about AI. Some
people want to call it a lens or a mirror
or all kinds of things, and I think they're all
evocative. But at the end of the day, and this
is the core conceit of AI, what AI becomes is
what we build it to become. There is no special
(27:41):
supercomputer out there that's trying to make AI into an
evil genius or a villain or even our best friend.
It's people who are making decisions about what AI will
look like. And so I think I said to you
when we had our first conversation, I often think of
AI as a Trojan horse. We can get a lot
of people to talk about AI because it's in the
hype cycle and people want to talk about AI. But
for me, every conversation about AI starts with speculation, and
(28:03):
it goes to technology, and then it ends up with
what are the decisions we are making as people and
as a society about what these tools will do for us.
It's going to force us to have some really hard conversations.
If AI allows us perfect access to healthcare, are we
okay with the world where some people have it and
some don't because of a political decision. If it's going
to displace workers, are we okay with the fact that
(28:25):
it's going to remove that teller from the store line that
you and I talked about, or are we going to
say we value human experience? And in the workplace, are
we going to be okay with companies saying our profit
bottom line says that we're going to solve for efficiency
over empathy and care? Well, I think absolutely not. AI
is going to force us to have a real hard
conversation about the society we built and whether we're happy
(28:47):
with it, and what kind of society and norms, values
and principles we want to have guide it, and it's
our choice to make. This is the hardest part of
the argument because it feels so abstract, But this is
the most important takeaway from this conversation. None of this
is going to happen without us stepping forward and saying
we want a certain kind of future, and as a
community in a society, we're going to come together to
(29:09):
shape it. Or the alternative is we don't do that
and we just go along with what the tech companies do.
That's the choice in front of us. It's a moral choice,
not a technological one.
Speaker 3 (29:19):
We have the power, you're saying.
Speaker 2 (29:21):
We have the power if we choose to use it.
Speaker 1 (29:24):
I have one more question for you, just a personal
question about this. And you know, the way that I'm
mainly using AI directly is through things like ChatGPT,
and I wonder is it learning from what I'm telling
it or is it just learning from what I'm telling
it in regards to what it's doing for me, Like
does it take the information I give it and use
(29:45):
it more broadly than that or not.
Speaker 2 (29:47):
It's a really good question. I'll give you a non
technical answer, which is it's always learning, but it's learning
in a more abstract form. So if you tell it
something about yourself, it's probably not going to take your
personal information right away and put it into its corpus.
But if ten people ask a similar kind of question,
it's going to learn that that's a question that's important.
And when you tell it an answer is good or not,
it's going to remember that as well. But it takes
(30:09):
and aggregates its interactions with several billion people into the
central model, and then it plays it back to us.
Speaker 3 (30:15):
Well.
Speaker 1 (30:16):
Thank you so much, Vilas Dhar, for joining us and
answering these listeners' questions. Vilas Dhar, the Patrick J. McGovern
Foundation president, really appreciate it.
Speaker 2 (30:24):
Jeremy. This has been such a pleasure and so great
to hear from people across the country who are curious
and committed about these issues.
Speaker 1 (30:31):
Absolutely, and thanks to you for listening. Help us out:
share this podcast with your friends on social media.
Speaker 3 (30:36):
Sign up for our weekly newsletter.
Speaker 1 (30:38):
At Listen to the Middle dot com. And while you're there,
support us by buying a Middle mug or a Middle t-shirt.
They're available in the Middle merch shop. And I'm allowed
to say that on the podcast, but I can't say
it on the radio show. So you are very important
to supporting our merch shop. I'm Jeremy Hobson. I will talk
to you later this week.