
March 14, 2025 49 mins

On this episode of The Middle, our guests try to answer your questions about artificial intelligence as it becomes an ever-increasing part of our lives. Jeremy is joined by Ina Fried, Chief Technology Reporter for Axios, and Vilas Dhar, President of the Patrick J. McGovern Foundation, which is trying to make sure AI is used for good. DJ Tolliver joins as well, plus calls from around the country. #ai #artificialintelligence #chatgpt #jobs #digital #machinelearning #technology 



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
The Middle is supported by Journalism Funding Partners, a nonprofit
organization striving to increase the sustainability of local journalism by
building connections between donors and news organizations. More information on
how you can support the Middle at Listen to the Middle
dot com.

Speaker 2 (00:19):
Welcome to the Middle.

Speaker 1 (00:20):
I'm Jeremy Hobson, along with our house DJ, Tolliver. And
Tolliver, so much new stuff this week. We have a
brand new, fresh Middle website at Listen to the Middle
dot com.

Speaker 2 (00:28):
We have a new.

Speaker 1 (00:30):
Merch store with Middle T shirts, and we have a
new podcast that's been around now for a couple of weeks,
an extra called One Thing Trump Did.

Speaker 3 (00:39):
Uh yeah, you can actually get that on the Middle's
podcast feed as we dive in, exactly one thing Trump did,
not two, not three, just the one. Also, when are you
gonna get me a T shirt, man? I didn't know
that it was coming.

Speaker 2 (00:50):
It's in the mail, you know, Tolliver.

Speaker 1 (00:52):
As we prepare to have a conversation about artificial intelligence,
I'm going to come clean and say that the title
of One Thing.

Speaker 2 (00:58):
Trump Did was an AI idea.

Speaker 1 (01:01):
I wanted to call the podcast extra Trump Tracker, but
there already is a Trump tracker. So I asked ChatGPT
for a bunch of other names and it came
up with This Week in Trump Chaos, Breaking Trump, which
is a play on breaking news, and WTF Did Trump Do?

Speaker 2 (01:15):
So we settled.

Speaker 1 (01:16):
On One Thing Trump Did, and it is available, as
Tolliver said, in the Middle podcast feed, in partnership with
iHeart Podcasts, on the iHeart app, Apple, wherever you listen
to podcasts. So we're going to get to your questions
about AI in just a minute. But first, last week
on the show, we talked about transgender rights. A lot
of calls came in. Here are some of the voicemails
we got after the show.

Speaker 4 (01:34):
Hey, my name's Aaron Castillo in Los Angeles. I'm a
trans woman. This attack on the trans community has been
really scary because I don't feel safe. The pervasive culture
and the intense attacks have made it feel, to me at
least, that there is a culture of less acceptance
towards the trans community.

Speaker 5 (01:55):
This is Carmela calling from Atlanta, Georgia. I hold a
PhD in molecular genetics. People love to use basic shorthand
to try to justify these political views and it's silly.

Speaker 6 (02:11):
My name is Steven Hoyle. I'm calling from Hohenwald, Tennessee. I
do think that there is reason to be cautious, particularly
for very young children, about moving toward medical transition. And
yet we don't need to be falling on the sword
over a tiny minority of a tiny fraction of people
who aren't causing problems or doing any harm to anybody.

Speaker 1 (02:35):
Well, thanks to everyone who called in. So now to
our topic this hour, artificial intelligence. We actually did a
show like this last year and asked for your questions.
But a lot has changed with AI in the last year.
The technology is improving rapidly, and a new survey from
Elon University finds more than half of Americans are now
using AI tools like ChatGPT and Gemini and Claude.

(02:57):
But there are a lot of questions, and that's where
you come in. What are your questions about AI
right now?

Speaker 2 (03:01):
Tolliver, can you give the phone

Speaker 3 (03:02):
Number please Elon University.

Speaker 2 (03:05):
Uh, it's eight four four... It has nothing to do with
Elon Musk. Just let's be clear, okay.

Speaker 3 (03:10):
To clarify, it's eight four four four Middle. That's
eight four four four six four three three five three,
or you can write to us at Listen to the
Middle dot com and you can also comment on our
live stream on YouTube, TikTok, Facebook, and Instagram.

Speaker 1 (03:24):
I kind of wonder if that's an issue at Elon University
right now. Let's meet our panel. Vilas Dhar is the
president of the Patrick J. McGovern Foundation, which is trying
to make sure that AI is being used for good.

Speaker 2 (03:34):
Vlus, great to have you on the show.

Speaker 7 (03:36):
Oh it's a delight, Jeremy. And now I gotta get
ready for this T shirt swag. I think it's coming
my way.

Speaker 1 (03:40):
Yeah, every guest. Ina Fried is also with us, one
of the best tech reporters out there, currently at Axios. Ina,

Speaker 2 (03:48):
Welcome to the Middle.

Speaker 8 (03:50):
Thanks Jeremy. Great to chat again.

Speaker 1 (03:52):
And before we get to the phones, just give us
a lay of the land for AI right now. What
is the elevator pitch about how it's being used in
America in twenty twenty five.

Speaker 8 (04:03):
Well, I think what you did to come up with
a name is a good example. I think AI is
being used by most people at the edges to do
a particular project that could be a personal thing, it
could be a work thing, but it's still at the
very early stages. A lot of businesses, for example, are
still figuring out, you know, how are they really going
to use it at scale? They're running a lot of experiments,

(04:25):
and I think individuals are curious. I think most people
are asking a question. If you're high school or college age,
you might be seeing just how much help with that
essay you can get. It's certainly caused a challenge for educators,
which I'm sure we'll get into. And maybe to draw
a picture. But again, I think we're just scratching the surface.
Everyone's still trying to figure it out.

Speaker 2 (04:47):
Vilas, how would you answer that question?

Speaker 9 (04:49):
Yeah, I think Ina's right.

Speaker 7 (04:50):
You know, I often say it almost feels like the
five stages of grief, Like we had AI come out
into the world, and all of a sudden, everybody was curious,
and then they were overly excited, optimistic, and then they
were existentially scared, and then it feels like maybe now
we're getting to a place of pragmatism where people are
asking a foundational question, Look, I know what I need
to do in this world.

Speaker 9 (05:09):
Can AI actually help me do that better?

Speaker 1 (05:12):
So I was just at South by Southwest in Austin
and there were like eight million AI sessions going on
and I went to one of them, which was about
media and the example that they used, and this was
somebody from Amazon. They were showing off a product that
they have that basically allowed them to take an entire
season of a TV show and in a few hours

(05:34):
create a Hollywood style trailer to recap what happened in
the last season, which would take marketing professionals weeks.

Speaker 2 (05:41):
And a lot of money to do.

Speaker 1 (05:42):
And it made me wonder, are these marketing professionals
about to be out of a job? And Ina Fried,
what do we know about the job loss that is
already occurring and may occur because of AI.

Speaker 8 (05:54):
I think right now you're mostly seeing job losses that
would have probably occurred that are being slightly accelerated. But
I don't think we've seen the real economic disruption that's coming.
And again, the early learnings were that it's not like
you could just have ChatGPT, this generic tool, replace
an entire job. What you found is one, you need

(06:14):
more specialized tools like the one that you mentioned that
Amazon's doing, that are designed to do a specific job.
And two you need the technology to be good enough.
But it's coming. I mean, the pace of progress is
faster than anything I've seen in my twenty five years
of covering tech. And I think people are making a
mistake when they look at today's technology, especially the most

(06:35):
generic forms of it, and say, well, this isn't going
to take away somebody's job. Look, even if it can
do only half the tasks of a job, a company
is not going to pay the same number of people
to do half as much work. They're going to have
half as many people in that department.

Speaker 3 (06:49):
Jeremy, did they show you the trailer or did they
just talk about it?

Speaker 2 (06:52):
Yeah, they did, actually, and it was okay.

Speaker 1 (06:55):
It wasn't probably as good as what humans would be
able to do right now. But Vilas, on that point
about jobs, how do you create a world that's filled
with artificial intelligence, but that also includes human beings that
are still doing work and getting paid for it.

Speaker 7 (07:08):
You know, Jeremy, I'm going to be a little wonky
with you in this first answer, which is we often
think about jobs as if they're either going to go
away or we're going to keep them for humans. I
think there's a different story to be told here. Our
entire economic system is built on this interplay between capital
and labor, where you have the resources that come in,
the people who control those resources, and the people who do
the work. One of the things I'm deeply concerned about

(07:29):
is what happens when you fundamentally alter that balance.

Speaker 9 (07:32):
I'll give you an example.

Speaker 7 (07:33):
We are now talking about this word that's kind of everywhere,
much like your South by Southwest conversations: agentic AI, the
idea that people are going to use automated workflows that
are run by AI to go off and do really complex things.
One of the things you might envision is a transformation
of a business that goes from having a thousand workers
to maybe having a set of agents that actually control
the flow of work, that guide people to do their tasks,

(07:56):
that make sure they're doing them well, that evaluate them.
But this fundamentally shifts the balance of who has power
in the workplace, because now if you have capital, you
can control the AI system, and if you can control the AI system,
you can direct what people do in a much more
authoritarian way.

Speaker 8 (08:12):
Well, I was just going to add on because I
totally agree with what you're saying. And one of the
things of particular concern is in the past, technology has
tended to make the best workers better, and it's disproportionately
advantaged them. What we've seen is that generative AI is
really good at bringing entry level workers up to the
median, faster. On the one hand, that's great, it

(08:33):
gets people trained. But to Vilas's point, and I think
where you were going before I so rudely cut you off,
I think it devalues, and takes power away from,
the average worker. I think the risk is that workers
become more fungible, and there's more power for the companies
and employers that can afford to own these AI systems.

Speaker 1 (08:54):
And who gets to decide that? Is it the AI
that's evaluating everybody and telling you how good they are
and how not good they are?

Speaker 7 (09:00):
Well, I don't think that's inevitable, Jeremy. I think that's
the point is a lot of what we're doing is
just following the inertia of a few people who are
creating a few systems that are out there changing a
lot of things for all of us, people like you
and me. But I don't think it's inevitable that that's
the way we build this future. I actually think we
could sit down and have a real conversation about what
it means to have a worker centered future one where

(09:21):
we actually talk about how these tools don't just think
about controlling for efficiency or how these businesses become better
and more productive, but actually center dignity in workers' hands.
We could have a variety of alternative futures ahead, but
it feels like sometimes we're starting from a very different
place when we try to have that conversation than the
one of, I just raised a couple hundred million

(09:41):
dollars to go off and build a tool that lets
me control my workplace more effectively.

Speaker 1 (09:45):
Well, yeah, you know, Ina Fried, is anybody you're talking to
trying to build a worker centric future of AI?

Speaker 8 (09:53):
They are, but they tend not to be the ones
raising the most capital, and you know it goes to
this power dynamic. I think what Vilas said, and I
think is really the critical thing, is the AI future
is not guaranteed. There will be a future with AI
in it, but what that future looks like depends on
societal norms, what regulations are passed, what we insist on

(10:15):
as societies, And I do think right now we're sort
of letting the tech companies take the lead. I think
we need to be vocal about what we like and
don't like. The nice thing about AI, or one of
its benefits, is that it's very accessible. People can try it out.
And if there's one thing that I would encourage people
to do is try it. Whether you like it or
don't like it, you're going to be much better able

(10:37):
to have that conversation with some sense of what the
technology can do.

Speaker 1 (10:42):
And when you say try it, are you just talking
about, like, ChatGPT or something like that. Is
that the easiest way to try it right now?

Speaker 8 (10:48):
I think the easiest starting point are these chatbots. And
it's not just ChatGPT; Microsoft and Google have them. A
lot of the technology is free, at least at the
basic level. You may not get all the features, but
you get a lot of it. And I think figuring
out, looking at your own career and saying, you know,
how is this going to change my job? How is
it going to change the way I raise my family?

(11:10):
That sort of thing. This isn't a
future we just have to have thrust upon us.

Speaker 1 (11:16):
Jeremy again, our number, that's right, our number, Tolliver, is
eight four four four Middle. That's eight four four four
six four three three five three. And you know, Tolliver,
before there was ChatGPT or Claude or Alexa or
Siri, and I apologize if I just made everyone's phone wake
up by saying that word.

Speaker 2 (11:33):
There was a chatbot called Eliza.

Speaker 9 (11:35):
Yeah.

Speaker 3 (11:35):
It was developed in the nineteen sixties at MIT by
a scientist named Joseph Weizenbaum.

Speaker 10 (11:41):
Eliza is a computer program that anyone can converse with
via the keyboard and it'll reply on the screen. We've
added human speech to make the conversation more clear.

Speaker 11 (11:51):
Men are all alike. In what way? They're always bugging us
about something or other.

Speaker 10 (11:57):
Can you think of a specific example? The computer's replies
seem very understanding. But this program is merely triggered by
certain phrases to come out with stock responses.
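
A quick aside for the technically curious: the "stock responses triggered by certain phrases" the narrator describes can be sketched in a few lines of Python. The patterns and replies below are illustrative stand-ins, not Weizenbaum's original script.

```python
import re

# ELIZA-style rules: a trigger pattern paired with a stock response template.
# The program understands nothing; it only matches keywords in the input.
RULES = [
    (re.compile(r"\balike\b", re.I), "In what way?"),
    (re.compile(r"\bi(?:'m| am) (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (mother|father|family)\b", re.I), "Tell me more about your {0}."),
    (re.compile(r"\balways\b", re.I), "Can you think of a specific example?"),
]
DEFAULT_REPLY = "Please go on."

def eliza_reply(text: str) -> str:
    """Return the stock response for the first trigger phrase that matches."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return DEFAULT_REPLY

if __name__ == "__main__":
    print(eliza_reply("Men are all alike."))          # In what way?
    print(eliza_reply("They're always bugging us."))  # Can you think of a specific example?
```

That is the whole trick: no model of the conversation, just keyword matching, which is why the replies only seem understanding.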

Speaker 1 (12:08):
And by the way, that sound you just heard, for
our Gen Z listeners, that's what keyboards used to sound like.

Speaker 2 (12:12):
So yeah, exactly.

Speaker 1 (12:17):
At some point we decided, you know what, it doesn't
need to make noise when you press the buttons.

Speaker 2 (12:23):
We'll be right back with more of your calls. On
the Middle.

Speaker 1 (12:27):
This is the Middle. I'm Jeremy Hobson. If you're just
tuning in the Middle is a national call in show.
We're focused on elevating voices from the middle geographically, politically
and philosophically, or maybe you just want to meet in
the middle. This hour, we're asking for your questions about
artificial intelligence.

Speaker 2 (12:42):
Are you using it? Does it concern you?

Speaker 1 (12:44):
Are you excited for the possibilities? Tolliver, what is the
number to call in?

Speaker 3 (12:48):
It's eight four four four Middle. That's eight four four
four six four three three five three. You can also
write to us at Listen to the Middle dot com
or on social media.

Speaker 2 (12:57):
I'm joined by Vilas Dhar of the Patrick J.

Speaker 1 (12:59):
McGovern Foundation, and Axios technology reporter Ina Fried. And let's
go to the phones and Renee, who is calling from Houston, Texas. Renee,
Welcome to the Middle. Go ahead with your question about AI.

Speaker 12 (13:10):
Hello, Hi, Yeah. So my question is this: in
a world where it is becoming more and more difficult
to understand the true nature and intent behind certain media products,
For example, social media, the algorithm is designed to keep
you on there for as long as possible, some news

(13:32):
companies maybe are trying to push certain narratives, and I'm wondering,
first of all, how likely is it that AI, specifically
chat AI programs, will be used to push certain narratives?
And if this is likely, how are we to be
able to discern this and protect ourselves from this quote

(13:55):
unquote propaganda.

Speaker 1 (13:56):
What a great question to start off with. I'll go
to you first, Ina, on that.

Speaker 8 (14:02):
You know, I think it's a very smart and reasonable fear.
I think we haven't really seen how far this is
going to go, in part because right now a lot
of these services are at least built with the user
in mind, in that basically you're paying a fee and
it's delivering a product, and they want you to like
the product, which you know means you're paying in many cases,

(14:24):
although they do have free products. I do worry about
a future in which advertisers are paying for the content
and suddenly the interests of the chatbot are not aligned
with me as the person using them. And then you
think of how a political person might
use it to reinforce their narrative. So I do think
it's the right question to be asking, and there aren't

(14:46):
really a lot of rules around that. And I do
think these systems are very persuasive as they are, and
right now they're not programmed necessarily to persuade us to
a viewpoint or to buy something. But I think there
will be chatbots that are, and they will probably be
pretty effective.

Speaker 13 (15:02):
Mm hmm.

Speaker 1 (15:02):
I feel like there was an Ipsos poll that found that
people are worried about AI, but that many Americans actually
trust it more than humans to not discriminate or show bias.
I wonder what you make of that contradiction.

Speaker 7 (15:16):
Know, I think it tells us a little something about
the sad state of where we are with public trust today.

Speaker 9 (15:21):
Unfortunately, they're right.

Speaker 7 (15:22):
But I gotta say, I think, you know, I disagree
with you just on the order of a few degrees here,
which is I still believe very deeply that there is
a way that we build AI for you and me,
for hundreds of millions or billions of people. But right
now we build AI for the tech bros of Silicon Valley.
And what I mean by that is we're building tools
that promote the interests of these companies, often at the

(15:44):
expense of users. That even when we invest in products
that are good for users, there's an ulterior motive. They're
trying to capture attention, they're trying to capture screen time,
and they're trying to sell advertising. So I think there
was a part of that question that was what do
we do about it, And to me, I think one
of the things we have to do is invest in
kind of a response or reaction, but maybe even prospectively

(16:05):
think about what kind of AI we would build that
actually supports what you and I want to accomplish in
the world. If it's about media, then what does it
look like for me to have an AI system that
evaluates the kind of news I'm being fed and actually
helps me understand where there's bias or whether there's misdirection
or manipulation. And I have to say I don't see
the same kind of public investment in those tools as

(16:28):
I do on the other side, which is why I
think civil society needs to enter this conversation in a
really meaningful way.

Speaker 1 (16:34):
Let's go to another call and Daniel, who is calling
from Kansas City.

Speaker 2 (16:38):
Daniel, welcome to the middle.

Speaker 14 (16:39):
Go ahead, Hey, thanks for taking my call. Yeah, I
have been concerned recently in seeing how many people are
being fooled by AI fakes, whether it's social media posts
like your previous callers talked about, or actual like scams
and frauds, that people are actually being really damaged by this.
So my question to you, you're the experts, would be how close

(17:03):
do we think this is getting to human intelligence? And
that's a big question of AI. If it's fooling people
already in lots of different ways, do we think this
is actually approaching or encroaching on human intelligence?

Speaker 1 (17:17):
I have the exact same question, and I wonder when
we're going to get to that point.

Speaker 2 (17:20):
You know, what do you think?

Speaker 15 (17:22):
Well?

Speaker 8 (17:22):
I think, unfortunately, and this is just a sad fact
about where we are, you don't need human level intelligence,
unfortunately to scam people. I don't think we're there. In
terms of deep fakes. Voice cloning is very good. You
can make a voice that sounds very much like somebody.
You can do videos, but most of the scams right
now are a level below that, and they're already working.

(17:43):
So to me, that says we need to really be
cautious of what's coming and prepare for a world in
which we can't necessarily just trust because we saw a
video that that's what someone said. There are some technological answers,
but I think media literacy is really the key. You know,
I spend a lot of time telling my parents: look,
you know, if you hear a phone call, it's not me,

(18:03):
it's not your grandson calling like, you've got to interrogate that.
And I think that's the world we need to prepare for,
because the technology is going to make it trivial. I
do worry, particularly as the systems get more powerful,
about their ability to do scams and fraud at scale,
so not just being able to target one person at

(18:24):
a time, but target everyone with a lot of personal information.
In the past, you know, people would do a phishing
scam and they'd basically try and gather the one or
two people that are most gullible at the end. Now
you can really target everyone with enough personal information to
sound pretty darn convincing.

Speaker 1 (18:42):
So when we talk about all these bad things about
AI and scamming people and all that kind of thing.
Vilas Dhar, is it too late at this
point if we decide we want to stop and just
say no more AI, we're done.

Speaker 7 (18:56):
Well, there's a really important truth that I think is
actually right in the question we were asked, which is
the question was, well, what about AI fooling us? I
want to be very clear what that question really is
asking is how are people using AI to fool us?
And that came through in Ina's comment as well. These
AI systems, at least so far, don't do anything by themselves.

(19:16):
They don't go off and have their own ideas, they
don't go off and try to hurt people. There is
a fear around that I think we should just dispel upfront.
They're still very much directed by people, and so the
question becomes, as Ina put it, are people trying to
scam each other? And are they using these tools more
and more effectively to do so? Yeah, I think so,
and I think that's a real problem. But the way
we address it isn't to try to stop creating the tools.

(19:39):
It's certainly not to try to put that genie back
in the bottle. It's going to be to say, how
do we build those new social norms and principles. How
do we make sure that we think about whether our
legal institutions, our law enforcement is equipped to deal with
these new kinds of threats. How do we make sure
that our technologists are building in safeguards into tools so
they can't be used in ways that are just wildly abusive,

(19:59):
And maybe most of all, how do we make sure
that you and me, the people in my town in
rural Illinois or wherever, have a sense of the
reality of the situation instead of just the hype that's
coming through the headlines, that they actually know what these
tools are capable of, and how they can take on
better practices that make them more robust and resilient in
defending against these kinds of attacks.

Speaker 1 (20:20):
John is calling from Chicago. John, welcome to the Middle.
Your question about AI?

Speaker 16 (20:26):
Yeah, my concern primarily is about education. I'm a high
school teacher out this way, and I will often see,
you know, there's all these debates about whether or not
you're allowed to have cell phones in a classroom or whatever.
But what I see predominantly AI being used for is
my students trying to get out of doing the trivial,
as they see it, you know, the everyday homework assignments,

(20:49):
and then we finally get around to an assessment of
some kind. They're bombing them horrifically, and I'm getting parent
complaints and emails, and there's not a way for me
to police whether or not a child is actually activating
a brain cell prior to walking into you know, the
day of the exam or the day of the quiz,
and so it's really doing a disservice to a lot

(21:10):
of students who aren't using it effectively.

Speaker 1 (21:12):
John, can you tell when they turn in
something that's from AI?

Speaker 16 (21:17):
Not initially, No, because a lot of the stuff that
I still collect is paper copies, or we might use,
you know, an online platform where they can submit them digitally,
but oftentimes it's like a photo of their work, and
if they're getting copies of the work or just the answers,
it's difficult to tell whether or not they arrived at
it authentically themselves, or you know, it's like the age

(21:38):
old copy from a friend prior to class sort of thing.
It's right along those lines. And obviously I can
tell when they take the assessment. You know, you got
A's on all the homework, but you clearly didn't understand
the material when it happened. But it's opening a whole
new avenue for all those same issues we've always had
in education.

Speaker 1 (21:55):
Yeah, there's one tell that I know, that I can
tell when something's from AI, which is that they use
that rocket ship emoji that nobody ever

Speaker 2 (22:02):
Uses and lessons from touching.

Speaker 1 (22:04):
I think that that's something we humans like to use. John,
thank you for that. Very interesting. You know, what
about that in education? What do we know about how
this is affecting our ability to learn things and
just keep the students in line?

Speaker 2 (22:19):
Well.

Speaker 8 (22:20):
This is interesting because a lot of people are touting
AI as something of great promise for personalized education and
for scaling education in places where the teacher model doesn't scale.
Yet some of the first people to have to struggle
with the implications of AI are especially high school and college teachers.
I think there are a lot of answers short of

(22:40):
banning it. What we saw early on was a bunch
of school districts just banning ChatGPT, and that's not
a long lasting, permanent solution. I do think we're going
to have to change some of the ways that we
evaluate things. Some of that can be handled technologically. I
wrote last week about Turnitin dot com, which
used to build its big business on detecting plagiarism. They're

(23:02):
creating a canvas where students can show their work so
they can use AI in whatever ways the teacher has permitted,
but the teacher can see or see a summary of
the work that they actually did. How much of that
essay were they writing versus how much fact checking were
they doing using the AI tool or were they just

(23:22):
having the AI do the work? And I do think
the future will probably look somewhat like that. It'll look
like more oral exams and things where you can tell
how much a student is learning. I do think educational
systems need to adjust, and I think this is something
we've done before as a society, and I think teachers
are going to have to build the relationship with the students.

(23:43):
That says, Look, at the end of the day, you
are going to have to come in and take a
test and prove you know it. So doing the homework
using ChatGPT isn't going to help anyone.

Speaker 1 (23:52):
Yeah, Tolliver, I know some comments are coming in online.

Speaker 3 (23:56):
Yeah, okay, this first one is in all caps, so
just know that. How do I tell these corporations to
stop yapping on and on about it? Seriously, I'm fine
with AI with limits, but still they keep going on
and on with, oh, our new AI model is superior
to yours. Please read at least, I forgot dinner for this.

Speaker 1 (24:15):
That's what that commenter says exactly.

Speaker 3 (24:19):
Tony from grand Ledge, Michigan says, I have heard that
many doctors are either retiring or plan to retire in
the next few years, and that US medical schools are
not graduating new doctors fast enough to replace them. What
role do you see for AI in the medical field
in the next ten to twenty years.

Speaker 1 (24:34):
Vilas, what do you think about that role for AI
in the medical field?

Speaker 9 (24:38):
Yeah, I love it, you know.

Speaker 7 (24:39):
I just came from giving grand rounds at Stanford University,
where I get to talk to some of the most
amazing medical students at one of the greatest
hospitals in the country, and I asked them what they
think about AI, and you know what, to a T,
they were excited about what it might mean for them
five ten or fifteen years out, and yet they still said,
you know, today, I'm still learning to practice medicine the
same way I would have five, ten or fifteen years ago.

(25:02):
And so I think in that story, you're seeing what
we see across society. People are excited for what AI
might create for them, but today it hasn't yet totally
transformed our lives. I think in that paradox, you've got
something we really have to think about. How do we
make sure that people are excited to become doctors when
what we're doing is telling a lot of stories about
what medicine might look like instead of bringing it back home

(25:25):
to what medicine is for: to make people healthier,
to make sure that we're investing in the kind of
social structures that let people live dignified lives. This is
the distraction about AI that really scares me: sometimes
we get so focused on talking about the tool that
we forget to really analyze the problem that we care
about through that lens of human experience and dignity. If

(25:46):
we did that, I think we'd shape a very different
kind of AI for the future.

Speaker 8 (25:51):
And if I can jump in, I totally agree with
Vilas. And I think medicine is a really good
example of where the human and the AI can really
complement each other. If you think about the career path
of a doctor: a doctor goes to medical school at the
beginning of their career, they get ninety percent of the
training they'll ever get, and then they have a whole career.
And so we still want that doctor. We want that

(26:14):
human being. I don't want to just see a chatbot.
At the same time, I think AI, when used properly,
can help suggest things to the doctor, it can operate
with them. I think some of the complexity and nuance,
though, does come with how do you make sure that
the humans still have, to Vilas's point, a valuable role,
a meaningful role, so they go into the profession and

(26:36):
also enough training. We can't have the AI doing all
the diagnostics, all the grunt work and still have an
experienced doctor. But I do have a lot of optimism
that that is a field where humans and AI actually
can complement each other quite well.

Speaker 1 (26:51):
Let's sneak in a call here. Michael is in Northwest Alabama. Michael,
welcome to the middle.

Speaker 17 (26:55):
Go ahead with your question. Good evening, thanks tremendously for accepting my call. I'll
make this as brief as possible. You had some wonderful
questions over there, and when you talked about the possibilities
possibilities of AI solving human problems, why don't we ask

(27:15):
more of those questions. I'll challenge you on two of them.
Safeguarding privacy: my worries about privacy under AI, everything
from data being used and sold to companies and
corporations and employers, going all the way to the way

(27:35):
the Chinese government uses AI on ordinary people. And also
you talked about phony voices and phony accents, what about
doctoring photos and video footage even more
skillfully and seamlessly than Photoshop can do? A good example

(27:58):
that I fear is not only companies, I mean media
with biases using video footage that's been doctored up to
buttress their biases, but also using videos with phony
information and photos with phony information in court. And if

(28:19):
you have anybody you can think of or I can
think of as our enemies putting their faces on pornography,
sort of like those famous jib Jab musical electronic Christmas
cards the employer to get fired. Thank you. I'll take
the phone off for your answer. Thank you tremendously. I'll

(28:39):
be with all of you.

Speaker 1 (28:40):
Appreciate you. Vilas, what do you think? The privacy issue
is obviously a huge one for people.

Speaker 7 (28:47):
I gotta tell you, I love the throwback reference to
JibJab to start off.

Speaker 9 (28:50):
That makes me super happy.

Speaker 7 (28:52):
Look, you know what's interesting is both of these questions
have the same framework attached to them, which is people
are doing things using these technologies that fundamentally affect our rights.
And we got to have a conversation about the tech,
about deep fakes, about watermarking, about the ways that we'll
make sure that we can verify information. But at its core,
there's something more fundamental, which is you and I, our

(29:12):
governments and our systems don't have a single agreement about
what's okay and what's not. And this is where we're
getting stuck because every time we come up with one
of these, we can think of these extreme examples that
really offend us. I don't want my face being put
out there with a message that I didn't put on it.
I was just this week with a Bollywood actress, a
very famous young woman, who told me about the experiences

(29:34):
she's had with people creating deep fakes for her, and
they're terrifying. But the problem is we don't actually spend
the time to think about what we want to make
sure we allow and what we don't allow. And we
need to invest in shared governance and the mechanisms to
make sure that we can do that.

Speaker 1 (29:47):
Well, Tolliver, it has been twelve years since Hollywood
imagined a world where our digital assistants are so human
like we can even fall in love with them.

Speaker 3 (29:57):
Yeah, the movie Her starred Joaquin Phoenix, who fell in
love with this robot played by Scarlett Johansson. And how
can we not play this iconic clip tonight?

Speaker 18 (30:07):
After you were gone, I thought a lot about you
and how you've been treating me, and I thought, why
do I love you? And then I felt everything in
me just let go of everything I was holding on

(30:28):
to so tightly, and it hit me that I don't
have an intellectual reason. I don't need one. I trust myself,
I trust my feelings.

Speaker 1 (30:38):
Haven't we all had a moment like that with Siri, Tolliver?

Speaker 9 (30:40):
One or two.

Speaker 3 (30:41):
She's a little bit intrusive sometimes.

Speaker 1 (30:42):
Oh man, We'll be right back with more calls on
the Middle.

Speaker 2 (30:47):
This is the Middle.

Speaker 1 (30:48):
I'm Jeremy Hobson. In this hour, we're asking for your
questions about artificial intelligence.

Speaker 2 (30:52):
You can call us at eight four four four Middle.

Speaker 1 (30:54):
That's eight four four four six four three three five three.
My guests are Vilas Dhar, president of the Patrick J.
McGovern Foundation, and Ina Fried, chief technology correspondent at
Axios. And the phones are lit up, so let's go
to them. And Liz is in Birmingham, Alabama. Liz, welcome
to the Middle.

Speaker 19 (31:12):
Go ahead, Thank you very much. So this is more
of a sort of personal take on this. I have
a ten year old and a fourteen year old, and
I find myself thinking, you know, how can I help
them to make decisions about their future career and college,
trying to sort of future proof for industries that will

(31:35):
maybe go away or you know, not be ones that
are very large in the future. So it's a lot
of information and it's hard to know at this point,
you know, in eight years or four years, where.

Speaker 13 (31:48):
It's going to be.

Speaker 19 (31:48):
So maybe I'm just looking for some help.

Speaker 20 (31:50):
I don't know.

Speaker 2 (31:51):
Yeah, great, great question, Liz. Thank you. Ina Fried,

Speaker 1 (31:54):
Liz is asking, where should those children go
to college?

Speaker 2 (31:57):
Or what should they study?

Speaker 8 (31:59):
Well, I totally empathize. I have a twelve year old
and I'm thinking about the same sorts of things. I think
it is really hard to know. I wouldn't claim
to know how the job market will have changed in
four or eight years. I think we can know what
are some of the skills that are going to be
valuable in an AI world, And I think they're the
things that we are uniquely good as human beings at

(32:20):
doing: analyzing, bridging the gap between an answer that a
book or in this case, an AI can give and
what a human needs to act on it. So I
think it's a combination of critical thinking, media literacy, and
also where people's passions are. I think ideally AI will
bring us to a world where people will be able

(32:43):
to better align their career with their passions. I'm not
convinced that's the AI future we're building, but it's certainly
the AI future I want is one where the AI
allows us to take our own curiosity, our own interests,
and use that knowledge in conjunction with technology.

Speaker 9 (33:00):
That's my hope.

Speaker 2 (33:02):
Yeah.

Speaker 1 (33:02):
I have to say I have been inspired by AI
at times when I'm trying to come up
with ideas or I'm working on something and AI gets
really excited about it and so.

Speaker 2 (33:13):
Oh you should do this and this and this. I'm like, oh,
thank you.

Speaker 1 (33:15):
Okay, give me a little, you know, kick
to get things going here.

Speaker 2 (33:19):
Let's go.

Speaker 1 (33:20):
Let's go to Watson, who's in Atlanta. Watson, what are
your questions about AI?

Speaker 18 (33:27):
Hi?

Speaker 20 (33:27):
I really appreciate you guys' balance of perspective between, sort of,
you know, acceleration and then also braking. So I think
I'm really curious to know, what do you guys
see as being the risk of talking about AI risk,
and does it get in the way of actually steering
or shaping, uh, this material towards
the goals that we want?

Speaker 2 (33:48):
Great question. Vilas, what do you think?

Speaker 9 (33:51):
I think I missed the middle of that, Jeremy.

Speaker 1 (33:53):
Is the risk of talking about the risk of AI
basically holding ourselves back from, you know, getting as
far as we can with AI by just worrying about
what could go wrong?

Speaker 9 (34:02):
Super good, I love this. I think two things happen.

Speaker 7 (34:04):
One is, we started talking about the risk of AI
as if it was some existential,

Speaker 9 (34:08):
AI is going to destroy humanity.

Speaker 7 (34:10):
But there's a different risk to AI that we should
be talking about, which is how are we going to
make sure that we are minimizing that risk, that we
are talking about the risk of power and institutions that
actually take a world that we have today and cement
it for generations to come, even when it's unequal. So
we do have to talk about risk, but we can't
just talk about risk by itself. We have to talk
about risk along with governance, along with management, along with

(34:34):
who's making these decisions, and broadly about democratic participation. You know,
the one thing I'll tell you, and I hear this
all the time from a lot of the folks who
are running these AI companies is all you ever hear
is an almost juvenile sense of bigger is better, more power,
more compute, more data, build bigger AI systems, and everything
else will figure itself out. And that's just not how

(34:55):
the world works. We should be thinking about, let's take
the AI we have today, figure out how to use
it to make the world a better place, and in
doing so, make sure a lot of people get to
feel and see and use these tools and do something good
with them.

Speaker 1 (35:08):
Speaking of making the world a better place, Ina
Fried, what do we know about the environmental impact of
AI and all those servers, and how do we make
sure that we can make the world a better place
with AI without destroying the world in the process.

Speaker 8 (35:22):
Yeah, And I think that's been one of the challenges
with this risk conversation. It was so focused on the
existential risk it didn't deal enough with the risks that
are here right now, and misinformation and bias are two
of them. But as you point out, the climate impact
is another, and I think there are reasons to give
that time and attention right now. I do believe we

(35:43):
tend to get better at making technology energy efficient over time,
so I may be a little less worried but it
won't solve itself, and we do have to place a
priority on the environmental impact and be smart about how
we use it in this moment where it is very
energy intensive. I think there is a sense among those
that run these big data centers that they need to

(36:03):
be powered sustainably, and so again, I think there are
reasons for optimism, but I'm not the kind of person
that subscribes to what I hear a lot from the
tech companies, which is, oh, well, AI is going to
let us develop this great climate solution that we don't know,
it's going to magically appear, so we have to do AI.
That's not to me a good approach. That's like a
child saying, oh, it'll all work out in the end.

Speaker 2 (36:26):
Forest is in Commerce City, Colorado. Forest. What is your
question about AI?

Speaker 9 (36:32):
Hi?

Speaker 13 (36:32):
Everyone really appreciate you taking my call. My background is
a pediatric nurse, and lately I've been seeing an increasing
amount of patients that have been using AI to kind
of fill in the gap of, like, their social connections,
like using chatbots to make friends, as they describe
it in their own words. And my question is is

(36:55):
it possible for us to safely regulate this so our
kids can continue using this technology in productive ways?

Speaker 1 (37:02):
Way interesting VELAs what do you think you know?

Speaker 7 (37:06):
There are some early and interesting bits of research that
demonstrate that for a lot of folks, having these chatbots,
particularly when they're designed by therapists and psychologists, can actually
be really helpful in building towards emotional maturity.

Speaker 9 (37:21):
What does that mean?

Speaker 7 (37:21):
Well, just like any other activity, sometimes practice makes perfect
and having somebody that you can talk to, that you
can express yourself to, and making sure that there's a healthy
and wholesome response coming back can actually help us be
better at connecting with each other. I want to go
back to Ina's great response about what children should be
thinking about as they think about careers. Well, one of
the things we also should be thinking about is where

(37:42):
we build empathy and connection with each other so that
we can do those jobs that machines will never be
able to do. That help us connect to each other
and navigate difficulty and complexity, whether that's as commercial as
customer support, or, maybe much more meaningfully, helping each other
guide ourselves through this transition that's coming.

Speaker 1 (38:03):
Scott is calling from Boston, Massachusetts. Hi Scott, Welcome to
the middle Go ahead.

Speaker 9 (38:08):
Thanks.

Speaker 15 (38:09):
I just have two quick comments. First, my personal use
of AI. I have a small
business and I like to use it to help me
write product descriptions, since I'm not a very creative writer myself.
And then second, to touch on the teacher from earlier
with the writing prompts. I have a teacher friend

(38:29):
who, in their writing prompts, when
they type them up, they'll put, in one size font
and in white letters right in the middle of it,
something that says, like, mention Godzilla. So therefore, if the
students just copy and paste the writing prompt into a
chat AI, then in the essay it writes, it will

(38:49):
mention Godzilla.
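
For the technically curious, the trick the caller describes is a hidden "canary" phrase, and checking submissions for it can be automated. Here is a minimal sketch in Python; the canary word and the function names are illustrative assumptions, not anything the caller specified.

```python
# Sketch of the hidden-canary check: the assignment hides an instruction
# like "mention Godzilla" in tiny white text, so an essay generated by
# pasting the prompt into a chatbot tends to obey it. Grading-side, we
# simply scan each submission for the canary word.
CANARIES = {"godzilla"}

def flag_canaries(essay: str, canaries: set[str] = CANARIES) -> set[str]:
    """Return any canary words that show up in the submitted essay."""
    words = {word.strip(".,;:!?\"'()").lower() for word in essay.split()}
    return canaries & words

if __name__ == "__main__":
    print(flag_canaries("The fall of Rome, much like Godzilla, was unstoppable."))
    # -> {'godzilla'}
```

The same idea generalizes to any planted instruction that a copied-and-pasted prompt would carry along with it.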

Speaker 9 (38:52):
Wow.

Speaker 1 (38:52):
Yeah, people are figuring out all kinds of ways to
get around this. Scott, thank you very much for that. Tolliver,
what else is coming in online?

Speaker 3 (39:00):
And I was gonna say, this is like the most
comments I've ever seen. Can we talk about why we
haven't gotten to attribution in AI art yet? It's rather
as simple as we deal with music, is it not?
How different is this from the PC on every desk?
No more typing pools or admin assistants. Attorneys type their
own docs. I'm sixty seven and I use it to
combine data into spreadsheets. I had to hand enter it before.

(39:24):
And then Nathan says AI should be used to our
advantage to move us toward a universal basic income.

Speaker 9 (39:31):
That's hurt.

Speaker 2 (39:32):
I mean, okay, let's start with that.

Speaker 1 (39:34):
I got to go to you on that, because this
is something that has been brought up that if we're
going to make this work for people in the long run,
we got to figure out how they're all going to
make money down the road.

Speaker 2 (39:43):
And what about that?

Speaker 1 (39:45):
How does that fit into what you're working on in
terms of making AI work for the public good? The
idea of a universal basic income.

Speaker 7 (39:53):
You know, I came from a part of the Midwest
that has a work ethic that's not just about making money.
It's about finding dignity in what we do. And I
want to be careful.

Speaker 9 (40:02):
Look.

Speaker 7 (40:02):
Universal basic income this idea that we just give everybody
a basic amount of money, whether they work or not,
and that's enough to sustain them. That's not the answer
to all of the problems we're talking about. If people say, well,
I can't get a job and I want one, writing
them a check.

Speaker 9 (40:17):
Isn't going to fix the problem.

Speaker 7 (40:18):
So instead, what we need is we need to really
conceptualize what a new economic model looks like when
people don't get the jobs they want, but there are
other things that need to be done and done productively.
How do we make sure that people are equipped with
the tools to be able to go out and do that.
UBI might be a great idea, and we should do
some experiments around it, but we should also think about
how we really invest in workers in human power and

(40:40):
dignity and agency in their jobs in five, ten or
fifteen years. I tell you, you know, I sometimes joke I
don't even know what I'm going to be doing, much
less what I should be counseling some young person on, or
what they should be doing. But when we go to
figure it out together, we got to do it with the
right intentions in mind, and that's a good starting point.

Speaker 1 (40:57):
Let's go to Allison, who's in Milwaukee, Wisconsin. Allison, welcome
to the middle. Go ahead with your question.

Speaker 21 (41:03):
Hey, yeah, thanks for taking my question. First, my comment,
I just think it's so naive to have us playing
around with ChatGPT when it's very clear that large
organizations are going to use AI to do every kind
of reasoning task that computers can do better than humans.

(41:23):
And that's a lot of things. And as far as
you know what the imperative is right now, it's not
to play around and make images and avoid the risk. It's
to confront the risk and mobilize labor to put pressure, whether
it's universal basic income or just regulation, and get out

(41:46):
to vote for people who are going to protect labor.
That's where I am. What do you think about that?

Speaker 1 (41:51):
Yeah, okay, Alison, thank you. I mean the idea that
we are just feeding more data into these AI chatbots
so that they can use it against us. I guess
it's kind of part of what Allison was saying there
at the beginning.

Speaker 8 (42:02):
Know, Yeah, I mean, I think there's very valid concerns
that are prompting that. I'm not sure I one hundred
percent agree with the approach, though, that by avoiding ChatGPT
we're somehow doing that. I do think we
should be smart consumers of the technology and pay attention
to privacy policies. There are really different settings you can use.

(42:25):
You can decide, you know, hey, I want to you know,
use an incognito mode like you might in your browser,
or I want this data saved. I think you can
decide on AI systems that will use your information to
train future models and those that won't. I think the
broader point of protecting human work product is going

(42:45):
to be really important, and we're already seeing it in
the entertainment industry. I think, you know, there's a real divide.
There's two legal arguments.

Speaker 2 (42:53):
You know.

Speaker 8 (42:54):
One is this idea that you know, if you use
my work to train your system, I should get credit and compensation.
And then the AI companies, the government asked for comment
on its AI strategy, and both Google and OpenAI
submitted comments today saying we want the right to train on
anything that we can publicly find, we should have

(43:15):
the right to train our systems on it. And that's
a very profound discussion that we need to have as
a society.

Speaker 1 (43:23):
Let's get another call in. Alex is in Columbia, South Carolina. Hi, Alex,
what's your question about AI?

Speaker 11 (43:29):
Well, mine was really related to the moral issues and
ramifications of AI, and, like, you know, if
AI can read Kant and Hippocrates, does it give it
a soul? Or, you know, is there a soul as
far as it's concerned? You know, I'm sure there's
deep religious concerns concerning it, especially in the human decision

(43:53):
making process for health care and specifically you know, lawfare.
So to say, given that, also, you know, you look
at its ability to ignore racial factors
in the decision making process. Given, in light of
George Floyd, this massive outcry of systemic racism and

(44:18):
implicit bias within the legal system, you know, are
they planning on utilizing OpenAI or a system similar
to that in the jury process or in the litigation
process, and to reform it, let me ask you, by
removing the human element?

Speaker 2 (44:37):
Yeah.

Speaker 1 (44:37):
We mentioned earlier that there's a poll that said that
people tend to trust AI more than humans to not
have bias. But it sounds like you're not in that camp.

Speaker 2 (44:46):
You think the AI is going to be more biased.

Speaker 11 (44:49):
I don't feel either way. I'm not saying either way
because I don't know the system architecture, and I do
understand that it's a product and its creator is
a capitalist, and capitalism preys upon the weak to reward
the few of the rich. I mean, it's just a dog

(45:11):
eat dog world, and that's all right, because that's
how the big wheel turns, you know.

Speaker 1 (45:17):
Yeah. And it's interesting that we've had so many calls,
Vilas and Ina, that are sort of getting at
the fact that these big corporations are the drivers
of AI right now, you know.

Speaker 2 (45:30):
Yeah.

Speaker 8 (45:30):
And I think the piece that I took away from that,
which I think is a really important thing that doesn't
get talked about enough, is when we're adding AI
to these important decisions, are we really scrutinizing what's underlying
the AI's decision? Because AI, you know, at its best,
can, you know, apply more equality and equity

(45:53):
to its decisions? But it's got to overcome a bunch
to get there. First of all, the training data is
often based on all the bias that's existed in the
human world. So if we aren't careful, we're just codifying
that bias. And we've seen that in early AI systems
that decide things like parole and loans and housing, very
consequential things. So we need to be really careful before

(46:16):
we even hand partial decision making power over, both as to
the bias and as to how we are applying this, how
are we using it? So I don't think it's an
either or thing. But I definitely think we need to
be paying attention to noticing the bias that exists in
the training data, because otherwise what you have is something

(46:37):
that looks just and fair and has compelling sounding reasoning
attached to it, but is no better than somebody who
has their own biases.

Speaker 1 (46:46):
Let me just finally, and we've come to the end
of the hour, but Vilas Dhar, let me go to you
finally on the question of regulation. The US government and
all governments are notoriously slow in figuring out how to
regulate tech, because tech moves so fast and they've got to
get their heads around it and all of that.

Speaker 2 (47:03):
I mean, we've just had a.

Speaker 1 (47:04):
TikTok ban that didn't go into effect and maybe still will,
but you know, it.

Speaker 2 (47:10):
Takes a while.

Speaker 1 (47:11):
If you were to make one recommendation to the government
right now in terms of regulating AI, what would it be.

Speaker 7 (47:20):
You know, Jeremy, I've spent twenty five years working on AI.
I'm probably one of the world's leading experts on the
question you just asked me, and I wish I had
a magic bullet answer for you. But I'll tell you
two things that have to change. The first is we
have to stop talking about government's role as reacting
to tech companies or limiting them or changing the way
they work. That can't be the point of regulation. The

(47:41):
point of regulation should be to think about what a
positive vision of an AI future looks like and put
in place all the pieces necessary for that, from public
funding and financing, to protecting privacy and autonomy, to maybe
sometimes when needed, restricting tech companies, but also fostering.

Speaker 9 (47:56):
An ecosystem of positive growth.

Speaker 7 (47:58):
That's one and the second is we can't just do
this inside of the US alone. This has to happen
as a part of a global effort, and we're beginning
to see the seeds of that. And again, I'm an optimist.
I'm gonna leave you with a bit of optimism. This
might be the first topic that we can actually step
above some politics and actually really think about the policy
that's going to affect every person on the planet, because
we all recognize what might happen if we get this wrong,

(48:21):
and I think we can begin to hope what we
might be able to do if we get it right.

Speaker 1 (48:26):
That is a great note to end on. Vilas Dhar,
the President of the Patrick J. McGovern Foundation, and Ina Fried,
chief technology correspondent at Axios. Thank you so much for
coming on and answering our listeners' questions.

Speaker 7 (48:38):
Thanks, it was a great discussion, what a joy, and
thanks Tolliver.

Speaker 9 (48:41):
You're awesome.

Speaker 2 (48:42):
Yeah he is. Thanks, everybody loves Tolliver.

Speaker 1 (48:45):
Okay, next week we are live at Colorado Public Radio
in Denver, in a state that is both a hub
of renewable energy and also oil and gas. We're going
to be talking about the future of American energy in
the context of President Trump saying he wants to drill.

Speaker 2 (48:58):
Baby, drill.

Speaker 3 (49:00):
As always, you can call in at eight four four four Middle,
that's eight four four four six four three three five three,
or you can reach out at Listen to the Middle
dot com. You can also sign up for our free
weekly newsletter and check out our new Middle merch shop.
I think I'm gonna do the same thing where every
dollar that comes in goes back into the show.

Speaker 1 (49:17):
The Middle is brought to you by Longnook Media, distributed
by Illinois Public Media in Urbana, Illinois, and produced by
Harrison Patino, Danny Alexander, Sam Burmis, DAWs, John Barthonicadessler, and
Brandon Condritz. Our technical director is Jason Croft. Thanks to
our satellite radio listeners, our podcast audience, and the more
than four hundred and twenty public radio stations that are
making it possible for people across the country to listen
to the Middle. I'm Jeremy Hobson.

Speaker 2 (49:39):
I'll talk to you next week.