
December 15, 2023 49 mins

In this episode of "The Middle with Jeremy Hobson," we're asking about your concerns and questions about artificial intelligence. Jeremy is joined by veteran tech journalist Kara Swisher, host of the podcasts Pivot and On with Kara Swisher, and Nashville-based AI entrepreneur Tim Estes. The Middle's house DJ Tolliver joins as well, plus callers from around the country.



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
Welcome to the Middle. I'm Jeremy Hobson. We welcome the
listeners of WUFT in Gainesville, Florida this week as we
take your questions from around the country about artificial intelligence
at eight four four four Middle. That's eight four four
four six four three three five three. As always, Tolliver is.

Speaker 2 (00:22):
Here, Jeremy. I've actually been AI this whole time, been studying your habits.

Speaker 1 (00:27):
You know, Tolliver, we just heard like a week ago or so that Taylor Swift was named the Person of the Year. But frankly, you could make the case that this was the year that AI really went mainstream, that ChatGPT could be the Person of the Year. So we asked ChatGPT, and you know what it said? It said the concept of naming AI and ChatGPT

(00:50):
as Time's Person of the Year is intriguing, because they have significantly impacted society and technology, making it a strong contender. However, the designation as Person of the Year typically implies a focus on human beings.

Speaker 2 (01:00):
That is such a modest little robot that's so cute.
I think once they figure out how to make human hands,
then maybe they can be Time Person of the Year.

Speaker 3 (01:06):
You know all right.

Speaker 1 (01:07):
So, as we said, this week, this hour, we are asking for your questions about artificial intelligence. What are you most concerned or hopeful about when it comes to AI? Our number: eight four four four Middle. Before I introduce our guests: last week we asked you if democracy itself is at stake in the twenty twenty four elections. A lot of people called in. Take a listen to some

(01:29):
of the voicemails we got.

Speaker 4 (01:30):
Yeah, are calling from maker j Alaska.

Speaker 3 (01:33):
Name.

Speaker 5 (01:34):
Hi, my name is Toddham in Sarasota, Florida.

Speaker 6 (01:36):
Hi, my name is Steve Marie and I'm calling from Nashville, Tennessee.

Speaker 7 (01:41):
My name is Jacob Jones, and I'd just like to say that a choice between the most unpopular president and the second most unpopular president, is it

Speaker 4 (01:49):
much of a choice? I am not a Republican, I'm not a Democrat. I am finally in the middle. Democracy is under threat. It is in trouble.

Speaker 5 (02:00):
Whether our democracy is in danger or not, I don't believe it is necessarily imminent. I think the media has to play a very important role in keeping democracy alive.

Speaker 6 (02:11):
I do think that democracy is under threat. There is a concerted effort to do away with the democratic system as we know it. You have to be willing to take a stand and defend the democracy as we know it if it means that much to you.

Speaker 1 (02:27):
Well, thanks to everyone who called in. So this hour, again: What do you want to know about AI? Do you have concerns about it, or are you excited about the possibilities? Joining us to help answer your questions: Kara Swisher, who is known in Silicon Valley as the most feared and well-liked tech journalist. She is host of the popular podcasts On with Kara Swisher and Pivot. She's also about to release a brand new memoir, Burn Book: A Tech

(02:49):
Love Story. Kara, so great to have you on The Middle.

Speaker 8 (02:52):
Thank you.

Speaker 9 (02:52):
We have to dispense with that. That was years ago, in a profile in New York Magazine, and it stuck with me like mold.

Speaker 8 (02:59):
I'm not well liked, so let's just keep that, okay?

Speaker 1 (03:03):
Also joining us is entrepreneur Tim Estes, CEO of Angel AI, which is a personal AI for kids. He also sits on the board of a Nashville-based innovation studio. Tim, welcome to you.

Speaker 10 (03:15):
Thanks for having me, Jeremy. Excited to be on. It's great.

Speaker 1 (03:18):
To have you. And Kara, you know, a year ago you had probably heard of ChatGPT; I hadn't. I heard of it first in January of this year. Now it has more than one hundred and eighty million users globally, and Google just introduced its new AI, Gemini Ultra. Are these chatbots how most people are using AI today, or are they just the most visible way?

Speaker 8 (03:39):
I don't think a lot of people are using it yet. It's the start.

Speaker 9 (03:41):
It's at the beginning stages, just like the early Internet, and so where most people are probably seeing it more is when it's integrated into things like Gmail and things like that. It's not quite like the Internet, where you go to it and you type something in. It will be integrated into all the apps. It'll be integrated into the Internet or services or insurance or whatever, and so you

Speaker 8 (04:01):
Won't see it.

Speaker 9 (04:02):
This just happens to be a consumer application of it. But there'll be lots of chatbots, of all kinds, and people will build them like apps, like an Uber app was built on the back of the iPhone. So you have to think of it in a slightly different way than the Internet, even though the Internet's everywhere. It's more like it's becoming like electricity. This is what

(04:24):
what's going to happen here: it's going to integrate rather quickly with everything.

Speaker 1 (04:29):
It did seem, though, with ChatGPT, like, wow, I can't believe we're at that level technologically at this point.

Speaker 8 (04:36):
Well, yeah, I mean, some of it, yes, because it's the latest new thing. It's not.

Speaker 9 (04:40):
It's certainly impressive, but they've been working on it for a long, long time.

Speaker 8 (04:44):
And that's what you know.

Speaker 9 (04:44):
I heard about it ten or more years ago, from people like Sam Altman and Elon Musk and others who were talking about it, and, you know, at different times it's called

Speaker 8 (04:52):
Machine learning, superintelligence, superhumans. They have all kinds of words.

Speaker 9 (04:57):
So it's been being worked on. Dr. Fei-Fei Li, who I just interviewed, was working on ImageNet, which was an early version, and

Speaker 8 (05:04):
So it's been worked on.

Speaker 9 (05:05):
I think it's just that you're getting to see it evolving in front of your eyes. And if you go back and look at, like, original Yahoo pages, they're really weird looking. Like, what is this? What is this funny contraption? What is this butter churn here? And that's where it is now. It won't look like this going forward, because it is somewhat crude even as

(05:26):
it is what is happening now.

Speaker 1 (05:28):
So you say maybe it'll end up being the AltaVista or Lycos of this change?

Speaker 8 (05:32):
No, I don't know which one. There'll be a lot of them. Everyone will have one.

Speaker 9 (05:35):
Everyone has to have one, and so it's just the
I would if you want to think of it in
the simplest terms, it's the supersizing of the Internet, you know,
or digital technologies tim.

Speaker 1 (05:46):
Sis, What do you think is the most significant way
that AI is already changing society?

Speaker 10 (05:52):
Well, I think there are sort of good ways and probably bad ways that it's changing society today. It's already been ubiquitous for many, many years. I mean, early forms of ML and AI, you know, date back to recommendation engines and core search and pieces. And as Tristan Harris, who leads the Center for Humane Technology, I think, makes a very

(06:13):
good point, like, we're kind of in the second engagement of AI. Social media in many ways was enabled through a bunch of AI techniques to scale and recommend, and, you know, in many ways there's a good argument that a lot of social media has had negative impacts, especially among young people, more so than positive. And so in some ways there is this balance: we are able to manage an

(06:37):
overwhelming amount of information that would be impossible without AI. However, that management of information has hooked humans directly up to the machine, and it's arguable whether we really should have been so hooked into this before we had an AI to balance out, you know, the AIs in the cloud and the servers, like a client

(06:58):
side AI. Until we have balance between the user and
the services, in many ways, the humans have become the product.
And that's probably, like, the biggest challenge: how much is AI used in an exploitative way? And then you can already think of all these ways it's used in an automation-enhancement way. So it's that balance of, sort of, you know, human nature hitting this wave of

(07:21):
technology and seeing good and ill in it.

Speaker 1 (07:24):
Let's go to the phones and Marie is calling from
Salt Lake City, Utah. Hi, Marie, welcome to the middle.

Speaker 11 (07:29):
Go ahead. Hi, with like AI, specifically in academics, specifically in like a high school setting, like, are there concerns or possible regulations for this?

Speaker 1 (07:47):
Are you are you in high school or are you
do you teach in high school?

Speaker 11 (07:53):
I'm currently a high school student. And well it's like,
where's the future going for this?

Speaker 1 (07:59):
Okay. Have you had to use AI? Have you been using ChatGPT in your high school?

Speaker 11 (08:06):
It has not been in my curriculum. However, some of my peers have used it in a not-super-honest way for assignments. But they can't really get caught, because it's like almost their own work.

Speaker 8 (08:24):
Yeah, no, it's you can't get caught.

Speaker 1 (08:27):
Yeah, go ahead, Kara. What are your thoughts about that?

Speaker 9 (08:30):
You will absolutely get caught because there's AI working on
catching you.

Speaker 8 (08:34):
You know, students have. One of the problems was the.

Speaker 9 (08:36):
Media immediately zeroed in on this student's cheating. Sorry, students
have been cheating for centuries through a variety of technologies, all.

Speaker 8 (08:44):
Kinds of technology.

Speaker 9 (08:45):
In my day, you wrote something on your hand, which was a very exciting way to do it.

Speaker 8 (08:49):
But so you know, this is going to be an issue,
But there's going to be.

Speaker 9 (08:53):
Just the way they've been catching it through other methods, there will be AI to catch this. And teachers really do know when something is AI-generated; they'll use it for some things. Although, to me, the most exciting thing is helping education, helping you consolidate, you know, timelines, do historical research. It's actually a tool, the way the Internet is a tool, and it can also

(09:15):
be a distraction and a horror show for teens in terms of self-esteem. But it has a lot of really amazing educational applications for learning that we haven't even begun to plumb, because we don't know.

Speaker 8 (09:27):
We don't know what it could do.

Speaker 9 (09:28):
It's because it's up to the creativity of educators and
others who make these technologies.

Speaker 1 (09:34):
Tim, you're working on it for kids right now. Just briefly, like, how do you expect it will be used by younger people?

Speaker 10 (09:41):
Well, I think it offers an amazing power, and maybe a way to get some balance back. So, for instance, I've got a five- and a seven-year-old boy, so I've got two kids, and my seven-year-old loves building with Legos. And so in an AI experiment that we've worked on with Angel, you know, teaching him division, the AI will use analogies from Lego building

(10:04):
to help him understand it. And so the ability now in these modern language models to essentially translate between conceptual frameworks, based on these internal representations that have been learned from this giant corpus of the Internet, opens up a level of personalization, and therefore potentially a level of augmentation of kids, that has never been possible before.

Speaker 1 (10:28):
You know, Tolliver, there has been no shortage of warnings from experts about how dangerous AI could be, or how it could negatively impact a lot of aspects of society. Yeah, that's right.

Speaker 2 (10:39):
Here's the late British physicist Stephen Hawking speaking with the
BBC in twenty fourteen about the potential dangers that AI
poses to humanity.

Speaker 12 (10:47):
The primitive forms of artificial intelligence we already have, have proved very useful, but I think the development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence, it would take off on its own and redesign itself at an ever-increasing rate.

Speaker 1 (11:11):
Kara Swisher, that was in twenty fourteen.

Speaker 9 (11:13):
Yeah, you know, Elon Musk was the same way. He came to one of my conferences and talked about, you know, a Terminator-like future with these things. It's one of the reasons he invested in OpenAI early: his fear, his fear and loathing about what was happening, and also the power of big tech companies taking over
(11:33):
this, because that's who has, and that has certainly come true. And so, you know, I'm not Stephen Hawking, so I'm not going to argue with him. This is a direction it certainly could go in, just like nuclear energy can go in the direction of weapons or some very incredible energy,

(11:54):
clean energy for example. And so I think one of
the things I always say, I'm not necessarily scared of AI.
I'm scared of people using AI and the uses whatever
they want to create.

Speaker 1 (12:04):
That is Kara Swisher. We've got more of your calls coming up right after this break. This is The Middle. I'm Jeremy Hobson. If you're just tuning in, The Middle is a national call-in show. We're focused on elevating voices from the middle geographically, politically, and philosophically, or maybe you just want to meet in the middle. We have no algorithms involved in the selection of guests or callers on this program; it's completely human-powered. I'm joined

(12:27):
by tech journalist Kara Swisher, host of the podcasts Pivot and On with Kara Swisher, and AI entrepreneur Tim Estes, CEO of Angel AI. Our question this hour: What are your questions about artificial intelligence? Are you concerned or excited about the possibilities? Are you already using AI in your business? Tolliver, what's that number again?

Speaker 2 (12:46):
It's eight four four four middle, that's eight four four
four six four three three five three.

Speaker 1 (12:51):
And let's get right to the phones. Murray is calling from western Minnesota.

Speaker 7 (12:56):
Murray, go ahead. Hello, I have a question: I wonder if AI could do something like an immediate fact-check on someone like Donald Trump as he's giving speeches or talking about things, and if AI could say that's not true, or somehow identify that a statement made by a person like him, or anybody, a politician,

(13:17):
would you know, correct it right away and say you
know this is not a factual statement.

Speaker 1 (13:24):
Murray, thank you. Tim Estes, this is a concern a lot of people have about AI influencing politics: coming up with things that make it look like that's what the politician is saying, but it's actually a deepfake or something like that. What about fact-checking politicians, or using it in a good way to deal with misinformation?

Speaker 10 (13:43):
Yeah, I mean, I think it's a difficult question. The answer is yes, it can, but I think the question ends up being how well engineered it is and how much safety is in the engineering. So today, the language models that could process that type of real-time statement and have a view of it, if connected to essentially factual data,

(14:04):
historically, they will often, you know, still miss subtleties. The more advanced models are better at this. And then you have an issue of bias, and so there probably has to be a model to do this which essentially has things that are very definitely, like, true and untrue, and then a set of areas where there are sort of

(14:26):
different views. And so often, in the political realm, we all want to polarize and have, like, true and untrue, and then you have a sort of opinion area where the debate is still going on. So I think that, you know, being thoughtful, so that you don't seem biased one way or the other, is kind of the missing link. So, you know, it

(14:46):
calls for a language model in the middle, if I could say that.

Speaker 1 (14:50):
Is it possible for AI to do that, to be thoughtful in that way?

Speaker 10 (14:55):
It's absolutely possible. A lot of these issues are a failure of training. I mean, you have to think about it a lot like this: GPT-4 in many ways is consuming and hoovering up most of the Internet and compressing it into this representation, much like us going through, you know, an enormous library and skimming all the books and then kind of knowing what they said, but still forgetting, not remembering every single detail. That's kind of what

(15:16):
we're seeing amplified a thousand or a million times here.
And so with that, yes, it's possible. But what's missing
is machine education. So we spend all this time, you know,
trying to educate our children into the truth, right, to
have good citizens, to have a better society, and yet
we have a technology that's going to inform kids and adults,

(15:37):
and you could argue we were really sloppy to get scale. And I think there's some really interesting research showing this: there was a paper earlier this year called "Textbooks Are All You Need," and it made an interesting point, showing that extremely well-curated, truthful data sets of small size could be nearly as effective as massively larger data sets.

Speaker 1 (15:57):
Let's go to Daisy in Hailey, Idaho. Hi, Daisy, welcome to The Middle.

Speaker 13 (16:01):
Go ahead, thank you very much. I'm a sixty eight
year old woman. I have a very big problem understanding
the language of AI. I don't respond well to computers.
My big fear is that young people who haven't developed
critical thinking and haven't developed their intellect yet and have

(16:26):
brains that are developing, are going to use this and
somehow they will be influenced by the way that AI
speaks to them and the language, and they will be
almost encouraged not to think for themselves, but to come
up with a pretty way of saying what's maybe in

(16:48):
their hearts but not their brain. If that makes sense.

Speaker 1 (16:53):
Yeah, interesting point, Daisy. Kara Swisher, I think about the way that I already have a bad sense of direction, but now that I've been using, you know, Google Maps for years, it's even harder for me. I don't think for myself as much with directions. What do you think about what Daisy had to say there?

Speaker 8 (17:08):
Do you have to do?

Speaker 9 (17:09):
You have to think for yourself on that. Like, I think some of it depends on what it is, right. And I think this has been happening forever, as we automate and we do different things. Again, I'm sorry to sound like an old crank, but it's like, well, guess what, this is the development of modern civilization with digital tools. You could say we don't know how to use a card catalog anymore.

Speaker 8 (17:28):
Is that a good or bad thing?

Speaker 9 (17:29):
It just is. Like, that was a skill: I had it, I don't have it anymore. Searching, everything else. This is what happens as technology eases certain duties that we used to do by hand, and we

Speaker 8 (17:41):
Have to get used to that.

Speaker 9 (17:43):
I do think, you know, I think this goes back again: television. Did television deaden everyone?

Speaker 8 (17:47):
And everyone?

Speaker 9 (17:48):
It kind of did. Did radio? It kind of did. Did the Gutenberg Bible? I mean, that was a crazy time when that happened, you know, until they got education and everything else. There were, like, witch burnings that went on for centuries because of that.

Speaker 8 (18:01):
And so this is just another transition. I think information
at your fingertips.

Speaker 9 (18:06):
Is the goal of the best goal of this thing
Is that a good or bad thing? I think many could argue it's a good thing, that we have some commonality, we can have information. I think the problem
is misinformation, it being manipulated. And again, people are the problem, not the technology. The technology is a tool or a weapon, and it could be used either way. And this one particularly, because the way I describe it to

(18:30):
people is: it's like the Internet, except when you search on Google you get a bunch of sites. Now it will bring you the things in the sites that are most useful to you, and collate them and organize them. I don't know if that's so different than a lot of other processed foods, prepared meals, fast food. Like, some of them are good, some of them are bad,

(18:51):
all of them are coming.

Speaker 1 (18:54):
Amelia is with us from Marietta, Georgia. Amelia, go ahead, welcome to The Middle.

Speaker 6 (19:00):
Sure.

Speaker 13 (19:00):
Yeah.

Speaker 14 (19:00):
I just wanted to say that I've been using ChatGPT since it came out, and it's been like another person in our workplace, almost. You know, I was saying earlier that my company has been trying to get, you know, equipment sales and other types of things honed in for our business.

(19:23):
But we're a very small business. We don't have IT people to teach us and to build databases and stuff. And ChatGPT has helped me in building databases and writing code. I've never coded in my life until I started using ChatGPT. And, you know, it'll hallucinate once in a while, but even when it hallucinates, it points me in the right direction. It's become a new tool

(19:45):
in my toolbox at work.

Speaker 1 (19:46):
Do you think that you've had to lay off people
or not hire people because of your use of AI.

Speaker 14 (19:55):
I don't think we ever would have hired somebody to do what I was doing, building the database and coding stuff. It was just something that we've always wanted, that we always needed. But, you know, because we're such a micro small business, we would have never been able to afford to pay somebody to create that system for us. So I don't think it really prevented us from hiring, or,

(20:16):
you know, we definitely didn't let anybody go, and so
I think that it's just become a really great tool.

Speaker 1 (20:23):
Amelia, thank you for that call. Tim, let me ask you: you know, this is something that a lot of businesses are already using in many different ways.

Speaker 10 (20:31):
Yeah, I mean, I think you just heard a case study in productivity gain, right? She has a good example of something where you're creating net new capability, and not at the expense of something. So that's a good example of the positive. For businesses, I believe, what's happened in the last eighteen to

(20:51):
twenty-four months is that various skill sets that were mostly the manipulation of human language at a certain level, not at the level of, say, a very sophisticated writer like we have on this show, but, you know, at a basic level, a supportive level, are now able to be scaled and automated, as well as a different way to package and summarize

(21:12):
information at a very broad level and then to personalize that.
So those are pretty powerful things, and those are basic kinds of tasks that you might have interns do now. I do believe there is a cautionary tale here that we need to be a little aware of, which is that people have to go through stages of development,

(21:35):
and often people enter into entry level jobs so they
can learn skills and increase their knowledge and become more
and more advanced and eventually move on, you know, into
more advanced efforts and leadership. And if you take away
the entry-level part because you've automated it away, there's going to have to be some societal response to that, or else we're going to sort of stop the flow of growth.

(21:57):
And I'm not saying that that's an inevitable thing. I'm just saying that there's likely a disruptive phase before we sort that out as a society. You know, when you go from ninety-five percent or so agrarian down to, what, three percent now or something, you know, over the course of probably several decades. We're going to see some of these job areas shift that fast in

(22:18):
a space of five to ten years, maybe even faster.
And so there needs to be a lot of empathy
I think about this, and especially speaking as a technologist
and working with others, sometimes we get so excited about
the potential of the future we don't give enough empathy
to the transitional realities. And that's something that you know,
I think needs to be discussed more well.

Speaker 1 (22:38):
And the Pew Research Center says nineteen percent of Americans are in jobs that are at high risk of being replaced by AI, including specific professions like budget analysts, tax preparers, technical writers, and web developers. Tolliver, I know we have an important tweet that has come in.
an important tweet that has come in.

Speaker 2 (22:55):
Yes, Kristen in Kansas City tweets: Tell Kara she's wrong. She's immensely likable, brilliant.

Speaker 3 (23:01):
Shout out to Kristen there, Kara.

Speaker 1 (23:04):
She said that. Let's go ahead.

Speaker 9 (23:06):
And also, you know, you just brought up something interesting, which was, you know, jobs will be affected. I'm often at parties where people talk about this, and of course it's white-collar jobs they're talking

Speaker 8 (23:16):
About this time.

Speaker 9 (23:17):
Nobody minds when they get their Amazon thing quickly, or nobody minds when they get their strawberries for four dollars. Guess what: strawberries don't cost that. They cost that because it's automated. And so this is just reaching up into a higher level of people, in terms of not better people, but better-paid people.

Speaker 8 (23:36):
And that's what the question is.

Speaker 9 (23:38):
Should it be automated, and should those people then be freed to do better things that are more creative? Should there be law students, like, Bates-stamping in law? Why was a person doing that? Why is a person doing a lot of stuff? And so could you use it as an opportunity to move people out of what are essentially dead-end or thoughtless jobs and move them into other things?

Speaker 8 (23:58):
That's the challenge we face.

Speaker 1 (24:00):
Let's go to Joseph in Saint Louis, Missouri. Hi, Joseph, welcome to The Middle.

Speaker 13 (24:05):
Hi.

Speaker 15 (24:06):
Yeah, so I just wanted to say I've found AI like ChatGPT very useful, but my knowledge of it is pretty surface-level. So my question is, really: when can we decide that it's smarter than us? I heard some talk about the singularity. What defines when AI becomes smarter than us? Because I've heard the

(24:28):
stories of how it can already pass all these high
level exams and do all these things that are way
smarter than me. So I wasn't sure how you define that.

Speaker 1 (24:38):
I'm wondering that as well, Kara, do you have an
answer for that?

Speaker 9 (24:40):
Yeah, that was at the heart of this OpenAI thing, well, sort of; at the heart, it was just a power play between human beings, which, again, has gone on forever. But it really was the idea of these, they call them accel and decel, decelerationists and accelerationists, and there is a very big middle ground between these two. One side is the doomers, who think

(25:02):
Terminator Three, essentially, which was the worst of the movies. And there are those that think you've got to go, because it's going to help pollution, it's going to solve energy, it's going to solve this, it's going to solve that, it's going to solve cancer, all kinds of stuff that's actually quite possible when you start to realize the computing power here. And so the argument is about when it

(25:23):
becomes self-aware, when AGI, artificial general intelligence, is self-aware. And many people argue about this. Most people that I know, who I think are great AI scientists, do not think that it's aware.

Speaker 8 (25:38):
Others do.

Speaker 9 (25:39):
They just do; they think it is. And so I think we probably won't know it when they do it, because then they'll hide it from us, I guess. This AI isn't human. It doesn't have the same emotions.

Speaker 4 (25:52):
Weird.

Speaker 9 (25:52):
It doesn't want to, say, create an insurrection in the Capitol on January sixth.

Speaker 8 (25:57):
It doesn't want to do that. It doesn't have those they.

Speaker 5 (26:00):
That worries about.

Speaker 1 (26:02):
A few years.

Speaker 8 (26:03):
No, why it doesn't care.

Speaker 6 (26:05):
You know.

Speaker 9 (26:05):
One of the things that Elon Musk said: he had originally thought it was going to treat us like house cats, you know, eventually. But then I think he had a better analogy,

Speaker 8 (26:15):
where, you

Speaker 9 (26:16):
Know, it's like they're going to build a highway and
there's an antill in the way. We don't think about
the ant hill. We just build the highway, right, it's
not we're not trying to be mean to the antill.
It's just there and we don't even know it's there, really,
And so that would be more of the thing is
if you tell it to make as much money as
possible in the stock market, it might cheat because that's
guess what the data it's getting about humans who have cheated.

(26:40):
Because humans, it's going to learn from us.

Speaker 8 (26:41):
We are AI.

Speaker 9 (26:42):
That's the problem: it'll say, oh, I'll cheat. Or, "make everybody able to eat"? Well, it might come up with the idea that maybe we'll kill a billion people, because that'll work.

Speaker 8 (26:54):
But maybe. We just have to tell it what not to do, right? Don't do that. Do this here.

Speaker 9 (26:58):
The prime directives, whatever they happen to be, but humans
are the problem at every stage of this development.

Speaker 10 (27:06):
You agree with that, Tim? Yeah, I think it's pretty profound what Kara just said. I mean, I think honestly that when we look at the current maladies from technology, it's not the smartphone's fault; it's the software that was designed to be put on the smartphone, which is exploitative and impacts human attention and concentration, addiction,

(27:27):
all these other things. So, in the same way, the most dangerous thing essentially is a failure of values in business ethics being enabled with more technology to go to another level with it. And so, you know, it's almost very much like the Julius Caesar line, you know: the fault is not in our stars, but in ourselves. And I think there is a lot of truth

(27:49):
in that for this moment. Now, I do believe there's a thing missing. And this is where, when you look at sort of the accel versus the decel, you know, these kinds of constituencies, which are good because it polarizes, it creates almost like a crossfire kind of thing in Silicon Valley going on. What's missing in it, honestly, is humility. And I think I come at this thinking

(28:11):
that, much like, you know, the genetic research around viruses that the Obama administration had banned and that then essentially got overturned in the last administration, I think there are immense consequences to humans messing around with things without taking into account many of the unexpected consequences. And so

(28:32):
I believe at the core of this is that the technology
industry as a whole has been insulated from liability at
a level that almost no industry has, and so without liability,
there will be no limits. And so I believe that
if we can restore a level of moral hazard, like
in liability. And I see this in social media. I
see it where you have videos that TikTok serves up

(28:52):
to ten-year-old African American girls in Philadelphia, who hang
themselves from watching them, and then the parents sue and
the judge throws it out over Section 230, and
you're just like, that's never what that was intended to be.
And so I think there is an imbalance that
has to get sorted out, and if we can do that,
we'll have less of the risk. But the risk is
in human nature. So I one hundred percent agree with

Speaker 4 (29:12):
Kara on that.

Speaker 1 (29:13):
You know, there's a lot of doom and gloom in
some of the predictions for how AI will impact humanity.
But on the other hand, many people are excited, Tolliver,
about the potential benefits.

Speaker 2 (29:21):
Yeah, and medicine is a field where we're seeing a
lot of enthusiasm. Here's doctor David Agus, who directs research
on AI at the University of Southern California, my alma mater,
talking on CBS about what AI can do for you.

Speaker 16 (29:33):
Computers are better at picking up tuberculosis than humans are.

Speaker 3 (29:37):
You know.

Speaker 16 (29:37):
Imagine you're in an emergency room and there's a hundred
X-rays a doctor has to look at. Well, maybe the
computer can look first and say, hey, doc, pay attention
to this one, because we think it may be pneumonia,
and that patient could be treated quicker because they go
to the head of the line, their X-ray. So
there's a lot of potential for how it's going to transform
what we do as physicians.

Speaker 1 (29:56):
Uh, Kara, just a few seconds here, but what do
you think?

Speaker 8 (29:59):
I'll give you David, he's one of my doctors.
But, you know, listen.

Speaker 9 (30:04):
He has a center funded by Larry Ellison. I love David,
but he's a tech lover, so he's going to say that.

Speaker 8 (30:10):
But he's right about the medical parts. It's really exciting.

Speaker 9 (30:12):
There are so many exciting things, whether it's protein folding or
cancer identification or drug interaction.

Speaker 8 (30:18):
It just goes on and on and on.

Speaker 1 (30:19):
Well, and I know that doctors are already saving hours
every day in some cases using AI, so they
don't have to write up all the notes at the
end of the day that they typically do, so.

Speaker 8 (30:28):
Doctors are wrong. I hate to tell you.

Speaker 1 (30:29):
this sometimes. Yeah, right, doctors are human and we are
human too, and we've got more humans coming up on
the Middle right after this break. This is the Middle.
I'm Jeremy Hobson. We're talking this hour about artificial intelligence.
What are your questions and concerns about it? And if you're
already using it, how? Our number: eight four four four Middle.
Our guests: Kara Swisher, tech journalist and host of the

(30:51):
podcasts Pivot and On with Kara Swisher, and AI entrepreneur
Tim Estes, CEO of Angel AI. Let's go to the phones.
John is with us from Chicago. Hi, John, welcome to
the Middle.

Speaker 17 (31:02):
You know, I have a question. I wonder if the
advent of AI doesn't represent an evolutionary moment in the
development of our species. You know, it's said that evolution
is a process by which we achieve higher and higher
levels of self awareness, and I wonder if that isn't

(31:23):
the definition of AI. You know, I'd welcome comments from
your panel.

Speaker 1 (31:29):
Great, great question, John. Tim, I'll go to you on
that one.

Speaker 10 (31:32):
What do you think? Well, I mean, there are people
that do believe that. I mean, Kurzweil has believed that
for a long time. You could argue that a lot
of the ideology driving Google has kind of got
that as a subtext. So I think that there is
reason to believe that, you know, technology has always come
alongside biological evolution and pushed human beings forward. What I

(31:55):
want to be very careful of, though, is you can
potentially take that too far and idolize that, and that
might make you make unsafe decisions, unsafe relative to the
empowerment of AI before it's tested, because you're so excited
about what its impacts could be. I think as we
start to merge humans and machines, sort of these human

(32:17):
computing bridges, like what, you know, Elon's working on
with Neuralink and others, this is a very interesting
area to be very careful about, in that it might
take some real evaluation to know the long-term effects
of these kinds of integrations, even though there's a lot
of potential in it. So I think that's where
I would say the culture should exercise
some caution when you start messing with the nature of

(32:39):
the species and what that means. But on the whole,
it's kind of a non-issue, in that technology has been
advancing evolution all along, and definitely what makes humans
unique is that we have technology.

Speaker 1 (32:49):
By the way, the Kurzweil you were talking about is the
futurist and Google AI exec who believes AI will blend
with our brains within a

Speaker 8 (32:56):
Decade. The singularity.

Speaker 9 (32:57):
It's called the singularity. He also likes to take a
lot of vitamins.

Speaker 1 (33:02):
Okay, let's go to Christine, who's in Sutton, Massachusetts. Hi, Christine,
welcome to the Middle. Hi, thank you.

Speaker 18 (33:10):
I am a middle school fine arts educator. And this
evening we did a drama production entitled Holiday ChatGPT.
Our students used the ChatGPT platform to generate original
scripts that they programmed with the parameters of what they

(33:30):
were looking for with their ideas, and then they were
able to use their critical thinking skills to analyze what
ChatGPT produced, and then tweak it and
refine it and then take off with it and create
four original scenes that they used it for.

Speaker 3 (33:48):
Was it good?

Speaker 13 (33:50):
Yeah, it was fantastic?

Speaker 1 (33:53):
All right, good, well, I'm glad Christine, thank you for that.
Let's go to Luis, who is calling from Tampa, Florida. Hi,
Luis, welcome to the Middle.

Speaker 19 (34:03):
Go ahead. Hi, thank you for having me. I just
wanted to weigh in, also on the artistic side,
and one of the things I've been thinking about is
AI as it's been, you know, growing, becoming
more prevalent. It's in our space, and one of the

(34:23):
things that makes me concerned is that one of your
AI experts talked about how it could lead to more
opportunities for creative jobs. But what I've been worried
about is seeing how corporations in creative fields are
maybe using AI in ways to actually phase out sort

(34:47):
of creative jobs. And where I worry is how
it might, say, take away some of the creative
process from some creatives, you know, where it's
AI-created opening scenes or scripts or things like that.
So I just wanted to hear what your thoughts were

(35:08):
in regards to AI coming into the creative fields.

Speaker 1 (35:12):
Great question, Luis. Kara, what about that?

Speaker 9 (35:14):
Yeah, it's been a big issue with the Hollywood writers' strike.
I think the problem is that maybe some of their
stuff is replicable, right? A sitcom sounds like a sitcom;
there's not a lot of creativity there. Truly
human creativity will not be harmed here.

Speaker 8 (35:29):
This is the thing.

Speaker 9 (35:30):
What happens is, a lot of stuff is formulaic, and
so they can start to copy it. Everybody's formulaic on some level.
If you get enough data, right, you can figure out
what someone's like. You don't quite get to them, but
you can mimic them, like any impersonator, really.
And so one of the things that the Hollywood people
are worried about is that they will
be able to shove in someone's shows, all the Law

(35:52):
and Orders, and create more. Well, some might say, you
know, someone could do that. That's pretty easy because
of the way the shows go. So that's
an issue, but I'm not sure I'd call
that creativity. I'm not sure what I'd call it. I call
it entertainment, I guess. I don't think you're going to
get, let me just say, Taylor Swift's job is not
under threat here from this kind of stuff, because she's a unique

(36:14):
creative spirit.

Speaker 8 (36:15):
They probably could mimic her, and they've done that with music,
and that'll be interesting too. I just don't necessarily think
it's going to replace truly creative people.

Speaker 9 (36:24):
But a whole lot of stuff is much more replaceable
than you think. And that's what you've got to do:
one, use the stuff. Use the stuff, that's what
I keep saying. See what it does, and then see
how it could help you. And then you should really
understand how it could hurt you and what you're doing,
and maybe take a hard look at what
you're doing and realize you need to move to something

(36:46):
that this can't do. And there's tons and tons and
tons of jobs that this will not replace.

Speaker 1 (36:51):
Show promo opportunity coming up in a few minutes, we're
going to hear an example of AI versus not AI music.
But first let's go to Christopher, who's joining us from Atlanta,
Georgia. Hi, Christopher, welcome to the Middle.

Speaker 4 (37:06):
So I'm a professional artist. I use technology, I use AI, I use
back channels, those sorts of things. I have a humanitarian concern
regarding AI, and it's that the reason we're developing AI
is because it does things as well as people, and
it does it cheaper than people do. But the problem
is the reverse. My question is how our society is
going to adapt to the problem that introduces, because

(37:29):
if the benefits of the productivity of AI are not
shared with the people openly, how will the people afford
the benefits of AI if they're just packaged and sold
to them? Because we won't be employable.

Speaker 1 (37:44):
Let's take that to Tim Estes. Tim, are you a
Billy Joel fan?

Speaker 10 (37:50):
I mean, I remember the song Allentown, and you
kind of think about, you know, the Allentown issue here
relative to job displacement and societies being disrupted, and,
like, sort of microcultures that are really good at a certain
set of things that are all going to go away.
Now this is white collar, not blue collar, as
Kara said well earlier. And so I think that what's

(38:11):
going to end up happening is, there is an enormous,
I think the speed of it is really the part
that's really dangerous, the speed of the change and
the arrival of the tech being so good, so fast,
and being essentially free to scale. Those things mean that
businesses will have almost an irresistible urge to automate whatever

(38:35):
they can to create margin, and that should have a
deflationary effect. So the good thing is we may have
found an antidote to a lot of the stuff we've
had to live with the last handful of years since COVID.
But at the same time, that doesn't really help if
you're making zero. And so I think that what's going to
happen is the global outsourcing of labor probably will have a

(38:56):
new analog, which is the global automation of labor,
which will also open up the ability to have it
closer to home. But if we don't think of creative
ways to essentially, I mean, upskill is probably not the
right word, maybe a good writer like Kara can give
me a better word here, but almost like AI-assisted
upskilling of current labor, then there really is a risk

(39:17):
of economic and social upheaval. I think you're going
to see a potential populist backlash against this that dwarfs things
like Brexit and Trump and others. That's a real risk
if it's not handled well. And I'm looking to leaders,
political leaders, that are thoughtful enough to look ahead
on this, because we need to have that right now,
and we don't need it in a year. We need it
right now, given the pace of things.

Speaker 8 (39:36):
Right, or create things like UBI.

Speaker 9 (39:38):
That's, you know, people, let's start thinking about: should we
work so much? Should we not work so much? It
gives you an opportunity here. Like I said, listen, I'm
not an

Speaker 8 (39:47):
accelerationist or a decelerationist. It's what's coming. I'm just telling you.
I'm just telling you the time, that's all. You know
what I mean? Like, what's inevitable isn't benevolence.

Speaker 12 (39:56):
Right?

Speaker 9 (39:56):
And so it makes sense from a shareholder perspective for
these companies to do this. They will do it; they're capitalists.
This is what capitalists do. They move to try to
cut costs. So what do you want to do about it?
And then what kind of jobs are protected? What should
humans be doing? Could we find, is there a way
to define a new kind of work for humans?

Speaker 8 (40:15):
Like is roadwork any better?

Speaker 9 (40:16):
No, it's not. It makes people sad and depressed and
drinking. So, what is good work?

Speaker 8 (40:22):
And what should people be doing? Should they be doing
more art? Should they be.

Speaker 9 (40:25):
Doing more? Gosh, there's a lot of elderly people that
need help. The computer's not going to fix that for
a long, long time. There's no robot ready to take
that on today. There's all kinds of really big challenges in
education we face, you know, a paucity of teachers. Even though
people go on and on about truck drivers and automation,
I'm not going to get into electric automated trucks because

(40:47):
they're there, they're in Texas right now. But should people
be driving trucks? There's not enough of them, by the way,
people always say it's going to get rid of jobs.
There's not enough truck drivers. Maybe they could drive them
on the highways, be much safer, and then you have
truck drivers drive goods into the cities.

Speaker 1 (41:02):
You know, but is anybody thinking about this at the
government level who actually knows enough about it to think
about the future and how to handle it? Yes? You say yes.

Speaker 9 (41:08):
So the Biden administration just put out an executive
order that had a lot of this stuff in it,
a lot of safety issues and everything else. And
one of the problems is our government has abrogated
responsibility to regulate tech in any way, and we're playing
catch-up on the previous group of companies,
which are the current group of companies.

Speaker 8 (41:27):
And so that's what we really need to do.

Speaker 9 (41:29):
And I think legislators are paying attention, especially to the
safety issues, and they have to be paying attention to
the job issues, because it's a drastic change and
it will be fast.

Speaker 1 (41:40):
I like what Tim says. Let's go to Jim in Cloquet, Minnesota. Hi, Jim,
you're on the air. Welcome to the Middle.

Speaker 20 (41:46):
Thanks for having me on. My question is, I heard
at least one company was advertising their AI as hallucination-free,
and until I heard that, it wasn't something I had
worried about, I guess. After listening to your show, I
heard the one woman say it's a great tool when

(42:08):
it's not hallucinating. Is that something we should be worried about?
And how do we remain aware and educated enough to
know when it is hallucinating?

Speaker 1 (42:21):
Okay, Jim, we've got it. Tim, you know, actually that
is a term that's used about AI: hallucination.

Speaker 12 (42:27):
Yeah, what is that?

Speaker 1 (42:28):
What does that mean?

Speaker 12 (42:29):
What?

Speaker 1 (42:29):
How do you answer that question?

Speaker 10 (42:31):
So, I mean, I'll try to be more charitable to
the AI. Hallucination means essentially it will state something
that is unfactual with deep conviction, in a way that's
compelling, because it's almost like it's imagining it as if it's real.
Thus the word hallucination. A better analogy
is to think of these giant AIs as a giant, lossy

(42:53):
compression of the whole Internet, almost like the old JPEGs
as they would fade into a page: the initial
couple of passes were kind of fuzzy. And I think
as they get better and better, they'll be less fuzzy,
they'll be precise. But right now there's still a somewhat
low resolution when it comes to being highly accurate, and
so people are putting various technologies around those core models

(43:15):
to make this better. I'm a bit skeptical of something
being truly hallucination-free. There are just too many really smart
people working that problem still, so I would think that
has to be in a very narrow domain. Or maybe
an AI wrote that ad saying it was hallucination-free,
so it could work both ways.

Speaker 9 (43:30):
Yeah, it's also the crap in, crap out. That's what all
the data has been.

Speaker 8 (43:35):
They have to fix the data or the instructions.

Speaker 9 (43:37):
And look, again, people hallucinate and are wrong, and so
this just does it on a huge, hyped-up level.

Speaker 1 (43:44):
We've got a comment from listentothemiddle.com, from T. Woodson
from Illinois. Tolliver?

Speaker 2 (43:49):
I strongly believe the advances in AI tech are both
spectacular and terrifying. The pharmaceutical industry cannot release drugs without
clinical trials and federal approval. This is a wise thing
to do. Similarly, tech companies should not be allowed to
release AI without societal trials and federal approval.

Speaker 9 (44:05):
Kara? Yeah, that's what was in the executive order:
safety issues and testing of the safety issues, requiring that.

Speaker 8 (44:12):
Of course Congress needs to act.

Speaker 9 (44:14):
So Congress has completely not acted on any of this.
We don't have a privacy bill, a national basic privacy bill;
we don't have antitrust. Look, they need to put
in safety guardrails, the government, just like you
do for cars or planes or anything else. And
the idea that they can't regulate this is a canard,
because they regulate everything else, and they can figure it out.
And there's lots of smart people in government who can do it.

Speaker 1 (44:35):
I think I can, I think I can squeeze one
more call in. Ali is in Boston. Hi, Ali, welcome
to the Middle. Go ahead. Hey, y'all.

Speaker 7 (44:43):
Thank you.

Speaker 21 (44:44):
So, like most things in tech, well, a
lot of things have come from, you know, the military,
and often, you know, the affluent either get it first
or it helps the creators. The first thing
that I think of was, for a while, I remember
seeing a thing about cell phones and face unlock, like
using your face to open up your phone. The phones were

(45:05):
having a hard time reading darker-skinned folks' faces to
open it, right? So my question, actually my concern,
I guess, and I'm on the terrified side, is around policing
and dissent, and how, you know, this can affect, you know,
what's the most efficient way to squash whatever protest is happening,

(45:26):
or, you know, maybe to get access to land to
refine oil. You know, we're extremely flawed people, and we
do have people, I think, in power who are honestly
kind of hallucinating. And how can we, you know, I
don't even think that we are protecting the world. And
I think about indigenous folks, how can they have access

(45:47):
to this? Because in terms of protecting the earth and
the planet, you know, at this point they protect
something like eighty percent of the land, and it's
something that they are happy to do, and it's
within their belief. Ali, let me take that.

Speaker 1 (46:02):
Let me take that to our guests. Yes, we've got it.
Let me take it to our guest Tim, just briefly,
about, you know, the issue of bias, which is a big
one, and how police may use AI.

Speaker 10 (46:12):
Yeah, I mean, I think it goes back
once again to human agency. We keep personifying technology here,
and in the end, it's a set of human decisions
or a set of human neglect that tends to create
these situations. So, for instance, the human neglect could be
training sets that are insufficiently representative. And there already are,

(46:33):
to what Kara had said, regulators looking at this,
governments looking at it. There is a role for
government to play that it hasn't traditionally played in tech,
and I think that's got to be part of the answer.

Speaker 1 (46:42):
All right, I said we were going to have a
little game here. Tolliver, we have time, very briefly, for
a quiz.

Speaker 2 (46:47):
Absolutely. Our guests, I'm going to play you two clips
of Beyonce singing cover songs. One is AI, one is real.
You simply tell me which one is AI, the first
clip or the second. Winner gets the title to my
Prius. Jump in when you've got it. Here's the first clip.

Speaker 9 (47:03):
We'll not say you and that girl welcome Brown.

Speaker 2 (47:10):
And here is the second, your baby, do.

Speaker 19 (47:14):
What you want to do?

Speaker 8 (47:18):
Tell you socket to The second one was real.

Speaker 2 (47:23):
I agreed it was the first.

Speaker 10 (47:30):
That part I'm experts.

Speaker 1 (47:33):
We are, but yeah, well I want to.

Speaker 8 (47:36):
If you did, Taylor, you wouldn't have got as ye.

Speaker 1 (47:39):
Kara Swisher, tech journalist and host of the popular podcasts
On with Kara Swisher and Pivot. Her new memoir is
Burn Book: A Tech Love Story. Kara, thank you so
much for joining me. Thank you. And entrepreneur Tim Estes,
CEO of Angel AI, who also sits on the board of
the Nashville-based innovation studio. Tim, great to have you
as well. Thank you very much.

Speaker 4 (47:57):
Thank you.

Speaker 1 (47:57):
Jeremy. Tolliver, what is on tap for next week's show? Happiness.

Speaker 2 (48:01):
We're asking what made you happy in twenty twenty three,
other than the return of the McRib.

Speaker 1 (48:07):
By the way, Tolliver, as you know, we have a
weekly newsletter. It is free. People can sign up at
listentothemiddle.com. The Middle is brought
to you by Longnook Media, distributed by Illinois Public Media
in Urbana, Illinois, and produced by Joann Jennings, John Barth,
Harrison Patino, Danny Alexander, and Charlie Little. Our technical director
this week is Steve Mork. Our theme music was composed
by Andrew Haig. Thanks also to Nashville Public Radio, iHeartMedia

(48:32):
and the more than three hundred and seventy public radio
stations that are making it possible for people across the
country to listen to The Middle. I'm Jeremy Hobson.

Speaker 3 (48:40):
Talk to you next week.
