Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Now here's a highlight from Coast to Coast AM
on iHeartRadio.
Speaker 2 (00:05):
Welcome back, George Noory along with Lauren Weinstein. His website
is Vortex, V-O-R-T-E-X dot com,
linked up at Coast to Coast AM dot com. Lauren,
when you hear the phrase artificial intelligence with computers, what
does that mean to you?
Speaker 3 (00:20):
You know, we've talked about AI so many times, and there are
probably a lot of listeners who have gotten, you know, tired of me by now.
Speaker 2 (00:30):
No, nobody's ever tired of you, Lauren.
Speaker 3 (00:32):
But you know, it's important, I think, always when
talking about AI to be clear about the
definitions that we're using. The term artificial intelligence
gets thrown around a lot, and there's a foundational point:
there is no intelligence in artificial intelligence, right? I mean,
it's a handy, you know,
(00:56):
turn of phrase, but there's no intelligence there as
you or I or most people would define intelligence. And
so within the broad sphere of what we call AI,
there are basically two different categories, and one of them
(01:16):
is the category, the type of
AI that's being used for digging into complex
calculations and dealing with massive amounts of data. The traditional
kinds of applications are things like gigantic climate
models trying to model weather in advance. Nowadays, there's
(01:42):
a lot of work being done in medical applications, looking
at X-rays and other diagnostic data and digging through
tremendous amounts of data in ways that are difficult for
humans to do, especially because AI systems of
this type have the ability to find patterns, to do pattern
(02:05):
matching, that aren't necessarily the kinds of things human
beings are best at, though humans still excel in lots of
other areas, of course, and certain kinds of patterns humans
are better at. But that's the more traditional,
the good side of AI, helping to deal with
(02:25):
lots of data in these kinds of ways. Now
we move over to the hype-filled side of AI,
and this is the side that I am very critical of.
This is mostly under the banner of generative AI, and
this is where all the hype is, and this is
where the big tech players, you know, Google and Open
(02:48):
AI and the others are all hoping, praying really, that
this investment they're making, just untold billions and billions of
dollars going into building more and more data centers to
crunch this data, will pay off, because generative AI is
very consumptive of computing resources. And
(03:09):
in fact, they're using so much energy that this
has become a big issue now. The amount of energy
they need is just extraordinary, to the point where they're
trying to get permission to build their own nuclear reactors
and things to power these data centers. And this
is the realm of the chatbots, and,
(03:30):
you know, this is where all
the hype is now, all the deep fakes
being used to generate fake voices and
fake videos, and where all the really controversial aspects of
AI really are now. And the problem for these
firms is that not only have they been
(03:53):
pouring in all this money, but in many cases they
have diverted so much of their resources, their core resources,
their human resources, their engineers and other people,
into AI that other aspects of their operations have
been, many people would say, neglected. And yet
(04:18):
there is no clear sense of where the money is
going to be made back from generative AI, because by
and large, the evidence so far is that while people
are willing to play with it when it's free, they
are much less willing to pay for it when it's
not free. And so what some of the firms have
(04:41):
been doing is just kind of bundling it in
with other stuff that people need. Google's doing this
with some of their workplace products that are aimed
at businesses, and they just say, well, now you
get the AI too. And by the way,
the price of the whole package has
just gone up. And this has been very controversial because
(05:02):
people have found in some cases their corporate information ending
up in these AI systems where they didn't want it
to go. And one of the problems with these systems,
which are basically built on what are called large language models, LLMs,
is that information that goes in can, in many cases,
(05:23):
in a complicated manner, leak out to other users.
So there are all kinds of security and privacy issues, beyond
the more foundational issue that so much of what these
systems say tends to be incorrect. And I have a
whole collection; people point out all the
time these simple questions that you could ask some of
(05:45):
these systems and they just give you completely nonsensical answers. Now,
when they're nonsensical answers, usually you'll be able
to recognize it. If it says that three
inches is equal to five inches, you know that's wrong.
And that's the kind of mistake you actually will see.
But the real risk of these systems is when you
ask a question and the answer seems plausible, and if
(06:08):
you're not an expert on a particular topic, you say,
you know, that looks like a reasonable answer to me.
But in some cases, many cases, the answer can be
completely wrong, or, even more dangerous, partly wrong. And when
you get an answer that's partly right and partly wrong,
that's the recipe for real misinformation and disinformation problems. It's sort
(06:29):
of the heart of propaganda when you look at it
from that standpoint. And it just seems like many of
these problems are intrinsic to these large language models, and
this is why a lot of people express concern when,
for example, Google has been pushing more and more toward
using these AI overviews. There's talk now that they're going
(06:49):
to be having versions of search, and in fact, I believe
they have this in operation for some classes of
users now, where the whole search experience is just talking
to the AI, and you're not necessarily
going to see the links the way you did, and
if the answers are wrong, you're just going to have
to try to figure it out. Personally, I think
it's kind of creepy when you get an answer from
(07:11):
these things, and then in the fine print underneath it says,
in essence, by the way, these answers could be wrong,
it's up to you to verify them. So I say,
what's the point of asking a question if you're going
to get an answer back that says it's on
you to figure out if we're telling you the truth
or not?
Speaker 2 (07:26):
Truly remarkable, though. I was talking to a developer
of AI this weekend and he admitted that AI does
not have empathy, and I'm not sure it's ever going to.
Speaker 4 (07:38):
Yeah, well, I don't think AI, the way we view it now
in terms of these large language models, is ever going
to have any kind of human emotion, right? So we
can put empathy on that list.
Speaker 3 (07:54):
Now, that's not to say that at some point in
the future, and I don't know if it'll be a
year from now, or one hundred years from now or
maybe never, that some methodology for creating what we might
call genuine artificial intelligence won't come to pass. And when
(08:15):
that happens, then you're going to be faced with all
these issues of emotions and perhaps self awareness and consciousness
and all these kinds of things. But there doesn't seem
to be any strong likelihood of those sorts of things
coming into play given the way these large language models
are built. I'm thinking more in terms of some completely
new kind of technology that we can't really visualize now.
(08:39):
But we couldn't have visualized a lot of technology we
have now one hundred years ago, a thousand years ago,
of course. So it would be foolish to say that
there couldn't be some kind of intelligence technology in the
future that would be much more human-like. But we're
not there yet.
Speaker 2 (08:55):
Lauren, what do you think of these technologies where you
can take twenty seconds of audio of an individual and
turn it into a full speech? Yeah, and it sounds
just like him.
Speaker 3 (09:09):
Certainly it's bad for voiceover artists, isn't it? I mean,
there goes a whole profession down the drain, I
guess, in a lot of cases. The issue,
of course, is that so many of these technologies, right, they're
tools, and the question is what are you using
them for? If they're being used in a way that's
(09:31):
you know, fraudulent, right, if it's a deepfake, if
you're trying to make it sound or look like a politician
is saying or doing something that they really didn't do,
or anybody else for that matter, I mean, that's a
real problem, and governments are attempting to find ways to
deal with that through the legal system. If it's being
(09:52):
used in ways that help in terms
of education and positive applications, that's something else again.
But there's another aspect to it, and this may just be
my personal preference: I generally don't
like AI voices. I won't claim that I can detect
(10:15):
them one hundred percent of the time, because they're getting
pretty good, but I detect them a lot of the time.
Often, even when the voice itself is
pretty good, there are little pacing problems or bizarre
pronunciation problems, and pauses and things like that,
and I just don't find it appealing at all.
And actually, one of the fastest ways to get me
(10:36):
to abandon a YouTube video is when I realize I'm
listening to an AI voice. And
Google actually is in the process of making it even
easier for YouTube creators to do this, to create
AI-generated videos and AI-generated voices, to
which I kind of say to myself, you know, why would I want
(10:56):
to do this? This is not something that I'm interested in.
YouTube is a marvelous thing. It's a
wonder of the world in many ways, but it's already
saturated with a lot of often low-quality AI-generated
videos, and more of them is not something that's going
to be appealing to me. I'm probably, in fact, I'm
sure I'm not in YouTube's preferred demographic, but that's how
(11:19):
I feel about it. Anyway.
Speaker 2 (11:21):
When we were talking in the news segment with you about over-the-air
television and television sets, let's talk a little
bit about streaming. How had Blockbuster dropped the ball and
let Netflix come roaring in, when Blockbuster had hundreds and
hundreds of video stores all around the country?
Speaker 3 (11:42):
Yeah, it's worth remembering, of course, what Netflix was until
relatively recently. They started off as a DVD distribution service.
Speaker 2 (11:52):
That they mailed you back and forth, all right.
Speaker 3 (11:54):
They'd mail you the DVDs and you'd mail them back, and
that was it. I mean, it was basically a mail
order version of Blockbuster or whatever. And it was only
over a period of time, as more people had, you know,
if not high speed internet, at least mediocre speeds, enough
(12:14):
to get video, that they started moving more and more
towards streaming. And then of course that kind of opened
the floodgates, and now we have
lots and lots of streaming services, only some of which
will survive, some of which have already died. Some of
the more interesting ones are not necessarily ones that
(12:34):
most people probably even know about. But you know,
that changed the whole focus, for Blockbuster
and, I guess, others if you think about it.
There were other in-person video stores, and there were
automated systems for buying DVDs and things like that,
you know, vending machines and things.
Speaker 2 (12:54):
I wonder if there was a young guy who went
to his boss at Blockbuster and said, Hey, there's this
new thing where people are going to be able to
watch movies without any DVDs. They just have a TV
set and turn it on.
Speaker 3 (13:08):
Yeah, this is the old joke, right, with
a response like, people are always going to want DVDs,
they want to hold them in their hand, right. So,
you know, often these kinds of technologies kind of
sneak up on traditional media and
traditional business models. I mean, I can remember a
(13:31):
point where I was having a technical discussion with someone
and we were trying to work out the numbers about
whether it was really going to be practical to send
audio over the Internet in any significant way. The
thought of doing video over it was completely, you know,
just fantastical. But there was a lot of
technology that had to be developed before it became possible.
(13:54):
There had to be hardware codecs and a lot of
advances in the hardware and the speed of these systems
to make it possible. It's important to
remember that when we started on the ARPANET, the backbone
of the internet, the high speed lines that connected sites
on the ARPANET were fifty-six kilobits per second, that's fifty-six
(14:16):
thousand bits per second, which, you know, would have been
a low speed modem not all that many years ago,
the kind people would have had in their homes. So the
kinds of speeds that are common now, when people can
get them, and a lot of people can't get them,
a lot of people still can't really get decent internet
(14:37):
at all. That's part of our shame in this
country, that we can't get good Internet to everybody,
you know.
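To put those bandwidth numbers in perspective, here is a minimal back-of-the-envelope sketch in Python comparing the fifty-six kilobit-per-second ARPANET backbone mentioned above with a modern home connection; the 100 megabit-per-second figure and the 1 GB file size are illustrative assumptions, not figures from the conversation.

```python
# Rough comparison of the original ARPANET backbone speed (56 kbps,
# as mentioned above) with an assumed modern 100 Mbps home connection.
# All figures are illustrative; real links have protocol overhead.

ARPANET_BPS = 56_000              # 56 kilobits per second backbone line
MODERN_BPS = 100_000_000          # assumed 100 Mbps connection (hypothetical)

VIDEO_SIZE_BYTES = 1_000_000_000  # an assumed 1 GB video file
VIDEO_SIZE_BITS = VIDEO_SIZE_BYTES * 8

def transfer_time_seconds(bits: int, bits_per_second: int) -> float:
    """Ideal transfer time in seconds, ignoring overhead."""
    return bits / bits_per_second

arpanet_hours = transfer_time_seconds(VIDEO_SIZE_BITS, ARPANET_BPS) / 3600
modern_minutes = transfer_time_seconds(VIDEO_SIZE_BITS, MODERN_BPS) / 60

print(f"1 GB over 56 kbps:  about {arpanet_hours:.1f} hours")
print(f"1 GB over 100 Mbps: about {modern_minutes:.1f} minutes")
```

Under these assumptions, the same file that takes roughly a day and a half over the old backbone speed moves in under two minutes over a typical modern link, which is why streaming video only became plausible after both speeds and codecs improved.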
Speaker 2 (14:45):
Where do you see the future over the next few years, Lauren?
Speaker 3 (14:48):
Well, I think we can expect prices to
go up. I think we can be pretty sure that
things are going to be getting more expensive for everything
related to the Internet, in terms of Internet access
and the streaming services. Already we've seen lots and lots of
(15:10):
price increases in streaming services. Plus, remember
that at one time the whole point of getting a streaming
service was that you could get rid of commercials. Now most
of the streaming services, at least at certain tiers, have
commercials, and more and more of them, and of course
that's what drove people away from cable packages in the
first place. And now we've gone full circle, and now
(15:32):
it's, you know, sort of the same stuff, but
now it's delivered over the internet, maybe over the same
cable line that you got your cable TV
proper on. It's another one of those, you know,
the more things change, the more they stay
the same. But prices will be
going up. I really don't think we're
(15:54):
in a period now where we can expect to see
major improvements in Internet service, because there just aren't pressures
to do that. The big Internet companies are kind of
sitting pretty right now, to the extent that they
don't have to do buildouts. AT&T has already
said they're not going to do a lot of the
(16:14):
fiber buildouts that they originally had said they wanted to do;
they've said they're not going to bother.
So I think we may be entering kind of a
period of stagnation in terms of that aspect of it.
But the prices won't stagnate, they'll be going up.
Speaker 1 (16:30):
Listen to more Coast to Coast AM every weeknight
at one a.m. Eastern, and go to Coast to
Coast AM dot com for more.