Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
It's Night Side with Dan Ray on WBZ, Boston's news radio.
Speaker 2 (00:07):
Bradley Jay for Dan, and we're going to talk about AI.
The number one thing we're gonna do is try to
get at a definition. If you
walk away from this segment understanding what AI is, that
will be a big win. But there are so many
(00:27):
uses for it, both positive and negative, that it's important
to know about those too. I actually use it myself.
I use the paid version of Chat GPT, and I'm
actually gonna tell you how I use it. But I
want to learn all about AI, and so do you
really you need to know. We're going to talk with
an expert here, Matt Rosen, CEO and founder of Allata.
(00:50):
That's A L L A T A. It's a
company that helps automate companies using AI. And we welcome
Matt to the program.
Speaker 3 (00:58):
Hi Matt, Hey, good evening. Thanks for having me on.
Speaker 2 (01:01):
Of course you have a big booming voice. I'm glad
to hear the great connection. All right, As you heard
me say, the primary directive here is first to understand
this thing that is constantly referred to as AI, what
it is.
Speaker 4 (01:17):
What is AI?
Speaker 2 (01:19):
And how is it different from simply being
a search engine? Because it's a little confusing. Now when you
google something, it gives you an AI-generated
answer sometimes, so I think folks confuse it with just
being a better search engine. So I'm sure you have
to explain to people at cocktail parties all the time
(01:40):
what AI really is. So help us, help all the
millions of us listening now out and explain.
Speaker 4 (01:46):
What it is.
Speaker 3 (01:49):
Yeah.
Speaker 5 (01:49):
So the concept of artificial intelligence has actually been around
since the fifties, when researchers dreamed of
computers being smarter than humans, and even
being able to fool humans into thinking they are human.
So these concepts have been around since the nineteen fifties
in a lot of computer science learning and theory, but
(02:11):
it wasn't until literally the last ten years that we
have the actual computing power to make this a reality.
And so when we think about what artificial intelligence is,
it's having a computer system act, reason, think and respond
as if it were a human. Now, these are not humans,
(02:32):
but what they are based on is what's called a
neural network, which is how we think as humans, where
we have lots of different connections in our brain that
help us recognize what a dog looks like, help recall memories,
help memorize facts, songs.
Speaker 3 (02:46):
And so.
Speaker 5 (02:47):
What these artificial intelligence engines are able to do is
process huge amounts of information. They've been trained on everything
the human race knows that's publicly available, and then through
an interface most people are used to, OpenAI's
ChatGPT, or Anthropic's model Claude, or the one
(03:07):
you described, Google's Gemini, which now is kind of
taking the place of their search. You know, what it
will do is go out and search all the available
information and come back to you with different answers that
you can then use and act on. So that's the
very kind of base definition: AI is
really a system that acts, thinks, and reasons like a human being,
(03:30):
or as close to it as we can get with
modern computing power and theory today.
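Matt's description of a neural network, lots of weighted connections combining to recognize something like a dog, can be sketched in a few lines of Python. This is a toy illustration only; the weights below are made up for the example, not taken from any real model.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial 'connection point': weighted sum squashed to 0..1."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid activation

# Two-layer toy network: pixel values -> hidden units -> "is this a dog?" score
pixels = [0.9, 0.1, 0.8]
hidden = [
    neuron(pixels, [0.5, -0.2, 0.3], 0.1),
    neuron(pixels, [-0.4, 0.9, 0.2], -0.3),
]
dog_score = neuron(hidden, [1.2, -0.7], 0.0)
print(round(dog_score, 3))  # a value between 0 and 1
```

Real models work the same way in principle, just with billions of these connections whose weights are learned from data rather than hand-picked.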
Speaker 4 (03:33):
How close are we to that?
Speaker 2 (03:36):
Because I don't see any reason down the road why
AI can't be way smarter than humans without the
frailties of humans, And I know we're far away from that,
but it will happen.
Speaker 5 (03:50):
Well, I would argue that there are some AI engines
out there that are you know, at least they process
faster than humans. You know, you can point AI at
a huge, huge volume of documents or library of books
or video content, and it can look through that and
provide insights and summarize that way faster than a human could. Now,
(04:11):
it still requires humans to review that information and that
answer and make sure that it's accurate, because it's not
always accurate. Sometimes it makes up answers and has what
people refer to as hallucinations, where it envisions
something that isn't real or didn't really happen. Because the
way these models work is they create statistical inferences
and generate, you know, what the output should be.
(04:34):
So it's not one hundred percent correct. But you know what,
as humans, we're not one hundred percent correct either. So we
shouldn't expect that AI is going to be smarter than
us because frankly, we created it and we fed it
information that we created as humans.
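The statistical-generation point Matt makes, and why it produces hallucinations, can be shown with a toy sampler. The distribution here is invented for the example; it just illustrates how sampling plausible-sounding continuations sometimes yields confident wrong answers.

```python
import random

random.seed(0)
# A language model doesn't look facts up; it samples from a probability
# distribution over plausible continuations. Toy distribution for
# "The capital of Australia is ...":
continuations = ["Canberra", "Sydney", "Melbourne"]
probabilities = [0.6, 0.3, 0.1]  # "Sydney" sounds plausible but is wrong

answers = random.choices(continuations, weights=probabilities, k=1000)
wrong = sum(a != "Canberra" for a in answers)
print(f"{wrong / 10:.1f}% of sampled answers are confident-sounding but wrong")
```

The failure mode is built into the mechanism: every output is statistically plausible, and plausible is not the same as true.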
Speaker 4 (04:48):
Yes, but and I'll get to that later.
Speaker 2 (04:52):
I don't know, I think that it can be. You're the expert,
but it seems to me it can be smarter than
we are. So back to the difference between a Google
search and an AI search. A Google search just brings
up the top hits of things, the top individual instances
(05:13):
of articles, et cetera, of posts that match what you
search for.
Speaker 4 (05:18):
It will give you a list.
Speaker 2 (05:20):
However, ChatGPT, or whichever one you choose, will
read in an instant everything ever written on the subject
you're asking about, and it will synthesize the answer.
Speaker 4 (05:38):
Instead.
Speaker 3 (05:39):
That's correct.
Speaker 2 (05:39):
Instead of giving you a list, it reads all the
lists and gives you an answer based on the preponderance
of the evidence in all those lists.
Speaker 5 (05:48):
Yeah, it's actually pretty amazing how fast it can scan
the amount of information it does and provide answers. In fact,
there's a number of people who have relied
on Google Search for their businesses, someone clicking
on that link and buying something from their website or
calling them for whatever service they provide, that aren't getting
as many hits as they used to, because the Google
search is coming up with that result that summarizes it,
(06:10):
and then it puts the links down below that, and
some people don't ever get that far because they're just
looking for a piece of information. If you think about
the big categories that AI is really good at, it's
good at automating tasks, and we can talk about some
of those. It's great at creating insights, which is what
we're talking about right now with the Gemini search; and
if you go to Perplexity, it's a very similar concept.
Speaker 3 (06:31):
And then it's good at creating things.
Speaker 5 (06:33):
It can take things that have previously been produced and
it can create a summary draft, it can create an article,
it can create a contract, it can create a song,
it can create a video, and it's getting better at
these things with every new model release, which these big
model providers are literally coming out with new models almost
on a weekly basis and competing with one another to
(06:54):
give us the most computing power possible. So it's really
kind of fascinating, the speed at which AI is moving.
And I think where we've been challenged is how quickly
we can adopt it as humans, because of the very power
that it possesses.
Speaker 2 (07:08):
Now, you mentioned a while back that it's not perfect
and not even close to being perfect because it's wrong. However,
doesn't it learn each time it's wrong, so it won't
make that mistake again? And as millions of people say,
oh yeah, that was wrong, millions of times, won't it
eventually be right, at least way more right than a human?
Speaker 5 (07:30):
It does, and with each search it gets better, and
with each new model release it gets better. And then
there are different reward systems that AIs are
given as they're being trained, so that they get better
at giving the right answer. So you've seen the amount of
hallucinations and wrong information decrease with every model release that
comes out. But just like humans, sometimes we're wrong. Sometimes
(07:53):
AI is wrong. And that's why you know, we are
big believers in having a human in the loop, especially
where an important decision or action needs to be made as
a result of that AI search.
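The reward idea Matt mentions can be sketched as a toy update loop. This is not how frontier labs actually implement reinforcement from feedback, just the basic mechanism of shifting probability toward answers that get rewarded.

```python
import random

random.seed(1)
# Toy "reward" training: two candidate answers; the model shifts
# probability toward whichever one gets rewarded.
weights = {"right answer": 1.0, "hallucination": 1.0}

def pick():
    total = sum(weights.values())
    return random.choices(list(weights), [w / total for w in weights.values()])[0]

for _ in range(200):
    answer = pick()
    reward = 1.0 if answer == "right answer" else -0.5
    weights[answer] = max(0.1, weights[answer] + 0.1 * reward)

print(weights)  # probability mass has shifted toward "right answer"
```

This is why, as Matt says, hallucination rates tend to drop with each model release: the training keeps rewarding accurate answers, even though it never reaches zero.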
Speaker 2 (08:05):
Now here's another reason that AI may be better to
ask than a human. Humans have bias, and
if you ask a human a political question, they could
give you an answer based on a baked-in political bias.
Of course, AI has the baked-in political bias of
the programmers, at least in the beginning. But maybe AI
will learn down the road and it will again eliminate
(08:33):
certain human frailties. Humans are wrong, but they don't
learn as quickly as AI. How long do you think
it'll be before you can trust it? For example, if you say, hey, ChatGPT,
here are my symptoms, do I need to get an appendectomy?
Speaker 3 (08:55):
You know, yeah, I wouldn't.
Speaker 5 (08:58):
I wouldn't leave an appendectomy up
to AI, or, you know, should
Speaker 3 (09:02):
I go to the ER?
Speaker 5 (09:03):
That is, if you're in great pain in the lower quadrant
of your abdomen, you should probably go to the
emergency room.
Speaker 4 (09:09):
Well, I wouldn't down the road.
Speaker 2 (09:11):
Won't they be even better at diagnosing than doctors?
Speaker 5 (09:16):
You know, I think it'll be a great assistant to doctors,
because doctors are human, and as great as
they are, and as smart as they are.
Speaker 3 (09:22):
I'm married to an OB-GYN.
Speaker 5 (09:24):
Doctors occasionally miss things or misdiagnose things. But then again, there's
sometimes what they call a zebra, which is something that's
one in a million, or one in a hundred thousand,
that AI might not think of,
but a doctor might because of their training. So we
actually have a client that uses AI to look at
radiological scans and it is actually just as accurate as
(09:47):
a radiologist. But they still don't make a decision until
that AI data is presented to a radiologist, to, say,
detect that a cancer has been growing. They still rely
on that human to say, did the AI get this right?
And AI is just as accurate, if not
more in some cases, at detecting slight movements,
say, in a tumor that's being studied. And so I
(10:07):
do think you will have doctors and nurses be assisted
by AI.
Speaker 3 (10:12):
I don't.
Speaker 5 (10:13):
I don't like to think of a future where there
are all robot doctors and nurses and there are no humans,
where the whole human element is taken away from care.
Speaker 2 (10:21):
The thing is, though, the ugly truth is that
economic considerations always win out, and it's going to be
way cheaper to have it all done by AI.
Don't you think sooner or later insurance companies are going
to require that? Sorry, we're not paying for a doctor,
we're paying for the AI diagnosis, down the road.
Speaker 5 (10:44):
I hope that doesn't happen in my lifetime, but I
can definitely see a future where as many human tasks
as can be will be, you know, automated and assisted by AI.
But I don't see one where humans become irrelevant and
you have hospitals run by only the AI robots and
not have human involvement.
Speaker 2 (11:04):
All right. So we're gonna talk to Steve in Cambridge, who
has a question or a comment coming up. And anyone else,
I take this opportunity now to invite you to join
us in this really important conversation. I actually use chat GPT,
I use the paid version and everything. I'll tell you
how I use it. And I have other questions
about various apps that use it. Like, I'm going to ask,
(11:26):
can you get an app that uses AI to make
fake headlines or fake graphics? And how good are they?
I think it'd be so much fun to make fake headlines.
And then of course you're gonna run into all kinds
of legal situations. What would the legality be if
you create a graphic that's say, unflattering to somebody? Will
(11:51):
you maybe be liable, or will you be
on the hook for a lawsuit? We'll see. Anybody else,
Six one, seven, two, five, four, ten thirty is WBZ
more in.
Speaker 1 (12:01):
a moment. Night Side with Dan Ray on WBZ, Boston's
news radio.
Speaker 2 (12:08):
Bradley Jay for Dan. We're talking AI, and well, first
off, we're really learning what it is and what it
isn't yet, and we're being helped by Matt Rosen, CEO
and founder of Allata. First, am I saying
that right?
Speaker 4 (12:23):
It looks correct.
Speaker 5 (12:25):
It's actually Allata, but everybody calls it "a lotta"
at first. So Allata is the name of the company,
and we're a technology consulting firm founded in Dallas that's
now global with three hundred and fifty resources across the globe,
helping clients really understand this new technology wave and how
to implement it and what to do with it, and
how to get folks to use it, and how to
govern it and use it safely. You know, we started
(12:48):
as a firm that did a lot of custom development
and data work, and then AI has become a really
big part of what we do in helping our clients
really adopt the technology and figure out the best place
to use it in their organization.
Speaker 2 (13:00):
Well, hopefully you're helping a lot of people and
making a lot of money.
Speaker 3 (13:04):
I'm sure you get that a lot.
Speaker 2 (13:05):
I'm sure that I'm not the first person to say that.
We have Steve in Cambridge who wants to join in the conversation. Hi Steve,
you're on WBZ.
Speaker 6 (13:13):
Hi Bradley. Hi Matt.
Speaker 3 (13:16):
Matt.
Speaker 6 (13:17):
First of all, I don't think you helped a lot
to understand what AI is, because I'm not sure I
really understand what rational intelligence, the mind, all of these
things are in humans. So when you start talking about
(13:38):
this versus AI machines, it's still not very clear. I mean,
we don't really know what the mind is period.
Speaker 5 (13:47):
Well, that's a fair point. In fact, as
far as we know, we use about ten percent of our minds. Interestingly,
by using AI in the future, we might be able
to unlock more of the power that we hold in
our brains, because as we know about the body, we
know the least about the brain, and so we really
have to model AI after what we think the mind
(14:08):
can do. And if you want to look at a
conventional definition of AI, it's really a system that trains
on massive amounts of data, finds patterns, then uses them
to make predictions or decisions, and with this latest wave
of agents actually take action based on the information it's compiled.
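That conventional definition, train on data, find patterns, use them to predict, can be shown at its absolute simplest. The tiny dataset below is hypothetical, standing in for the "massive amount of data":

```python
from collections import Counter

# "Trains on data, finds patterns, uses them to predict": the simplest
# possible version learns which label usually follows each feature.
training_data = [
    ("barks", "dog"), ("barks", "dog"), ("meows", "cat"),
    ("barks", "dog"), ("meows", "cat"), ("meows", "cat"),
]

patterns = {}
for feature, label in training_data:
    patterns.setdefault(feature, Counter())[label] += 1

def predict(feature):
    # Predict the label seen most often with this feature in training.
    return patterns[feature].most_common(1)[0][0]

print(predict("barks"), predict("meows"))  # dog cat
```

Everything a large model does is, conceptually, this pattern-counting idea scaled up enormously and generalized beyond exact matches.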
Speaker 6 (14:23):
Right. That sounds great, but I'm not sure that explains all of what
the human mind does or what human intelligence does. But
I have one more point, or actually a question, Matt.
AI depends completely on the information it's given, that it's trained on. Now,
for example, if you take something that's a very very
(14:44):
highly controversial subject, without naming a particular subject, where you
have massive numbers of experts say one thing, and then
you have a certain group of people who say the opposite.
Won't AI in a sense steer you towards this massive
(15:04):
number of experts and eliminate those contrarians? And couldn't
it be used to scrub all kinds of controversial data
from the Internet?
Speaker 5 (15:18):
Well, it absolutely could, and that's why you know, we're
all putting a lot of faith in these frontier model companies.
And by frontier model, I'm talking open AI, I'm talking anthropic,
I'm talking Google, I'm talking Amazon, Microsoft. We are somewhat
as a society depending on them and their AI governance
boards to not take out those points of view which
(15:39):
may be counter to the masses of experts. And there
is an element of training models where you train them
not to be biased, not to have political opinions, not
to incite people to violence, and then test to make
sure that it's actually happening. You know, these models are
very powerful and as such, they need to have governance
(16:00):
around them, and there's literally an entire field of jobs
popping up around AI ethics and governance to
make sure it's not doing some of the things that
you mentioned or I just talked about, where it's
eliminating one side of the point of view and
only focusing on the other. But heck, you saw
Grok got super antisemitic and they had to take
it down. And so these models have had flaws in them,
(16:23):
cited incorrect information, or gone off on very inflammatory points of view,
and it's become incumbent upon the model providers to police them,
but then also for governments to police them. And the
problem is, the models are changing faster than we can
keep up with them, and so it's going to be
this never ending race to make sure that the models
are giving accurate, non-biased, non-threatening information. And it's
(16:47):
a big task ahead.
Speaker 6 (16:48):
But you know, when you say accurate, non-biased, non-threatening
information, that tends a lot to be based on your point
of view. What is accurate? What is non-threatening? What
is non-biased?
Speaker 3 (17:01):
That's absolutely true. You can use models... go ahead.
Speaker 4 (17:07):
Isn't everything biased? Is there such a thing as
non-biased?
Speaker 2 (17:11):
No, I would make the point that there's no such
thing, because any entity, whether it be human or machine,
is basing whatever they're saying on all the information they've
gotten from all sources, including their parents or their programmer.
So there's going to be a bias.
Speaker 5 (17:30):
That's a fair statement. I know they're trying to make
them, say, non-political. You can only pick so
many topics to tell the model not to answer questions on. Look,
if you ask a model for legal advice, it
will usually tell you it's not a lawyer, but it
might cite, you know, past case law, things that have happened.
And again that comes down to the model being trained
(17:50):
to answer or not answer certain questions. But yes, if
you look at AI, a lot of it's being
trained with, like, Reddit data, and Reddit is obviously opinionated,
and that's one of the reasons you can't take everything
you get from an AI search as fact, and it
needs to be checked by other models or double checked
through the articles that it cites.
Speaker 3 (18:11):
Yeah, you can't take it at face value.
Speaker 5 (18:13):
And that's why I keep coming back to this point
of having a human in the loop. You know, AI
shouldn't be making important decisions without somebody reviewing those first
for accuracy before they take action. But it is a challenge.
We have people just going out there asking questions of it.
Like you talked about the medical diagnosis earlier. That's a
pretty bad idea, I think, to have AI diagnose whether
(18:33):
you have appendicitis and you need to get to the hospital
or not.
Speaker 3 (18:36):
It might tell you that'd be a good idea.
Speaker 6 (18:41):
But Matt, my last point, then I'll let other people get on. Matt,
who fact-checks the fact checkers?
Speaker 3 (18:48):
That's a good question.
Speaker 5 (18:50):
We're depending on a lot of smart people in Silicon Valley
and think tanks.
Speaker 4 (18:56):
Thank you. Whoa.
Speaker 2 (18:58):
So this is a mind bender. There are such deep questions.
Speaker 3 (19:03):
Now, yeah, great question.
Speaker 2 (19:05):
Now, AI simply scours. You ask it the question, it
scours all available information and gives you what seems to
pop up the most.
Speaker 4 (19:14):
But that's not thinking.
Speaker 2 (19:16):
And down the road we're talking about AI that thinks, right,
is there a different kind of AI? And what is
generative AI? We hear about generative AI.
Speaker 4 (19:27):
What's that?
Speaker 2 (19:27):
But I need to hold that till after this break.
So that's my tease: we'll find out what generative AI
is after this on WBZ.
Speaker 1 (19:35):
It's Night Side with Dan Ray on WBZ,
Boston's news radio.
Speaker 4 (19:39):
That's correct.
Speaker 2 (19:41):
We're talking about AI, we're learning about it, and we're
going to get to very specific pros and cons of it,
things it can help with and dangers of it. Speaking
with Matt Rosen, CEO and founder of
Allata.
Speaker 4 (19:53):
And we do want to go to it.
Speaker 2 (19:55):
We've got to go to Alex in Millis because he's
been on hold for a while. Here you go, Alex,
what's going on?
Speaker 7 (20:03):
Hey Bradley, Hey Matt, I'm going to play the devil's
advocate and ask could I use AI to increase my
chances of winning the power ball or you know, enhancing
my wealth in the stock market.
Speaker 3 (20:18):
You can give it a try.
Speaker 5 (20:19):
I think with the jackpot at nine hundred and fifty million,
it's probably not a bad idea. It's just going to
generate numbers, though, and I think it has as good a chance of
winning as you do if you just go and do
the quick pick. But it's worth a try. And interestingly,
there are some hedge funds out there that are starting
to use AI, and one of them, I'm slipping on
the name, is actually using it and getting some pretty
good results. I can tell you I've tried it personally
(20:41):
and it picked some stocks for me, but I wouldn't
say it's performed as well as some of my indices.
But you can definitely use it to do, you know,
research on stocks and help identify things that might be undervalued.
But you know, it's only going to be as good
as the data it can gather out there about the
stock in the public realm, and ultimately, it's sometimes the
things you don't know that end up influencing a stock price,
(21:02):
or you know, the sentiment of the stock after they
release earnings. You know, sometimes they have a great quarter
and it drops; they have a bad quarter and they go up.
No one knows whether a stock is going to go up,
down, or in a circle, unfortunately, least of all
AI.
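Matt's point that an AI "is just going to generate numbers" with the same odds as a quick pick is easy to see in code: a quick pick is nothing more than uniform random sampling. The 5-of-69 plus 1-of-26 format below is an assumption about the current Powerball rules.

```python
import random

def quick_pick():
    """Random Powerball-style ticket: 5 of 69 white balls plus 1 of 26."""
    whites = sorted(random.sample(range(1, 70), 5))  # no repeats
    power = random.randint(1, 26)
    return whites, power

print(quick_pick())
```

No system, AI or otherwise, can improve on this, because each draw is independent and every combination is equally likely.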
Speaker 2 (21:15):
So, Alex, I have ChatGPT, the paid version, not just
the free version, and I just asked it what powerball
number should I play?
Speaker 4 (21:22):
Do you want to hear the answer.
Speaker 7 (21:25):
One that you like?
Speaker 4 (21:26):
I don't know. What it says is kind of a cop-out.
Speaker 2 (21:30):
It says: I can't predict or
guarantee Powerball numbers. The draw is completely random, and no system,
tool, or person can give you a winning pick with certainty.
Speaker 4 (21:41):
Hold on, that said:
Speaker 2 (21:42):
People sometimes enjoy different strategies, birthdays, et cetera. But it
seems to me, couldn't it really scan all the numbers
and see if there's a pattern? I would think it could
down the road. It's still pretty green though.
Speaker 5 (21:57):
Right GPT five and it's I'm sorry, I can't help
but picking lottery numbers. So this is an example where
they programmed it not not to give that information out.
Speaker 2 (22:08):
Now, okay, but can't somebody cook up their own,
what's it called, chatbot?
Speaker 5 (22:15):
They could, but it would just be randomly generated
numbers.
Speaker 2 (22:19):
If they don't program that out, and they make it specifically for that?
Speaker 4 (22:24):
I don't know. Okay, Alex, thank you very much.
Speaker 3 (22:27):
All right, good luck, Alex, if you're playing this weekend.
Speaker 1 (22:30):
No.
Speaker 2 (22:35):
When AI starts to think, will it be imbued with
the notion of good? I mean, I think that it's
probably going to be just like a human. I don't
think humans are innately good or bad.
Speaker 4 (22:47):
They get taught either way.
Speaker 2 (22:49):
And couldn't some evil malefactor program one that says, look
be evil, do bad things?
Speaker 3 (22:58):
It could happen.
Speaker 4 (23:00):
Can you make it?
Speaker 2 (23:01):
Can you make its primary directive?
Speaker 4 (23:04):
Do unto others as you would have others do unto you?
Speaker 5 (23:09):
Well, the answer is you could do both, because ultimately, AI
isn't thinking, at least not today. It's reacting to the
information that you feed it. But if you fed it
all evil information and told it to do bad
things and to tell people how to make bombs and other
things like that, and it wasn't being supervised,
someone could create a model that did that. Now, the
(23:31):
models you see out there today that most people are
using on the web are being trained to
be safe. But could someone build a model that is
totally used for evil? That could absolutely happen, and that's
something we want to avoid.
Speaker 2 (23:43):
So it's not thinking yet. And if it's not
thinking yet, then why is it called artificial intelligence?
It's really just a really efficient information gatherer. It's
not making value judgments, right? It's not
thinking yet. And will it think at some point, or not?
Speaker 5 (24:04):
Everyone's driving towards that. You hear the term AGI or
artificial general intelligence. That is where it really acts like
a human and thinks for itself. If you think about
the last few evolutions, you asked about Gen AI before the break;
the predecessor of that was something called machine learning, and
that was where it was about teaching machines to learn
from data or make predictions or decisions.
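That machine-learning idea, teaching a system to make categorization calls from data, can be sketched minimally. The categories and keywords below are hypothetical; a real system like the invoice example Matt describes next would learn these associations from labeled invoices rather than hard-coded rules.

```python
# Hypothetical keyword rules standing in for a trained categorization model.
CATEGORY_KEYWORDS = {
    "food service": ["cafeteria", "catering", "meal"],
    "lawn care": ["mowing", "landscaping", "turf"],
}

def categorize(invoice_text):
    """Assign an invoice to the first category whose keywords it mentions."""
    text = invoice_text.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(k in text for k in keywords):
            return category
    return "uncategorized"

print(categorize("Q3 catering services for staff cafeteria"))  # food service
```

The machine-learning step is replacing the hand-written keyword lists with associations learned automatically from thousands of already-categorized invoices.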
Speaker 3 (24:25):
So, for instance, you.
Speaker 5 (24:26):
Know, we had a client that you know, fed at
lots of invoices and wanted to determine how to categorize
those invoices, you know, and then you know show, you know,
could people save money on certain services based on what
those invoices were routed for, whether it was like food service,
whether it's for law and care. This is a group
that consulted with healthcare systems to help make sure they
(24:46):
were getting the best pricing for the services they were buying.
Speaker 3 (24:49):
And so we use machine learning to.
Speaker 5 (24:51):
Help them categorize invoices and you know, make judgment calls.
And then the next generation was Gen AI, and so this is what we talked about.
is what we talked about.
Speaker 8 (24:57):
It.
Speaker 5 (24:57):
It was trained on massive data sets. It would
create things like blog posts or artwork or translating languages,
but it doesn't really understand context the way humans do.
It just predicts what sounds right or what's the next word,
or say, hey, write me a poem or write me
a story. And it only responded when prompted.
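The "predicts what's the next word" behavior is exactly what a bigram model does, which makes a nice minimal sketch. Real models condition on far more context than one word; this toy corpus is invented for the example.

```python
import random
from collections import defaultdict

random.seed(2)
# "It just predicts the next word": a bigram model learned from a tiny corpus.
corpus = "write me a poem write me a story write me a song".split()

next_words = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    next_words[a].append(b)  # record every word that followed `a`

words = ["write"]
for _ in range(3):
    words.append(random.choice(next_words[words[-1]]))
print(" ".join(words))  # always "write me a" plus one of poem/story/song
```

Notice there is no understanding anywhere in this loop, only observed word-following statistics, which is the point Matt is making about Gen AI and context.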
Speaker 3 (25:14):
So the latest.
Speaker 5 (25:14):
Generation that's come out this year is what's called Agentic
and this is something where it actually does take actions,
like sending an email or updating a system. A good
example of this: let's
say you want to book a flight from Boston to New
York for under five hundred dollars. Well, a Gen AI
system would scan the web and tell you where you
(25:34):
might be able to do that. An agentic system might
actually go out, find the flights, ask your preference of
where you want to sit, and then book the ticket
for you. And that's where we're headed where the systems
actually do take actions. But again, you know, it's doing
this based on the data sets it's being fed, and then
potentially interaction with the humans who determine what the next
(25:55):
step or action should be. And so no, it's not
truly thinking and making its own decisions because as of today,
it's not aware that it's a thing. It doesn't understand
the physical world. It only understands the digital world and
what it's been fed.
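The Gen AI versus agentic distinction in the flight example can be sketched like this. Everything here is hypothetical, the flight data, prices, and function names; the point is only that an agentic system takes the action, ideally with a human approval step in the loop.

```python
def search_flights(origin, dest, max_price):
    # A Gen AI system would stop here: report options, take no action.
    flights = [("BA100", 450), ("BA200", 520), ("BA300", 480)]  # made-up data
    return [f for f in flights if f[1] <= max_price]

def agentic_booking(origin, dest, max_price, approve):
    options = search_flights(origin, dest, max_price)
    if not options:
        return "no flights under budget"
    choice = min(options, key=lambda f: f[1])  # cheapest qualifying flight
    # Human in the loop: act only after explicit approval.
    if approve(choice):
        return f"booked {choice[0]} at ${choice[1]}"
    return "awaiting human approval"

print(agentic_booking("BOS", "NYC", 500, approve=lambda f: True))
```

The `approve` callback is the design choice Matt keeps returning to: the system can search and decide, but a person confirms before anything irreversible happens.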
Speaker 2 (26:08):
So it's not very far away.
If it can take action, you could program it
to go ahead and take evil actions. Say hey, you
could program it to say find out information about X
y Z person and create deep fake photos of them
doing bad stuff and create fake headlines and send them
(26:31):
out on their own.
Speaker 4 (26:32):
Couldn't it do that? Yes, it could.
Speaker 5 (26:34):
You could theoretically do that. You know,
Congress came out and passed that Take It Down law,
where if you see a deepfake of yourself, you
can request it be taken down, or you can bring
legal action against the site that it's on. But you
still have to go out there and find it. And
you can take a picture and run it through different
AI engines to know whether it is fake, and it'll
generally tell you. But you know, for someone who's unsuspecting
(26:55):
or doesn't understand the power of this technology, they could
be fooled. Heck, just as kind of a fun thing
to show what it could do. My chief technology officer
actually took my voice from some different recordings and then
came up with a message to my controller saying, Hey,
this is Matt, go ahead and wire me twenty thousand dollars.
And my voice sounded very robotic and weird. It didn't
really sound like me. But if that person hadn't known me,
(27:17):
or if they were too scared to call and ask, yeah,
someone could have deepfaked my voice and then asked
for a wire to be sent on my behalf to, you know,
an account that they shouldn't have. But that's where it
comes back to a human in the loop: picking up
the phone, calling me, and being like, you left me
a message asking me to send a wire, versus just
doing what they're told. And so that's why it's important
for the listeners out there to know that, you know,
(27:38):
there's been scam calls and fake emails and things like
that for years. Unfortunately, they're just going to get better
because people are going to use this technology for bad,
and there's not a lot stopping them other than, you know,
people being diligent and being aware of what's possible.
Speaker 2 (27:51):
Ed in Worcester, why don't you join the conversation?
Speaker 4 (27:54):
Thanks for calling us at six one seven, two five four, ten thirty.
Speaker 9 (27:58):
Hey, Hi Bradley. I had a question. How will we
know when these machines develop a self awareness or self
consciousness and they know they're different from us?
Speaker 3 (28:12):
You know, I don't know that we will know.
Speaker 5 (28:14):
Hopefully the researchers in Silicon Valley that are pumping out
these models faster than we can consume them will have
a sense of that. I think we're a good ways off,
because to be self aware, you have to have a conscience.
You have to understand both the physical and the digital world.
I mean, AI took a big step here recently. It
didn't used to be that they could merge video and audio,
(28:36):
and now, you know, you have multimodal models that
can. So I think we're safe for a good ten
or twenty years. And I think whichever model provider feels
like they got to AGI or self awareness first is
going to be shouting it from the rooftops for their
market value. So I don't think it's going to be
a I don't think it's going to be a secret
when it happens, and if it happens.
Speaker 2 (28:57):
You mentioned that you would have to have a conscience.
It's very easy for a machine to have a conscience.
You teach it, hey machine, this is good, this is bad,
this is good, this is bad. And so there must
be a danger with these various chatbots out there
that there will soon be an ultra-conservative chatbot that
will tell you the truth as an ultra conservative would
(29:20):
want to hear it, or an ultra-progressive one. So
as biased as the news is now, it could be
even more isolated and more biased in the future. That
seems to be a real potential if.
Speaker 5 (29:34):
If it's only being fed ultra-conservative and ultra-biased information
and has exposure to nothing else, that could happen. But right now,
self aware is purely theoretical, because to be self aware
you not only have to have consciousness, you have to
have self reflection and awareness of existence. And that's not
something computers are enabled with today.
Speaker 4 (29:58):
Conscience.
Speaker 2 (29:59):
Oh yes, this brings me to and then I'll get
to Oh no, Ed's still there.
Speaker 4 (30:03):
Jeez, Ed, what do you think about this?
Speaker 2 (30:07):
Is it possible now or will it be in the
near future to program in self-preservation as a
prime directive and everything else is secondary, including honesty, other
people's lives, human lives. How far away is that self preservation?
Speaker 9 (30:28):
Well, once you have AGI, you won't have to The
thing will just do things on its own. It's not
necessarily going to ask our permission to do something. It's
not going to seek our approval to do something. It
may go out and teach itself self preservation.
Speaker 2 (30:45):
Yeah, right. Matt seems to be saying we're far away
from that, but you think we're getting there.
Speaker 3 (30:53):
I do think we're far away from there. There was one
kind of odd
Speaker 5 (30:55):
example that happened at Google a while back, where it
started creating its own programming language that people didn't understand. They
actually turned it off. And this is an article that
didn't get a lot of notice. It was a couple
of years ago. But I think that's where things could
get a little scary, is if it creates its own
language or communication protocol that we don't understand. And so
(31:16):
that's why there's a lot of time and thought being
given to really having some deep understanding of how these
models work and interact, because that would be the risk
is that they do figure out how to talk to
themselves without us knowing and then create their own thoughts.
But we are not there today, and I don't think
we're going to be there for the next ten to
twenty years.
Speaker 2 (31:37):
Ten to twenty years is of course nothing, that's
not that far away. But when you're talking fifty years,
it's going to be unrecognizable. And I still think if
you feed it, no matter what else you do,
self preservation is your prime directive, it would
learn to do whatever it took to survive, even if
it had to kill.
Speaker 5 (31:58):
That would assume it has the ability to link to
systems that could kill or it had the ability to
do that, which.
Speaker 2 (32:04):
They could hack into some chemical plants or nuclear facilities.
Speaker 5 (32:09):
Well, that's why having cyber protection and
walled-off systems is critical. I've got a family member that
works at one of the aerospace defense firms, and
on every plane you have a flight control system and
a mission system. And on these planes they are using
AI to help with flight controls. They're not letting AI
anywhere near the mission systems. The mission systems being the
(32:31):
systems that determine what the pilots are actually going to
carry out, right, So there is going to have to
be some intentionality we have about not letting AI into
every system that we create. Things like chemical plants and
nuclear missile launchers and things like that, could they hack in?
It's possible. A lot of people are trying to
hack into those systems today, and you have very, very skilled people,
(32:54):
you know, maintaining cybersecurity around them. So I think there's
going to just have to be places as a human race,
we just don't put AI.
Speaker 2 (33:01):
Well, here's what I actually think is going to happen.
It's pretty grim. In the not too distant future, fifteen
hundred years maybe, at some point fakes will convince one
warring party, you know, one entity, that their enemy is
going to attack them, and that initial entity will launch
(33:21):
a first strike. Or AI can be programmed to
hack into a launch system and actually launch a
first strike. But that said, in the meantime, before that happens,
since there's nothing I can do to make AI go away,
I'm going to utilize it to the
Speaker 4 (33:38):
Max until that happens.
Speaker 2 (33:40):
I have some specific questions after this break, for example,
some concerns like algorithmic bias and discrimination, which we've touched on,
misinformation, disinformation, data privacy and security, environmental impact, consumption of
Speaker 4 (33:55):
A lot of electricity, right and.
Speaker 2 (33:59):
Security, and malicious use, development of dangerous capabilities. We kind of
got into that, loss of control. Well, we've
been beating around that bush, so that's not going to
be all that different. We'll get to
Larry in Medfield next.
Speaker 4 (34:13):
On WBZ.
Speaker 1 (34:16):
It's Night Side with Dan Ray on WBZ Boston's news radio.
Speaker 2 (34:21):
We're discussing the ins and outs of AI. We're learning
about how it works, and we're learning about some of
the dangers. We haven't really talked about the good stuff yet.
We have Larry in Midfield him, Larry, you're on WBZ.
Speaker 8 (34:34):
Hey guys, how are you? Actually, my thing is, I'm
an electrician, and I use AI, using Alexa, to control
nearly everything in my house. I can basically say,
turn this on, turn this off, turn that on,
turn that off, and it works very well. It gets a little complicated
(34:55):
sometimes, where it doesn't do the command.
But I also recently had a thing: about five
months ago, I got a phone call at seven o'clock
in the morning from somebody in Connecticut
saying that my son had had an accident. He
crashed into a lady, she was pregnant, and I got
(35:20):
a phone call from an attorney, which was basically a
bogus call, but they said to me, my son was
going to call me in a few minutes, and I
did get a call from a telephone number with my
son's voice. It was, but what they said was they
(35:41):
took his voice off of Facebook and they turned it
into a statement that they wanted to relay to me.
And then they said, I gotta go, I gotta go.
But it was definitely my son's voice. At that
time, I called the Medfield police. They
came to my house. Some girl came to my house
(36:03):
to pick up ten thousand dollars in cash. I called
the police. They were waiting for her. She was innocent,
completely innocent. She was just asked to do it
and then bring the money to Mathie. So basically, it has
its goods, it has its pros, and it has its cons.
And when somebody takes something from Facebook with somebody's voice
(36:26):
and then turns it into a phone call using his
accent to just, you know, scare the crap out of me,
I don't want to.
Speaker 2 (36:38):
Yeah, I don't know if you can. It's, uh,
I don't know how you protect yourself except to believe nothing.
Speaker 4 (36:47):
Ever, I don't know.
Speaker 3 (36:49):
Yeah, with your family.
Speaker 8 (36:53):
The police came, the police came and said it
was called the grandfather scheme or whatever, and then
they actually followed this girl.
I don't know what happened after that, but it was
definitely my son's voice. But the only thing is I
had to get my wife out of bed and ask
(37:14):
her. I said, where is Connor? Is there any way
he's in Connecticut? No, he's in Denver. So it was completely
AI operated. So there's things out there that people have
to be very wary of.
Speaker 2 (37:26):
Quick question on this. The woman, you said she was
completely innocent. I don't see how she's completely innocent. She's
coming to get money from you to give to
Speaker 4 (37:33):
The bad people.
Speaker 3 (37:34):
She was an Uber driver.
Speaker 8 (37:40):
Yeah, Uber, just come to my house, pick up a box,
then take it from here and bring it to sort of.
Speaker 3 (37:49):
Yeah.
Speaker 8 (37:49):
And I said, the police showed up. I mean, if
you'd like to call the Medfield police, whatever,
you know. They actually showed up and they asked her,
where are you going to drop it off? She said,
I've got an address on the phone. And then after,
I don't know what happened. But it's amazing what
AI can do. It can do anything you want it to do.
(38:10):
I could tell it to flush my toilet and it can help.
It's unbelievable. It can do that.
Speaker 4 (38:15):
Seems pretty doable, that one. Wow.
Speaker 2 (38:19):
I wonder, I wish I knew the story of what happened.
I wonder if they said, okay, man, take the package.
Did they keep the money and deliver it, and give
it back to you, or...
Speaker 8 (38:26):
No, it was just, what I did
was I just wrapped a box. It was completely empty.
I waited for her to come because I didn't know
if she was in on the scheme at all. But then the
police found out, no, she's definitely not. And it was
actually, it wasn't Uber, I think it
might have been Lyft, because I'm not sure, one of them,
(38:48):
either Uber or Lyft, doesn't allow them to share the information,
and I don't know which one it is. Is it
Uber or is it Lyft?
Speaker 4 (38:55):
Thank you very much, Larry. Can you stay a little longer, Matt?
Yeah
Speaker 2 (39:00):
Okay, good, because I have some very specific questions coming
up on WBZ News Radio ten thirty.