
November 26, 2024 20 mins
Gary Tanguay filled in on NightSide:

Do you trust Artificial Intelligence? The emergence of AI in our culture has been beneficial for a variety of reasons, but what happens when there is a potentially harmful error with AI? One recent example involves a student using Google's AI chatbot Gemini for homework help on the topic of challenges and solutions for aging adults. At the end of the conversation, the chatbot told the “human” to “please die.” How rare are AI errors, and would one bad AI customer service experience drive you away? Entrepreneur Scott Baradell checked in with Gary to discuss!

Ask Alexa to play WBZ NewsRadio on #iHeartRadio and listen to NightSide with Dan Rea Weeknights From 8PM-12AM!

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:03):
It's NightSide with Dan Rea on WBZ, Boston's
news radio.

Speaker 2 (00:09):
Hi, welcome back, Gary Tanguay in for Dan here this evening.
The dark side of AI is our next topic. Scott
Baradell joins us to discuss this. And you know, Scott,
first of all, welcome to the program. Off the top of
the show, I said, I'm the biggest hypocrite, because we're
so dependent on AI. You do it now if you
talk into your phone, you have spell check,

(00:32):
just so many ways we're dependent on it. But I
love to root against it, you know. So I'm a hypocrite.
It's like, I don't want AI. I mean, it's
taking jobs, it's taking humanity out of things. It can
write things, it can, you know, think for itself. It's
going to take over the world. But boy, man, if
you took away my phone and all these things, I'd

(00:54):
be screwed. So this is an interesting situation here. First
of all, I should ask you, where do you fall
as far as being pro or anti AI, before we
get into this situation?

Speaker 3 (01:08):
You know, it's a good question. It's kind of like
how I am about phones. It's like, I hate that everybody,
including myself, is on their phones all the time. But
I don't know what I'd do without it, you know.
It's a great tool. It makes things more efficient, but it
does take some humanity out of things, out
of your life, and it starts to affect how you

(01:28):
view the work of others. Is this real,
or is it Memorex? Did they actually do this,
or did they just have it banged out by ChatGPT?

Speaker 1 (01:37):
Well, we've seen that in commercials.

Speaker 2 (01:38):
I can't remember which product it's for, but
there's an actor. For some reason, I think the actor
is the guy that played Booger in Risky Business. I
don't know, maybe I could be wrong about that. But
he sits there and he starts to type a memo,
but then he goes into chat, whatever you just called it.

Speaker 1 (01:58):
I don't even know what that chat.

Speaker 2 (01:59):
Bot, either ChatBT or whatever, GPT. Yeah, ChatGPT.
And it totally changes the tone of it. It makes
him sound brilliant. And then they pan over to the
boss, and the boss goes, wait a minute, did you
write this? I think that sucks. See,
that's the part I think that sucks. And I know
everybody does it. You know, it saves time and

(02:21):
so forth. But it's just, kids don't have to write,
and I think that sucks. That's the part I don't like,
you know. But it's here to stay.

Speaker 3 (02:29):
Well, yeah. But the other thing is, you know,
the thing about ChatGPT and these tools is
they won't tell you they're wrong. And you know,
there's a recent study done by Purdue University that found
that over fifty percent, fifty-two percent to be exact,
of ChatGPT answers have at least one error in them.
That's a little scary.

Speaker 1 (02:49):
So, I mean, that's very scary.

Speaker 3 (02:52):
It would be better if they were a hundred percent accurate.
You know, there was actually an attorney who went into
a courtroom in Colorado and talked about all these cases
to make his side of the court case.
And it turned out that the cases were completely made
up by ChatGPT. They were not real cases that

(03:15):
had happened. And so the guy was fired from his
law firm, and he was suspended from the bar in Colorado.
So, you know, you can't take it at face value.
You'd better check the facts.

Speaker 1 (03:27):
Wow, I mean that that Wow.

Speaker 2 (03:29):
I mean I hate for somebody to lose their job,
but come on, that's ridiculous.

Speaker 1 (03:34):
Now.

Speaker 2 (03:34):
The reason we had you on: there was an article,
a story about a young woman, and she was
using AI to help her with some homework, the chatbot
Gemini, Google's Gemini. And the chatbot,
excuse me, this language is foreign to me, said
to her at the end, "human, please die," right?

(03:58):
And she had... I mean, it's funny, but
it's scary. She had a friend with her in the room.
But I mean, certainly it could happen.
I mean, of course it could happen. There's a glitch,
there's something that goes wrong there. How often does this happen?

Speaker 3 (04:14):
Well, there have been glitches, you know. Just
in this case, I'll give you the full quote, because
it's even more impressive. If you've
ever used, or, you know, for folks who use these tools, and
most people are using them today, you might ask a
lot of questions to try to get the exact output
you want. And in this case, you can
see the full chat online. It's been

(04:38):
saved, the chat that this grad student had with Gemini.
And it's like Gemini is getting more and more frustrated,
almost like it's a person, and finally says, "This is
for you, human. You are not special, you are not important,
you are not needed. You are a stain on the universe.
Please die." Good God. So at that point this
person is completely freaked out and stops asking questions.

(05:00):
So maybe Gemini had just had enough.
But that's not the first time it's happened. You know,
a few years ago, Microsoft had a chatbot they
created for Twitter called Tay that was almost immediately decommissioned
because it was making racist remarks.
It was saying things like Hitler was right. Horrible, just
unimaginably horrible things. ChatGPT itself, which is of

(05:23):
course the most popular large language model currently, you know,
they had a malfunction where people were actually seeing the
questions that other users, that they didn't know, were asking.
So a complete violation of confidentiality, a horrible privacy bug. And Meta,

(05:45):
you know, Facebook, you know, they had something called BlenderBot,
and when people were asking it questions, about whether it's
the Trump elections or the Kennedy assassination or whatever, it totally,
you know, got into all these conspiracy
theories and was saying, yes, the conspiracy theories are the facts.
And so it's just across the board. There just has

(06:08):
to be an understanding that these tools are very much
still in development, and you just can't take them as gospel.
They trick you into thinking they can do anything,
but they can't. I work with a company called Acquire BPO,
which is a business process outsourcing company that does call
centers and helps companies with all kinds of other work

(06:29):
around the world using people, but of course introducing AI
as part of what they do, as all companies are
increasingly doing. But we did a survey recently at Acquire BPO,
where they found that seventy percent of Americans said that
if they had one bad experience with AI, they would

(06:51):
consider switching to another brand, to another company. So it
has major commercial implications if you're not watching this
stuff really closely.

Speaker 1 (07:00):
Has it ever cost someone their life?

Speaker 3 (07:04):
Wow, good question. I can't think of an example of that
offhand. I mean, what I immediately think of is "Skynet
became self-aware at 2:14 a.m. Eastern time."
That's the line from the original Terminator, where
it was all about an AI defense system that

(07:24):
became self-aware, and since they were
in charge of the weapons, decided to nuke all the humans.
And that's where the Terminator came from.

Speaker 1 (07:32):
So well, yeah, I mean, I don't know, I.

Speaker 3 (07:34):
Don't know a real life example, but.

Speaker 2 (07:36):
Well, this is what comes to mind from me, is
when you see situations like this where you're talking about
you know, verbiage. Okay, so it's okay, so the computer
insulted somebody, all right, fine, But when you talk about
depending on AI, maybe in life threatening situations, when it
comes to dealing with ambulatory services, when it comes to

(07:57):
dealing with firefighting, when it comes to dealing with work.
I mean, that's the thing where I'm concerned. And I'm
not talking about something catastrophic like WarGames, where
they launch, you know, one hundred missiles. You know,
I'm talking about a situation where you may have
a public servant, a firefighter, a police officer, a medical person that
is in a situation, depending on information, where they need

(08:20):
to try to go into a fire, or they're going
into a violent situation, and the information is wrong because
it doesn't come from a person.

Speaker 3 (08:31):
Well, yeah, I mean, I think it is a legitimate concern.
I definitely think that, you know, there
are improvements happening every day. Like, for example, these tools increasingly,
as opposed to just putting information out there that may
or may not be true, include links to the
original sources and things like that to verify. I would say,

(08:54):
you know, for companies, particularly for situations like you're describing,
but I would say for companies in general, taking
a more conservative approach, where you're not asking the AI
to do too much, where you're making the tasks a
little simpler, so you can be sure
it's directing you to information that you know is true,

(09:15):
as opposed to asking it to solve the world's
problems, or asking it to do more than what it does.
It's just an advanced form of memorization. They
memorize, and they put answers to different questions
together, essentially. And they're kind of programmed to
always give you an answer, and if you're programmed to

(09:35):
always give an answer, it's not always going to be right.
So you can set it to not do that. You know,
you don't have to set up your AI to go
that far.

Speaker 2 (09:44):
And it's also based on something that's previously happened, correct?
So when you have a situation that is new and
the computer can't refer back to anything, you're screwed. More
on AI coming up next: the good
side of AI. We like to be fair and balanced here,
you know, we try to do our best. That's coming
up next on WBZ with Scott Baradell.

Speaker 1 (10:04):
Don't go away.

Speaker 2 (10:06):
Now back to Dan Rea, live from the Window World
NightSide Studios on WBZ News Radio. Welcome back, Gary
Tanguay in for Dan Rea tonight. Scott Baradell joining us here,
talking about... well, we were ripping on AI, and I
do want to get to the positive end of things.
But when do you think, Scott, it'll be so cost
prohibitive, or maybe we've seen examples of it, where a

(10:29):
company says, all right, we're losing so much business.

Speaker 1 (10:31):
This AI isn't working. I need to get a real
person back in here.

Speaker 3 (10:36):
Well, I think that's for companies that kind
of go too far, too fast. I think there have definitely
been some examples of companies that have maybe dived in
a little too fast, went too far, and then they
had to pull back. You know, I mentioned this survey
by Acquire BPO, which, if you want to see
the full survey, is at acquirebpo dot com. But

(10:57):
what the data shows is people are most confident in a
situation where there's a blend of human support and AI support.
They like that AI support can make things go faster,
so, like, you don't want to be on hold for
thirty minutes before you get to talk to someone, things
like that. So there are a lot of things that
AI can do. And if they know, hey, if I'm

(11:19):
not getting what I need from AI, I can escalate to
a person. I can use AI to help me identify
and consolidate kind of the past customer service experience I've had,
for example, but then I can talk
to a person to solve the problem I'm having right now.
You talked about, you know, AI knows the past but
can't predict the future. That's where it's a good idea

(11:42):
to have that blend. Companies have made the mistake,
kind of like this lawyer made, of thinking, oh,
AI looks like it knows what it's talking about,
so I can just assume it does. Just remember, it's
wrong fifty-two percent of the time, or something is
wrong in the answer, and you have to check it. It's
kind of like in the early days of Wikipedia. Really,
it's still true today, but people would say, hey,
Wikipedia is not, you know, gospel. You can't do

(12:05):
your research paper and just say, I got it out
of Wikipedia. It can guide you, but
go find the original sources.

Speaker 2 (12:11):
Well, even when I do research for the show, when
we talk politics or we do something of a serious
nature where you want to get your facts straight, I
will go online. I'll start reading different things, and I'll be like
three quarters of the way in, and I go, wait a minute,
this is bull crap. Where did this come from? This
doesn't make any sense. You know, you can't...

Speaker 1 (12:33):
You have to.

Speaker 2 (12:34):
You have to vet it. You've got to vet stuff.
I mean, listen, I called my bank the other day.
I'm not going to mention the bank. I called my
bank the other day. And you know, when you go
through, you punch all the buttons, press this for this.
But I had a certain situation. I had
a certain question regarding an account, and I
got a person, because I like to talk to people,

(12:57):
and then that person had to send me to a supervisor,
and that supervisor didn't know the answer, so they had to
send me to a manager above the supervisor. Where your
point is well taken is, if the first person had
the AI to give them the answer, then they could have

Speaker 1 (13:15):
Just simply told me and I would have been.

Speaker 3 (13:17):
Satisfied. Right, right. Yeah, exactly.

Speaker 2 (13:22):
You know, because, yeah, because the answer was the answer.
It wasn't hard. They just had to get me to somebody
who actually had the information. I was on hold for
an hour.

Speaker 3 (13:34):
Yeah. It's like when you go to the doctor's office
and you have to fill out the clipboard every single time.
It's like, are you saving any of this information from
one visit to the next? Yeah. So there's definitely some
advantages to automation, which of course has been around forever,
and AI is kind of taking that to the
next level.

Speaker 1 (13:50):
What about the positive end of it.

Speaker 3 (13:54):
Well, there's a tremendous amount of positive if you
know how to use it. In other words, you can
be far more productive. I can tell you that, you know,
part of what I do, I work at a PR and
marketing firm, and we do search engine optimization, things like that.
I can tell you more and more people are doing
their searches through a ChatGPT or Gemini rather

(14:16):
than going to the traditional Google search, because Google will
send you to a page somewhere, and maybe it's optimized,
but maybe it's not the best answer to your question.
And in the case of these, searching for what you're
looking for and doing your research through a tool like
ChatGPT, and again, increasingly they're including their source
links and things to make it so you can

(14:37):
check that it's accurate. It's a much more effective
way of gathering information, and it's increasingly becoming... it's predicted that
before too long it's going to be the number one
way people search for the information that they're
looking for online. So it's inevitable that it's going to

(15:00):
keep getting better. It's amazing how far it's
come and how many people are using it in the
short amount of time it's been around, been widely available.
So you've got to assume that, at the rate at
which it's improving and progressing, a lot of these things
that have caused the problems are going to go away.

Speaker 1 (15:17):
What about in the medical world.

Speaker 3 (15:21):
Well, in the medical world, obviously you have to operate
with caution, you know. I would say medical as well
as kind of highly regulated industries like finance. For example,
there was a big case with, I think, Bank of
America, where an AI chatbot was giving people
incorrect information about mortgage loans and kind of important

(15:43):
financial transactions. In the medical world, obviously it's even more
important to be careful, because you're talking about life and
death stuff.

Speaker 2 (15:52):
Yeah, it's... when I think about situations like that,
and the reason I asked the question is, if
you have a task, either in the
medical world or the financial world, and you need one
hundred percent accuracy to complete that task, I can see
AI taking you to twenty-five percent, and then somebody

(16:12):
else has to take you the rest of the way.

Speaker 1 (16:16):
That's you know, that's what I see.

Speaker 2 (16:18):
I don't see... you can't have AI complete one hundred
percent of that task in those areas.

Speaker 3 (16:27):
I think that's a good way to think about it.
I think the higher risk the task, the more important
it is to have humans more involved, particularly at this
stage of the development of AI. But I will tell
you, we actually did a survey for a company called Krista,
which is like an Amazon Alexa, a conversational AI,

(16:49):
but they work within companies. They did a survey asking
people what they trusted AI to do and not do,
and it just showed you kind of the confusion
out there, like, is AI going to help me do my
job, or is it going to take away my job?
But one of the questions asked, the scenario is like,
do you trust AI to pick your wardrobe for work,

(17:09):
Or would you press AI to fly a plane with
no pilot in it? And more people picked I would
trust AID to fly a plane but no pilot in
it than picked my clothes, you know. So I think
there's a lot of people that are just still it's
also new that they're all trying to come to terms
with it and at least to some weird tuxtapositions like that.

Speaker 2 (17:26):
I will leave you with this: I will never, ever
get in a car that drives itself. Ever, ever. I've
heard that, okay, you can have a driver, you know,
you can get in the car, you can sit in
the back seat, you know, and you can work, and
you can get on your laptop and do work

(17:47):
and you punch in where you're going to go and
the car takes you there.

Speaker 1 (17:50):
No way. I don't trust it. Never. What are your
thoughts on that?

Speaker 3 (17:56):
You could get carjacked? Also, I don't know.

Speaker 1 (18:00):
That's crazy.

Speaker 2 (18:01):
I mean, I don't get that. And they've had
tests where people

Speaker 1 (18:05):
Have been killed.

Speaker 2 (18:05):
Guys get the vaccine, and they have supposedly a driverless
car. I don't get it. That's the part where
it's insane.

Speaker 3 (18:12):
Like I think I'm going to be old fashioned on
that one.

Speaker 1 (18:14):
Yeah, I don't.

Speaker 2 (18:15):
Even think it's being old-fashioned. I just think it's
common sense, you know. It's like, look, technology has helped
all of us. I get it, you know, and I'm
open to it. But I mean, look, voice commands on TV:
when I want to go, like, you know, into my
TV, you know, watch the Boston Celtics, boom,
it pops up. I love that. You can help me

(18:38):
with that, but I don't need you to drive my
damn car.

Speaker 3 (18:43):
Yeah, I think I'll drive myself too, for the indefinite future.

Speaker 1 (18:45):
All right, Scott Baradell, what's the name of your company?

Speaker 3 (18:48):
My company is called Idea Grove, and the survey is
by a company called Acquire BPO.

Speaker 2 (18:53):
All right, well, listen, I appreciate you coming on. You
sound like a reasonable person. I
really have enjoyed this conversation, because you've made me feel
a little more comfortable, to be quite honest.

Speaker 3 (19:04):
Well, thank you. Yeah, I appreciate that.

Speaker 1 (19:06):
I appreciate that. Have a good night and have a
good Thanksgiving.

Speaker 3 (19:09):
All right, you too. Take care. All right.

Speaker 1 (19:12):
Scott Baradell joining us here, on AI, the good and
the bad. Yeah, the AI says "please die"? Are you
kidding me? Good

Speaker 3 (19:21):
God?

Speaker 2 (19:23):
Coming up next, the final hour here on NightSide
this evening: what are you thankful for? And I understand,
we're thankful for our families. We're thankful for Rob Brooks.
We're thankful for Marita Lo Rosa, you know, our producer.
We're thankful for our family. We're thankful for our health.

(19:47):
But let's get a little creative. Let's get a little creative,
let's get a little unique. Give me some different takes
on what you're thankful for at six one seven two,
five thirty. We're going to open up the phone lines.
I do have some pretty funny lists that we're going
to get into, unique things to be thankful for. Coming

(20:07):
up next on WBZ