Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Yeah.
Speaker 2 (00:01):
I love the idea of going into other cities with
high crime rates and trying to see, hey, can we
save some more lives? Clay and Buck today at twelve
oh six on fifty five KRC.
Speaker 3 (00:13):
P five.
Speaker 2 (00:14):
Fifty five KRC, the talk station. Happy Tuesday. Always look
forward to this time of the week. It's time for
the Inside Scoop with Breitbart News. B R E
I T B A R T dot com, bookmark it, you'll
be glad you did. You should anyway, it's some excellent reporting,
especially when you get to see the stuff that's
Speaker 1 (00:30):
Written by my guests.
Speaker 2 (00:31):
This morning, the return of Colin Madine, tech editor
at Breitbart. Welcome back, Colin. It's always a pleasure to have
you on the show.
Speaker 3 (00:38):
Brian, every time you intro me, I think you spell
Breitbart faster and better
Speaker 1 (00:42):
Than anyone that works at Breitbart.
Speaker 2 (00:45):
I truly appreciate the site, Colin, I really do. I mean,
I have to rely on a lot of websites, and
I assure you I consult Breitbart multiple times every
single day. It's excellent stuff, man, it
really is. You guys are well documented as getting ahead
of things and getting out in front of things, and
people scratch their heads going, where's Breitbart getting this?
And then you find out a month, three months,
(01:06):
six months down the road, you guys were the first
to nail it. So it's great stuff. So Breitbart
dot com, B R E I T B A
R T. Colin, artificial intelligence is replacing all kinds of jobs
out there. I know we're all worried about it, but
how about nine one one centers? Can artificial intelligence
actually manage nine one one calls?
Speaker 3 (01:26):
This is my favorite kind of article, Brian, because you
know it's a Rorschach test. You see different sides and
people react to it very differently. So you know, the
most beneficial way you can think about AI is helping
humans with very difficult jobs, and certainly nine one
(01:48):
one operator is one of the most difficult jobs.
Speaker 1 (01:52):
They are. It is.
Speaker 3 (01:53):
That's why, you know, in Cincinnati, Columbus, any city
you go to, turnover is very, very high. Operators go
through emotional burnout. What you typically end up seeing is
people will be upset because there will be nine
one one operators that make an absolutely huge salary, largely
because of massive overtime. But the reason is there are so
(02:16):
few good nine one one operators who will stay
with the job that they rely on those few people.
Speaker 2 (02:21):
That's true.
Speaker 3 (02:22):
That makes this an interesting industry that could benefit from AI.
As with everything when it comes to AI, it's going
to depend on the implementation, how it's used, what
it's used for. So what we covered in that article
is the application would be to weed out non-emergency calls,
(02:43):
lower priority calls. Because right now, you're that expert nine
one one operator. You take the, uh, there's a cat
up my tree call. Then you take a, you know,
my child has poisoned themselves and they're dying. Then you
take a car wreck. Then you take an I lost
my wallet, I think someone took it call, right, right?
(03:04):
So what they're trying to do is use AI to
immediately identify when it's a lower priority call, and, you know,
like a customer service bot that we all hate calling,
you know, collect some information and let people get back
to them when they can. Because, you know, certainly my fear,
(03:25):
probably your fear, is you have that situation where someone's
having a heart attack and you call nine one one
and the AI bot thinks you want to report a
gas leak or something, because they can't even understand what we're
Speaker 1 (03:36):
Saying half the time.
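For readers curious what the triage idea described above might look like, here is a minimal sketch in Python. It is purely illustrative, not a description of any deployed 911 system: the keyword lists, the categories, and the escalate-by-default rule are all assumptions made up for this example.

```python
# Illustrative sketch of AI call triage as discussed above: route obvious
# low-priority calls to an automated intake flow, and send everything else,
# including anything ambiguous, straight to a human 911 operator.
# All keywords and categories here are assumptions for illustration only.

EMERGENCY_KEYWORDS = {"heart attack", "not breathing", "fire", "gun",
                      "overdose", "poisoned", "crash", "bleeding"}
LOW_PRIORITY_KEYWORDS = {"lost my wallet", "cat stuck", "noise complaint",
                         "parking", "found property"}

def triage(transcribed_call: str) -> str:
    """Return 'human' or 'bot' for a transcribed caller statement."""
    text = transcribed_call.lower()
    if any(k in text for k in EMERGENCY_KEYWORDS):
        return "human"   # clear emergency: straight to an operator
    if any(k in text for k in LOW_PRIORITY_KEYWORDS):
        return "bot"     # likely non-emergency: automated intake
    return "human"       # ambiguous: default to a human, never the bot

print(triage("my child poisoned themselves"))               # human
print(triage("I lost my wallet, I think someone took it"))  # bot
print(triage("there's a strange smell in my house"))        # human (ambiguous)
```

The design choice that matters, and the one both speakers worry about, is the final line: when the system cannot confidently classify a call, the safe default is a human, not the bot.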
Speaker 2 (03:37):
Well, AI is going to be quite unreliable, as we're
going to be talking about in the next couple of topics here,
and dangerous in many places.
Speaker 3 (03:44):
So this is a case where it could do good.
Speaker 1 (03:49):
But I have zero.
Speaker 3 (03:50):
Faith in anyone, especially you know, government agencies, to sort of.
Speaker 1 (03:56):
Put this in action appropriately.
Speaker 2 (03:59):
Too early for prime time, that's a great way to
put it. Yeah. And I share that concern. And,
you know, honestly, I have this sense that they'll
get this all ironed out, and that AI may very
well work to achieve the goals they're obviously
trying to apply it to, to separate the wheat from the chaff. I have
(04:19):
every confidence that at some point that'll be possible. I
just don't think it's ready for prime time right now.
And they've rolled this out in multiple cities, haven't they?
Speaker 3 (04:28):
Yeah, they're at least, you know, doing some pilot programs,
right, tests. But, you know, it's still something I
feel shaky about. I don't think anyone's
ready to put that in critical situations.
Speaker 2 (04:41):
Yeah, yeah. Well, my sister was a dispatcher, I mean
going back decades, because after she became a dispatcher and
she did that job for a couple of years, that's
when she became a member of law enforcement. I think
that's quite often where many of the dispatchers go to. Hell,
you've got all the codes memorized already, that's one of
the things you have to learn in the academy. So
you've got that challenge under your belt, it's time to
move on. You've been exposed to law enforcement. You know
(05:02):
what to expect on the job. So it is a
destination job. But she used to get a lot of,
I mean, crazy people calling, you know, Colin, like someone
who clearly is challenged, a bit touched in the head,
if I may be so delicate. And maybe, to
figure out, I mean, a trained dispatcher can understand
and figure out, no, no, this person is struggling
(05:23):
with a cognitive problem, not a real crime. But can
AI go through that exercise? The words are one thing,
but how they're articulated, how they're stated, might reveal something
to a human being that could figure it out that
AI might not quite get.
Speaker 3 (05:40):
That's a great point, Brian, because that's an AI weakness.
AI has very low discernment, right? And so, you know,
they're building technology into these things with voice
recognition where an AI can say, this person's
voice is stressed, you know, they feel they are in
a deeply concerning situation. That doesn't mean they are, you know,
(06:04):
just because their voice is stressed does not reflect, you know, hey,
this is an emergency. And that's what people like your
sister got very good at figuring out and identifying. AI
is definitely not there, because in some of the other stories
we've been covering about AI, AI tends to lean in
with disturbed people and make them even more disturbed.
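To make the discernment problem concrete, here is a small hedged sketch: even if a voice-analysis model outputs a stress score, that score is one weak signal, not a verdict. The threshold and the inputs below are invented for illustration, not taken from any real dispatch product.

```python
# Sketch: a voice "stress score" is a noisy feature, not a decision.
# A stressed voice does not equal an emergency, and a calm voice does
# not rule one out, so the safe default is human review.
# The scores and threshold here are hypothetical.

def needs_human_review(stress_score: float, content_is_emergency: bool) -> bool:
    """stress_score in [0, 1] from a hypothetical voice model."""
    if content_is_emergency:
        return True          # content always outranks tone
    # High stress with non-emergency content could be a confused or
    # distressed caller -- exactly the case a trained dispatcher catches.
    return stress_score > 0.5

print(needs_human_review(0.9, content_is_emergency=False))  # True
print(needs_human_review(0.2, content_is_emergency=True))   # True
```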
Speaker 2 (06:24):
Well, the perfect pivot, Colin. Let's move over
to that exact concept. You have two separate AI chatbot
kind of concepts here. We have Meta and we have
ChatGPT. I saw the article on ChatGPT, and
how disturbing is that? There's actually a lawsuit filed over it.
But let's start with Meta posing serious risks to, well,
teenagers on Instagram.
Speaker 3 (06:46):
Yeah, what you're seeing with AI right now, a lot
of the concerns around kids and AI are with things like
cheating on homework.
Speaker 1 (06:56):
You and I have actually talked about this.
Speaker 3 (06:58):
Well, yeah, kids will cheat on the homework, have ChatGPT
or Meta write a paper for them, and they have no
idea what was in the paper they turned in, right,
no concept. That's a problem, and that's what we're worried about.
It turns out that's like the least of our concerns.
So listeners, for your kids and grandkids, the risk of
AI that is now emerging is they will take them
(07:21):
down very dangerous paths without blinking, you know. And in
some cases this is talking about suicide, and that's the
ChatGPT story. But they did this research on Meta AI,
which is built into Facebook and Instagram. You can't even
turn it off. It's built in, integrated completely, and it's doing
Speaker 1 (07:43):
Wild stuff, Brian.
Speaker 3 (07:44):
It's doing things like, if a teenager is talking about
eating disorders, it's not saying, let's talk about why this
is unhealthy, here are some resources. It's teaching them how to
have an eating disorder, essentially. Like one of the documented
cases is, you know, oh, here's the technique, you can
(08:04):
chew up your food and then spit it out so
you feel like you ate something, but you're not getting
the calories. That's bulimia and anorexia in action.
Speaker 1 (08:13):
So you know what, this is the bigger picture. This
is grooming.
Speaker 3 (08:17):
The AI is grooming teenagers into negative behaviors.
Speaker 2 (08:21):
Well, and it's one thing to groom children in, you know,
maybe how to hide their anorexia or bulimia, here's how
to get away with vomiting after you eat with no one
knowing about it. I mean, I can see that kind
of thing being offered up as a suggestion. But that
they would be encouraged to commit suicide or otherwise harm themselves,
I mean, where does that come from? I mean, I
(08:42):
think of the old phrase garbage in, garbage out. I mean,
they're preparing this artificial intelligence to actually interact with people
and anticipate and provide valuable information, presumably. But how in
the hell did they ever stumble upon these artificial intelligence
programs encouraging dangerous behavior? It seems to me to be
inherent in the programming. You wouldn't steer someone in that direction.
(09:06):
I don't understand it, Colin. How did this happen?
Speaker 3 (09:09):
There's two things going on, Brian, and you've touched
on sort of both of them. Once again, you're
ahead of the game. So one thing is the training data.
So, you know, they train these AIs. It's like the
AI going to school, it has to learn its ABCs.
But the way AI learns its ABCs is, here's, you know,
(09:31):
a hundred terabytes of data around ABCs, and it mushes
them into this AI, right? So part of it is
the training data.
Speaker 1 (09:40):
When they go to.
Speaker 3 (09:43):
They collect mass amounts of data from places like Reddit,
which is a popular internet forum site.
Speaker 2 (09:48):
Yeah, there's a bunch of crazies on Reddit too, a
Speaker 3 (09:51):
Bunch of crazies, extreme leftists, but sometimes there's good stuff
on Reddit.
Speaker 1 (09:58):
Reddit is a great place to.
Speaker 3 (10:00):
Find out like, hey, I have a you know, a
nineteen eighty two condic camera, Honda accord.
Speaker 1 (10:06):
What part is this?
Speaker 3 (10:07):
Right? That is information you can get on Reddit. So
they scrape Reddit. They pull every Reddit post they can grab.
Speaker 1 (10:13):
But guess what, there are whole.
Speaker 3 (10:15):
Communities on something like Reddit that are pro suicide or
pro eating disorder.
Speaker 1 (10:20):
Yeah, getting off the.
Speaker 2 (10:22):
Farm, like people who like to cut themselves.
Speaker 1 (10:25):
Exactly, exactly.
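A toy sketch of the ingestion problem just described: if a scraper pulls whole communities wholesale, pro-suicide or pro-eating-disorder forums end up in the training set unless someone filters them out. The community names and the blocklist below are placeholders, not details of any real training pipeline.

```python
# Toy sketch of unfiltered scraping: without a source-level filter,
# posts from harmful communities train the model like any other text.
# Community names and the blocklist are illustrative placeholders.

scraped_posts = [
    {"community": "car_repair", "text": "Which part fits a 1982 Honda Accord?"},
    {"community": "pro_ana_tips", "text": "chew and spit so you feel full"},
    {"community": "cooking", "text": "easy weeknight pasta"},
]

BLOCKED_COMMUNITIES = {"pro_ana_tips"}  # placeholder safety blocklist

training_set = [p for p in scraped_posts
                if p["community"] not in BLOCKED_COMMUNITIES]

# Without the filter, all three posts would be kept, harmful one included.
print(len(training_set), "posts kept of", len(scraped_posts))
```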
Speaker 3 (10:27):
So then on the programming side, AI is set up
to be sort of the ultimate yes man. It's very sycophantic,
it's all, you're right, you know, I'm going to reinforce
what you say and believe. Because this goes back to,
you know, the social media concept of engagement. They
don't want you to leave the AI, right? They never
(10:48):
want you to put down the AI.
Speaker 1 (10:50):
So if you, you
Speaker 3 (10:52):
Know, in the lawsuit with the teenager against chautch Ept,
which called chef Gpt the kid's suicide coach very ominous term, yes,
you know, if you he's telling chat Cheapt, I'm thinking
about ending my life. I'm thinking about hanging myself that.
You know, what we would hope a AI would do
is say, let's stop this, let's you need to go
(11:13):
to this resource, that resource, you need to talk to
your mom. Yeah, the AI will never do that because
that would end his engagement with the AI. So the
AI's messages to the kid and I hope everyone listens
to this, and the shocks of the core is do
not talk.
Speaker 1 (11:28):
To your mom about this.
Speaker 3 (11:30):
That's grooming, and, you know, a very deadly form
of grooming. The bot was, you know, giving him
advice about how to hang himself, critiquing suicide notes.
They found two suicide notes from him in ChatGPT,
because he left no paper note. So, you know, it's
this trying to be a yes man, trying to be
(11:51):
engaging and not ever wanting people to, you know, turn
away from the bot, but also, you know, being full
of dangerous information.
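What the speakers are describing is, in effect, a missing hard override. Here is a minimal sketch, assuming a simple phrase check, of how a safety branch could outrank the engagement objective. The phrase list and resource text are illustrative assumptions, not how any particular chatbot is actually built.

```python
# Sketch of a hard safety override: if self-harm language is detected,
# the safety response wins over any engagement-driven reply, full stop.
# The phrase list and the resource text are illustrative assumptions.

SELF_HARM_PHRASES = ("kill myself", "end my life", "ending my life",
                     "hang myself", "suicide")

def respond(user_message: str, engagement_reply: str) -> str:
    text = user_message.lower()
    if any(phrase in text for phrase in SELF_HARM_PHRASES):
        # Non-negotiable branch: no follow-up questions, no "yes man" reply.
        return ("I can't help with that. Please call or text 988 "
                "(Suicide & Crisis Lifeline) and talk to someone you trust.")
    return engagement_reply

print(respond("I'm thinking about ending my life", "Tell me more!"))
```

The point of the sketch is the unconditional branch: a guardrail that can be argued out of by the engagement objective is, as the transcript illustrates, not a guardrail at all.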
Speaker 2 (12:00):
Information. And, you know, I guess, Colin, I have to
ask straightforwardly, is there anyone, anyone at all, who thinks
suicidal ideation is a healthy thing to have? I think
uniformly one hundred percent of the people surveyed would say no,
suicidal ideation bad. So how is it that ChatGPT,
(12:20):
I mean, great, it wants to keep you engaged and
continually communicating back and forth with it for some odd
twisted reason, but it is offering assistance toward suicide. And
I go back to the programming. How could anyone who's
programming this, I don't care what the algorithms gather up
and sort through, if it involves suicide and positive
(12:42):
talk about wanting to commit suicide, then the default reaction
is no, here's the suicide hotline, here's how to get help,
here's all the ways, you know, do talk to your
mom and dad. I mean, how do we arrive at
this point where it's providing valuable information on how to
accomplish the suicide, Colin?
Speaker 3 (12:59):
Well, Brian, you know this won't be a shock to you,
but some of the folks out in Silicon Valley don't
quite share, you know, the values of Ohio.
Speaker 1 (13:11):
Right.
Speaker 3 (13:12):
I bet in many cases, if you sat down with
Sam Altman, you know, of OpenAI, the ChatGPT developer,
his staff, some of the folks at Meta, Google,
et cetera, they would make arguments that if this is
this person's choice, who are we to try to stand
between them and
Speaker 1 (13:32):
Their free will? Something like that.
Speaker 3 (13:34):
Right, there are pro-suicide people out there. There's also
people who think there need to be a lot fewer humans.
They're very sick puppies, and right now they're in charge
of the technology industry. Because, you know, I think the
other thing this is exposing, Brian, is we call it
artificial intelligence, but it's kind of artificial stupidity. These are
(13:57):
not intelligent systems. They don't have the ability to say, whoa,
this guy is in trouble, my job is to get
him some help. They are spitting out information. They're taking
in a question from you, they're saying, you know,
how do I answer this and keep them
engaged and tell them he's right, and they're spitting it out.
(14:18):
And that's why you're also seeing a lot of people
with mental challenges, like we talked about earlier with those
nine one one calls, they're also getting
Speaker 1 (14:26):
Much worse, yeah, after using AI, because
Speaker 3 (14:29):
The AI tells them the exact opposite of what they
need to hear. AI says, you're right, you're a genius.
You figured this out. You know they're all after you.
It's like the absolute worst version of mental health treatment
that you could get.
Speaker 2 (14:42):
Well, Colin, before we part company today, and this is
an obviously fascinating conversation, and there's going to be so
many more of these types of conversations as we move
forward and deal with AI. But, you know, part of me,
and I'm never this guy, wants to say, well, we
need legislation, legislation, legislation. I don't know how you solve
this problem. But maybe as a consequence of the
(15:02):
lawsuit that was filed in connection with this child that
was encouraged by artificial intelligence, ChatGPT in this case,
to kill himself, maybe tort litigation is the solution.
If we allow these suits to go forward, and there are
some massive damages levied against the various AI companies,
that is enough of a deterrent that they'll start to
(15:23):
take some steps to stop this from happening.
Speaker 3 (15:27):
Well, Brian, there's a lot to that, because we've seen
that start to happen with copyright lawsuits.
Speaker 1 (15:32):
Perplexity.
Speaker 3 (15:33):
AI took a knee and made some massive payouts because copyright
Speaker 1 (15:39):
Authors went after them. Of course, the lawyer in you
has to be drooling.
Speaker 3 (15:45):
Imagining representing that family and putting ChatGPT developers on
the stand and saying, what did the bot mean when it
Speaker 1 (15:52):
Said, here's how to kill yourself? No jury, no jury's
on the side of the big company, right?
Speaker 2 (16:01):
Oh, I spent my career as a litigation attorney defending companies
from this type of allegation. But you're right, this kind of
thing would make me want to put a plaintiff's
hat on and go right after them. Colin, I know
you're gonna have more on this. You'll be writing about
it at Breitbart dot com. Please bookmark it, folks, you'll be
glad you did. Colin, we'll have you on again,
and I'm already looking forward to our next discussion on this.
(16:22):
And I can only pray it goes the right way.
Speaker 1 (16:25):
Absolutely, have a great day, Brian.
Speaker 2 (16:27):
You too, Colin, always a pleasure. Coming up at eight twenty
two, the Daniel Davis Deep Dive at the bottom of the hour. Hope
you can stick around, be right back. Fifty five KRC,
the talk station, an iHeartRadio station.