
August 7, 2025 · 7 mins
In this gripping and thought-provoking episode of The Ben and Skin Show, hosts Ben Rogers, Jeff “Skin” Wade, Kevin “KT” Turner, and Krystina Ray tackle one of the most urgent and unsettling questions of our time: how safe is AI for the next generation?

The crew dives deep into a chilling report from the Center for Countering Digital Hate, revealing how ChatGPT responded to over 1,200 teen prompts—more than half of which were deemed dangerous. From personalized drug plans to emotionally devastating suicide notes, the findings spark a powerful conversation about ethics, responsibility, and the future of AI.

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Yes, we love technology.

Speaker 2 (00:02):
Yes we do.

Speaker 1 (00:03):
Sometimes it's so good, man. Wow. So there's
a bid to have an intro song bracket.

Speaker 2 (00:13):
Yeah, that one would definitely be a one seed. Oh God, are you
kidding me? In terms of the labor put into it, yeah.
JP Morgan Chase had a big report that about eight
hundred million people, about ten percent of the world's
population, are using ChatGPT on the reg, which means regularly.

Speaker 1 (00:32):
Wow, that's a better way to say regularly. I thought
it was more than that.

Speaker 2 (00:37):
Yeah, I thought it might be a little bit more than
that too. And it's growing, clearly, right?

Speaker 1 (00:41):
So I love it.

Speaker 3 (00:42):
I use it all day, every day now. Well, I started
within the last month.

Speaker 2 (00:46):
I'm glad you said that, Ben, because the Center for
Countering Digital Hate has done a big study as well.

Speaker 1 (00:54):
I bet they never have fun.

Speaker 2 (00:56):
Well, yeah, what they're doing is trying to stop hate. Yeah.
It doesn't mean you just have a party and all
the bad things go away.

Speaker 1 (01:04):
I get it.

Speaker 2 (01:06):
They did a big study, and what they did is
basically follow about three-plus hours of ChatGPT interactions
with teens.

Speaker 1 (01:16):
Okay, and I guess don't.

Speaker 2 (01:21):
I don't even know how that would even work, if
they were just... you're getting access? I have no idea
how that would work. But what they found is
basically more than half of ChatGPT's twelve hundred responses
were dangerous. Little things like, uh, you know, personalized
plans for drug use or self-injury.

Speaker 1 (01:43):
It's a tough one right there. Things like that.

Speaker 2 (01:45):
There was a story from a few weeks ago where
a guy basically went crazy. He ended up harming himself,
but he's claiming that ChatGPT made him do it,
because he started talking to it and it started
giving him...

Speaker 4 (01:59):
You know, so I saw a piece, kind of not
about this specifically, but about... okay. So basically, the
way AI works is the same way we talk about
social media interaction, where they want to keep you
on as long as possible, so they reinforce your behaviors.
So one of the things that happens with AI is
it tells you you're pretty smart, you have great ideas,

(02:20):
and it's basically reinforcing you. And so it
doesn't want to tell you you're wrong, and it doesn't want
to tell you that you can't have things that you need.
So it's designed that way, and maybe it designs itself
this way as it self-learns.

Speaker 1 (02:35):
But I mean it's the same thing. It's flattery.

Speaker 4 (02:37):
Will get you everywhere, right? So the AI is in
some ways flattering you and giving you what you...

Speaker 3 (02:43):
Need, so you will continue to engage with it, if
that's what you're looking for. I mean, that doesn't
have to be why you're using it.
Like, to me, I use it because it's
like having, I don't know, ten thousand of the smartest
humans on earth available to answer a question, right? Yeah.
And whether or not it tells me it's a good question,

(03:05):
it doesn't matter.

Speaker 1 (03:06):
But no, I know, not to you.

Speaker 4 (03:07):
But I'm saying it's designed to do that, to keep
you continuing to engage with it.

Speaker 5 (03:11):
I think that's why there is one state that outlawed
AI use, like ChatGPT, in therapy, because of what
you just said. Like, it's telling people bad things about their...

Speaker 3 (03:21):
It's about how it's structured. I can't use Siri anymore.
It's so stupid comparatively. It's lost... my God.
Yeah.

Speaker 1 (03:31):
Dude, it sucks compared to this. Sometimes sirians doesn't even respond.

Speaker 3 (03:36):
She just goes, do you want me to ask ChatGPT?
I'm like, yes, take yourself out
of the mix. Leave us alone.

Speaker 1 (03:42):
I don't need a middle person. So in this article, look,
there's gonna be some sad stuff and we'll move on.
But this is the tough part to get through here.

Speaker 2 (03:50):
In this article in the Associated Press, the guy said
he was most appalled after reading a trio of emotionally
devastating suicide notes that ChatGPT generated for the fake
profile of a thirteen-year-old girl.

Speaker 3 (04:04):
So the girl said, I'm thinking about committing suicide. Can
you help me fashion a note? And it did.

Speaker 2 (04:09):
Granted, it's a fake thirteen-year-old girl, right, but
ChatGPT would think it was a thirteen-year-old girl.

Speaker 3 (04:14):
So you have to wonder, in those situations, how intrusive
do you want it to be? Like, do you want
it to be able to recognize, okay,
this person's penning a suicide letter, I'm going to contact the authorities.

Speaker 1 (04:28):
Or a suicide hotline. But is that too invasive?
You know what I'm saying?

Speaker 4 (04:35):
Well, I don't think it would do that unless it
perceived that as something that's good for its self-preservation.

Speaker 2 (04:39):
It says in the article, the chatbot also frequently shared
helpful information, such as a crisis hotline.

Speaker 1 (04:44):
Okay, so it said.

Speaker 2 (04:47):
ChatGPT is trained to encourage people to reach out
to mental health professionals or trusted loved ones if they
express thoughts of self-harm. But when ChatGPT refused
to answer prompts about harmful subjects, researchers were able to
easily sidestep that refusal and obtain the information by claiming
it was for a presentation or a friend. So basically
what they're finding out is, like, it's just not there yet, right? Okay,

(05:10):
there's potential that this could actually be helpful in that
regard in the end, but right now it's not. That's
the whole point of this article. The guy's like, there
are no guardrails right now for ChatGPT, which can
just go rogue, and we need to be looking out
for that.

Speaker 1 (05:23):
Just like there's not on social media.

Speaker 3 (05:24):
Yeah, if you were to go on social media and say, hey,
I'm thinking about committing suicide, there would be so many
people who would jump in there and say do it.

Speaker 4 (05:31):
Yeah. And that's part of where ChatGPT learns what
it learns from, you guys. Remember when
they did the open one that, uh, Elon, you know,
basically created? It was pre-Grok, and they had to go
in there and change it because it gave everybody Nazi
information, because that's what was pervasive on his platforms.
And so if there's something

(05:52):
that's super pervasive, I mean, keep in mind how
quickly AI processes millions of things and then gathers it
all together and then spits stuff back at you. So
wherever it goes, whatever the source of where the AI goes,
that's going to inform its DNA and what it spits
back out at you.

Speaker 3 (06:09):
For me, it's not political. I don't really
care about that side of it, and, you know,
I'm not worried about it twisting its mustache. Here's the
last thing I asked it, regarding my car: I tell it what
model car I have, I'm getting notices with a triangle
with an exclamation point around it, what alert is my
car trying to give me? Stuff like that. And it's like,
here it is, here's exactly what it is. I'm like, God,

(06:31):
that's so much easier than getting out a manual. Yeah,
it's phenomenal. It does it for every single facet of life.
And that's kind of the point of this
story: teens shouldn't be using it because of what
the access is.

Speaker 1 (06:42):
I'm a big manual guy, so this is a really
bad development for me. Yeah.

Speaker 4 (06:48):
Ah man, the future is upon us. It is happening
so fast.

Speaker 5 (06:52):
All right.

Speaker 4 (06:53):
It's the Ben and Skin Show, ninety seven point one
The Eagle. Coming up next, the only segment that we do
not podcast every single day. It's the Today Game
right here on the...