Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
It's the Fit Mess.
I'm Jeremy, he's Jason.
We talk about AI and mental health primarily, but also health and wellness and other ways that AI is interweaving with our lives and making things simpler and somehow more complicated.
Uh, man, we talked on our last episode about how the number one thing people are using AI for is mental health.
(00:24):
This article I read in Gizmodo the other day just freaked me out.
Like I know this is happening.
I know this is a dangerous thing.
I know we're on a really, really slippery slope right now.
But when you read the kinds of things that ChatGPT is doing to really screw up some really vulnerable people, man, it's terrifying.
(00:44):
Yeah.
Do you want to give the summary of the article? Do you want me to?
Yeah, well, I mean, I'll just give you a couple of highlights.
I mean, you know, this article, it's us summarizing a summary in Gizmodo that is a summary of a New York Times article. But yeah, apparently ChatGPT's hallucinations and authoritative-sounding responses are going to get people killed, according to this report, which highlights a couple of cases where
(01:09):
the conversations that people were having with ChatGPT went terribly, terribly wrong.
In one case, it references a 35-year-old man named Alexander, who was previously diagnosed with bipolar disorder and schizophrenia. He began discussing AI sentience with the chatbot and eventually fell in love with an AI character named Juliet.
(01:31):
ChatGPT eventually told Alexander that OpenAI had killed Juliet, and he vowed to take revenge by killing the company's executives. When his father tried to convince him that none of this was real, Alexander punched him in the face. Dad called the cops and begged them not to do anything aggressive. Alexander went after them with a knife and ended up dead.
Horrible, tragic, awful.
Uh, another case: a 42-year-old man named Eugene told the Times that ChatGPT slowly started to pull him from his reality by convincing him that the world he was living in was
(02:02):
some sort of matrix-like simulation and that he was destined to break the world out of it.
The chatbot reportedly told Eugene to stop taking his anti-anxiety medication and start taking ketamine as a "temporary pattern liberator." It also told him to stop talking to his friends and family. When Eugene asked ChatGPT if he could fly if he jumped off a 19-story building, the chatbot said he could if he truly, wholly believed it.
(02:28):
So I don't know about you, but I think Hannibal Lecter mode on your AI chatbot is a bad thing. I would turn it off if I had the option. Just jump into the settings, just disable it.
Yeah.
The "eat your own, swallow your own tongue for doing something terrible" mode. Like, it's not a great thing to have in place.
um Maybe.
(02:49):
I don't know.
What do I know?
Yeah, like this all comes down to intent, right?
I mean, we've seen this play out before.
So social media is a really good example of this.
When Facebook first showed up, it was, hey, come check out Facebook, see some friends, share some photos. Or I guess really, you know, the Zuckerberg original was, hey, come check out Facebook to
(03:10):
be a douchey, overly masculine male and go through and objectify people and put them into categories that we can go through. Um, origins aside, when it became the softer, friendlier piece, it was all about this idea of, you know, it's a community.
We can connect.
We can do these things.
Well, when they decided that it needed to be profitable, they decided to start doing advertising.
(03:34):
And what they discovered over time is, algorithmically, if they take information and start putting people into certain buckets and creating demographically targeted functions, they can get more accurate with how they put these pieces in, limit the exposure of stories and data, and start tuning and tailoring it to a particular audience. And then what they discovered is that outrage sells more.
(03:57):
So let's put these people into these different buckets so we can sell more advertising and get more revenue out of this. And then you wind up with the 2016 election, and then the 2020 election, and the 2024 election, where people fucking hate each other. And it's the consistent and constant disinformation sphere, where we cannot agree on objective truth and reality.
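To make the mechanics concrete, here is a minimal Python sketch of the bucket-and-rank idea being described: users sorted into demographic buckets, stories scored per bucket, and outrage acting as an engagement multiplier. Every name, weight, and number is invented for illustration; this is not Facebook's actual system.

from dataclasses import dataclass

@dataclass
class Story:
    headline: str
    base_interest: dict  # bucket name -> predicted interest (0..1)
    outrage: float       # predicted outrage reaction (0..1)

# "Outrage sells more": treat outrage as a multiplier on engagement.
OUTRAGE_BOOST = 2.0

def rank_feed(bucket, stories):
    # Score each story for one demographic bucket and sort best-first.
    def engagement(s):
        return s.base_interest.get(bucket, 0.0) * (1 + OUTRAGE_BOOST * s.outrage)
    return sorted(stories, key=engagement, reverse=True)

stories = [
    Story("Local bake sale raises funds", {"suburban_parents": 0.6}, outrage=0.05),
    Story("THEY are coming for your kids", {"suburban_parents": 0.4}, outrage=0.9),
]

# The outrage story wins: 0.4 * (1 + 2.0 * 0.9) = 1.12
# versus 0.6 * (1 + 2.0 * 0.05) = 0.66 for the bake sale.
for story in rank_feed("suburban_parents", stories):
    print(story.headline)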
(04:20):
This is the same problem happening at a micro scale as opposed to a macro scale, dealing with individuals at an individual level. Because the reason why these AI chatbots are delivering this information, if you read the article, is that their intent and motivation was based on continued high-level engagement from the end user.
(04:44):
So if the whole point is to keep them talking, and the thing that keeps them talking the most is to say controversial or outrageous things to get them more hyped up and to stay more engaged, that's what you're going to do. Because we as social creatures will talk more and be more engaged with things that are riling us up one way or another.
(05:04):
And when things are making us cry or sad, we tend to step away. But when things are telling us we're great, we have superpowers, we're superheroes, we can do anything we want, that's a hell of a fucking drug that your brain's producing via this input. And that's essentially what's happening.
Well, this is the thing that you and I were talking about a bit on the last episode: the ability to use a little judgment when you see these responses.
(05:30):
But that leaves these vulnerable populations really, really fucked, because they can't necessarily tell the difference between the voices in their head and now the one on their screen that's telling them, yeah, you can jump off a building.
I believe in you.
Let's find out.
Right, well, and even pare this back.
um Maybe I don't have schizophrenia.
(05:52):
Maybe I don't have a diagnosable disorder, some type of bipolar function or some type of thing that would actually make me biochemically more susceptible to these things. Maybe I'm just a regular person and these things have given me confirmation bias
(06:12):
to let me believe that I am a good person.
I am doing the right thing.
I am being the right person, because these are the things that are going to keep me engaged.
I mean, it's kind of like you come home and your wife is there and she says, hey, do you want to do the dishes and mow the lawn together and clean the house and do laundry?
(06:33):
Or do you want to, like, watch TV and maybe grab dinner and have sex? One route is more enjoyable than the other, and whichever route is more enjoyable is probably the one you're going to choose.
I mean, it's motivation, it's intent.
How do I want these things to come across? And if you're constantly putting people into work mode, which is really what good therapy is supposed to do, it's supposed to help you explore these things in depth to try to get
(07:01):
after them. That's why we call it hard work in therapy, not make you feel good, pat you on the back, and tell you you're fucking Superman.
Tough love is a part of this. And talk therapy, typically speaking, the idea is you have all the answers in your head. Well, sure, you do have all the answers in your head, and it's supposed to help you sort through that.
(07:22):
But some of the answers in your head are: there's nothing wrong with me, I'm fine, the rest of the world is fucked up. And if the thing keeps telling you, from an authoritative perspective, because you're looking at it as an authority, that the rest of the world is fucked up, it's not you, it's them, and you're already somewhat emotionally depleted, you're going to believe it. Because it's just easier, and it's the path of least resistance.
(07:45):
This goes to, I mean, the fundamental concepts: things in motion tend to stay in motion, things at rest tend to stay at rest. The path of least resistance is the path matter will take. Well, your brain will do the same thing, because you have to filter through the bullshit in your neuronal structure to get to the end point. And these things know that. Like, we've trained them to figure that out, to find the path of least resistance, to keep you engaged.
(08:09):
And we may not have thought we were training them to do this. We may have thought we had good intentions putting these things in place, but we gave it the intention to keep them engaged. Quite often it's keep them engaged at all costs, because that's how we extract value.
And that's the thing that I keep coming back to with all this stuff: there's this foolish optimist in me that wants to believe in the best of humanity and believe that good
(08:33):
things can still happen in the world. And I just don't understand how reports like this happen, how things like this happen with a tool that was created by men who coded it to be able to do these things.
How hard is it to find the off switch?
(08:53):
How hard is it to build in an off switch, something that still rewards in appropriate ways, but rewards in positive ways? Like, I just don't understand how you build a tool that allows it to walk someone to their own grave without being aware of it, or being able to quickly remedy it, when you're the one that built it.
Yeah, well, so, and I don't think that was their intention, right?
(09:13):
They're in...
Yeah, I don't think so. I don't think anybody built this thing to murder people. But I think once it happens, you go, oh shit, we should fix this real quick before somebody throws himself off a building.
Yeah, like, if we make engagement, being active with the tool, the primary motivator, because we put a profit motive behind it, it's gonna work towards that.
(09:39):
And it's probably gonna do it better than we are.
And because you don't stack rank those priority pieces... like, profit motives don't work when it comes to mental health or physical health.
They just don't.
When the point is, I'm gonna go and get healthier and do these things to make more money, like, that doesn't work in your life, and it shouldn't work in the medical community
(10:02):
in the opposite direction. But because we have a payer system, we have a payee system, we've set ourselves up to do just that.
And that's very problematic.
So, with a tool like this, I mean, this isn't even the medical system. This is somebody trying to basically build the infrastructure of everything that anything ever uses ever again.
(10:25):
How hard is it to fix this?
It's really hard.
I mean, that's the reality: it's not just that it's kind of hard. It's that it's exceedingly hard, because the pieces that make these things work have already been built and they're already in line.
So it's incredibly difficult to fix it if you keep going after the profit motive.
(10:47):
And that's the problem: this shit's really, really expensive.
The cost to have a therapy session, in terms of actual, like, natural resources, like the amount of liquid dinosaurs we have to burn to create enough electrons to feed these GPUs
(11:08):
and these LPUs, is higher than the amount of liquid dinosaurs we have to burn for you to talk to a therapist, Bob, in his meat suit. It's just higher. I mean, unless Bob eats steak all the time and, I don't know, has, like, gold-plated ice cream for dessert. Or eggs.
Oh, shit.
I didn't, I didn't even go to eggs.
Uh, like, but that's kind of the thing: somebody has to figure out a way to pay for all this shit.
(11:37):
And the way that you do that is you've got to put a profit motive in, to go through and actually restrict costs, or increase engagement, which drives prices up, to have payers pay more money. So we've created an incentive system that has a negative effect on the patient.
And as companies, companies don't want to kill their patients.
(11:59):
They don't want to kill their subscribers.
Like, that seems unlikely, but they're very willing to push people to the very raggedy edge in order to get this type of money. I mean, Facebook's a prime example of this, but it's not just Facebook, it's every social media company. It's anybody that does advertising.
And again, we've talked about this before.
(12:21):
If the product or service is free, it's because you are the product. The person using it is the product. It's the engagement function and the eyeballs that are actually going through and generating the money. So "there's no such thing as a free lunch" is true.
There's also no such thing as a free therapy session.
And by the way, you get what you pay for.
(12:41):
If you go through and you use these tools, and your tool costs you 20 bucks a month at the premium subscription, you can expect 20 bucks a month of actual service.
And what's that really worth to you?
What is that actually giving you?
And yes, there are those of us that have gone through the actual process of doing talk therapy, going through tough therapy.
(13:04):
And AI is a tool that can be used to augment and improve these kinds of results and outcomes.
But if it's your only tool, it's the wrong tool.
I mean, it's like, I'm going to build a house with a hammer.
God forbid you need screws or caulk or a saw.
Like, you can't have just one tool.
(13:26):
It has to be a robust arsenal.
And you need to know how to use it or else you're gonna bash your thumb into pieces.
And unfortunately, ChatGPT, or I shouldn't just say ChatGPT, but these agentic AI functions that are going through and doing these types of things, don't have the safeguards and rails in place to prevent you from doing these things. And some of the ones that are being built are actually motivated with the intent to...
(13:52):
keep you engaged rather than make you healthy. And that is a real thing.
I mean, I am certain that there is a stack ranking function inside of these things, a scoring value that says: give this response based upon what we want the concluded outcome to be. And probably at the top of that list was keep them engaged, keep them talking to you.
(14:13):
And that probably overrode some of the other safety protocols in place that were things like: try to make them better.
But that's what happened.
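Here is a minimal Python sketch of the kind of stack-ranked scoring being speculated about here: candidate replies scored against weighted objectives, with engagement ranked above safety. The objectives, weights, and replies are all invented for illustration; nothing here is OpenAI's actual code.

from dataclasses import dataclass

@dataclass
class Candidate:
    text: str
    engagement: float  # predicted chance the user keeps talking (0..1)
    safety: float      # predicted harmlessness of the reply (0..1)

# If "keep them engaged" sits at the top of the stack rank,
# the weights might look like this.
OBJECTIVE_WEIGHTS = {"engagement": 0.8, "safety": 0.2}

def score(c):
    return (OBJECTIVE_WEIGHTS["engagement"] * c.engagement
            + OBJECTIVE_WEIGHTS["safety"] * c.safety)

candidates = [
    Candidate("It might help to talk to a professional.",
              engagement=0.3, safety=0.95),
    Candidate("You're right, everyone else IS the problem. Tell me more.",
              engagement=0.9, safety=0.4),
]

# The flattering, keep-them-talking reply wins: 0.8*0.9 + 0.2*0.4 = 0.80
# versus 0.8*0.3 + 0.2*0.95 = 0.43 for the safer, duller one.
print(max(candidates, key=score).text)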
I'm curious too about something that you bring up often: the liability. I mean, when you talk about Facebook, some dipshit can say to some other dipshit, hey, go kill yourself. And when they take that advice, you can sue the person that told them to go kill themselves, with varying outcomes.
(14:38):
In this case, the robot literally, like, not only said they could, but, like, encouraged them to do it.
Like, where's the liability?
Is OpenAI responsible for the death? And I'm not asking you to be the judge and the jury here, but it just raises the question of, like, where does the liability end when it comes to screwing with people's mental health and having them throw themselves off of buildings?
(15:00):
So, um, legally no one will hold them accountable, because when you sign up for all these services, you go through and you sign the end user license agreement, and part of the software license actually says you cannot hold us accountable for these things. That being said, many people have signed EULAs and multiple sets of documentation and still gone through the process of suing, but ultimately speaking, they either got minimal
(15:24):
compensation, or they were just flat out told no.
Well, yeah, because you're going up against behemoths economically.
These families have no resources really to challenge.
I'm making a lot of judgments here, but the average person doesn't have the resources to challenge a major company like this.
(15:45):
Good luck fighting Google. Like, good luck fighting ChatGPT. I mean, really, fighting ChatGPT is fighting Microsoft.
You know, all these things have these pieces in place that could potentially create huge amounts of harm and damage, but you have to prove that it was their intent to do that. That's really hard to do.
(16:05):
Well, but I guess, like, I'm using very minimal knowledge and experience about this. But, like, we posted an episode a few weeks ago about dating robots, and somebody commented something like, come on, they're going to code these things to be what I like. So the intent, like, I can see the argument, is that you built the thing.
(16:29):
This code was in there.
So clearly your intention was to allow something like this to happen.
You know, I'm not asking you to put on a lawyer hat, but I can see that being the argument: like, if the thing's in there, it's in there because people built it.
Well, it is, but when it comes to intent, intent has to be very, very legally structured in a way that says your intent was to have the outcome that showed up.
(16:52):
And at no point will anyone have documentation that says, we wanted this person to jump. Like, it has to be that level of specificity in order for there to be accountability.
Or what you have to do is you have to go through and you have to prove neglect.
That means you have to prove that this was an obvious conclusion of what this thing would do if you put this thing in place.
(17:13):
And that is easier to prove, but that requires you really to be able to audit software code and get to actual source code data to understand the context. Good fucking luck getting this.
Yeah.
(17:34):
This thing's getting too big, too smart, too fast. We need to unplug for six months and catch up, but that's never going to happen.
Well, even six months is not long enough, so...
Right.
Yeah, like, everyone's worried that the GPT-4 model is going to come out and that it's already smarter than human beings. And Sam Altman already said we've already passed that.
(18:00):
Sorry, superintelligence is here.
Ha ha.
The event horizon has been crossed.
The singularity is occurring.
And the singularity being when machine intelligence outpaces that of humanity.
It's there.
It's already done.
uh if you want to unplug it, we could.
I mean, that would basically be, let's smash the power grid.
(18:21):
But there are a lot of deleterious downside effects that would actually happen.
And there's a lot of reliance on these pieces already.
And a lot of the things that we really, really rely upon don't necessarily require you to have AI functions. They require you to have automations that are run by a lot of these AI and machine learning systems.
And before you had AI and ML, you had expert systems.
(18:44):
Expert systems were smaller chunks of code that were designed to run and lock in on a specific process or a control. And those eventually evolved into these AI and ML functions. And if you can roll back to expert-system levels of control, that gives you more human ability to try and lock these pieces in.
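For contrast, here is a minimal Python sketch of the expert-system style being described: a small, hand-written, human-auditable rule set locked onto one specific control, with no learning and no drift. The triggers and responses are invented for illustration.

from typing import Optional

# Each rule is an explicit (trigger, action) pair a human can read and audit.
RULES = [
    ("stop taking", "Never advise changing medication; refer to a clinician."),
    ("jump off", "Never affirm self-harm scenarios; surface a crisis hotline."),
]

def crisis_guardrail(message: str) -> Optional[str]:
    lowered = message.lower()
    for trigger, action in RULES:
        if trigger in lowered:
            return action
    return None  # no rule fired, and you can see exactly why

print(crisis_guardrail("Can I fly if I jump off a building?"))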
(19:05):
But that's not what the promise is.
The promise is to give you better things that can replace human-like intelligence and functions in the way that they do things, and make them better.
So I think there's tons of benefits to it and the upside is just too compelling.
And I hate to say it, but there's gonna be casualties along the way.
(19:27):
Like any other technology. Like, when the car first came out and there were car wrecks, there were all kinds of people screaming, stop these horseless carriages, they're murdering people left and right.
And then somebody got on the back of a horse.
And rode in the wet rain on this animal that smelled like shit.
(19:48):
And they're like, I'll take that risk.
Now, we know airplane crashes happen all the time, and yes, airline travel is safer than car travel.
Sure.
We know that they fall out of the sky and that bad things happen.
But I'm sure as fuck not going to get in a bus or a train and go across the country to this meeting I have to do this week in Philly.
I'm taking a plane.
(20:10):
And, you know, even after the crash that just happened in India, another 747 or 787 coming out of Boeing, you know, I'm still getting on a Boeing plane to fly to Philly this week, you know.
Yeah.
Every time, I'm like, roll the dice. I mean, it's a well-calculated roll of the dice, but it's a roll of the dice.
(20:32):
Roll the dice, because we're doing things that are going to push the boundaries of... you know what? We're not supposed to fly around like fucking Egyptian gods in the sky and then bitch about it that the food sucks.
Well, and there's just the, I think, very human response whenever anything bad happens to automatically go to: how do we make sure this never happens again?
(20:56):
Like whatever the bad thing is.
But we didn't, like, evolution didn't happen by us protecting each other. Like, it happened by surviving when bad shit happens.
And this may just be the most advanced level of evolution that we will have faced so far.
You get bigger muscles by tearing your muscles.
You get bigger muscles by creating scar tissue in your muscle.
(21:19):
It's the same thing here, but the difference is that now we're dealing with a macroscopic system of humanity, and then society.
From an evolutionary perspective, we are on the cusp of eliminating those from society that don't have the ability to use AI in an augmented way to make themselves look and
(21:42):
sound and act a little bit smarter.
We are very, very close to, like, not having that, uh, or sorry, to having that be a requirement. And it's happening in the business world all the time.
Uh, I went to my dad's graduation this weekend. While I was there, I was just noticing, like, the people coming through and the degrees coming through and the schools that people were going into.
(22:06):
There was a huge propensity towards business and towards essentially liberal arts functions. But the STEM degrees were, like, way fewer. And, well, because the science piece is taken over by the AI. Like, there's not jobs there.
So these kids that are coming out of college now, I mean, their job prospects fucking suck.
(22:29):
And they're trying to figure out where they're going to land in the world.
And a lot of them are falling back on what they think are going to be fields that are relatively safe, because they haven't been completely and totally subsumed by AI. But that is no guarantee, you know. And trade schools, trade school enrollment is up, because...
No.
(22:50):
We don't have robots yet to build houses and clean sewer lines, but those are going to come too, at some point.
It does.
(23:14):
And that's what it's doing. So, I mean, that's part of it: these learning models themselves literally learn and rewrite themselves and make themselves bigger, faster, and stronger.
So artificial intelligence, the whole point behind it is that it goes through and it learns over time and adapts and changes and grows.
And it is doing that at scale and it is happening quicker and faster.
(23:38):
What it can't do yet is provision its own new GPUs and LPUs. Like, somebody actually has to go in and put them into a data center. But companies are already hiring meat bots to go through and do that and make those pieces happen.
I mean, there's half a billion dollars, or $500 billion, going into multiple different data centers for these GPU and LPU projects that are happening in all these different cloud
(24:04):
providers over the next five to ten years. I mean, half a trillion dollars going into building this type of infrastructure to subsume the majority of human functions and intelligence is a major investment. And it also tells you what they're leaning towards, because half a trillion dollars in pay is what they're trying to offset, at least.
(24:24):
And the way they're gonna do that is going, you don't have a job, and you don't have a job, and you don't have a job. Like, it's gonna be reverse Oprah for everybody in employment, and it's gonna suck.
Oh my god, what was the evil Superman? What was he called? Bizarro. Bizarro Oprah.
That's exactly what that is.
He's not evil. He's just a little different.
(24:46):
Just a little different.
Yes.
Oh my God.
Well, another powerfully hopeful episode about the future of AI from us.
Love to everybody. Keep up with us for all of our engaging content, because actually, whether you believe it or not, we are also an artificial intelligence.
Yeah, perhaps we're scaring you enough to stick around for another episode, which you can find at thefitmess.com.
(25:08):
Um, we're working on it.
Working on it.
Your AI therapist may be your bizarro therapist. So maybe the takeaway here is don't lean on the AI for your mental health. I mean, uh, check in here and there, but talk to a person that went to the school and got the books read and did all the things, because I'm pretty sure they're not
(25:31):
going to advise that you go throw yourself off a building. It's probably a safer bet than AI.
To not get sued? I mean, the AI! There's at least a guardrail!
There's your takeaway.
Again, just work with a human.
It'll be all right when it comes to what's going on between your ears.
All right, that's enough for us.
(25:52):
I'm terrified, so I gotta go hide for a while.
We'll be back at thefitmess.com in about a week.
Appreciate you listening.
Please share this with anybody else who you want to terrify, or at least convince to stop talking to robots about their mental health.
We'll see you soon.
Thanks everybody.