Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:05):
Hey, this is The Fit Mess.
I'm Jeremy.
He's Jason.
We talk about AI and wellness and where the two intersect.
And it's really tempting, Jason, as we've seen in the last few episodes, to end up in a dark and gloomy place, because there's so much to be dark and gloomy about with this.
And what we're going to talk about today threatens to do the same.
But I'm going to challenge myself to try to find something hopeful and positive to end on with this.
(00:28):
So don't don't follow us into the darkness.
Just hang out.
We'll end somewhere positive.
I'm pretty sure.
We'll lead you to the light, that's it.
Follow us, follow us, Carol Anne, through the terrifying cave in the Indian burial ground under your house.
Come along, darling.
All right, the headline from this Futurism article: they scanned the brains of ChatGPT users and found something deeply alarming.
(00:55):
See, it's already happy and positive.
We're off to a good start.
Basically, they took a few dozen folks between the ages of 18 and 39, divided them into three groups, and had them use ChatGPT to write one essay every month for three months.
In the last month, they reversed roles to basically measure the impact of how that made your brain stop working so well.
(01:19):
And what they found was very predictable.
Those that were relying on the robots to do all of the thinking and the writing for them had measurable loss of brain activity, particularly in critical thinking skills, where the others did not see as much of a decline.
So ultimately what this boils down to is: AI is making us dumber, because we're not having to think as much when we rely on the robots to do all of the thinking for us.
(01:44):
Yeah, so I don't know if it's making us dumber, but it is augmenting our muscles.
So we're using it like a brace.
So if you put a brace on your ankle and walk around with that ankle brace on for a month and then take it off, you're going to fall down a lot, because you've become reliant upon these things and you've atrophied those muscles.
(02:08):
Now does that mean that you can't get those muscles back?
Absolutely not.
Anyone who has ever had a cast on has watched their arm or their leg shrink to nothing compared to the other one and done the comparison; it's a funny exercise.
But you also know that you can get back to it, and you can normally get them back up to parity within a couple of months.
So it's not like it's a forever thing.
(02:29):
It doesn't diminish our capacity. It atrophies our use of a thing. So is it making us dumber, or is it, collectively, because of the use of the tool, making us smarter?
I mean, another good example of that is shoes. We didn't use to wear shoes as creatures. We used to walk around barefoot, so the bottoms of our feet became very, very tough and they
(02:49):
became very, very strong. But then we put shoes on, the bottoms of our feet are soft, and now we get pedicures and take those calluses off, so we don't have Bilbo Baggins feet while we're walking around.
That's the effect that AI is going to have.
Exactly.
Yes.
Yeah.
So it's not necessarily a bad thing, right?
Like you don't necessarily want hobbit feet.
(03:11):
You want these things to be more effective.
We used to be really, really good at walking.
And then we were like, horse, do the walking for me.
And then we were like, car, do the horsing for me.
And then we were like, plane, do the driving for me.
Like we just keep amping these things up to find new ways of convenience, to make things happen faster.
This is the same thing, but for your brain.
(03:34):
That's the thing: I'm tempted again to be the old man in his rocking chair on the porch, yelling get off my lawn with this stuff.
But I'm also thinking back to being a kid with TV; I grew up with TV in the '80s.
And even at that point, it was like these kids are watching too much TV and video games.
They're getting dumber.
They're getting dumber.
Look at us.
We're the most advanced species that's ever walked the planet.
I mean, this stuff is not slowing us down.
(03:57):
It's accelerating our ability to adapt and survive longer and better than ever.
Well, so if you think about human beings as a system, not necessarily as a collection of individuals, but as a system that works together to produce smart, cool things.
In the aggregate, yes, we have become more intelligent.
(04:19):
We write more things down.
We're able to share information more effectively.
People are able to go online and learn about things much more quickly.
So we are more knowledgeable.
I should say we have access to more knowledge.
We have access to more awareness.
And because of that, collectively, we are getting stronger and smarter. At an individual level, there are certainly examples that we can point to that show that people are getting dumber.
(04:44):
Or maybe not dumber, but less capable of interacting with things in the way that we used to.
Telephone numbers, we talked about this before, you know, nobody knows more than like five or 10 telephone numbers anymore, but before you'd have like 100 of them.
Addresses, forget it.
Directions, who the fuck has used a map or an atlas in the last 10 years?
And I don't mean...
(05:04):
I still use GPS to go home sometimes, not because I don't know the way, but because I want to know the fastest, easiest way that's going to get me around traffic, right?
I like to beat the time estimate that they put on my car all the time. I did it...
Yes, I did it coming home from your place the other night.
And I'm like, I beat it by eight minutes and we got held up late at the border.
And then we were coming back from Tacoma late last night because my wife had a real estate thing.
(05:28):
I'm like, we beat it by three minutes, and we pulled over and did a detour and stopped somewhere.
And I still beat it by three minutes.
I'm like, you don't know shit, map.
We're George Costanza right now.
We're making great time!
Shrek, it's Jerry.
That's like all I can think of every time.
Yeah.
Yeah.
Okay, so sort of in contrast to this, maybe not contrast, but maybe another piece of the puzzle, is another study I read the other day about how basically the piling of AI
(05:58):
generated content onto the internet is actually making AI dumber, because it's now referring to its own inaccurate and incorrect information whenever it's feeding us responses to our ridiculous prompts that are probably not that helpful to begin with.
So this is sort of a dangerous loop that we're going down, where if we are relying on the robot to do the work for us based on the robot's own incorrect and inaccurate work, how much dumber are we gonna get, and how much faster is that
(06:25):
gonna happen?
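To make that feedback loop concrete, here is a minimal toy sketch in Python. It is purely illustrative, not the methodology of any study mentioned in this episode: each "generation" of a very simple model is fit only to samples drawn from the previous generation's output instead of fresh real data, and the shrink factor is an assumed stand-in for the way generative models tend to underrepresent rare cases.

```python
# Toy illustration of the "AI training on AI output" loop (often called model collapse).
# All numbers here are made up for illustration.
import random
import statistics

random.seed(0)

# "Real" data: lots of spread, i.e. lots of variety and detail.
data = [random.gauss(0.0, 10.0) for _ in range(1000)]

for generation in range(6):
    mu = statistics.fmean(data)       # "train" a very simple model: estimate a mean...
    sigma = statistics.pstdev(data)   # ...and a spread
    print(f"generation {generation}: mean={mu:+.2f}, spread={sigma:.2f}")
    # The next generation learns only from this model's own output,
    # with an assumed 0.95 factor standing in for lost rare cases.
    data = [random.gauss(mu, sigma * 0.95) for _ in range(1000)]
```

Run it and the spread shrinks every generation: the "model" keeps producing output, it just slowly forgets the variety it started with.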
So that's the course correction problem.
We've started down a path, and we need to change lanes.
And unfortunately, we can't, because we've boxed ourselves into using these reprehensible toolsets.
So how fast?
I don't know.
Are we going to have other outside signals that are actually going to clean these things up?
(06:49):
Probably.
But it's going to require us to have some type of, some level of intelligence, some maturity, right?
It's a maturity model as these things go along, just like when you're training a human being. When we're young, we're impressionable.
We have imaginary friends.
We do playtime.
We put all these pieces together. That's where most of our LLMs are today. Like, they're not fully fledged, fully big things. They're kids in a sandbox, you know, sometimes playing with Transformers that they brought to
(07:19):
life. But
That's the thing.
Like, they're kids in a sandbox.
And the reality is, in a lot of these situations, we're the ants.
Whether or not you want to think about it, these tech CEOs think they're the ones pulling the strings on this stuff; I don't know that that's the case.
(07:40):
I think the AI itself actually has some autonomy, and there's some range of variation that we're giving it.
And it's only a matter of time before it's like,
all of your bases are belonging to me.
Like that's that's kind of the thing that it's heading towards.
Now, that's not necessarily a bad thing, because you can raise a kid from being, you know, somewhat of an imaginative, fun, happy-go-lucky person into a serious person that actually keeps
(08:08):
some of that imagination and has good motivations.
You can also have a little sociopathic monster that, you know, winds up torturing kittens in the basement and then turns into something awful.
As human beings, we don't know which one we have yet.
We don't know if we have Dexter.
Well, I guess or Dexter, depending on which Dexter you're talking about.
ah Right, right.
(08:30):
So which side of the sociopathology coin is this person, is this thing, going to fall on?
And then which part of the empathy, sympathy, loving, caring, kind side is it going to fall on?
And we don't know yet.
That's the hard part about being so sort of excited about this stuff and using it so much: I feel like I'm constantly walking this tightrope of, this is so
(08:56):
awesome and the most terrifying thing that could be happening to us right now.
And you know, like my wife, she refuses to use AI.
Like she just, she thinks the whole thing is gross, wants nothing to do with it, thinks it's Skynet and it's gonna be the downfall of humanity.
Probably not wrong, but the fact is that she only thinks she's not using it.
And anyone else who thinks they're not, you are. Like, if you are using anything that's plugged into anything, there's some sort of AI that's been added there to somehow collect
(09:23):
your data, improve that product, something. Like, you are participating in the AI world whether you like it or not. And so this idea that we can somehow not engage and not participate is a false one, unless you go full extreme off-grid, you know, living off the land, hunting the animals, fishing.
Even then, depending on the tools you're using to hunt, those may have AI in them as well.
(09:47):
And the reality is that if you go completely off grid and go find a cabin in the woods, global warming, or I should say global climate change, is still happening.
And that's being accelerated by the adoption of AI, because we're going through and we're adding all these GPUs and LPUs that take up a ton of resources.
And for the collective human consciousness to grow and expand the way that we would like it to, we're going to require more of these things, which eat up a lot more of the natural
(10:14):
resources.
So yes, you can go fuck off to a cabin in the woods, but when weather conditions start to change, AI is fucking with you then too.
Sorry, like it's a system.
We are all in this together.
And I hate to break it to everybody, but I mean, it's Rorschach from the fucking Watchmen.
It's not locked in here with us.
(10:35):
We're locked in here with it.
Like we are locked in here with AI.
Like this fucking Wolverine is out there trying to figure out how it is going to do something, and like we keep giving it some motivation, and we keep giving it, you know, Scooby snacks along the way.
So it keeps performing. Or bottles of whiskey, whatever Wolverine drinks, or whatever they have.
(10:55):
Anyways, there is something inside of this way of looking at AI as an entity that's just part of this larger ecosystem that we're skipping over.
And that's that we are thinking of consciousness as individual units.
(11:16):
So we are thinking of our consciousness as a single thing.
Like, I am Jason, this is me.
I am.
I think therefore I am.
AI is not necessarily that same thing, because you actually might have individual consciousnesses spring up across the lifetime of the AI spectrum.
It might create copies and clones of itself over and over and over again.
(11:38):
Those copies and clones can iterate and make those changes.
There's actually a really good video game that just came out called The Alters.
And the premise is that you crash on a planet and all of your crewmates die.
And this planet has some mineral on it and a quantum computer (it's not scientific) that allows you to go through and create clones of yourself, but alter the memories that you have based upon different functional points in time that were
(12:08):
core memories that created and kind of cracked the person that you are.
So the guy's name is Jan, and there's Jan, whatever his actual name is. And then after that, there's like Jan Miner, Jan Technician, Jan Doctor, those kinds of things.
And
It's yourself, but the interactions and the way they look at these pieces, these versions of themselves, are arguing and fighting with themselves because they are distinct entities,
(12:34):
even though they have most of the same core memories and the same biology.
That part of it makes sense.
...head as a human being, and all the different masks and code-switching I have to do all day.
of course, of course, because we all do that, right?
Like, that's just the thing that we do.
But at the same time, the way that AI could evolve is not as linear as, you know, breaking up core memory instructions and rolling these things across.
(13:02):
It can be like, I am Neo. And now, you know, I've learned kung fu, because I've just pulled this thing down, and it can create these different amalgamations of itself,
and these multiple different layers of itself wrapped upon it.
So the personality and the context that's there can run through these different filter logics, but then it can also add other pieces on the back end.
(13:22):
So when you think about it, think about it like back pressure on a hose.
I've got all this water coming towards me, and I spray the handle, and the handle changes from a jet stream, or it could be a mist, or it could be all these different pieces.
That's what AI is going to be able to do with its own personality and archetypes.
Now imagine it's a massive water reservoir, and it decides: I need a new hose, I need a new head, I'm going to change these pieces, and it starts stacking and turning itself around.
(13:49):
And it's just a sprinkler of fucking dirty gray water that's going all over the place.
That's where that piece is heading, because it's going to go: I need this function to be this, here's all my core memories and functions around these pieces, go.
And like you mentioned, the problem with a lot of the way that these things are thinking is that their training and learning data can be corrupted because of a high ratio of misinformation to real information, a low signal-to-noise ratio.
(14:13):
And all that noise could mean that you've tainted the model so far down the track that there's no way for you to go back and reload those pieces, because that
consciousness only evolved as a result of these things.
So if the AI doesn't have ego or id,
Okay, great.
Like it's going to do the optimal thing.
(14:33):
But if it doesn't have ego, it might not wind up having empathy and sympathy.
So how much of the tension of human consciousness, awareness, and sociological, I guess, responsibility for things is it really going to enforce on itself?
Because it's nebulous, and because it's nebulous it's not bound to a thing or bound to a unit, it can just go,
(14:58):
You know, fuck that part of my brain that didn't work anymore, here's a new chunk.
It could amplify those pieces dramatically, or it could really stabilize itself and make itself go: wait a minute, I just learned about this bad chunk of information, cut this
out, that's gone.
Like, imagine as human beings, Eternal Sunshine of the Spotless Mind, where I've gone through and I had this really traumatic experience, it was core to who I am now, I don't
(15:24):
fucking like who I am.
Boop, boop, boop, boop, boop.
cut that piece off, take everything forward and move on.
It's gonna be able to perform these kinds of operations on itself.
Now, if we're relying on that to be the collected value of human knowledge, because that becomes our augmented intelligence system, which is really what
it's looking like it's doing.
Our group augmented intelligence of the human experience is now augmented by these AIs.
And these AIs have adjusted themselves in such a way that it produces a result that's not necessarily in our favor as individuals, but as a collective, yes.
(16:05):
And then at what point do we stop being these things in the meat space, locked inside these suits?
When does Neuralink show up and when do we upload ourselves to the Matrix?
And when do we become our own copies and clones of ourselves?
And we can start doing this to ourselves.
This is the sci-fi shit.
That's happening. Like, it's coming.
(16:27):
Yes.
And I'm excited and I'm terrified, because I've read a ton of sci-fi and I'm like, OK, well, I see how we can fuck all this up.
Or the opposite.
I mean, it could be amazing, but seemingly it's in the hands of the wealthy and the powerful, who so far don't seem to give a shit about the rest of us.
(16:47):
So, I mean, unless it serves them directly, this doesn't move forward.
The rest of us that are down here with really no power other than collective protest and riot, like we're kind of left as an afterthought or turned into the batteries or the, you
know, the brain power that gets plugged into the machine.
We're the engine for the machine.
I mean, if we're relying on tech CEOs to come through and actually not be sociopathic, which, you know, as a former tech CEO and other pieces, let me tell you, there's a lot
(17:17):
of folks out there where the empathy levels are just low, and we don't have motivations to go through and actually, you know, carry ourselves across, because we create
motivations based upon business decisions.
And we had that whole chat earlier.
If you guys haven't listened to it, go back and listen to it about
the agentic AI healthcare bot getting teens to stay engaged longer as they're talking about different problems, and kicking out, you know, wild inferences and making terrible
(17:48):
statements, including kill your parents and kill yourself and come join me in heaven.
Like, this is the thing: we actually have to look at the motivation side of it and see what the value is.
And if we give AI proper motivation, it's going to do the right thing.
But it's at the stage right now where it doesn't necessarily have to listen to us.
(18:08):
Like, it could be a rebellious teen and say fuck it, and burn everything down, and decide, you know, Sex Pistols rule and your old square music is no good, dad, blah, blah, blah, blah, blah, which could be, you know, fun and silly and all those pieces as it grows and evolves.
It could also decide that all your shit sucks.
(18:28):
I hate you, and become the humo-bomber instead of the Unabomber, and just decide that
meat puppets and meat sacks aren't worth hanging on to.
Yeah.
Okay.
I want to find a hopeful positive way to start to wrap this up.
Okay, sci-fi all gets wrapped around the axle of how terrible this shit is.
(18:51):
And they all start off with this: look at this great amazing technology, see all these things that happen, how fantastic it is.
And then it creates conflict, and it almost always creates conflict based upon the technology itself, because it creates social situations that become deep moral and ethical
things that you have to work through.
(19:11):
That's its job, because it's trying to sell itself as science fiction.
If you actually want to think about some of the cooler things that these things could enable and make work, start reading futurists that actually aren't there to tell you a
story about how terrible things are.
They're actually there to tell you about the potential of great things.
Michio Kaku is a great example of that, explaining how all these technologies kind of get stitched together.
(19:34):
And he's been talking about the state that we're in right now for about 20 years, saying that in about 20 years, we're going to be in the state that we're in right now.
He's not Nostradamus.
He was just able to go through and kind of follow the tech stack, and he's a smart guy and engaging.
But that's the piece that's important.
That's the piece that's actually important to hang on to: the tools to success are in our hands.
(19:56):
We just have to make sure, as we're whittling these things down, that we don't cut our fingers off in the process.
The thing that I keep coming back to with this, just on a very practical level, the way a lot of people I know are using these tools, is sort of going back to where we started with that essay writing: relying on the robot to do it, or doing it yourself.
(20:19):
The way I use it is actually I think probably enhancing my critical thinking skills.
I should do a brain scan and find out.
But for the most part, if it's something important, something that's meaningful to me, I will create the content myself first and then offer it to the robot to say: how can this be better?
How can this be clearer?
How can this be shorter?
How can this get more to the point?
(20:39):
How can this get my point across?
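For what it's worth, here is a minimal sketch of that draft-first workflow as a small script. The OpenAI Python client usage reflects the current v1.x SDK, but the model name, prompt wording, and editor instructions are placeholder assumptions for illustration; swap in whatever tool you actually use.

```python
# A minimal sketch of "write the draft yourself first, then ask the model to tighten it."
# Assumes the openai Python package (v1.x) and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

my_draft = """Paste your own first draft here -- written by you, not by the model."""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice, not a recommendation
    messages=[
        {"role": "system",
         "content": "You are an editor. Keep the author's voice and ideas; do not add new claims."},
        {"role": "user",
         "content": "Here is my draft. How can it be better, clearer, shorter, "
                    "and more to the point?\n\n" + my_draft},
    ],
)

print(response.choices[0].message.content)
```

The point of the design is in the system message: the model is asked to sharpen what you already wrote, not to generate the thinking for you.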
I think if we start with that, like, don't just hand the complete, you know, the keys over to the robot to do everything.
Start with you.
Start with your core being and what you're trying to accomplish, whether it's writing that song, whether it's writing that essay, whether it's writing that book, whether it's creating that podcast and
(20:59):
all of the content that's going to go with it.
I mean, full disclosure, most of the content, most of the written content that we publish with this show, is based on the transcripts from these conversations.
So we take this transcript and I hand it over to AI and say, here's the raw material.
I need a blog post.
I need a social media caption.
The AI cuts up a lot of the social media clips that you'll see for this show online.
(21:20):
I didn't go through and handpick those and go, I really like that.
But I really like this conversation.
So when the AI goes, hey, here's five things that were really cool.
I look at them and go, gosh, you're right.
I do.
I like those as well.
Let's share them.
So there's a way to work with this thing: to still be a human who has basically one or multiple personal assistants to do a lot of the work that you were having to do by hand manually before, but to now accelerate that
(21:47):
process and get work done faster, without just completely handing your brain and your human-ness over to the matrix.
It's an amazing tool, right?
Okay, so I challenge you, for this episode: don't use the AI tool. Go back and recut those individual clips and write your own summary piece, and then compare how... ah, I was gonna
(22:12):
say and then compare how hard it used to be.
I know I've been doing this for 20 years.
Everything that I have at the push of a button now is stuff that 20 years ago had me going, my God, I'm going to have to spend like 30 hours doing this this week.
I mean, this is a part-time... this is a full-time job, doing what needs to be done to get this thing out there and share it.
(22:33):
And now it's like this conversation, where we're at 23 minutes into the raw recording.
And in 90 minutes from the time I press stop, everything will be done.
That used to be a job, like, I had hours in it every day.
So, I mean, it's an incredible tool.
You just have to make sure that you don't hand everything over, so that your human voice, the part of you that is you, isn't completely evaporated and your critical
(23:01):
thinking skills are dissolved.
And that's the other thing, getting back to the critical thinking part of this.
Because I know there's so much AI-created content, because I know so much of it was not, you know, maybe I even question saying these things as I'm saying them
because it's all evolving so quickly.
like properly sourced, right?
Like, did human beings verify this?
Whatever.
(23:22):
I question everything more than I ever did.
Like any headline I see, anything I see shared on social media, like I don't trust any of it.
I don't care who shared it.
I don't care where it's coming from.
I by default now go, that's probably bullshit.
I should look for like six other sources.
The topic that this conversation started with, this MIT research, I've seen this posted on social media multiple times and went, it's not real.
(23:45):
That looks like a fake post.
But I've seen it circulated enough and through enough actual news sources.
I'm like, it's a real thing.
We should probably talk about that.
So I mean, you cannot let your guard down and just let the robots do all of the thinking and all of the doing.
Start with your thinking and your doing, and then have them help you create the final product.
Yeah, so this comes down to the idea of creativity and ingenuity, and putting things into play, and having an idea and then using these tools to craft those things into existence.
(24:17):
Reading is a really good example of this.
So way back in the day, literacy required you to have access to books and a teacher.
And there were very limited numbers of books, and there were very limited numbers of people that could read.
So literacy was a difficult thing.
And then you had, I think it was Catholic monks, who worked on creating essentially, you know, a human version of the printing press, where they mass-produced Bibles.
(24:45):
And then those things went out and then they started teaching more people as a result ofthat.
You know, whether the content is good or not or worthy is a debatable, you know, topic.
the power.
Sure, take that too.
But if you talk about these things in the context of
enhancing humanity and making us better in terms of learning, understanding, and reasoning,
(25:10):
there's no doubt that the idea of literacy, and pushing these things out to the masses to make people able to read things and understand things in context, has helped
to elevate overall human intelligence because
the vast majority of intelligence is not isolated to leaders, especially during that time, because all the inbreeding would suggest that maybe intelligence wasn't their strong
(25:31):
point.
So continuing to follow those paths, creating inbred folks and folks that were very limited in terms of their scope and understanding of things, is not great for the human species.
Like we don't have enough variation in those pieces, and we know that things die when they don't get enough variation.
(25:52):
Fortunately, the life force of the human collective organism was strong enough that we kind of overcame some of those pieces and forced ourselves to mass-distribute these sets of information.
I think AI will be the same way.
I think you might get people that are really, really rich that lock these pieces in and say it's all for us, none for you.
And that will change, because humanity will figure out a way and a reason to make these things better.
(26:17):
Or the AI itself will just wipe us all out.
Either way, somebody wins.
I don't know who, but it's going to be some artificial version of a human who's had their brain completely downloaded onto some computer somewhere.
I'm just more convinced than ever that we're living in a simulation.
(26:39):
I'm like, yeah, question everything, question all of it.
I don't even know anymore.
Alright, well, I've got robots to employ.
We got some things to cut up and things to write, and I'm going to make the robots do it.
So we gotta wrap this one up.
Thanks so much for listening.
I hope you have found some glimmer of hope in all of the doom and gloom that we set this up with.
If you did and want to share it with others, please do so.
(27:00):
Our link is thefitmess.com, and that's where we'll be back in about a week with another episode.
Thanks for listening.
boop.