Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:01):
Alright, this is the FitMask Podcast.
Thanks so much for listening.
We talk about all things AI and how it can either completely fuck up or really help your life when it comes to your health and your wellness.
uh Jason, you and I, we actually talked about a similar topic a few weeks ago, talking about how doctors use AI, even though most of them don't even trust it to begin with.
So why shouldn't the rest of us do the same thing, jump on the bandwagon and try and figure out what the hell's going on in our heads and our bodies with this fancy new tool
(00:27):
that we're all getting used to?
uh
And you and I were supposed to talk about this a few days ago.
And when we were sort of kicking the idea around over a text, I mentioned that I'd read a bunch of articles about how people are using it for therapy.
And so I got curious.
I was in a bit of a slump, having some rough times, just fully upfront.
Like I'm a new assistant coach for my kids' softball team.
(00:50):
And I watched her pitch her first game and just got annihilated.
And it killed me as her coach, as her dad. Like, it took a couple of days to bounce back from that.
And I literally was like, okay, AI therapist, get me through this.
And so I started asking it all these questions.
It was nuts.
Like I've been in therapy for a long time, and the questions that it asked me... like, I went in real dumb guy, real like basic level: Hey AI, be my therapist.
(01:19):
And it's like, started asking me questions and I got specific and
it started asking me these really introspective questions about why I was feeling the way I did.
And perhaps I was being a bit hard on myself, and all these things that my therapist has said to me for years.
And I was blown away.
I was like, wow, that's really cool.
But then it also gave me actionable steps.
(01:40):
Not only go write in your journal or whatever, but was like, what would you do to be a better coach next time?
And I was like, I would do some more drills.
And it was like,
perhaps try these drills.
Like it had softball drills to take to the lesson.
Like, my therapist wouldn't have that, right?
They would go, oh, go Google some ways to be a better coach.
(02:00):
It was so profound that I actually did it for a few days and ended up like coming up with new ways to journal, new ways to do things.
It's mind-blowing how much this can be used as a massive tool to help your mental health, which I kind of was poo-pooing and laughing at before trying it myself.
Yeah, I mean, that's legit.
(02:21):
So as a former Little League softball coach, uh I would have loved to have had this tool as opposed to uh sitting there doing the assistant work and, you know, essentially
playing cage mom and doing cheers in the dugout with the girls and running a batting list order.
Maybe I'd actually know how to play softball, and maybe the AI could have helped me.
(02:45):
Alas, it was not available.
So yeah, I mean, I could definitely see how that would happen.
And like, I don't think it's a substitute necessarily for regular human interactions.
But when you're talking about what you're talking about, what it really does is it helps you organize your thoughts and organize your questions in a way that makes it easier to
get through to the truth of the topic and the truth of the subject.
(03:07):
And it cuts down the noise that you have going on in your own head to try to find those pieces.
Now, it's awesome because, one, it can make things less confusing,
and you can organize information in a way that makes it much more palatable for you as an individual.
The other side of that is that it's giving you information that's structured in a way that it thinks is useful and helpful for you.
(03:31):
And by "it," I mean whatever told it how to respond in those things.
And the way these learning models normally work is they have basically
humans that come in and augment and look at responses as they come through and say, yeah, this looks like the best response, or this does not look like the best response.
And ChatGPT does that to me all the time, because I'm part of their beta signup program for different pieces.
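As an aside, the human-feedback loop described here can be sketched as a toy program: reviewers compare candidate responses, and the preferred one wins. This is only an illustration of the rating step (a real RLHF pipeline trains a reward model from these preferences), and all of the data below is invented.

```python
# Toy illustration of the human-feedback step: raters compare candidate
# responses, and we tally which candidate they preferred. Real pipelines
# learn a reward model from these pairs; here we only count wins.
from collections import Counter

candidates = {
    "A": "Have you considered why the loss felt so personal?",
    "B": "Losing is bad. Try not to lose.",
}

# Each rating is (winner, loser) as judged by a human reviewer.
ratings = [("A", "B"), ("A", "B"), ("B", "A"), ("A", "B")]

wins = Counter(winner for winner, _ in ratings)
best = max(candidates, key=lambda c: wins[c])
print(f"preferred response: {best} ({wins[best]}/{len(ratings)} wins)")
```

On this made-up data, candidate A wins three of the four comparisons, so it would be the response the system is nudged toward.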
(03:55):
So um it's fantastic, because it gives you that kind of power and enables you to do those types of things.
It's maybe a bit dangerous, because it definitely enhances the potential for us to all fall into the Dunning-Kruger trap.
Thinking we actually know what the fuck we're doing when really we don't. What we're doing is using something to organize our thoughts and feed us information that we
respond to. And most of us just want it to say, you know, do this, take this pill, do this thing that's relatively easy, because hard work is just that: it's hard work.
(04:26):
So at some point we have to make the delineation between: is what I'm doing here with this virtual therapist in this ChatGPT realm
enough, or is it kind of my starting point?
And like you said, you know, you've been in therapy for years, and it asked you questions over the weekend that were as in depth and as interesting as what it took you years to go
(04:51):
through in therapy.
I don't think you would have been able to unearth that if you hadn't previously gone through therapy.
So the layman looking at this might read it and say, I mean, like, well, fuck you.
It's just like searching the Internet.
Well, of course, because I can put in a question and the Internet hands it back in response to me.
And nowadays, the top of the line is, of course, an AI response.
But then there's several subjects below that.
And when you look at the compendium of information that's actually there, it really is like a choose your own adventure novel.
(05:14):
And when you start thinking about your mental health as gamifying things, and you start thinking that your physical health is gamifying things, it's really, really useful from a
productivity perspective, because you can work backwards and look at these different actionable steps and actually show measurable improved performance.
So we've talked about it before, all these different metric wearable devices, they
(05:35):
give you that same kind of level of interaction.
And, you know, I spent time building a company to do this, to make it work.
ah But the mental health side of it has similar types of challenges and stretches.
The problem is that there's not really a way for you to go through and wear a wearable that says, today you're happy.
ah So this becomes that kind of subjective feeling of your own, because, well, biometrically, you might look great, like you're calm, relaxed and mellow.
(06:01):
You might actually be
depressed, and you might be this calm, relaxed and mellow because you've been lying in bed all day watching reruns of MASH. You know, yeah, exactly. You know, maybe not MASH.
um But that's kind of the thing is that, like, the way that your body reacts and the way that your nervous system reacts, certainly they're tied together.
(06:26):
But the way the signals actually represent something as far as measurable metrics goes is very, very difficult.
So with a tool like ChatGPT, we could actually journal it and say, today I felt like this.
And then if you compare that and overlay that with your other biometric data, you're actually gonna be able to get a much better understanding of how these things affect you
(06:47):
over time.
So, like, a good night of sleep: most people find that if they get a good night of sleep, they tend to feel better and are more productive.
Like, it's not rocket math, right?
But if I got
all the biometrics that said I got a good night of sleep, but I woke up and I still didn't feel good.
And I still felt down and depressed, or just blasé, because I have some other underlying condition going on.
(07:11):
Nothing's going to tell me that.
Like I got to ask questions to try to get to the answers to those pieces.
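The overlay idea described above, putting a self-reported mood score next to a wearable's sleep score and flagging the days where they disagree, can be sketched in a few lines of Python. All of the field names, numbers, and thresholds here are invented for illustration; this is not any real device's API.

```python
# Hypothetical journal-plus-biometrics overlay. Each day pairs a wearable
# sleep score (0-100) with a self-reported mood (1-5). All values invented.
days = [
    {"date": "2024-05-01", "sleep_score": 88, "mood": 4},
    {"date": "2024-05-02", "sleep_score": 62, "mood": 2},
    {"date": "2024-05-03", "sleep_score": 91, "mood": 5},
    {"date": "2024-05-04", "sleep_score": 70, "mood": 3},
    {"date": "2024-05-05", "sleep_score": 85, "mood": 2},  # slept well, felt bad
]

def pearson(xs, ys):
    """Plain Pearson correlation, no external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson([d["sleep_score"] for d in days],
            [d["mood"] for d in days])
print(f"sleep/mood correlation: {r:.2f}")

# The days where the biometrics and the journal disagree are exactly the
# ones worth asking follow-up questions about.
for d in days:
    if d["sleep_score"] >= 80 and d["mood"] <= 2:
        print(f'{d["date"]}: slept well but still felt bad')
```

On this made-up data the correlation comes out moderately positive, and the one "slept well but still felt bad" day is the kind of day the speaker says nothing in the biometrics alone will explain.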
And that's what this really comes down to: it's up to us as individuals to advocate for ourselves.
And the way that we advocate for ourselves traditionally is we say, I don't feel good.
I'm going to the doctor and say, doctor, blah, blah, blah, blah.
And the doctor says,
Yes, uh take this and you'll feel better or do these things and you might feel better.
(07:35):
The problem that we run into is that doctors themselves, typically speaking, when we go and see them, they're not as invested in our health as we are.
So because they're not as invested in our health as we are, they're not as likely to try to keep us motivated and on task and on track.
So we supplement that with other commercially available tools or other pieces out there, apps or these kinds of things.
(07:56):
AI is just another advancement of this and just another repurposing of these functions.
And if you look at all the tools out there that do this shit today, they're all using AI, and they're using it to either get rid of people, correlate data, or both.
And the idea is they're trying to make these things much more focused and much easier to execute on, because the signal to noise ratio of life is just high.
(08:16):
Using AI to do that, I think, is actually quite healthy.
There is a downside.
Of course, of course there is.
ah The thing that I thought was so great about it was that, like you said, this is not a replacement necessarily.
But it's a hell of a stopgap between appointments.
Or if your insurance is only going to get you in the office so many times, or in the virtual call so many times, it's a great tool to go to in the moment when it's like, I'm
(08:43):
maybe not in crisis.
If you're in crisis mode, AI might not be the way to go.
But if you're like, you know what?
just, I got to get out of this.
I got to sort through this.
Like I said, the kinds of questions that it made me ask were fantastic and really helped me through that slump.
But then when I got a little more clarity, I was thinking about just like my journaling practice, and how it feels very automatic and very like I checked the boxes that I did the
(09:06):
journal today and I don't really feel like I get anything out of it.
And so I asked it to help me come up with a way to improve my connection to the feeling of gratitude, right?
Like not just, I'm grateful to have a warm home to...
Yeah.
connect to the feeling?
And the prompts that it gave me, like as I was filling in my journal following these prompts, like I was getting emotional, like really connecting with the feeling.
(09:30):
And I was like,
A robot taught me how to feel this in a way that I haven't been able to on my own in a really long time.
I just, again, no, I know, I know, it's exactly that.
It's exactly WALL-E, yes.
That new one, like The Wild Robot?
Like...
(09:51):
I fucking cried in all of them.
Seriously, seriously.
But I just thought, how many books have I read about how to journal?
How many times have I searched for good journaling prompts?
And, three things that you're grateful for. But the way that it dialed it in to find the feeling, and sitting with that feeling and writing about it, I won't bore you with the details
(10:12):
of it. But just to get that connection from just asking a robot what to do better was so powerful.
But yeah.
Yeah, no, you brought up a good point there, too, though.
um People in crisis, and actually, like, real, legit mental health crisis.
um It's highly likely
(10:33):
that the person on, you know, like a suicide hotline probably has an AI prompt right there actually transcribing the call, reading it,
and saying, here are possible answers for you.
Here are possible paths that you can take to try to make these things more effective.
Because I'm sure, as an individual, every time these people listen to it, they're probably getting swept up in the story and the emotion of it, at least to some degree.
(11:00):
And the things that they might miss, the AI might catch.
It's like having that secondary set of ears listening in to try to find the best pathforward.
Because again, this is augmented intelligence.
That's really what we're using it for.
And we're using it to augment our own
mental intelligence as well, and our own mental awareness.
(11:20):
If you think about it in those terms, this thing is great.
We should use it.
Everyone should have access to it.
There shouldn't be a cost barrier to it.
Like, this should just be part of regular mental health.
uh The downside is that it actually is not cheap.
I mean, your mental health session with ChatGPT, from an overall resource utilization perspective, is probably quite high, because you're using a lot of GPU and LPU cycles.
(11:45):
em But also, it is even still relatively low cost, and they subsidize that by selling something of yours, which is your data and which is the information about you.
So as you're giving it more information, as you're telling it more about you, it's quite possible
it's building a profile on you.
(12:07):
And that profile could be, this person has mental health issues.
These are the things that affect them this way.
This is how you effectively target them with advertising.
These are the things you want to send to them so they can actually buy them and you can actually make money off of them.
Like this is all the nasty reality of living in an e-commerce environment.
And really we've gone from, you know, this idea of democratized capitalism into economic feudalism, because you wind up with these different kingdoms of relative ownership of these
(12:35):
data principalities.
You know, ChatGPT is one of those.
So is Amazon.
So is Microsoft.
So is Facebook.
These are all walled gardens of information sets that take these data pieces and try to make sense of them.
And the kingdoms themselves share and sell your data back and forth between each other.
So just be aware that this might be going out there, and that might be a thing.
(12:57):
And yeah.
A bunch of the articles I was reading were highlighting that Gemini has no HIPAA rules.
So whatever you tell it about your health conditions, your mental health, whatever, that's not locked in the doctor's safe.
You are sharing that willingly in a relatively public way.
So you certainly want to be careful.
(13:18):
But when it's like, hey, I'm feeling sad because I feel like a shit assistant coach on a softball team.
You know, there's relatively low risk there, other than I'm probably gonna get sold a lot more softball shit in my Facebook feed.
But otherwise...
Right, like, and that might not be a bad thing.
You might actually need that DeMarini LX bat for your daughter.
(13:41):
You might need that.
I keep seeing the rope bat a lot.
The rope bat is the one that I keep getting.
You might need to buy a $500 bat to make yourself better as a coach.
And if for some reason the coaching doesn't work, you can blame the bat.
I have like $2,000 in bats in my garage for my two kids playing softball.
Oh my God, I might have to come shopping, because we're looking for a new bat.
(14:05):
All right.
One of the things that I did read that many doctors, at least in the American Medical Association, were talking about, that was concerning, is the way that these AI tools sort
of gather all of this information.
So we used to just be Dr.
Google and read a bunch of links.
So now AI does that for us.
It reads all the links, combines all the information together.
(14:25):
I loved this one quote, because it talked about the stuff that gets cut out
of all that as it's sifting through and finding the most important information.
And it says, even if the source is appropriate, when some of these tools are trying to combine everything into its summary, it's often missing context clues, meaning it might
forget a negative.
This doctor says, it might forget to use the word not.
(14:46):
So a great example is somebody asking what to do for a kidney stone.
And Google AI told them to drink urine.
The guidance was probably to drink lots of fluids and then assess your urine output, not to actually
drink urine.
So it's important to verify the information that the AI tools generate for you.
Yeah, I mean, the first time drinking your own pee is probably not so bad.
(15:11):
But, you know, if you keep recycling it, I've seen enough movies on the topic that it could be problematic.
Right.
Pee on you?
Yeah, it's a tough one.
We um want to rely on these tools, right?
(15:34):
We also don't want to be solely relying on these tools.
It's a give and a take, it's a balance.
And honestly, it's like anything else.
You're trying to find the 51 % median marker.
You're trying to be better than at least half.
It's just this aggressive mediocrity to try to actually get a win and a victory in some of these spaces.
(15:55):
We're trying to find these pieces.
You bringing up it being a stopgap between you and being able to talk to your actual therapist,
I mean, that's a great idea.
um The other piece about this is that the way that you look at these data sets, and if you look at the information that's there, and you can start journaling and adding your journal
(16:16):
to your ChatGPT: hey, here's my journal for the last year.
Am I trending towards happiness, or am I trending towards something else?
What are you actually seeing when I go through this?
How is this reading?
Because really what you're talking about doing with journaling is creating a narrative.
And it's the narrative in this story that you're telling yourself over a protracted period of time.
(16:37):
And ChatGPT actually does do a really, really good job of looking at the intent focus of actual words.
And by tokenizing every statement that's inside of it, it can see if it looks like your intent is more or less positive or negative.
And there's a lot of variation in that, right?
Like you might sound more abrupt because you're doing more work and you don't have time to journal as much, or you're more tired.
(17:02):
Or a few days a week, you have a little bit of extra time, and you're going to make things 30 % longer because you're going to spend more time clicking on those things.
Or you've discovered something new, like voice to text.
And you're going to talk these things in.
These are all marker posts that require human analysis in the regular world to go through.
(17:23):
ChatGPT is enabling individuals to do this with their own data sets, as opposed to data experts, which is how we do it today.
So the idea of having these data scientists be able to go through and pull through massive amounts of information and then provide a summary and information and next steps and
actions and really analysis:
ChatGPT gives that to you as an individual, which is fucking rad.
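The journal-trend idea just described, "am I trending towards happiness?", can be sketched as a toy sentiment tally. A real model tokenizes and scores text far more cleverly than this; the tiny word lists and journal entries below are invented purely for illustration.

```python
# Toy journal-trend analysis: score each entry against a tiny hand-made
# word list, then compare the later half of the journal to the earlier
# half. Lexicon and entries are invented; a real model would do far more.
POSITIVE = {"grateful", "proud", "calm", "better", "connected"}
NEGATIVE = {"tired", "annihilated", "down", "frustrated", "stuck"}

entries = [
    "got annihilated at the game, feeling down and frustrated",
    "still tired but tried some new drills",
    "felt a little better after journaling",
    "proud of her pitching today, grateful and calm",
]

def score(text):
    """Positive word count minus negative word count."""
    words = text.lower().replace(",", " ").split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

scores = [score(e) for e in entries]

# Crude trend: average of the later half minus average of the earlier half.
half = len(scores) // 2
trend = sum(scores[half:]) / (len(scores) - half) - sum(scores[:half]) / half
print(scores, "trending up" if trend > 0 else "trending down or flat")
```

Even this crude tally shows the shape of the idea: the entries move from negative to positive, so the trend comes out upward.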
(17:46):
It's also terrifying, because there are some people that shouldn't have this kind of information analysis, because they're not going to be critical in the way that they
actually use it.
And they're not going to be their own safety net, and they're going to wind up becoming their own cautionary tale.
And there seems to be plenty of that that has occurred with other technology trends.
(18:07):
And if you look at the way that we're starting to uh kind of try to absorb and understand mental health uh as a discernible concept, not just within
the way that we interact with people in the real world, but the way that we interact with people online, and you start looking at how these things kind of mix together, the
(18:33):
dividing line between the actual value and benefit of the data that you're producing and putting online versus the data that you're actually producing in real space, that curve is
going to, or that ratio is going to, get wider.
Like your online data and your online presence and your online who-you-are, that's going to become
who all these virtual tools that you interact with think you are.
(18:55):
And when you approach things over time, it's gonna start using that as the bias lens of how it actually views you.
And over time, this shit's gonna make it into your doctor's office, and it's gonna make it into your mental health clinic office.
Like, it's going to happen, and it's gonna make it to your insurance company, and they're gonna start using it to do what insurance companies do: to say, you have a pre-existing
condition.
Like, you think you suck at softball, therefore obviously you would have eaten three extra donuts every day.
(19:22):
That's cool.
Yes.
Yeah, because they're gonna make up bullshit to make it justifiable. But this is just the reality of where we live, and we can either embrace it or we can try to shun it. But
no matter what, you're gonna be affected by it one way or the other, whether it's passively or actively. And I, for one, am gonna actively involve myself with it and fuck around with it,
(19:44):
because I think, at the end of the day, it's gonna at least have some value. And that value might be a positive for me, or at least an understanding of what the negatives look like,
so I can know what to avoid and how to make those pitfalls things that I'm not gonna walk into myself.
Yeah, I'm talking myself out of my own argument that I was about to make.
But the idea that companies are farming this information and using it for evil, that'snothing new.
(20:11):
We're just perhaps giving a little more detail and a little more information than we used to, where we used to ask a question and then search, click the first 10 links, and read
them.
Cool, now everyone knows what the 10 links are that I clicked, but they didn't necessarily know
what I was doing with that information as much as maybe they do now.
ah Your point, though, about the person who maybe can't get out of their own way and is going to screw themselves over in this process: this isn't going to help their case.
(20:37):
I was reading an article today about uh this Israeli study that was done to compare how AI diagnoses patients versus real actual doctors.
And this study was done, and then actual doctors reviewed the results of both side by side.
And they found that the AI recommendations rated as optimal in 77 % of cases, versus 67 % when it was an actual doctor diagnosing these people.
(21:01):
So literally, the patient would come into the office, go through this AI doctor process, and then from there go and talk to the doctor and share all that information again.
Same experience, one with a robot, one with a person.
The robot's got it right 77 % of the time.
The doctor's got it right 67 % of the time.
I bet a robot did that analysis.
I bet it did.
I absolutely bet it did.
(21:23):
It totally cheated.
ah But that's crazy to me that an independent doctor would look at both and find that,like, holy shit, it knew enough in those cases.
And I mean, I think it's also treating very common stuff.
It wasn't looking at user-specific medical records, stuff like that.
But that's pretty terrifying and awesome.
(21:47):
Yeah, and the more data you give it, the higher the likelihood that it's going to be accurate and correct.
I mean, it can actually look at visual data now.
There's uh Philips and all these other companies out there using AI and ML to go through and do image diagnostics.
(22:07):
They can actually try to track things down faster, which makes perfect sense, right?
Like, does this thing look like a tumor?
Yes, does this thing look like it's cancerous or not?
Is it moving in these directions?
How do these things change over time?
It can do a lot of those pieces.
And because it actually has better memory recall than the human brain, and it's more accurate, it will give you better contrasting ratios.
(22:29):
Because human memory fucking sucks.
And we know that.
And it's manipulable, malleable, based upon different situations, where AI memory is pretty atomic.
Like it stays intact.
And it can recall those things with a much greater level of accuracy because it canactually
open up the image and recheck again, where humans, they might be like, I remember this.
(22:49):
I don't need to reopen that image again and just be totally wrong.
So, I mean, and there's also the accessibility of it, like their ability, the AI's ability, to access information and pull it in. They don't have to open it up and run it through a visual
cortex.
They're not, you know, worried that the doctor changed their prescriptions.
So they see things differently.
Like there's all kinds of reasons these things are better, but it doesn't take away, um, the
(23:16):
value of having human review of this data.
So I think what we're going to find over time is that we're going to use this to become a much better sorting mechanism, a much better filtering mechanism, to try to get through
faster.
And what we're going to wind up with is homogenization of information and how things are actually being represented and understood. Which means, you know, Kevin, Doug, Bob, James
(23:42):
are going to be put into these buckets of things, and Kevin, Bob and Doug, you know, all have a cold, and James has what looks like the avian flu, but something new has come out and
nobody's going to track that, because they don't have a bucket for it, or they're going to have a bucket called other.
And they're going to try to keep these things as separated as possible, but it's always going to be about intent focus and how much we can get out of it.
(24:04):
And finding novelty, that's one thing that AI is actually shitty at.
Like,
it's good at finding things when they don't fit into a pattern.
It's not good at defining this thing with a new pattern once it breaks out of those pieces, because it's not making shit up in the same way.
And what we've discovered is that when we tell it, all right, go ahead, like turn up the p value really, really high.
(24:29):
And it starts hallucinating, which is what it does.
And it gets high and starts putting out responses.
It's wildly inaccurate.
And like, it's not like there's a...
Well, it doesn't have imagination.
Humans have imagination.
It does not have imagination.
It's connecting dots.
It's taking existing knowledge and connecting dots, where we're able to somehow see things that aren't there and identify that, oh, this is something new, and start to explain what
(24:58):
it is.
Well, and I would argue that it connects dots.
And when you tell it to hallucinate and you tell it to be creative, it doesn't have the ability to go, oh, I'm going to try a little bit to be creative and I'm going to keep most
of this stuff in check.
It just goes, OK, it just starts throwing shit at it.
(25:21):
Like it throws a wig on and a clown nose and some big floppy shoes and like, now I guess we're going to do this.
And like runs out to go try to make this thing happen.
That's kind
of the difference.
You can tell a human, hey, 98 % of what I do is this pattern, and it's solid, and these things don't go through.
But this 2 % that I'm going to experiment with and play with, and try to use some logic and reasoning around to try to see if this thing's legit: AI doesn't really have that kind of
(25:50):
nuance just yet.
But it's getting there.
And eventually, it will.
Now, whether it's going to completely and totally replace the human experience?
No, because it's not bound by the same physical inputs and outputs that we are. Like, it doesn't have a visual cortex that may or may not make me feel like I'm warm because I see
(26:12):
the sun, even though the temperature hasn't changed and no sunlight's hitting me.
But I suddenly have a psychosomatic response where I'm feeling different.
Like, it doesn't do those things.
It doesn't have those same kind of interactions.
It hasn't had the same type of neural network layout. Like, we're just different.
But that doesn't mean it's not a really, really good tool to optimize things and try to make things...
Yeah, so I think the lesson here is uh find the balance between human interaction and some of this, and how it can all work together.
(26:40):
Don't completely replace the human interaction with your doctor, your therapist, your friend, for God's sake. ah But it's certainly a valuable tool.
And one that I know I'm gonna use. You have to go? I have an appointment with my online therapist, so I should go as well.
So a good conversation.
Thanks so much for listening.
We'll be back in about a week at thefitmass.com.
(27:02):
If you found any value in this conversation at all, please do share it with somebody who may benefit from it.
The links to do so are at thefitmass.com.
We'll see you in about a week.
Thanks, folks.