
June 11, 2025 • 23 mins

My productivity hack: https://www.magicmind.com/FITMESS20 Use my code FITMESS20 for 20% off #magicmind

----

Will robots decide if you keep your nuts based on cancer predictions?

The world of predictive healthcare is here, and it's not the helpful crystal ball we hoped for. Insurance companies are already using AI to analyze your genetic data, social media posts, and digital footprints to predict everything from mental health crises to testicular cancer. The catch? They're not using this information to help you - they're using it to deny coverage and shift financial responsibility back to you when predictions go wrong.

In this episode:

  • Learn how AI is currently being used to predict your health outcomes
  • Understand the financial and personal risks of genetic data sharing
  • Discover practical steps to protect your data and maintain autonomy

Listen to this episode to understand what's at stake before you become a statistic in someone else's algorithm.

Topics Discussed:

  • How genetic testing companies are selling your DNA data to healthcare analytics firms
  • The nightmare scenario of preventive surgery based on AI predictions with moderate confidence levels
  • Why American healthcare profits are driving global surveillance standards
  • How social media monitoring can predict mental health episodes before they happen
  • The reality of insurance companies using AI to deny coverage based on "prior knowledge"
  • Brain-computer interfaces and the subscription model for your thoughts (Black Mirror style)
  • GDPR vs. American data protection laws and what rights you actually have
  • Why HIPAA doesn't protect you from insurance company data mining
  • The difference between humanitarian AI tools and profit-driven surveillance systems
  • Practical steps to minimize your digital health footprint starting today

----

MORE FROM THE FIT MESS:

Connect with us on Threads, Twitter, Instagram, Facebook, and TikTok

Subscribe to The Fit Mess on YouTube

Join our community in the Fit Mess Facebook group

----


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:05):
This is The Fit Mess.
We talk about AI and health and wellness.
Today we're going to talk about what the robots might be able to predict about your mental health and your physical health in the future, how that could impact the decisions
you make to avoid the possible outcomes predicted by robots, and ultimately who's responsible for your decisions based on what the robots tell you to do.

(00:27):
The future is now.
Minority Report, it's all coming true.
The science fiction is here.
We're living in it.
We're watching it unfold in front of us.
And the scenario you just described, Jason, before we pressed record: literally, our nuts are on the line here.
Yeah, predictive analytics.
So going through and taking the data sets out there to go through and try to find patterns that let you go through and say, based upon these input variables, it looks like I am

(00:58):
highly susceptible to these types of disease, ailments, those types of things.
And we already do this today, right?
We already do it where we go through and we look at someone's genetic profile and we say, you're more susceptible
to this kind of cancer risk than others, because we've used large language models to go through and kind of map these pieces and take the human genome itself, sorry, not

(01:19):
language models, but machine learning, to go through and actually understand these pieces in context.
So you can already go to, like, any of the genetic data services out there and say, you know, tell me, am I more susceptible to cancer or Alzheimer's or dementia or a million
other diseases?
And invariably it comes back with a score.

(01:40):
And that score is a certain percentage.
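As a rough sketch of what sits behind a score like that: many risk models reduce to a weighted sum of genetic markers squashed into a percentage. Everything below, the variant names, weights, and bias, is invented for illustration; it is not how any particular service computes its numbers.

```python
# Hypothetical sketch of a genetic risk score: a weighted sum of
# 0/1 variant indicators pushed through a logistic function.
# Variant IDs, weights, and bias are all made up for illustration.
import math

def risk_score(variants: dict, weights: dict, bias: float) -> float:
    """Return a 0-100 risk percentage from binary variant indicators."""
    z = bias + sum(weights[v] * present for v, present in variants.items())
    return 100 / (1 + math.exp(-z))  # logistic squash to a percentage

weights = {"rsA": 0.8, "rsB": 0.3, "rsC": -0.2}  # fabricated effect sizes
variants = {"rsA": 1, "rsB": 0, "rsC": 1}        # fabricated test results
print(f"{risk_score(variants, weights, bias=-1.5):.1f}%")  # ~28.9%
```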
And if you don't share this data with anybody and you just keep it to yourself, OK, well, you know what your risks are and you understand those things in context.
However, 23andMe being bought by what is basically a healthcare analytics company means it is going to take that data and probably not use it to do anything good for you.

(02:02):
They will probably use it to deny medical care and to make it difficult for you to get treatment because you had prior awareness
of a known potential bad outcome.
Let's say it said you're highly susceptible to lung cancer, and you decided to have a cigar, and an AI catches you on video somewhere and it says, oh, well, you broke the rules.

(02:26):
You know, this might happen.
So you are in trouble. Or you chose to live in an area that's got higher air pollution because it's close to your work.
Again, this comes down to the concept that your personal responsibility is no longer just your business.
And that's kind of the scary part.
And at the end of the day, who holds accountability?

(02:47):
So I mean, what we were talking about before was, let's say you're at high risk for testicular cancer, and the prescribed treatment for testicular cancer is to remove your
testicles.
And let's say that balancing act is like 51%.
You're 51% more likely to get cancer than the average person

(03:09):
if you keep your nuts, okay.
Well, I mean, do I lower my odds by only keeping one?
Do I have to remove both?
And by the way, what happens if I choose not to remove them and then I do get the cancer?
Am I now on the hook for paying all my medical bills that are associated with that, and then I wind up losing my nuts anyway? Or if I cut my nuts off for no good reason

(03:35):
And then 10 years later, they come back and they go, our analysis was entirely wrong.
Your genetic code for this does not match that.
You are actually regular like everybody else.
Sorry about the no nuts.
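It's worth pinning down what "51% more likely" actually means, because it is a relative increase, not a 51% chance of cancer. A minimal worked example, assuming a round illustrative baseline rather than any clinical figure:

```python
# "51% more likely than the average person" is relative risk, not an
# absolute probability. The baseline below is an assumed round number
# for illustration, not a clinical statistic.
baseline = 0.004            # assumed average lifetime risk: 0.4%
relative_increase = 0.51    # the "51% more likely" from the scenario

absolute = baseline * (1 + relative_increase)
print(f"{absolute:.2%}")    # 0.60% - elevated, but still well under 1%
```

Framed one way the same prediction sounds alarming, framed the other it sounds trivial, which is exactly the problem with acting on it.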
I mean, this is the kind of horseshit that we're going to have to figure out how to deal with, because we're giving a lot of respect and we're paying a lot of attention to

(04:00):
these artificial intelligence sources.
And companies are using them
in ways to block you from getting access to things or to entice you to do certain things and follow certain patterns and behaviors.
And they're trying to do it because your medical insurance companies don't make their money by paying out on your services.

(04:20):
They make their money by collecting your premiums and not paying out.
Well, that's not just going to be true for the U.S.
like with our privatized healthcare system.
That's going to start being true for everybody, because all of these things cost extra money.
And at what point do
countries start saying, well, we simply can't afford to do these things anymore because of X, Y, and Z, or we know the survivability rate of these things with comorbid

(04:44):
factors is much lower.
So we're not even going to bother to treat these pieces.
Like, I think we're getting to those kinds of places, where before it took a human being to go through and sort this data and try to understand your charts and context and pull all
this data in and use those predictions to go through and say, well, it seems like
you're more highly susceptible to these types of things.

(05:06):
So maybe you should change your behavior.
Where if a company is using AI to do this, and instead of going to a naturopath and getting all this information, understanding these things in context, the artificial
path goes through and tracks all these data sets and puts them into context that makes sense for the end operator, i.e.
the insurance company, that's a whole different ballgame.

(05:29):
And we're heading there.
They're going to use it in that kind of fashion.
The next question becomes, what do I do with this information?
And how do I not make myself more panicked and more scared and more susceptible to stress-related causes of death or anxiety because of this?
And that's, I think that's the part we should chat about.

(05:52):
Well, that's the thing that's interesting.
I was looking at a study just before we jumped on here talking about how this can play into mental health and the different patterns and different things that can be detected to
help identify problems before they occur.
Right.
So, like, the things that you are typing online, whatever you're saying on social media, are those going to be offering clues to some data source that will help indicate whether or

(06:13):
not there is a likelihood for developing anxiety or depression?
And then does that then lead to a better treatment outcome?
I mean, that's what you described is
honestly the most likely, but also worst case scenario.
But there is a reality out there, I have to believe, where this actually gets used for good.
And we see on the horizon these bad things that could happen to somebody, and some, you know, humanitarian, some good Samaritan

(06:41):
sees this and goes, here's how we stop this before it becomes a problem.
Let's move you out of this region.
Let's get you to quit smoking.
Let's go.
I mean, a lot of this, you know, a lot of these analogies I'm using, human beings do now.
Hey, dummy, don't smoke.
As it turns out, it's bad for you.
We have a bunch of data that backs it up.
The robots will be able to pull all that together more quickly
and see things we don't see. They will be able to see what you're doing online, other sorts of digital footprints that you're leaving, that will leave these clues that could

(07:08):
potentially lead to a better outcome for you.
And I'm curious too, because the whole time you were talking there, I was thinking, this is very true for the American healthcare system, because it is 100% for profit.
But I live in Canada, where it is much more of a socialized system.
And if they can find a way to go, hey, let's cut costs.
because we can actually prevent a lot of these things from happening rather than waiting until the fire is raging and there's no way to put it out.

(07:34):
It could, I think. Knowing very little about the way insurance companies work and the way all this stuff gets sorted out, it seems like if we can prevent bad things from happening,
we can also save a lot of money in the long run.
Right.
So the question becomes which motivation has higher precedence in the operator.
And if you're talking about governments, governments are supposed to be there for their people and to make those pieces actually work in a typical democracy.

(08:00):
And I think Canada has one of those.
Right.
And I think most of Western Europe has one of those.
The U.S.
is not. The U.S.'s primary motivation is actually profits and companies, and companies are people with extra,
extra special rights, and they're elevated above regular people.

(08:20):
And that's why so many people have made themselves corporations, wrapped themselves around this piece, because that gives them those types of inalienable rights
that corporations have that human beings don't.
It's fucked up.
It's weird.
It's horseshit.
It's some stupid fucking mind game.
And at the end of the day, it's the reality of the people that live in the U.S. and in our for-profit health care system.

(08:43):
So
I do think what winds up happening is the U.S. winds up affecting everybody's healthcare downstream, because we are one of the largest providers and builders of medical care record
systems, medical compliance systems, all of these different component pieces that people use to make decisions.

(09:04):
And because that's who we are, because that's what we provide and that's what we produce.
As much as you want to think I'm safe in Canada, I don't think you are.
I think our ability to fuck things up
in the U.S. is highly contagious, and our blast radius is global when we do make those kinds of mistakes.
And I think we're running headstrong into this one and we're going to do that.
Now, don't get me wrong.

(09:25):
I think people and some companies, some organizations, will find a way to use AI in ways that aren't nefarious, aren't shitty, to do the things that you're talking about.
Like, Hey, let's have you move.
Let's have you get away from these power lines.
Let's have you do this.
Let's have you do that.
But the cost basis to be able to do something like that in the U.S. can be incredibly prohibitive, and you may not actually have access to those pieces.

(09:47):
So Obamacare, the ACA comes in and says, everybody has to have healthcare.
Well, it wouldn't be that hard for the ACA to also say everyone's DNA needs to be on file.
And I think we are in a particular administration that is highly susceptible to implementing that as an executive order.
And it doesn't matter if another administration comes in and says, get rid of all of it.

(10:10):
It's done.
Tough shit, dude. Once you're in, you're in.
Like, it's a fucking Chinese finger trap, for lack of a better term.
When you're in it, you're in it, there's no way to wiggle out.
You're just kind of stuck.
And yeah, you can slowly try to move yourself out.
Yes, I understand it might not be the best analogy.
Maybe the better analogy is Roach Hotel.

(10:30):
We're in a Roach Hotel and we're stuck and we're glued in there and it's a big fucking party of genetic mayhem.
And once you check in, you don't check out.
And I think that's what our reality is, especially in the US.
Well, not to pull a talking point from the conservative playbook or anything, but this does open up a door for private industry to create tools, you know, going back to the

(10:54):
mental health analogy, if there's something that you can attach to your Facebook account.
And every time you have reached the 100th post on why I hate Donald Trump, something can jump in and say, hey, maybe step away, maybe take a break from the news for a week.
It might do you some good.
Right.
There's ways that, like,
removing the insurance company altogether or removing even the medical industry.

(11:17):
There are some tools that I think could be implemented that will be able to monitor.
God, I'm saying this out loud.
I don't even know.
I don't think I even agree with myself, but you can monitor your behavior and help suggest things that could make your life better.
Because if you fall into these patterns of constantly whining to Facebook or constantly interacting with your AI tool as your therapist, and it goes, you know,

(11:42):
I'm seeing a pattern here.
You come to me for a lot of stuff.
Have you talked to a person?
Have you gone outside lately?
Maybe just give it a shot.
I just think the optimist in me wants to believe there's a way that we can use this and, honestly, just try to help people.

(12:03):
And I know it's bullshit, because I work with enough people that I know that we live in a system that is designed to just make money from people,
to get money from your pockets into my pockets in the most efficient way possible.
And the actual helping of people seems to be something that nobody seems to give a shit about anymore.
Extracting profit from pain.
AI is like a refinement tool to help make those things better and faster.

(12:28):
Automation puts those into place in a cold, sterile way.
Adding these pieces together and then using inference models to go through and, you know, try to predict future crime or future health or future death, whatever the fuck you
want to call it.
That's reality.

(12:48):
Yeah.
We're already using it today.
We used it in expert systems before we had AI with human beings doing it.
Now we're going to do it at scale and apply the same type of thing to every use case.
Where before you'd have to go through and put a lot of extra work into it and have somebody look at all these records.
It's really fucking easy now.
So we'll just use it because it's there, it's available, and it's going to happen.

(13:09):
But I do like the idea that there's an opportunity for private companies to come in and help make some of these things better.
The Facebook example is a really good one.
Like, I've seen X number of posts, are you putting these pieces in this way?
And that might suggest that, you know, perhaps you need to take this course of action, or move these things in this direction.

(13:31):
That's a great idea.
Those thresholds, those pieces, those components are highly up to the individual to make those decisions.
So this is a checks and balances problem, where if you really want to do this right, you want an Overwatch
over your body, your mind, your mental health, and all those.
And you can predict it and put it into play and say, here's what my thresholds are, and then say, go.

(13:54):
That gives you that kind of autonomy over your own health and your own system pieces.
I think that's great.
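As a sketch of what that user-owned Overwatch could look like: here the user, not an insurer or a platform, defines both the thresholds and the wording of the nudges. All metric names and numbers are hypothetical.

```python
# Minimal sketch of a self-configured "Overwatch": the user sets the
# thresholds and writes their own nudges. Metrics here are hypothetical.
from dataclasses import dataclass

@dataclass
class Threshold:
    metric: str   # e.g. "angry_posts_per_day"
    limit: float  # the user's own line in the sand
    nudge: str    # the suggestion the user wrote for themselves

def check(thresholds: list, readings: dict) -> list:
    """Return the user's nudges for every metric over its limit."""
    return [t.nudge for t in thresholds
            if readings.get(t.metric, 0) > t.limit]

rules = [
    Threshold("angry_posts_per_day", 10, "Step away from the feed for a day."),
    Threshold("screen_hours", 9, "Go outside. Talk to a person."),
]
print(check(rules, {"angry_posts_per_day": 14, "screen_hours": 6}))
# -> ['Step away from the feed for a day.']
```

The point of the design is that the limits and the messages live with the individual, not with whoever runs the platform.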
I think 90% of the people on the planet would not use it that way.
I think they'd press the easy button.
And then the first thing this tool says is, hey, buddy, maybe we should disconnect from the screen and take a nap for two hours.
Most people go, fuck you and like turn it off.

(14:16):
They're gonna be like, you don't tell me what to do, I'm the boss of me.
Because that's what we are.
Like when we get in these states of mania, which happens all the time when we're stressed out and going too hard, we don't know how to let go
and just relax, because we're not in that Buddhist state of enlightenment and wokeness where we can go, none of this actually matters.

(14:41):
Fuck it, and push it away, because we don't do a good job of treating people, or of rewarding people for,
well, not self-care, but stoic calmness. We reward outlandish behavior.
Communication: people lean in on communication when it's variable over a monotone rate, which means
either super, super high or super, super low.

(15:09):
And if you do something in the middle, nobody's going to pay attention to you.
Like, we are on a podcast and I know that most people will not watch this, but I am making hand signs and doing shit with my hands because I'm trying to help people understand the
emphasis of what's going on.
That's just my speech pattern.
Like we're ingrained to do this and

(15:31):
Getting somebody's attention and being calm and being rational and having these things go through this way is not something we're good at.
So we make snap emotional decisions where the artificial intelligence does not do that.
It is cold, calculated, monotone.
This is how things are.
This is the result.

(15:51):
Here's the other problem with my own idea, right?
the Overwatch tool that you now have plugged into Facebook, it's going to go from a humanitarian tool to making suggestions that are sponsored by Nike.
You know, like, hey, what if you went outside and went for a run?
Maybe put on some of these new Nikes and see how they feel.

(16:12):
All of a sudden, it's just another advertising mechanism.
Did you watch the first episode of Black Mirror, the new season?
Literally that.
So a woman gets a brain... sorry for the spoiler alert coming up. If you don't want to learn about the new season of Black Mirror, season seven, episode one, don't listen.

(16:32):
But a woman has a brain problem and they have to replace part of her brain with some artificial components.
And the way that it does that,
is it communicates to a back end tower in a data center via cell towers.
And if she gets out of range of the cell tower, her brain just shuts off and she falls asleep.
If she wants to use the service, they have various different subscription levels.

(16:58):
And the entry level is like something ridiculous, like $100 a month, like super cheap,right?
And these people are struggling to get by.
She's a school teacher.
He's in construction, and they do a price change because they're coming out of beta and they've upgraded to a new network.

(17:19):
Well, now they want 300 bucks a month and then it's 700 and then it's 900.
And suddenly they just can't afford it.
And the way that they get around that is they say, well, instead of that, you can haveads.
And she just randomly starts spouting off ads that she's talking about.
She'll be talking about a topic and it will pick up on it.
And then she'll go, did you know?
Blah, blah, blah.

(17:56):
Game over, man, game over.
Brain-computer interfaces.
And when these brain-computer interfaces go down, you're going to want a really, really good firewall.
And you're going to want to not accept any of the end user license agreements.

(18:18):
And you're going to want to make sure that somebody can remove it.
And I don't know what's going to happen.
So then I'll take it back to the question that you asked.
What do we do about it?
We've resumed our doom and gloom talk about the inevitable end of society thanks to AI.
What do we do to protect ourselves?
That's true, it's Wednesday.

(18:39):
The doom and gloom one is on hump day.
The happiness one is on Friday, so we go into the weekend feeling good.
This is the "shit, I can't believe this, I've got to grind through the rest of the week" one.
And then Friday, it's the joyous one.
So what do we do about it?
So short of, like, abandoning society and moving to a cabin in the woods, which I think is what you could actually do about it, and which would probably solve 99.9% of all of

(19:01):
the first world problems that most of us experience these days.
Check your data.
Don't accept any end user license agreement on anything.
Disconnect from social media.
Stop listening to podcasts and dumb-dumbs like us who actually don't fucking know anything,
or who are just here talking about summaries of information that we've pulled in.

(19:22):
Which, by the way, I think we're not just dumb-dumbs.
But realistically speaking, looking at your social media footprint is probably a really good starting point.
So if what you're doing
is out there rage blasting or rage tweeting, whatever you're going to do on your social media platforms.
Maybe stop that.

(19:44):
And that's good advice in general, because things are watching that, and they will start to correlate those pieces and put those pieces into play.
Next, if you have subscribed to a DNA service of some kind that's tracking your information and has good information on you, start looking into ways to go through and ask
to get your data back and have your data destroyed.

(20:05):
Now, if you're in Europe, GDPR requires them to do that.
In the U.S., there's no federal data protection right around that, but if you live in California, there is.
And other states are beginning to adopt these laws as well.
I don't know what Canada's digital data footprint laws are, but it's worth investigating.
But lots of countries are starting to do this because they want people to have autonomyover their data.

(20:29):
So once you get that, then you can start taking those pieces out
and try to protect it as best as you can.
And I think that's how these things are going to make sense. For people that have to have medical insurance, where you have to give up your end user rights to your data,
that's a whole different scenario.
And HIPAA is supposed to protect this, but HIPAA only applies to the medical community.

(20:51):
And oddly enough, insurance companies are not considered to be part of the medical community, even though they have doctors
and even though they can look at your medical records. In those scenarios, they don't necessarily have to
follow all of the HIPAA rules when they take your data.
Now, they might have to go through and anonymize certain data sets, pull those pieces apart, back them out.
But the compliance legislation around it is not clear.

(21:15):
And all it really is is the handling of records and who you are and are not allowed to give them to.
But as an individual, if you sign up for an insurance company, part of their license agreement says that you have to give them full access to your medical information.
So.
This is one of those things where you're just going to have to be very, very careful with the input sources that you're allowing people to look at.

(21:36):
And the best way for you to do that, again, is to move out to a cabin in the woods.
Short of that, disconnect as much as possible, and then sanitize your data streams.
Much of what you just said is terrifying, but I'm going to echo it: look at what you're doing on social media, not only for all of these reasons, but also because

(21:59):
you're not going to change anyone's mind. There is no Facebook post in the history of Facebook that ever made anyone go, God, you know, I never thought of it that way. Huh, you're right. So
just stop, because you're not helping yourself. You're terrorizing yourself, you're terrorizing all your friends. Just stop with the rage posting. Let's try and have
some fun, people. Come on. What's wrong with you? All right.

(22:19):
We're going to have some fun.
We just had some fun.
We're going to have some more.
We're going to wrap this one up.
Thanks so much for watching this on YouTube or listening to this on your favorite podcast player.
Come back in about a week to get another one.
It's also going to be at our website, thefitmess.com.
And yeah, we'll see you there in about a week.
Thanks so much.
See you soon.
Bye bye.