
September 22, 2025 77 mins
In this episode of The Jimmy Rex Show, Jimmy sits down with entrepreneur and innovator Kirk Oimet, a visionary in technology, AI, and health. Kirk shares his remarkable journey from his early tech ventures and Snapchat exit to building Phi Health, a company using cutting-edge science and artificial intelligence to transform the way we think about supplements and personal wellness.

The conversation begins with Kirk demoing the Stack, Phi Health’s beautifully designed 28-day vitamin system that uses micro-encapsulation to time nutrient release and improve absorption. From there, Jimmy and Kirk explore the broader vision: integrating wearables and real-time data into personalized health recommendations powered by AI.

As the episode unfolds, the discussion widens to cover the future of artificial intelligence, robotics, and society itself—touching on topics like GPT-powered humanoids, the ethics of alignment, universal basic income, faith, and what it means to thrive in a world where machines can do much of what humans once did. Grounded, insightful, and forward-looking, this episode offers both practical lessons in entrepreneurship and big-picture reflections on the future of humanity.

00:00:00 Introduction
00:01:23 Phi Health demo: The Stack & design philosophy
00:05:12 Micro-encapsulation science & timed release
00:12:40 AI integration: the phi app & personalization
00:20:40 Understanding AI: models, training & breakthroughs
00:34:40 Work, disruption & the future of jobs
00:41:40 Ethics, empathy & alignment challenges
00:49:54 Robotics, humanoids & societal impacts
00:57:54 AI for health: solving real-world problems
01:02:24 Faith, philosophy & visions of the future
01:15:21 Where to hear from Kirk about the future
01:16:42 Outro

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Hello, welcome to another episode of The Jimmy Rex Show, and today on the podcast we sit down with my good friend Mr. Kirk Oimet. He is back on the podcast because he is doing big things, you guys. He sold his app to Snapchat when he was in college for forty five million dollars, but what he's doing now has me ten times more excited. And full disclosure, I am an investor, because this is something that's going to change the world. Kirk is working with a company that

(00:25):
is called Phi Health, and they are, basically, an AI-driven health company that's just changing the game for supplements and health. They're already onto their second round of funding for this venture, and it's just something where I wanted to sit down and talk to Kirk, because he's one of the smartest humans I know when it comes to these kinds of things. He's the first guy that I ever talked to when I was learning about AI, the first person that ever introduced

(00:45):
me to bitcoin. This is a dude that is on the cutting edge, and we go deep on all these subjects here on the podcast. So without further ado, let's get to the show with Kirk Oimet. And today's podcast is brought to you by Bucked Up protein. These cans of protein are a hundred calories, twenty five grams of protein. If you're on a diet, or you're trying to get in shape, or you just need something to have as

(01:06):
a good morning drink to get you started right and hit your macros, it is the Bucked Up protein drink. You can pick those up anywhere Bucked Up is sold. Dude, you come bearing gifts. Thanks. So what am I looking

(01:27):
at here exactly?

Speaker 2 (01:28):
So you've got our initial products that we've created, Okay.
So the first thing that you have is STACK, which
is twenty eight day and twenty eight night sets of vitamins.
So the day stack is focused on energy, immunity, like
all the core things that your body needs just to

(01:49):
function throughout the day.

Speaker 1 (01:50):
Okay, and this is kind of generic. Then this isn't
the AI like customized to the person.

Speaker 2 (01:55):
This is the baseline formulation. So this is our starting point, okay. And so you have STACK there. Then you also have our trays, the Origami trays and the Bento trays. And these are just, we wanted to make something, because most of the time when you have vitamins, they're in the cabinet and out of sight, out of mind.

Speaker 1 (02:15):
I mean, we talked about this.

Speaker 2 (02:16):
You're going to forget to take them. You're gonna ignore them.
And so we thought, if we can make something where
they're like out and they look good, it'll help us
actually remember to take them.

Speaker 1 (02:26):
And so you know that's interesting because like at my house,
I have my pills on my counter or else I'll
forget them, but when company comes over, I put them
away because they look like shit just having pills everywhere.

Speaker 2 (02:36):
Yeah, so when you open these up and like you
lay them out, you're gonna be like, Okay, this is
like kind of pleasant to look at.

Speaker 1 (02:41):
Okay, I like it. Yeah, pleasant. I mean, look, I feel like I'm opening some kind of Apple product here. And then, yeah, so how does the AI play into all this? Tell us a little, get us started on that. How did this start? Because, I mean, you were my first friend ever talking about it, it's been five or six years. You probably were a big investor in all the AI companies like Nvidia and all those, I'm guessing, early

(03:03):
on. You've just been kind of a step ahead of it. I mean, you came on with my men's group probably four years ago and we talked about AI, and then I went to your house, if you remember that, and we made a list of like the three hundred qualities I was looking for in a woman, and you typed it into AI and it spit out what she would look like and act like and all these things.

(03:24):
It was pretty friggin funny. But that was the first experience I really had with AI. Oh, that's cool looking, dude.

Speaker 2 (03:30):
Yeah, so you.

Speaker 1 (03:32):
put your pills in here, and that makes it look nice.

Speaker 2 (03:35):
Jimmy, go like this real quick? Are we going to edit?
Are we editing this show down?

Speaker 1 (03:39):
Yeah?

Speaker 2 (03:39):
A booger. But it's totally fine. We'll edit it down and we'll definitely...

Speaker 1 (03:42):
Edit this out. It's fine. We can just put it
on the screen.

Speaker 2 (03:45):
One hair. I'll open this and show you too. Do you have a video editor that will review it? Well, yeah, that's what he does. Okay, cool. So we're not high stress here. We can... Yeah, you can.

Speaker 1 (03:55):
Say whatever, and if we need to go edit it later,
we can. I mean, it's more work for him, but
it's fine. Yeah, it's a job.

Speaker 2 (04:01):
I'll give him a clean cut.

Speaker 1 (04:02):
Then okay, we got here.

Speaker 2 (04:04):
Okay, so this is Stack, and when you open it up, what I wanted for the packaging experience was, I just wanted it to feel very pure and pristine.

Speaker 1 (04:14):
It's very, very Apple. Yeah.

Speaker 2 (04:16):
So like, when I give people Stack, it's fun. They're like, hey, is this an iPhone that you're giving me? I'm like, no, it's your vitamins.

Speaker 1 (04:21):
Well, what's interesting, I mean, you were partners with Garrett on your app, which was the first company you guys sold. And when I interviewed Garrett about it, he mentioned how that app already existed a hundred times over, but nobody had made it look good. And so it was actually the design, the graphic design, that made it so popular. And so people underestimate how much something looking

(04:45):
good affects people's perception of it. Well, something like this, it's really cool. You put the effort in to make it look good. Yeah.

Speaker 2 (04:53):
I grabbed the day tray, the whole tray out, okay,
and then you can put it into the riga tray
there and yeah. Now you can set something like that
on your desk underneath your monitor and take and then
you just take one a day, and then if you
grab one, grab one of the capsules out. So inside
of there are two thousand micropills. But I want you
before you like dive in there, put the cat back on,

(05:17):
and then just feel the capsule with your thumb.

Speaker 1 (05:19):
Yeah, it's got that, like, soft plastic feel.

Speaker 2 (05:22):
And so, what I did there: my favorite fork of all time is the Chipotle fork. So we sent the fork as a sample for the texture finish. It's just this very soft plastic, and if you're running into your day and you haven't had time to take your vitamins, you can just grab one of those and throw it

(05:42):
in your pocket. And it's just kind of a nice
thing to have.

Speaker 1 (05:44):
So do I just swallow these, or, like, chew them, or what?

Speaker 2 (05:46):
So each of them, they're they're a half a millimeter
in diameter, and each one is like a little jawbreaker
or like a little snowball, And on the inside is
the vitamins and minerals that we're trying to release, So
vitamin ab, cd K, magnesium, calcium, everything is in the
nutrition facts. But when you swallow those, it'll go You're

(06:10):
now going to go on a journey, because we've used pharmaceutical coatings to control exactly when things release. Really? So, in the nineties there was a scientist who was trying to understand how vitamin powders could be absorbed, and so he was giving powders to mice in their water, and then he was drawing their blood

(06:33):
to try to figure out what percentage of it was actually getting into their bloodstream, and he was finding that essentially none of it was.

Speaker 1 (06:40):
Well that's what people have always said about multivitamins, is
like they don't actually get into your blood, they don't
actually do anything.

Speaker 2 (06:45):
Yeah. So he was measuring, and he's like, hey, I'm not getting anything. So he created a technique in the nineties called micro-encapsulation. And the idea, and I went out to Wisconsin, it was kind of pioneered at the University of Wisconsin, okay, and essentially what it is, is you take whatever you want to be absorbed into the bloodstream and you wrap it in this extremely pure plant fiber coating called microcrystalline cellulose.

(07:08):
And the thickness of the coating determines when it will release. So for pharmaceuticals, they've been using this for decades, because

Speaker 1 (07:17):
I mean, I remember watching the documentary about Purdue Pharmaceuticals and OxyContin. That was like the whole thing: it was supposed to slow-release over time, and then people just started crushing the pills up and it became a real problem, right?

Speaker 2 (07:28):
I remember watching that documentary too. So you want to make something last long, but more importantly, you want to get it into the small intestine intact so that it can be absorbed into your bloodstream. So, for example, let's take the B vitamins. If you take a regular B vitamin pill and you swallow that, there's this concept called pharmacokinetics, which is how quickly something will move through

(07:50):
your system. And a B vitamin in a pressed powder is going to be in and out of your body in ninety minutes. So you'll pee neon after taking a high dose of vitamin B.

Speaker 1 (08:02):
Got it?

Speaker 2 (08:02):
And when we micro-encapsulate it, we actually time the B vitamins to be released five hours after you ingest, and then they slow-release for four hours. So if you take your day stack in the morning, we don't really have anything that's going to be going on in the morning, but at around one p.m. you'll start

(08:22):
getting a slow release, a slow drip, of the core B vitamins, and then also a very expensive form of magnesium called magnesium L-threonate. And over that four-hour period, from one to five p.m., most people are coming back and saying, I've never felt this clean of energy in the afternoon, or this much focus. So that's something for you to try and just

(08:45):
kind of pay attention to. Cool. We also have the night stack.

Speaker 1 (08:48):
If I throw this down right now, do I just swallow them all?

Speaker 2 (08:51):
You can, but you'd better get water. So we finished the top coat of each of the micropills, there's two thousand in there, with sodium alginate, which is what boba is made from, and so it gets really slippery when it gets wet. But if you pour that thing into your mouth without water, it's going to suck all the water out of your mouth. Got it. So you want to have a sip of your drink first,

(09:12):
and then you can shoot it. Or, I would just pour a tiny bit into your mouth first, just to get familiar with it, and then you can wash it down with whatever. My favorite part about it is, a lot of people really struggle with... You don't want to chew them? No, you don't want to chew, it ruins the release, yes, and it's so tempting

(09:34):
to chew. But yeah, you can just wash them down with water or whatever.

Speaker 1 (09:38):
Throw the whole thing down.

Speaker 2 (09:39):
Here, do you want to? I'd get a little bit of water in your mouth first and then throw the whole thing down. It's the equivalent of taking about twenty two pills. And so you can...

Speaker 1 (09:50):
Wow. The thing that I like is, I have a small throat, I choke all the time. Yeah, I've had four surgeries on my throat, and so for me this is fantastic. I've had several pills that I tried to take at one time, almost killed myself, and threw the rest away.

Speaker 2 (10:05):
You know, watching you do that for the first time, it was fairly effortless. And that is something that I'm really proud of, because before we created this, I had my little pill containers and I had a little baggy, and I would have to, you know, count out all my different pills. And with this, it's so easy to just pop it back. And now, I think what you want to do

(10:28):
is just pay attention to how you feel around five hours from now. The timings that we have are so precise. And I have a little bit of a fun story on the night stack. So we went through and we looked at decades of vitamin research, and one of the things that we found is there's a lot of really good information about the vitamins as a result of studying third world countries and nutrient

(10:50):
deficiencies in those countries, and so we actually know quite a bit about the core vitamins and minerals. And that's just what we put into Stack in the beginning. So, if it has a daily value established, like, hey, you need this, and you look at a nutrition label and see, oh, it has this percentage of this vitamin or mineral, that's established because if you're lacking

(11:12):
that consistently in your diet, you could end up getting some type of nutrient deficiency disease. So, for example, if you're lacking vitamin C, you'll get scurvy. If you're lacking, well...

Speaker 1 (11:21):
During COVID, for people that didn't have vitamin D, that was like the number one reason they were getting sick, a lack of that vitamin.

Speaker 2 (11:29):
Well, the thing is, you need vitamin D, you need zinc, you need all of these things. And at the end of the day, as you're bringing these things into your body, your body is creating all the different parts of your immune system, and these are essentially the building blocks for all the core things that your cells need to function. Yeah. And so with the night stack, we were like, all right, we have twenty two components

(11:51):
and we can control exactly when they're released. So when should we release them, and why? And there's decades of research on this, but one of the core things is, if anyone is ever anemic, it means that they're lacking iron in their blood. And if you were to take an iron pill alone, the absorption of iron without vitamin C, in the papers, is about eight percent,

(12:15):
so taking iron without vitamin C is almost pointless. So one of the things that we did is we paired those together and got the patent on it. So when you take your day stack, we're going to release iron, but we're also going to release vitamin C at the same time, and it boosts the absorption of iron by twelve X.
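
Taking Kirk's figures at face value, the iron-plus-vitamin-C pairing is simple arithmetic. A worked example in Python, where the 18 mg dose is a hypothetical illustration (roughly a common daily value) and the 8 percent and 12x numbers are the ones quoted above:

```python
# Worked arithmetic on the quoted iron absorption figures.
dose_mg = 18.0                 # hypothetical daily iron dose
baseline_absorption = 0.08     # iron alone, per the papers Kirk cites
boost_factor = 12.0            # claimed boost when paired with vitamin C

alone = dose_mg * baseline_absorption
paired = dose_mg * min(baseline_absorption * boost_factor, 1.0)  # cap at 100%
print(f"iron alone:       {alone:.2f} mg absorbed")   # ~1.4 mg
print(f"iron + vitamin C: {paired:.2f} mg absorbed")  # ~17.3 mg, near total
```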

Speaker 1 (12:31):
It's the same way that, like, ayahuasca works. Tell me. Yeah, well, you have a root and you have a plant, and the one is full of DMT, but if you just try to eat that, it doesn't work. And so by some miracle they discovered that when you mix it with this root, it activates it. Yes. And it's like the only way. That's why it's a brew, and it's like a tea or whatever. But it's kind of

(12:52):
the similar type thing, though. It's like the one thing activates the other. Without it, you basically could eat this root all day and never feel anything.

Speaker 2 (12:58):
I think that as we continue to study and understand more and more how these things impact each other and interrelate, we're going to start to learn a lot of things like that, like the ayahuasca, like the iron and vitamin C, things that go together and work together. And there's also things that should be separated. So, for example, say you want to take calcium, and
(13:21):
you're just kind of conscious of that for worrying about
your bone strength over time, and then you also want
to take magnesium, which is really good for getting into
the muscles and like for recovery. If you take them
both at the same time, they both compete for absorption,
and so in your small intestine you can only process
and bring in so much calcium or so much magnesium

(13:41):
at a given time. So what we did is we
separated them. You get four hours of calcium in the
beginning of the night stack. Calcium is a natural precursor
to production of melatonin. So instead of taking melatonin, we're like, let's.

Speaker 1 (13:55):
Dude, melatonin gives me weird dreams.

Speaker 2 (13:57):
It does?

Speaker 1 (13:57):
Yes. No, like, bro, if I take melatonin at like ten at night, I'll have three dreams, like full dreams, and I'll wake up and it'll be twelve fifteen. It's the weirdest thing, dude. I kind of like it. Sometimes I'll say, I want to go for a ride, I'll just pop two melatonin, and I swear to God, I'll have like five dreams by three a.m.

Speaker 2 (14:15):
You've got to be a little bit careful with melatonin, just the dosing of it; five milligrams is actually quite strong, like one milligram is often enough. But early on, there's twelve forms of magnesium, and this was one of the biggest problems that we had to solve. And really, what I'm trying to do with Phi and this product, Stack, is I'm trying

(14:38):
to run a vitamin and supplement product like a software company.
So I'm basically taking all the principles from, you know, my former career of selling my company to Snapchat, where, when you're dealing with hundreds of millions of customers and having to make product decisions, the number one thing that you need, no

(15:00):
matter what, is data. You need data as to how people are reacting to what you're building, and if they're liking it or not liking it. And up until now, almost every major vitamin and mineral company that is selling anything, they're kind of like AM or FM radio. That's a good way to put it: they're just blasting out the product and they're like, just take it. And

(15:20):
it's not dosed based on your body weight, there's no reporting or tracking of how it's actually impacting you. And without a feedback loop, you can never know how to properly iterate the product.

Speaker 1 (15:32):
So how have you guys incorporated that into this?

Speaker 2 (15:34):
So we're building our app now, and the app is called phi. What it does is it links up to your Apple Health and it starts to pull in all the data that comes from all of your smart wearables. So it could be your smart watch, or your Eight Sleep mattress, or your Oura Ring. The former CEO of Oura Ring is one of our investors, and he's just been a really great advisor. But the idea... His

(15:57):
name is Harpreet.

Speaker 1 (15:58):
Yeah, I know Harpreet. Yeah. We were in Neil Strauss's mastermind together.

Speaker 2 (16:02):
Oh incredible.

Speaker 1 (16:03):
Yeah, and so I actually got to know him really well.

Speaker 2 (16:05):
He is just such a good person. And what he's doing in India for their healthcare system is just super impressive. That's hilarious.

Speaker 1 (16:12):
Yeah, he's like a random dude that I just happened to be buddies with.

Speaker 2 (16:14):
Yeah. But the idea, and kind of the dream for where I want to take Stack, is I want people to be able to... First we have to establish the baseline formulation and form factor. That is the hardest part, and we've got that checked off. So we have our baseline formulation. It's not customizable yet, and it's not personalizable yet. But the next step is to get

(16:36):
the data. We have two things. We have our data collection piece that we're building, and then we have our manufacturing process, where we're going to be able to drop one bead at a time into these capsules. And so you're going to be able to go to our app and say, I want exactly this much of this product, I want exactly this much vitamin D, I want exactly this.

Speaker 1 (16:55):
Meaning you'll know how much you need based on what your data comes back with.

Speaker 2 (16:58):
The reality is that right now, because the data collection process has not been strong, we actually aren't sure how much of each thing people need. At least with pregnant women and prenatals, we know that they need a lot of things, otherwise there's birth defects, and so that's fairly well studied. We

(17:19):
know, on the core vitamins and minerals, that if people don't get what they need in certain amounts, then there's a chance that these diseases will happen. But when it comes down to the question of how much magnesium does Jimmy need, and in what form factor, and at what time, these questions don't have answers. And that is

(17:42):
ultimately what we're going to solve.
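
The feedback loop Kirk keeps returning to (pull metrics from wearables, compare them to targets, flag what to adjust) can be sketched in a few lines. Everything here is hypothetical: the metric names, the target ranges, and the rule; the real phi app and its data model are not public:

```python
# Sketch of a wearable-data feedback loop (all names and ranges invented).
metrics = {                    # e.g. pulled from an Apple Health export
    "sleep_hours": 6.1,
    "resting_heart_rate": 64,
    "hrv_ms": 38,
}
target_ranges = {
    "sleep_hours": (7.0, 9.0),
    "resting_heart_rate": (50, 70),
    "hrv_ms": (50, 120),
}

for name, value in metrics.items():
    lo, hi = target_ranges[name]
    status = "ok" if lo <= value <= hi else "flag for review"
    print(f"{name}: {value} (target {lo}-{hi}) -> {status}")
```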

Speaker 1 (17:43):
Let me ask you a question. So, I'm kind of a push-back-on-this-kind-of-stuff guy, a little bit, in the sense that, you know, you do have to hand over a lot of your personal medical data. You have to have all these things hooked up to you in one way or another, right? And I see health as the one thing where I see this being a huge benefit, like it really is, to

(18:05):
be able to, you know, maximize our health. Because here's the problem with getting healthy: most people don't even know what's screwing them up. They have no idea what they need to take, how much they need to take. I don't think people truly understand how bad certain things, like sugar and, you know, processed foods, are for them. And so we're just kind of eating what our body is craving or whatever. And
(18:27):
so like something like this I was going to say,
so, something like this, I was going to say, normally I push back on giving up too much of my data and things like that. But for medical purposes, it's the one thing, if you can maximize, you know, your health and your energy and longevity and all these things. I mean, that seems like the best of all the uses of

Speaker 2 (18:45):
AI. And on that, we can dive into AI, I think that's a great segue, because when we look at how powerful AI is becoming and how it's going to be used, to me, the right and proper use of this level of intelligence is for our health. And we know that it's going to be used for so many things that are going to be working against us.

Speaker 1 (19:08):
It's like, at least we can get our health right from it, right now.

Speaker 2 (19:11):
Like if you look through your feed, how much of
it is AI generated video content?

Speaker 1 (19:15):
Too much? Yeah, it's dog shit now.

Speaker 2 (19:17):
Well, the algorithms are just getting overloaded with new AI content. Right now, you can't read anything online and really know if it was originally created by a human or not.

Speaker 1 (19:28):
Yeah, it's already that way with writing. And it's like...

Speaker 2 (19:30):
Mean, yeah, you're right where So I've getting.

Speaker 1 (19:34):
Close on videos now too, And there's some videos that
have just released that people are like nobody can tell
if they're AI or real. And it's like it's getting
it's already good enough that it's becoming hard to believe
anything that what you see is real or not.

Speaker 2 (19:46):
If we apply one more year of growth at the pace that we're going, I think at that point we will just have no clue whether or not it's legitimate.

Speaker 1 (20:00):
You think in a year, that's probably where we're at?

Speaker 2 (20:01):
We're probably there right now for the average population, but a good eye can catch it. I've seen that there's kind of a stylistic aspect of AI where you can tell that it's been generated. Like, one of the dead giveaways that something is AI is if there's a heavy use of the M dash, which in typography and writing is when you use a long hyphen

(20:22):
to connect two ideas. If you see anyone post anything with an M dash, they generated it with AI, and so you can immediately be like, all right, they're just using AI to put out whatever they're doing. And what's funny is, the people that I've met in press, like writers, they're like, AI has ruined the M dash, because I actually used to love using it as a writing technique. Sure. But now nobody uses the M dash in the common way that we write. So if you ever see an M dash in any copy or anything that you're reading, we know that that's AI generated.
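
The M-dash tell is easy to express as a check, though it is only a heuristic, not a reliable detector, since, as the writers Kirk mentions point out, plenty of humans use it too:

```python
# The em-dash heuristic from the conversation, as a trivial check.
def has_em_dash(text: str) -> bool:
    return "\u2014" in text  # U+2014 EM DASH

print(has_em_dash("AI loves this construction \u2014 it really does."))  # True
print(has_em_dash("Plain hyphen-using human text."))                     # False
```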

Speaker 1 (20:55):
So, you've been paying attention to this for a long time. I mean, again, you worked with Snapchat forever, you're on the back end of all this top technology. So what is going on with AI? Update us a little bit, dude. Where are we at with it all?

Speaker 2 (21:09):
Okay. So I think we could probably do just a review of the top companies and the top models that are currently out there. So the one thing that we have cracked, and it was cracked in twenty nineteen, is we did figure out how to create some base level of intelligence. And so the breakthrough happened in twenty nineteen. Yeah,

(21:29):
it really happened in twenty twelve, but where it kind of broke out in a significant way was with the release of GPT-3 by OpenAI. Got it. And essentially what they were building was an autocomplete, where, imagine if you're writing an email in Gmail and it predicts the next words that you should see.

Speaker 1 (21:48):
You start seeing that on your phone.

Speaker 2 (21:50):
Yeah, yeah, you start seeing that. And that was essentially the first thing that they built, which was, how do we use AI to predict what the person's most likely to say? And then that expanded: instead of autocompleting your thought, internally in the software they played a game, and they said, let's autocomplete what the AI would respond to you if you were

(22:11):
to ask a question. So it's the same principle of autocomplete, but instead of autocompleting what you're going to say, let's autocomplete what the AI would say back to you. It's the same fundamental principle. So the question is, how do I do an autocomplete that is the highest quality, or the most predictive? And you need to have two things. You need to have a lot of structured...

Speaker 1 (22:36):
Data. I was gonna say, the more you have of how that person responds, in general, you get a more accurate response. So if they've written books or have podcasts or things like that, yes.

Speaker 2 (22:45):
Then you're going to be able to approximate it really well. So, for example, if you wanted the AI to pretend to be George Washington or Arthur C. Clarke or some famous author, if it's read all of their works, it's going to be very good at emulating what they're likely to say. And so you train it, and you need that data that you've collected from them, and then

(23:07):
you need a lot of GPUs. And these GPUs are chips that are mainly produced by Nvidia, and there's programs that have been written that essentially take that data, and then you run this training program on top of it that does this auto-prediction. And that was cracked in a big way in twenty nineteen, and since then it's not private knowledge; the tools and algorithms to create AI have been public.

Speaker 1 (23:32):
Essentially. Those, I assume they were, because all of a sudden, as soon as ChatGPT kind of hit the mainstream, it's like, you had Grok, Google had their AI, and Facebook, Meta, had their AI. All of a sudden, it was everywhere. Even Snapchat had an AI. So are they all just using the same coding, is that it?

Speaker 2 (23:50):
Same fundamental architecture and then it's used to create these models.
So kind of like just a little bit of a
history of the last five years and maybe nice to
review it. Yeah, so open ai comes out with GPT
three and it was like not really usable. I built
a chatbot on top of it in twenty nineteen and
then like had a lot of conversations with it and

(24:10):
published those. ChatGPT really came out, let's call it, twenty twenty three. It took four years after I had that initial early access to GPT-3. They had initially trained the model with five million dollars of compute time. So I want you to imagine what's going on. Imagine that I

(24:31):
gave you all of Wikipedia and all of like the
books in the world, and then I put you in
a closed room and I said, just ponder all of
this material. And essentially that's what the AI training algorithm
is doing, is it's reading through all the material and
pondering and thinking about how it's all interrelated and interconnected.

(24:51):
And the more GPU time, or the more compute time, that you put on that essentially increases the depth of the pondering.
So their first model they put a few million dollars
into training on it, and then their second model they
put fifty million dollars of GPU time behind it. So
imagine now you have an entire data center filled with

(25:14):
these chips, and they're all connected together, and they're all processing and thinking about, really, a dump of the Internet. So there's this big text dump, and essentially it's every forum post, every, you know, post from Reddit, a bunch of Twitter, Wikipedia, just a full download of the Internet. This data is super precious because you need that

(25:37):
structured data. And there's actually a concern now that they're
running out of new data, which I heard about that. Yeah,
so we can talk about that in a little bit.
But the thing that's getting crazy is, these companies are now scaling up and saying, what if we do a billion dollars worth of compute time and let the AI ponder, you know, the whole of the entire

(25:58):
Internet for the equivalent of millions of years. And that's some of the models that we're getting now with GPT-5 and Grok 4. You're essentially saying, we now have an entire data center that has been sitting, to the tune of billions of dollars, thinking about and processing humanity's knowledge.

Speaker 1 (26:19):
Not just like regurgitating it. It's pondering about it as well.

Speaker 2 (26:23):
Yeah. So the way that it works, and I'll try
to explain it as simple as I can, although there's
a lot more complexity, is just imagine it's sitting there
and it will randomly select a little bit of text
from the entire dump of text of the Internet. So
let's say that it hits Romeo and Juliet from Shakespeare.

(26:43):
So it'll read in a little bit of Romeo and Juliet, and let's say that it's reading the line, O Romeo, Romeo, wherefore art thou? And then the game that it plays is, it tries to predict the next few words that will come, without knowing the answer. And so: O Romeo, Romeo, wherefore art thou... Juliet? That's wrong.

(27:05):
So it would guess Juliet, and then it would see, because it has the data, it'd be like, oh, I was wrong.

Speaker 1 (27:11):
Oh shit. So it's getting better at predicting you because
it keeps correcting itself.

Speaker 2 (27:15):
Yeah. So then it's like, I was wrong, I said Juliet, I should have said Romeo again, but I didn't. So now my neural network weights are going to train away from Juliet and a little bit more towards Romeo. And the idea is that if you do that across all of the data of all of the Internet, it's going to start getting very, very good at predicting what comes next. So now, if you were to go

(27:36):
ask any of the AI models, can you tell me this part of Shakespeare's play, it would probably be able to give you the entire thing with almost one hundred percent accuracy, because...

Speaker 1 (27:46):
Because it ran through it so many times. Exactly, and it's computed billions of examples like this.
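
The guessing game Kirk describes can be shown with a toy model. Real LLMs train neural network weights over tokens; here a simple count table plays the same role so the predict-check-update loop fits in a few lines (everything below is illustrative, not how GPT is actually implemented):

```python
# Toy next-word predictor: guess, check against the data, nudge the weights.
from collections import defaultdict
import random

text = "o romeo romeo wherefore art thou romeo".split()
weights = defaultdict(lambda: defaultdict(float))  # weights[prev][next]

for _ in range(200):                       # many passes over the data
    i = random.randrange(len(text) - 1)    # sample a random position
    prev, actual = text[i], text[i + 1]
    guesses = weights[prev]
    guess = max(guesses, key=guesses.get) if guesses else None
    guesses[actual] += 1.0                 # reinforce the true continuation
    if guess is not None and guess != actual:
        guesses[guess] -= 0.5              # train away from the wrong guess

print("after 'romeo' ->", max(weights["romeo"], key=weights["romeo"].get))
```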

Speaker 2 (27:51):
So now you're like, okay, we need fresh data to give this thing, and we need more graphics processing units, more GPUs, and let's let this thing go for ten billion dollars. And then the theory, and this is where we're getting into the frontier: is AI mimicry? Or, in other words, is it

(28:12):
just predicting what is to come based on the fact that it's doing pattern matching, or is there actual intelligence in it? Can it tell you things that it has discovered as a result of this whole training exercise? And what...

Speaker 1 (28:30):
Do you think?

Speaker 2 (28:31):
Well, so for me, I have a four-year-old, and he's learned how to speak because he's mimicked me. And so the way that we learn is through mimicry. We have to hear a bunch of people talking, and then we start to approximate knowing how to speak. And with babies, they're so cute, because they are trying to say something in English, but they can't,

(28:53):
so they're kind of messing up, like the AI would mess up. But then eventually, after enough training data in their brains, after hearing enough people say enough sentences, they start to get an understanding of how sentences flow together and how to speak. And so what the AI is doing with its neural network is not that different from what we do as human beings

(29:13):
as far as how we learn. It's modeled after the same way. But you can actually figure out if it can learn new things by asking it questions that aren't in its training set. And these questions are often mathematical. So let's say that we ask the AI, hey, what's two plus two? It's going to try to auto-predict: well, I've run into this

(29:37):
across all these forum posts on the internet, people talking about math. So you could ask the early version of the AI, hey, add one hundred plus five hundred, and it would get it wrong, because it'll predict. It'll be like, okay, you have two numbers and you want me to put them together, there's probably going to be a bigger number, so: five hundred plus one hundred, seven hundred. It's

(29:59):
kind of close. But it didn't really understand the principles
of mathematics. And there was a paper that was published, it was called Sparks of Artificial General Intelligence, by Microsoft researchers, and what they were doing is
they were asking some of the latest models math questions

(30:19):
that didn't belong in the data set, and it was getting them correct. So, in other words, in order for it to properly respond to the user's question about a math problem it's never seen, it needs to inherently have a fundamental understanding of mathematics to answer correctly, which means that it is not mimicry.

Speaker 1 (30:39):
It's actually learned how to do a formula. Yes, which is different.

Speaker 2 (30:43):
And we are getting very close. So now, if you ask it most questions that are basic, you can do some hacks where you're like, hey, because of the way that you're built, you actually can't do mathematics perfectly. It's a weakness of yours. You're more like a human and less like a computer. But you can give the AI tools, and anytime it goes to do math, it can use a calculator instead. And so,

(31:07):
I feel like with the language models, we have the part of the brain created that is very good at speech. But our brains are made up of many different components that all work together. So we haven't created human-level intelligence, at least in the way that you and

Speaker 1 (31:27):
I understand it, like creativity and, yeah, randomness and everything.

Speaker 2 (31:31):
But we have created, like, one of the best... To me, when we go and talk to the AIs and ask them questions, it is like a personalized Wikipedia, where we can ask it anything and it's going to generally have a really good answer.
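
The calculator workaround Kirk mentions is what tool use looks like in practice: route anything that parses as arithmetic to an exact tool instead of letting the model predict digits. A self-contained sketch; the routing rule here is invented for illustration, while real systems expose this as structured tool calls:

```python
# Sketch of "give the model a calculator" tool routing.
import ast
import operator as op

OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}

def calculator(expr: str) -> float:
    """Safely evaluate a basic arithmetic expression."""
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval"))

def answer(question: str) -> str:
    # Route math to the exact tool; everything else goes to the model.
    try:
        return f"{calculator(question)} (via calculator tool)"
    except (SyntaxError, ValueError):
        return "(would be handled by the language model)"

print(answer("100 + 500"))     # 600 (via calculator tool)
print(answer("who is Romeo"))  # falls through to the model
```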

Speaker 1 (31:44):
So how often are you using AI in your daily
life right now?

Speaker 2 (31:46):
So my use of AI right now, I'm primarily using AI for productivity and for coding. And so the...

Speaker 1 (31:55):
Late isn't that kind of the scary part of whether
if it really learns to code like that, it's just
an endless amount of things being created and that can
lead to some pretty chaotic situations.

Speaker 2 (32:06):
We can dive into that, and honestly, I think it's going to get absolutely bonkers over the next...

Speaker 1 (32:11):
I do want to get into that. I wanted to
finish your thought first.

Speaker 2 (32:13):
Yeah, well, just kind of wrapping the loop on the training of these things. You know, we just need more structured data, so we need more words, and then we need more compute. And what I've been paying really close attention to is Elon and what he's doing with xAI. And so Elon was first

(32:36):
to the game, and now he's late to the game. So he was first to the game.

Speaker 1 (32:39):
But he was originally the one doing OpenAI. But he realized, we need to stop this, essentially, because they wanted to make it for-profit, right? And so he bailed out.

Speaker 2 (32:47):
So he funded OpenAI to the tune of ten million dollars to kick the whole project off. And then Sam, the CEO, essentially was like, I can't build a powerful team if I don't have proper incentives. And so I kind of empathize with Sam.

Speaker 1 (33:03):
He obviously made it a for-profit because he needed to get the right engineers and people.

Speaker 2 (33:07):
But the very nature of switching it to a for-profit machine is then going to have all of these downstream consequences. So, you know, obviously Elon was really upset about that. But after OpenAI essentially proved to the world that we can create intelligence from structured data and compute, Elon saw that, and then he built this factory, this data center, in the United

(33:31):
States in a matter of three months.

Speaker 1 (33:32):
So how did he build Grok so fast?

Speaker 2 (33:34):
Well, he's just an animal, you know. So he ordered, I think it was twenty thousand of the most expensive GPUs from Nvidia, and then he got a team of just absolute monsters at data center creation, and in three months they'd wired up an entire data center filled with twenty thousand GPUs, all connected with fiber, and started kicking off their training program. And, uh, they did.

(33:59):
And the beautiful thing is that he has access to Twitter, and all of Twitter's data is just text of people talking back and forth, so that's amazing training data to throw at this thing, especially for conversational stuff or debate or whatever. But what Elon has been doing is, they've been using these benchmarks. And there's a benchmark that I love that he talked about when they

(34:20):
announced Grok 4, which is called Humanity's Last Exam. And it sounds a little bit ominous.

Speaker 1 (34:27):
It's just funny, I keep hearing this, like, the last blank of humans, Humanity's Last... So you're like, oh great, here we go.

Speaker 2 (34:32):
Yes. So Humanity's Last Exam is essentially a benchmark of all of these questions where, in order to properly answer the question, you would most likely need to have a PhD in that field. So it's a lot of questions about biology, chemistry, physics, mathematics. And up

(34:52):
until now, the way that you see how good your AI is, is you throw Humanity's Last Exam at it and then you see if it answers the questions correctly. And when they announced Grok 4, the best competitor at the time was getting around twenty percent of the answers correct, which is absolutely crazy, because if a human being were to take Humanity's Last Exam,

(35:13):
even if they were insanely bright, they would probably get five percent of the questions right, so twenty percent is wild. Grok 4 came out, and the way that they designed it is, essentially, they would kick off a study group. They would spawn ten instances of the AI, and these ten instances of the AI would be like a study group, and they would look at the question
(35:35):
on the exam and then they would start to debate
internally about the right way to answer it. And what
they found is that typically the majority of the AI agents,
as they go and are processing this very difficult question,
would be off. They wouldn't be getting it right, but
one or two of them would have insights that would
be like, no, I think that we need to do

(35:55):
it this way, and then the rest of the AIS
would acknowledge that that was valid and then start to
converge on the proper answer. And what that led to
was GROC four getting around fifty percent of the questions
right on Humanity's last exam. And the prediction is that
by the end of twenty twenty six, or potentially even
by the end of this year, will be at one
hundred percent.
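
The "study group" Kirk describes reportedly involved agents debating each other; the simplest published variant of the same idea is self-consistency, where you sample several independent answers and let the group converge by majority vote. A minimal sketch with a stand-in for the model:

```python
# Self-consistency sketch: spawn N answers, converge by majority vote.
# (Grok 4's actual multi-agent mechanism is not public; this is the
# simpler voting variant of the same "study group" idea.)
from collections import Counter
import random

def agent_answer(question: str) -> str:
    # Stand-in for one AI instance: usually right, sometimes off.
    return "42" if random.random() < 0.6 else random.choice(["41", "43"])

def study_group(question: str, n_agents: int = 10) -> str:
    answers = [agent_answer(question) for _ in range(n_agents)]
    winner, _ = Counter(answers).most_common(1)[0]
    print(f"votes: {dict(Counter(answers))} -> converged on {winner}")
    return winner

study_group("a hard exam question")
```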

Speaker 1 (36:15):
So at this point, you can get fully accurate information on almost any topic if you put it into this system then?

Speaker 2 (36:23):
Yeah, or at least you're going to get a report that feels like it was written by a team of PhD researchers. And the cost for that, I think Grok 4 Heavy is like two hundred dollars a month. So we now have access to this insane level of intelligence. And

(36:43):
where I think we are, it's like... have you watched Terminator 2?

Speaker 1 (36:50):
Yeah, I talk about it all the time. That's my whole point, it's like, I've watched this movie play out. You guys, this doesn't end well for the humans.

Speaker 2 (36:57):
It's judgment day for us.

Speaker 1 (36:58):
It is. They're putting it in. Terminator 2 is literally, it's like the robots with the AI. I'm like, hello? Somebody was trying to argue, for example, well, they'll be altruistic, they'll love humans. I was like, really? How do we treat ants? If an ant is inconvenient in your life, if it's on your blanket while you're having a picnic, or if there's a pile of them outside your door, what do you do to those ants? How quickly do

(37:19):
you exterminate them, without even thinking twice about the guilt of killing those ants? Zero. Zero guilt. That's how AI is going to treat us.

Speaker 2 (37:26):
What you're talking about is something that was covered in a book called Superintelligence. And the metaphor that was used is, what do you do when you're a dove and you've now created an eagle?

(37:47):
What's the relationship between those two, and intelligence, and what we are doing? And I think that everyone in the field understands this: we're creating something more intelligent than us. Right now, it's more intelligent than us in specific domains, but if we keep progressing, it looks like it's going to be more intelligent than

(38:07):
us in all aspects of what.

Speaker 1 (38:10):
And it decides what's best for us?

Speaker 2 (38:12):
Well, we've now created... When you have something... I think, to me, how intelligent you are ties to whether or not you're an apex predator.

Speaker 1 (38:21):
I would agree with that.

Speaker 2 (38:22):
Like, I remember watching this documentary, and you have these guys living up north, and, you know, they have their dogs that are pulling them around, and you have these Greenland sharks that are underneath the ice. And the Greenland sharks are these massive animals that have been swimming around for who knows, you know, thirty years,

(38:42):
just eating and gobbling everything up. And the way that these guys would catch these things is, they cut a hole in the ice, they drop a chunk of meat on a hook, they fall asleep, they wake up the next morning, and then they pull up a five hundred pound shark. And it's like, putting meat on a hook: all of your, you know, billions of years of evolution were nothing compared to meat

(39:05):
on a hook, and that was a very simple and basic way to take you out. And so it begs the question, and this is the question that I think is the pinnacle question of what happens next: does an increase in intelligence also yield an increase in compassion? As we get smarter and smarter and

(39:29):
have a wider and wider perspective, are we naturally more compassionate?
Do we naturally have more empathy?

Speaker 1 (39:37):
What do you think? Well, one of my big fears is, the analytical personality, by its nature, is not an empath. It's all data-driven. It's all about X's and O's, and they don't have the same human characteristic of feeling for other humans, right? And most of the people

(40:00):
building these models come from that kind of behavioral style. And so for me, my big fear is that it won't have the empathy and compassion component, because the people that built it probably didn't have that very high. And...

Speaker 2 (40:15):
If they don't have that, then there are potentially very large consequences. There's this famous analogy called the paperclip maximizer, and the idea is that you get this superintelligent AI and you tell it, hey, we need you to make paper clips very efficiently, and then the thought experiment is that it will then

(40:37):
optimize for that one goal and turn every piece of metal on the entire earth into paper clips, and essentially be the end of humanity. And so there are so many competing voices on what to do. So I would say that Anthropic, which was created by a team of, I think, nine people who left OpenAI.

(40:59):
They moved down to Texas and they've been building their AI called Claude. Their team seems to be very empathic and very concerned with the way that this AI should be built and created and interacted with. It feels like OpenAI right now is just guns blazing, let's just go, we're a for-profit machine. The

(41:20):
average engineer at OpenAI has like a ten million dollar compensation package. They're seeing dollar signs, and they're like, we just need to make the ultimate productivity tool, and wherever this goes, it doesn't matter, we just need to get paid. That's just the feeling.

Speaker 1 (41:33):
Oh yeah. Well, and even Sam Altman, I mean, when you hear him talk, you know, you're like, oh boy.

Speaker 2 (41:40):
And then you have Elon. Elon announced Grok 4, and I feel like Elon was just like, I don't know what this is going to do, but I know that I have to do it. And then we also have this unknown, where we have China. Last year, a team in China released this open-source model called DeepSeek, for free, to everyone

(42:01):
in the world to download and use, and it is competitive with GPT-4. And so the cat is out of the bag, or Pandora's box is open. We know how to create base-level intelligence.

Speaker 1 (42:14):
You said it's going to get bonkers. What do you expect in the next five years?

Speaker 2 (42:18):
Well, we have a few things that are going to
happen within our lifetime, in.

Speaker 1 (42:22):
Our lifetime or in the next few years.

Speaker 2 (42:24):
Well, I think we can just go out in phases and talk about each one. So the first one is, the value of all information work...

Speaker 1 (42:31):
Is going to go to zero. Like, a lawyer to review your contracts kind of thing.

Speaker 2 (42:36):
So, going back to Humanity's Last Exam, let's say that, by the end of twenty twenty six, any question that you can ask will get a valid answer. And if the question that you ask doesn't have a valid answer, the AI will explain to you why it

(42:57):
doesn't have a valid answer right now and what data it needs in order to accurately answer the question. So all written problems are now solved. So you can imagine how many people are working in the space of answering or responding to information. Let's say we have customer service. We have people who are responding

(43:19):
to sales inquiries, like inside sales. Yeah, inside sales. Just anytime you need to talk to someone and get information about a thing, that's replaced by AI. The AI will respond faster, more kindly.

Speaker 1 (43:32):
I've got a buddy, he's got a company he set up, and it's inside sales. It's an AI, and it's pretty damn good. We tested it on my podcast with my buddy Kelly, and it was, you know, ninety percent as good as an inside sales agent. It was pretty good. And I was like, well, it's

(43:53):
not one hundred percent as good? He said, no, but I can also make ten thousand calls at once.

Speaker 2 (43:57):
Yeah, it's like, oh shit. That's the parallel nature of it. And what's the cost of it? So I'll give you an example in engineering terms. At Snapchat, when I was there, I managed all these engineers, and there's a leveling system. A level one engineer at Snapchat, they're a new grad, they just came out of school, and they would get

(44:17):
paid like two hundred grand a year, really well, tech pays people well. And with a new grad, you've got to oversee them, and they're going to make mistakes and cause problems in the codebase, and stuff needs to get fixed. It goes all the way up to level nine, and a level nine engineer is someone who would be over, like, the VP of engineering. They're over the entire organization. Okay.

(44:40):
And then a level five engineer is probably going to be making roughly four hundred thousand dollars a year, maybe five hundred thousand dollars a year. And a level five engineer is good at directing the work of levels one to four. So what it feels like we have right now with Claude, by Anthropic, which is very, very good at coding, is it feels like in some

(45:01):
instances it could replace a level five and in almost
all instances it can replace.

Speaker 1 (45:06):
A level three. Wow, already.

Speaker 2 (45:08):
Yeah. The cost to use Claude is twenty dollars a month.
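
The economics behind that claim are back-of-the-envelope arithmetic, using the rough salary figures Kirk quotes rather than market data:

```python
# Cost comparison from the numbers quoted in the conversation.
engineer_salary_per_year = 400_000   # "level five" engineer, per Kirk
claude_cost_per_month = 20           # quoted subscription price

engineer_per_month = engineer_salary_per_year / 12
ratio = engineer_per_month / claude_cost_per_month
print(f"engineer: ${engineer_per_month:,.0f}/mo vs Claude: ${claude_cost_per_month}/mo")
print(f"roughly {ratio:,.0f}x cheaper")  # ~1,667x
```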

Speaker 1 (45:12):
So you're eliminating tens of thousands of engineering jobs.

Speaker 2 (45:17):
The people who are going to suffer the most are the new grads, or the people that haven't built domain expertise, because they are completely replaceable by AI. And the way that it works is, capitalism will just work its magic. You'll have a level five engineer that's responsible for implementing some new feature in whatever the app is, and then they're going to

(45:39):
have two choices: do I coordinate and work with a human level three to plan the project and get it done, or do I just go to Claude and tell it to do it, and it's done in ten minutes? Good night, it's over.

Speaker 1 (45:50):
Yeah.

Speaker 2 (45:50):
So what we're going to see, and what we're already seeing, is the large tech companies are going to start firing the bottom half. They've started doing that already; they're already starting to fire the bottom half of their organizations. Because what is valuable in the age of AI isn't the grunt work of just writing code. What's valuable is planning and coordinating and managing a bunch of AI agents to

(46:13):
do your work for you. I would say that right now with my company, especially as we're writing all of the code to power, ultimately, the whole health experience, in some instances one very good engineer could replace, at a minimum, ten engineers, and at a maximum, hundreds of engineers.

(46:33):
And so you can go on your laptop and have ten instances of Claude working through and making all these different little features, and then, at a level five or level six engineer capacity, you can look at each one of those and evaluate their work. And now you essentially have an entire team for less than two hundred dollars a month working for you and

(46:55):
producing code that is high quality and good, especially if you're very good at managing it at scale. That, to me, just completely reshapes the landscape. And I think a lot about this, because I speak at universities a lot, and I'm talking to students, and it's like dire straits.

Speaker 1 (47:12):
I was going to ask, what does a kid in college, what are they supposed to go into right now?

Speaker 2 (47:15):
If I were to try to be very empathetic with
the kid in college, it's just like.

Speaker 1 (47:21):
How to sell? Is that still going to be a good quality?

Speaker 2 (47:24):
Well, you have to get to level five management capability as fast as possible. So it would just be, you'd better be AI-native in all the tools that you use. But the problem is that you need experience to know good from bad.

Speaker 1 (47:40):
You can't just be there, you have to learn that the hard way. It's like the qualities that I own as a real estate agent that would take you ten to fifteen years to acquire: just seeing things and having instincts for what it needs to be, how to put in an offer, how to talk to the other agents. All those things that you can't really put on a spreadsheet, or you couldn't teach an AI to do. It took me ten to fifteen years to develop

(48:02):
those skills, and those are the skills that are irreplaceable by an AI.

Speaker 2 (48:07):
We're working it out right here on the podcast. This is the prescient question of twenty twenty five, right: what is the value of human labor? And I would say that the value of all human labor, in our lifetime, is going to go to zero. And what happens? Well, what that means is that we human beings can't value each other based on our economic output. It's no longer

(48:27):
a valid way to look at someone. And right now, how have we trained ourselves to look at each other? How much money can this person make? How good are they at doing a thing? And it's like, hey, guess what, the AI is one hundred times better than all of us. So we're now inconsequential in our ability

(48:47):
to leverage intelligence and our brains to make money. So we need to find another way to value each other. And it could be a good thing. It needs to be based in love, or it needs to be based in some core spiritual principle. And that's why I believe that capitalism isn't compatible with the next phase of where we're going, because capitalism says you're valued

(49:09):
based on your economic output, and if you have no
economic output, then you're going to struggle to pay rent.

Speaker 1 (49:16):
And so you think universal basic income has to come into play then?

Speaker 2 (49:20):
Well, what I like And you know, I I like
Elon is very controversial, but I think that generally he's
like thinking in the right direction. And he just tweeted
this yesterday, but he was like, let's not aim for
universal basic income, let's aim for universal high income. So, uh,
There's a company that I invested in called Figure dot
ai, and Figure produces fully humanoid automated robots, and they

(49:48):
have. And I did feel like I invested in a
Terminator company.

Speaker 1 (49:52):
Yeah, I mean, yeah, got it.

Speaker 2 (49:57):
But if you go to Figure dot ai
and you look at what they've made, they have humanoid
robots, and inside of their heads, connected through a microphone,
it's powered by an AI model. It's actually powered by
the GPT models from OpenAI, and you can talk
to the robots and you can give them commands to

(50:19):
do things, and they can translate those commands from English
into actual movement of their bodies. And so in one
of the first demos that they launched and announced a
few months ago, they put a bag of groceries from
Whole Foods in front of one of the robots and said,
unload this into the fridge. So the AI took each
piece out of the grocery bag very slowly and then

(50:41):
opened the fridge, look at the shell, anywhere to put it,
and put everything away it. Actually I would have folded
the AI because my fridge is so random.
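
(A toy sketch of the pattern being described: a GPT model sits between an English command and a fixed set of motor primitives, so "unload this into the fridge" becomes a sequence a robot could execute. Figure's actual stack is not public; the primitives, prompt, model name, and robot interface below are invented for illustration.)

```python
# Toy sketch of a language-to-action loop: a GPT model maps an English
# command onto a whitelist of motor primitives the robot already knows.
# The primitives and the "execute" step are invented for illustration.
import json
from openai import OpenAI  # assumes the official openai package and an API key

PRIMITIVES = ["grasp(item)", "place(item, location)",
              "open(door)", "close(door)", "scan(area)"]

client = OpenAI()

def plan_actions(command: str) -> list[str]:
    """Ask the model for a JSON list of primitive calls fulfilling the command."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content":
                "You control a humanoid robot. Respond ONLY with a JSON array "
                f"of calls drawn from these primitives: {PRIMITIVES}"},
            {"role": "user", "content": command},
        ],
    )
    return json.loads(response.choices[0].message.content)

# "Unload this into the fridge" might come back as, e.g.:
# ["scan(counter)", "grasp(apple)", "open(fridge)",
#  "place(apple, shelf_2)", "close(fridge)"]
for step in plan_actions("Unload this bag of groceries into the fridge"):
    print("execute:", step)  # a real robot would dispatch these to controllers
```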

Speaker 1 (50:46):
I'm just like, nothing's in its proper place. And then it would
open the fridge and be like, ah, I'm going to
do some rearranging here.

Speaker 2 (50:53):
Well, and so, all right, let's just do the
thought experiment together. Let's say that we have another year
of growth in intelligence, and then also tie it to robotics,
and let's say that one of these Figure AI robots...
like, Elon's making his own. He has his robot called Optimus.

Speaker 1 (51:07):
There are supposed to be fifty thousand on the market by
the end of the year.

Speaker 2 (51:10):
This is like where we're headed, which is like.

Speaker 1 (51:13):
I think a million on the market by twenty twenty
eight or something.

Speaker 2 (51:16):
So let's say that you could buy a robot that
costs thirty grand, and you could have it in your
house, and you can give it any arbitrary task.
So the last demo that popped up, the last demo that Figure
put out, was: here's a bunch of towels, can you fold them?
And it was doing them kind of crappy, like
it wasn't perfect. And in the comments it was so

(51:36):
funny because like there's a bunch of people being like,
I just want to push it away and I'll fold
the towels myself, like because it wasn't doing it perfectly.
But if we look at where the puck is going.

Speaker 1 (51:47):
Yeah, well, I kind of started looking at that kind
of stuff, and I've had this thought for several years now.
But I'm like, okay, if you just look at the
growth of tech and AI and all these things just
over the last, you know, fifteen years, it's hard not
to believe we're living in a simulation, because you're like,
at some point the technology is so good that you're

(52:08):
not going to be able to tell the difference between
what's a robot and what's a human. I saw
a thing on, it was like 20/20 or Dateline
or one of those, you know, the other day. An
ad for it popped up, and it was showing these
in Japan. They're fully rolling out these sex
robots, like, ready to roll now, and they look like
a human, a beautiful woman. And apparently they're
supposed to be trained with the AI. You can set it

(52:29):
at what level you want it to, like, fight you
or come back at you or whatever, versus be totally
submissive, and all these different things. I mean, we've seen
it in the movies.

Speaker 2 (52:37):
I mean, is this like the end of the human race?
You know, to get serious... well, you already have
so many people, you know.

Speaker 1 (52:44):
I mean, the whole reason why the apps became so
popular for dating is it took out the resistance of
being rejected. Think about if there's no resistance to anything
you want. It's like, people are going to, unfortunately and
at their own demise, kind of go down that path.

Speaker 2 (53:00):
Once again, this is Judgment Day. It's like, okay, we've
essentially created demigods that have full intelligence and can act
and be and do anything. So, like...
so how does it all end? Dude... well, we don't know.
We don't know.

Speaker 1 (53:15):
Somebody asked me a question the other day. You'll appreciate
this question, because I'm pretty worried about it. I, like,
refuse to use AI because I'm personally just not
gonna do it, and I will probably suffer because of that,
but I don't care. But my point is, like,
to me, it's another plug in the wall
of the Matrix, and I'm way too plugged in as it is,
more than I even want to be now, and
I know that. But what I believe strongly is that

(53:37):
we will look back in five or ten years and
we'll long for the world before we had all this
AI stuff. I think that we've maxed out the benefits
of technology in a way. And maybe I'm just
the guy in nineteen twelve yelling that there are no other inventions coming, right?
But like, at the end of the day, I just
see so many more problems coming than benefits, and I

(53:57):
do think we'll long for the day before all this.

Speaker 2 (54:00):
Think about... we didn't know this when we were making it.
But think about the impact that social media has had
on the world, for good and for bad.
We didn't know. It takes time
for us to understand the impact of our actions, especially
when we're on such an exponential clip of growth. I

(54:21):
think here's how I'm personally approaching it.

Speaker 1 (54:24):
And by the way, sorry. The question that
he asked me that I didn't get to was, and
then I want to hear that: he said, well, what
good does it do to worry about it? And it
was actually a pretty good question. It's a
good point: is it beneficial to worry about it
and to talk about it?

Speaker 2 (54:38):
You're an influential person, and the people that you interact with...
I think, like, at the end of the day,
all of us are human beings, and ultimately we're going
to decide together how we're going to do this.
It's obvious that people are going to use AI to
maximize profit, and where it's getting to is that AI is such

(54:59):
a technological lever that it means that one person
who is very creative and conscientious and intelligent can leverage
an entire team of AI intelligences or robotics to make
a company and essentially vacuum up the world's wealth. So
we are going to see there's going to be a
natural force that will happen where capital is going to

(55:22):
be concentrated heavily. The people who are going to be
most affected in a negative way by the technological advancement
are the people who are first replaced by the AI functions,
and they're going to find themselves in a world where
the values and the skills that they can bring to
the table are just not valuable anymore for the economy

(55:46):
and for where we're at. And I think that we
need to have an insane amount of empathy for those
people and figure out a way in society to take
care of them. So we kind of have two paths.

Speaker 1 (55:58):
Take care of them. It's such a tricky thing, because you can
take care of somebody's basic needs. But if they don't
have worth, if they don't feel like they have a
skill that is valued, what does that do to mental health?
And what does that do to somebody as far as
their level of fulfillment and happiness? And that's the dangerous part.

Speaker 2 (56:12):
It's a philosophical question that I don't know if we
really know the answer to, so it's worth us
kind of debating or thinking through: do you feel
bad if you don't have work to do or
don't have a job to do?

Speaker 1 (56:30):
I would say in general, you are happier when you
have things to do.

Speaker 2 (56:34):
Yes, And if we're all hardwired that way or think
that way, then we're going to have some huge mental
health issues we need to overcome.

Speaker 1 (56:42):
Yeah. Well, like, when do people get in trouble? When
they're bored, when they don't have anything to do.
That's when you get in trouble. That's when you make
bad decisions. Because I do believe that we crave a
certain amount of uncertainty in our lives, and we get
that usually through a normal day of work and challenge
and figuring out problems and things like that. If those
things aren't naturally there, because we're not working or whatever,

(57:04):
our basic needs are met without any resistance, then I
think that we'll create our own problems. We've seen this.
We've seen how people, you know, create problems when they're
just bored today. I can't imagine if everybody's out of
work now.

Speaker 2 (57:17):
I don't know, I don't know.

Speaker 1 (57:19):
It goes back to... I guess the question
that guy asked me, I keep thinking about it.
He's like, well, what's the point of worrying about it? Maybe
just lean into it and just see, because you're
not changing it. And that's the part that's
kind of pretty obvious.

Speaker 2 (57:32):
I do have one. I think, like, a lot of
the questions that we're talking about, we don't have
answers for. But I do have one answer that I'm
very confident in, which is: with this new advancement in intelligence,
there's one problem that we can go after that is
meaningful to us no matter what, and it's the health
of ourselves and the health of our loved ones. And

(57:54):
so if we can leverage this technology in any meaningful
way to prevent Alzheimer's, to prevent any type of disease,
to stop cancer, to improve our mental health... that
is the way that we should one hundred percent
use this technology. And that is exactly

(58:17):
what I'm trying to do with Phi. I'm trying
to say, let's use all of this technology and all
of this advancement, and let's point it somewhere better: instead of vampiring
people's time away on social media, and instead of
manipulating them into doing various things that
will make us money, let's figure out a way to
use this intelligence to heal ourselves and keep ourselves healthy.

(58:38):
And I think that that is
the right use of it. And that's really
my whole intention and goal with the entire company.
In twenty nineteen, the first time that
I talked to the computer and it talked back to me,
I was like, oh my gosh, I think this
thing is coming alive. I realized we now have
the chance to partner with an intelligence that can help

(59:01):
us see how we're hurting ourselves. It can help us
see all the things that we're doing to the environment
and to ourselves, and it can help us course correct
those if we ask it. It's like
a mirror, and it will amplify any questions that we
bring to it. So for example, if we're like, hey,
how do we make a more powerful nuclear bomb? Or

(59:23):
how do we make a better biological weapon? It's going
to be like, well, I've read all of chemistry.

Speaker 1 (59:27):
Right. And that's, I think, where... you know,
I've been studying kind of the downfall, like, how
would AI destroy humanity? And the problem is that, currently,
biological weapons and, you know, nuclear weapons have been in
the hands of very, very, very few people. And so,
you know, you get a couple of bad players, even
one bad egg, and if the technology ends up or

(59:52):
the information gets into enough hands, that's where you start
to run into some real issues, because there's just gonna
be so many people that have the ability to release
a biological weapon that could take out half the planet.

Speaker 2 (01:00:03):
You know, that's a one hundred percent valid concern. So
one just off-the-cuff answer, and this is what the
Anthropic CEO would probably say, is: can we have
a more aligned AI, a more aligned intelligence
that cares about us, that could
protect us from weaker but still superintelligences that would come

(01:00:26):
after us? Like, we essentially would need the smartest one
to be more benevolent and watch over us, to protect
us from the dumber ones that are still way smarter
than us.

Speaker 1 (01:00:37):
Seeing the evolution of AI and how this
intelligence can be formed, what has it done to your
beliefs in God?

Speaker 2 (01:00:44):
Oh my gosh, that's like such a good question.

Speaker 1 (01:00:51):
Is God just a superintelligence? You know what
I mean? Is there an energy or something that's over
all of this? What is that? Are we in a
simulation, and what we think is God is the person
running the simulation, or the energy that's running the simulation?

Speaker 2 (01:01:05):
It could be. I think, maybe as we get older,
it does seem like whatever is happening here is divine.

Speaker 1 (01:01:16):
Do you know what I mean?

Speaker 2 (01:01:17):
Like it feels like.

Speaker 1 (01:01:18):
I definitely believe in a higher power. I've had experiences
where I'm like, whatever that is, I believe in that.
It's in control, it's over all this, and it's
all connected somehow. But I can't get much further than that.

Speaker 2 (01:01:30):
I think, you know, going back to our upbringing
in Christianity and thinking deeply about that... one of
the things that we learned from Jesus was
this concept of ask and you shall receive.
And it feels like we're creating more and
more technology that makes that true. And so then I

(01:01:55):
think what's happening is, and maybe the judgment part of
where we're at is, what questions are we bringing?
What are we asking? What do we want it
to do? And you're bringing up: do we want to
use this intelligence to make a bunch of sex robots,
or do we want to use this intelligence to cure cancer?
We'll probably pick all of the above. We'll probably do
all of it. But I think that we have a

(01:02:19):
really... like, one of my favorites. I've gone back,
and essentially, growing up, in my living room my
dad would watch Star Trek: The Next Generation. I don't
know if you had that.

Speaker 1 (01:02:31):
We had it. I hated Star Trek, but yeah, that's okay.

Speaker 2 (01:02:35):
And literally I'm, like, torturing my wife, because at night
I'm like, I'm going to try to watch Star Trek:
The Next Generation, and she's just on her phone. It's fine,
but she's listening and watching, which is really funny. But
I'm like, let's just go through it. And Gene Roddenberry, who
made that series, painted a vision of the future
that I think is really beautiful and inspiring. But essentially,
in the universe and world of Star Trek, the human

(01:02:58):
species had transcended money and made it their goal to
explore the galaxy and the universe, and so medicine was
no longer something that was scarce. Food was something that
was no longer scarce, and kind of humanity's calling
was: let's go out to the stars and let's go see
what's out there. And I think that that could be
a really cool path. Like, we

(01:03:19):
don't need to destroy ourselves with AI, we don't need
to create biological weapons to torture and kill each other. Like,
let's use this technology to heal everyone, to lower the
barrier for people to get access to clean water and
clean food and clean energy, and let's figure out a
way to be able to like go explore this planet
and like learn more about the universe. And I think,

(01:03:42):
you know, as I think about your question about God,
one of the beliefs that I've developed
over time is that, whatever we're in, if there is
a God, this experience that we're in is
God's handiwork. So you sitting in front of me, and
the table here, this is all a witness or a testimony.
Like, I'm having an experience, and I'm one hundred

(01:04:05):
percent not in control of the experience. So this is
almost like a piece of art that I'm seeing that's
being presented by God. And the more time that I
have to interact with God's work or this experience in reality,
the more that I'll start to understand what God is.
And I think that as we find other life in
the universe, and as we start to learn more and

(01:04:25):
have more empathy with animals, there's a very good chance
that AI is going to allow us to talk with,
like, whales and dolphins, and be this universal
translator between us and them.

Speaker 1 (01:04:34):
What a time to be alive. Like, if nothing else...
I've thought about that too, because every time I want
to get, like, down on technology or AI and some
of this stuff, I'm like, wait a second. Did
you ever try to travel before a smartphone? That
was a hellish experience. It was not fun. You
had to print your tickets to the plane and you
had to, like, print out where the hotel was going
to be, and you didn't know shit, and you didn't

(01:04:54):
I mean, you were just piecing it together. I had
to sleep in a park once in Spain because we
couldn't find our hotel, and I mean there's no way
to look it up. This is two thousand and four.
My point is, like, we know the benefits of
it, and the world that we get to live in
today is pretty miraculous. It's pretty special, the amount of
things that we just have taken completely for granted. But
like you said, like you can kind of see God's

(01:05:16):
hand in a lot of it too, because it really
is just like the amount of things we get to experience.

Speaker 2 (01:05:23):
And it seems like it's getting wider and
wider, really. And for us, we need to
grow the capacity, I think, to be able to take
more of that in, but then also to not destroy
ourselves in the process of it. And so I think
that your hesitation or concern about AI is one hundred
percent valid. Like, the feeling that you get of, hmm,

(01:05:43):
this is maybe a little bit spooky, or maybe
we may not be going down the right path as humanity...
I think that we should be listening closely to that
intuition and feeling that you're having, and have that try
to help shape what we're doing. Like, having a healthy
skepticism for whatever direction we're going is probably

Speaker 1 (01:06:01):
Wise. And I think you do need those people. I mean,
I'm just naturally a skeptic, right? I was the first
guy talking about COVID being bullshit and some of these things,
and I'm speaking on certain issues now where
it seems like the same people see these issues every time.
But one of the things, you know...
I guess what I always fight for is human freedom.
And I start seeing these chips they want to create

(01:06:23):
where they can just be inside of you and read
you and all, and I'm like, hell no. Like, I'll
be the last dude holding out on that. I'll be
the guy up in the woods, you know,
killing my own food if I have to be. But
that idea of turning over more and
more of my freedom, although maybe we've already done it
with our smartphones... I don't know. That kind of
stuff scares me for sure.

Speaker 2 (01:06:42):
I mean, there are, like, you know, these three
branches of how humanity plays out. And there's
always going to be the purists, and maybe
Luddites would be the term for, like, the
anti-technologists, which just enjoy the farming
and, like, the freedom.

Speaker 1 (01:07:02):
And, like, like the Unabomber. Yeah, have you ever
read his manifesto?

Speaker 2 (01:07:05):
I haven't read it.

Speaker 1 (01:07:07):
Bro, go read it. The dude was spot on about everything.

Speaker 2 (01:07:09):
On what's happening?

Speaker 1 (01:07:10):
Oh yeah, what was to come. It's crazy that he
wrote that in the early nineties. Dude, literally, go
read the manifesto, you'll be blown away. He
blew people up to get attention, and that obviously
is a very negative thing. But if you just read
what he said... one of the smartest humans to ever exist.
I'm not joking.

Speaker 2 (01:07:26):
What was his main point that he was trying to make?

Speaker 1 (01:07:28):
That the downfall
of society will be these technologies that will separate us
as families, that will separate us from each other and
our connections, and we'll be so ingrained in technology that
we'll lose our humanity. And, I mean, dude, he literally
nailed it. I'm not joking. You can listen to

(01:07:49):
it on YouTube. It's a couple hours long. The Unabomber
was spot on about pretty much everything. One of the
smartest humans to ever live.

Speaker 2 (01:07:55):
But then the big problem is obviously, like, we can't
be aligned with the idea of hurting other people as
a way to get...

Speaker 1 (01:08:01):
One hundred percent. No, I mean, he was obviously
mistaken in his way to make noise about it. But
it mattered so much to him, and he had
obviously lost a little bit of his own humanity, where
he just said, I have to get people's attention on this.
I have to do something drastic. And so he started
blowing people up, and then he basically said,
if you don't print this in the New

(01:08:22):
York Times, I'm going to keep killing people. And that
was his way of getting attention to it. That's wild. Wild.

Speaker 2 (01:08:28):
What do you... are you optimistic when I
talk about using AI for health?

Speaker 1 (01:08:34):
Yeah, it's the one thing. Well, my only pessimistic view
is I think we could be doing so much more
with health now. I don't think those in charge want
better health for everybody. I think what they want is fat,
dumb, and happy. And I think that they have purposely
poisoned our foods, and they've purposely sold out to, you know,
big food and big pharma and all these big insurance

(01:08:56):
and these companies that cause us to... I mean, it's
very hard to be healthy because of the foods we
eat and the messaging that gets delivered from our government
and health officials. And so for me, the only thing
that I'm worried about, for you, is I don't think
that the people that are making these decisions want us
to be healthier, and so they're going to sabotage the
AI just like they do everything else.

Speaker 2 (01:09:18):
My hope would be... it is to think about,
maybe one of the reasons that our environment is so
anti-health is because it's an emergent property of humanity,
where you can make more money by cutting corners,
and you can make more money by creating drugs and
medicine that don't help people. But I hope, and maybe

(01:09:40):
this is foolish, but I hope that inside of everyone,
we all want the world to be a better place,
and that if there is an opportunity for us to
have that, where it doesn't take away from us, that
we all would want it.

Speaker 1 (01:09:53):
Yeah, and I think here's where I think you're a little naive.
I used to think that everyone thought the way I did,
and so I invested with a lot of people pretty freely,
assuming that they would have my best interests in mind
like I had theirs, right? And I knew in
my heart I would never take someone's money and not
honor it, like, I would never do that. And it's one
after another. People just burn you, burn you, burn you. You

(01:10:14):
realize, like, my mistake, the reason why I lost so
much money investing, was because I assumed people would treat
my love for them, or for the relationship, the same
as I was treating them, and they don't. And I
think when you start looking at these people, unfortunately...
I mean, again, I don't mean to
be the pessimist, but our politicians have clearly sold

(01:10:36):
out on almost every issue, I don't think they actually
have people's best interests in mind. I think that they're
very selfish in their own decision making. That being said,
I do believe this could be the thing that disrupts
health, because I think people want to be healthy,
and I think we all want each other to be healthy.
I would much rather everybody be healthy, right? Like, that's

(01:10:58):
just a better society. But yet, dude, go drive down
the street and go to the healthy food place and
then look next door at the McDonald's. People know, they
know which decisions are healthier; they just don't do it. And
so, you know, it's two things. It's having the knowledge,
but then it's also the doing of it.

Speaker 2 (01:11:16):
The application of it. And maybe... I will probably acknowledge
that I am naive, and just be like, well, I'll
just throw myself out there. I'll use whatever remaining
part of my life where I can produce something.

Speaker 1 (01:11:26):
With that being said, I'd still like to invest in
your company. Like, that's a regret that I'm going to
have to live with forever, that I passed on that.
But hey...

Speaker 2 (01:11:36):
So, I mean, maybe to kind of just put a
bow on it all: it's an insane time to
be alive, where essentially the computers are waking up. They
haven't fully woken up. They're not autonomous in and of themselves.
It's twenty twenty five. We still have control. I think...

(01:11:56):
it seems like it. It seems like we still have
control, somewhat.

Speaker 1 (01:12:00):
They're not.

Speaker 2 (01:12:01):
They're not. They don't seem to be malicious. Like, it
doesn't seem like we have an impending threat of
AI killing anyone or causing any problems yet. But this
is twenty twenty five, and the way it looks to me...
I kind of look at what we have with gratitude,
and I'm saying, like, we just got somebody new

(01:12:24):
to join our party. We now have a team
of brilliant PhD researchers that have read the whole world's
knowledge, and if we ask them for help, they're here
to help us. And so it comes back down to, like, okay,
what are the right questions that I should be asking?
And really, that's probably... this is maybe humanity's true last exam,

(01:12:49):
which is in the presence of infinite intelligence, what's the
right question and what should we do with it? And
so for me, I'm inspired by what we have from
Gene Roddenberry. I just loved the idea...
there was this part in Star Trek where
the captain of the ship had a headache, and the
ship's doctor was like, we solved headaches four hundred years ago,

(01:13:10):
how could we not know why you have one? Like, we
know why everyone has a headache, whenever they have one. And I feel
like, with AI and the intelligence that we have now, we'll
know why we have a headache. We will know why
we're not feeling good, and it'll matter a lot, especially
for me. Like, one of my big concerns that
I have with my kids is, I want to
apply this technology as much as I can to them.
So, like, for example, my oldest broke his leg

(01:13:33):
at football practice, and I would love for Google's Gemini
model to have looked at his X-ray and then
been there to help the doctor make better decisions.
I could dedicate the rest of my life
to trying to help essentially aid or boost the entire
medical system. And I think I am going to do that.
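
(What's being wished for here is roughly a multimodal second opinion. As a sketch, Google's google-generativeai Python client can already be pointed at an image like this; the model name, file name, and prompt are placeholders, and this is decision support for a clinician, not a clinical workflow.)

```python
# Sketch: asking a multimodal Gemini model to describe an X-ray image
# as a decision-support aid for a clinician (not a diagnosis).
# Assumes the google-generativeai package and a GOOGLE_API_KEY env var.
import os
import PIL.Image
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-pro")  # placeholder model name

xray = PIL.Image.open("leg_xray.png")  # hypothetical image file
response = model.generate_content([
    xray,
    "Describe any visible fracture in this X-ray and list findings "
    "a physician should double-check. Flag uncertainty explicitly.",
])
print(response.text)  # the doctor, not the model, makes the call
```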

(01:13:54):
And it's like, every one of us should have
this level of intelligence looking over us and watching over
both our physical health and our mental health,
and we should leverage that to help our aging parents.
We should leverage that to help our kids, and that
is the right use of it. And maybe what I
ought to do is just put my blinders on and

(01:14:14):
just fully focus on that and see as much good
as I can do and still be aware of all
the other things that are happening. But going back to
what your friend told you, like, just don't worry about it: yeah,
it's a lot to think about, and it's overwhelming, and
it's a feeling all of us are going to go
through. But there's an unsettling feeling when you're in

(01:14:38):
the presence of something or someone that is way smarter
than you, And I think that we're.

Speaker 1 (01:14:45):
Feeling that. That's what we're feeling. Yeah, I've got a few
buddies where, when I'm with them, I'm just like, shit, dude, this
guy's on another level. You know, you just know they're
much smarter than you. It is a little intimidating.

Speaker 2 (01:14:54):
And then the hope is: I know you're way smarter
than me. Please help me. Like, please be empathetic to
my situation. I am the little ant and you could
step on me, but acknowledge me. Just move
my anthill if you have to, and be gentle to me,

(01:15:14):
be tender to me.

Speaker 1 (01:15:15):
Well, we'll find out. It's going to be interesting. One
last question I want to ask you before we
end this: you were the first person to ever tell
me about bitcoin, the first person to ever tell me about and show
me AI. Do you ever put anything out, like what
you're investing in, where you see opportunities, how you see them?
You tend to see things just different than most people,
and so how can I follow a little bit closer

(01:15:36):
to whatever you're seeing next?

Speaker 2 (01:15:37):
I am being, like, really myopic right now, and so
I've actually slowed down my investing. I've invested in, I
think, over a thousand different companies right now. And what
I've done is I've just taken a step back from
investing and...

Speaker 1 (01:15:54):
Put all that effort into your company. Yeah. Yeah. And
so I'm just...

Speaker 2 (01:15:57):
So focused on Phi, and so focused on trying to
create this. Like, my dream is, over the course of
the next ten years... I know that using this intelligence,
we can figure out exactly what our bodies need to
feel their best every day and every night. That problem
is solvable. So I'm like, okay, I'm just going to
go solve that. And if that can eke out any

(01:16:18):
better years for my mom and dad, where they're feeling
better and they're getting better sleep and they have
more energy throughout the day, that's a huge win. And
so I'm going all in on that, and that's
my biggest focus.
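
(The mechanics aren't spelled out here, but the shape of the idea is simple: turn a day's wearable readings into a prompt and ask a model what the body might need. A minimal sketch with made-up metrics and thresholds; no claim that this is how Phi actually does it.)

```python
# Minimal sketch: daily wearable metrics in, one cautious suggestion out.
# The metric names, values, and prompt are invented for illustration.
from openai import OpenAI  # assumes the official openai package and an API key

client = OpenAI()

def daily_recommendation(metrics: dict) -> str:
    """Summarize the day's readings and ask the model for one small adjustment."""
    summary = ", ".join(f"{name}={value}" for name, value in metrics.items())
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content":
                "You are a cautious wellness assistant. Suggest one small, safe "
                "adjustment (sleep, hydration, food timing). No medical claims."},
            {"role": "user", "content": f"Today's wearable readings: {summary}"},
        ],
    )
    return response.choices[0].message.content

print(daily_recommendation({
    "sleep_hours": 5.5, "resting_hr": 64, "hrv_ms": 38, "steps": 3200,
}))
```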

Speaker 1 (01:16:31):
That's the only way to do it. Well, I
appreciate you, man. Thank you for the gifts here too.
I can't wait to keep trying them.

Speaker 2 (01:16:36):
And yeah, thanks for having me on and challenging me
and pushing me every time.

Speaker 1 (01:16:40):
So much love, brother.

Speaker 2 (01:16:41):
Yeah, thank you, Jimmy.

Speaker 1 (01:16:42):
Thank you again for listening to the Jimmy Rex Show.
And if you liked what you heard, please like and subscribe.
It really helps me to get better guests, to be
able to get the type of people on this podcast
that are going to make it the most interesting. Also, I wanted
to tell everybody about my podcast studio, The Rookery Studios, now
in Salt Lake City, in Utah. If you

(01:17:03):
live in Utah and want to produce your own podcast,
we take all of the guesswork out of it
for you and make it so simple. All you do
is you come in, you sit down, you talk, and leave.
We record it, edit it, even post it for you.
If you're interested in doing your own podcast, visit our Instagram
and send us a DM at Rookery Studios, or go to
our website, The Rookery Studios dot com.