Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
All right, and hello everybody.
(00:03):
So welcome to another edition of the Gen AI Meetup podcast.
I'm Mark with my wonderful co-host, Shashank.
So we're going to try to talk about some of the news stuff
this week.
It's been a little bit of a slower news week,
but even in a slow week in AI, there's still a lot to talk about.
(00:28):
So we're doing that.
So first of all, Shashank, what's up with
OpenAI this week?
It seems like a lot of stuff is happening there.
Yeah.
So we've had a lot of drama from them
for the last couple months with Sam Altman being
ousted by the board of directors and then him coming back
(00:49):
and replacing the board of directors
and the old people who had an issue with him,
kind of just slowly deciding to take their leave
and move on to other projects.
And we've recently found out that OpenAI
had a non-disparagement clause in their employment contract,
(01:13):
which probably prevented some of the people who
left from saying bad things about them.
Or they would have lost all of their stock.
They would take away people's stock.
If they talked negatively about OpenAI?
Correct.
But since then, I think they've removed that clause
and they've changed the employment contract
(01:36):
and the terms of agreement and stuff.
And one of the recent people who left
had a lot of negative things to say about it, particularly
some of the same things people have been complaining about:
that OpenAI has been focused more on making money,
releasing bigger models without much care for safety
(01:57):
and preventing misuse.
And they're just focused more on money
as opposed to the founding principles, which
was to bring safe AI to the world.
You know, I wonder if OpenAI is going
to be relevant in the next couple of years.
(02:18):
Because OpenAI, I think, is kind of ahead of most companies.
I think the only company-- there's a few companies
that you could argue are further ahead.
Some may argue Anthropic is a little bit ahead.
You could argue Google, whatever.
But I mean, OpenAI is 100% in the running right now.
So I think that if you think about it, OpenAI is at the top now.
(02:46):
But that's leveraging a bunch of the research
they did a while ago.
I mean, with any software project,
you're going to do a lot of the initial research
and development up front, and a project maybe takes a year.
So let's say January, February, March,
you're doing all your initial design,
(03:08):
then April, May, June, you're building; July, August,
you're maybe fixing all the bugs; then around fall and winter,
October, November, December, you're launching it
and then fixing the bugs.
(03:28):
So I think that maybe you really need those senior people
at the beginning.
And then maybe over time, once the architecture
is kind of established and they know what they're building,
you maybe need them less.
So I think that OpenAI is still able to kind of ride
(03:49):
on their own coattails a little bit.
But it seems to me like if all the most senior researchers
go away, who's going to be there to build next stuff?
I mean, maybe it's possible that OpenAI has hired
a bunch of super smart people and they've already learned
what they need to learn from those senior engineers
(04:09):
and maybe those people were there just for show, I don't know.
But it seems unclear to me if OpenAI is going to continue
to be competitive going forward after all these people left.
Because I don't know, I don't think Sam Altman is
like a massive machine learning researcher.
Well, to clarify, a lot of the people
(04:31):
from the exec team have left, not specifically
like people from the engineering team.
So it's the people who were driving the vision for the company,
like the chief scientist.
But the engineers who are doing most of the work
are still there.
A lot of people are still there.
So I feel like it can keep functioning just fine as a company
(04:54):
in the near future.
But the engineering leadership may suffer.
But we've also heard they're working on a new project.
What is it, Project Strawberry?
Yeah, yeah.
Sam Altman, he like tweeted a picture
of him gardening or something like that with some strawberries
(05:14):
in the garden.
He's like, oh, I love strawberries.
So this has been like a whole thing where OpenAI--
they called this algorithm Q-star.
We don't really know what it is.
Apparently it's some sort of like revolutionary breakthrough
in machine learning.
Nobody actually knows what it is, but it's been really hyped.
And now it's been code named Strawberry.
(05:36):
So we don't know a lot about it.
For all we know, it's just hype that may not
live up to expectations.
So I feel a little bit bad talking about it
just because we're kind of feeding into the hype.
But OpenAI right now is still like a big company.
So we got to talk about it at least a little bit.
(05:57):
So the rumors are it's going to be the next gen model, maybe
GPT-5 equivalent or something close to it.
But a lot of other people are also very skeptical,
because there's a bunch of other labs, including Google,
that have been working on polishing the capabilities
of these models, like math, reasoning,
(06:19):
and given how all the other companies have caught up
to the lead that OpenAI had a while back.
And we talked about this in the last episode.
Facebook released a fully open source model, which
is comparable to GPT-4o, for free.
And that's just insane.
So it's going to take a lot for them to catch up
(06:42):
and maintain their position as the leading company in AI.
One thing I'm looking forward to is for them
to build some kind of an agentic workflow or UI
to help people run these models and solve more complicated tasks
(07:04):
like in a loop, especially now that they
have a bunch of different GPTs, which are made by other people.
And they have a developer ecosystem, which is growing.
So if they're able to leverage that and build agents
to solve harder challenges, that would be pretty cool.
I thought the GPTs were similar to agents,
because, for example, with the Figma one,
(07:27):
you can say, hey, make me a PowerPoint presentation.
Yeah, but a multi-tool agent that
can string together all kinds of different GPTs
and have them all work together.
You could-- I don't know if OpenAI needs to be the one
to build that, though, because there's already
a lot of tools out there for making multi-agentic workflows.
(07:51):
Like there's CrewAI, there's AutoGen.
I think LlamaIndex has one.
All of those are for developers.
I'm thinking a customer-facing one, which
would make it super simple for you to just describe what you want.
And then boom, you have an agent.
I mean, you could.
To me, I don't think that is that interesting.
(08:13):
I mean, it's interesting, right?
But that would just feel more like a wrapper around some of these
agentic frameworks that we already have,
maybe with a UI.
And maybe you could have some AI to make it easier for people
to use it.
But to me, that-- that's not revolutionary.
(08:35):
That's more of an application of what we have today.
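For the curious, here is a rough, hypothetical sketch of the core loop that frameworks like CrewAI, AutoGen, and LlamaIndex wrap: the model either asks for a tool or gives a final answer, you run the tool, feed the result back, and repeat. Every name here (call_llm, TOOLS, the placeholder tools) is made up for illustration and is not any particular framework's API.

```python
# Hypothetical sketch of a minimal tool-using agent loop.
# call_llm() stands in for any chat-completion call; the tools are fakes.
import json

def search_web(query: str) -> str:
    """Placeholder tool: pretend web search."""
    return f"(pretend search results for: {query})"

def make_slide_deck(outline: str) -> str:
    """Placeholder tool: pretend slide generator."""
    return f"(pretend slide deck built from: {outline})"

TOOLS = {"search_web": search_web, "make_slide_deck": make_slide_deck}

def call_llm(messages: list[dict]) -> dict:
    """Toy stand-in for a model call. A real agent would ask an LLM to pick
    the next step; this fake one searches once, then gives a final answer,
    so the loop below actually runs end to end."""
    if not any("Tool result" in m["content"] for m in messages):
        return {"tool": "search_web", "args": {"query": messages[0]["content"]}}
    return {"final": "Here is a summary based on the tool results."}

def run_agent(goal: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        reply = call_llm(messages)
        if "final" in reply:                               # the model says it's done
            return reply["final"]
        result = TOOLS[reply["tool"]](**reply["args"])     # run the requested tool
        messages.append({"role": "assistant", "content": json.dumps(reply)})
        messages.append({"role": "user", "content": f"Tool result: {result}"})
    return "Stopped after max_steps without a final answer."

print(run_agent("Make me a short deck about this week's AI news"))
```

A consumer-facing builder like the one described above would essentially put a form UI on top of the agent and tool definitions in a loop like this.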
I guess speaking of agents, Mistral built a platform.
Yeah.
To make agents easier.
Yeah, that's pretty cool.
So it seems like-- so for those that don't know,
Mistral is a small French company.
(08:57):
And they are-- their whole thing is like doing AI really cheaply.
So as far as I know, they're spending way less money
on their research and development than other companies.
So they're just super small team,
and they're able to kind of compete with the big boys.
(09:20):
So they made it so that you can go onto the Mistral website
and create an agent.
It makes it very user-friendly.
The whole thing doesn't require any programming whatsoever.
It's just like a form that you fill out
on the website, basically.
(09:41):
And you can have an agent.
So I think this will really kind of open it up
for people who want to go and then use this for their workflow.
And I think it's like little innovations like this
that are really going to help kind of bring AI to the masses.
I completely agree.
This makes it so much simpler to create agents.
(10:01):
Because like you mentioned, LlamaIndex, LangChain,
they're all fantastic tools.
But they're so cumbersome to use, especially as a non-developer,
non-engineer.
If you have a GUI, this would be fantastic for the masses.
Yeah, for sure.
For sure.
(10:22):
OK, yeah, we'll just do a little rapid fire news.
So yeah, Mistral.
Another one, Shashank mentioned, is
that new open source image generator, Flux.
Yeah, I don't know about that.
I don't think they're open source.
I think they're-- actually, you're right.
I think they are open source.
(10:43):
It's the team behind Stable Diffusion.
And that's another company that's been imploding
similar to OpenAI.
A lot of people have left.
For different reasons, though.
I don't think they have any issue with the vision
for Stable Diffusion or something, especially
because they're not as big of an industry leader as OpenAI
(11:05):
is.
But they have had a lot of struggles
figuring out how to stay afloat and provide
these large models for free.
It's not cheap.
So their founder left.
And apparently, a lot of these people
(11:25):
who are responsible for this new image generation model,
Flux.1 from Black Forest Labs.
They used to work at Stability AI
a while back, as the chief scientist
and some of the exec team.
And they left.
And built this new open source alternative to DALL-E 3
(11:46):
or Midjourney.
And it's comparable to the best image generation models
out there, which I think is Midjourney.
It gets text right in images, which
has been a hard problem for a while.
Oh, Flux gets text right?
Yeah.
Yeah.
Wow.
I think most companies right now are
starting to solve text inside images.
So you can ask it to build your logo.
(12:08):
And the first image on their website
is the Flux and Black Forest Labs name, generated
by their diffusion model, which is really cool.
That is cool.
And also, it seems like it doesn't have the guardrails
that a lot of these other AI models have in place.
So it looks like you can create images of whatever you want.
(12:34):
Yeah.
So it seems like they're really leaning
into some of the adult content that you could potentially
generate.
Oh, I see.
Speaking of no guardrails, Elon Musk
tweeted an image of Kamala Harris, which apparently
(12:55):
violates all of Twitter's or X's policies
against spreading misinformation.
But--
I don't know.
I mean, who's going to do anything?
It's like, what do you want to do?
Fire him?
I mean, they could take down his post.
I mean, he's like the CEO.
Is he the CEO?
(13:16):
He's the-- he like owns it.
Yeah, I mean, no one's going to take down his post.
He's like the chief anarchist king or something.
He has a made-up title there.
I mean, yeah, so speaking of Elon,
he just did this--
did you see that Lex Fridman interview with him?
I started watching it.
(13:37):
It's very long.
Yeah, eight hours.
That's the longest podcast.
Well, I mean, Elon Musk, I think,
talked for like an hour or something in that podcast.
It's with the whole team.
Yeah.
And then he was talking there how they just built
some sort of like new data center.
I think it was in Nashville.
OK.
Yeah, so apparently it's one of the world's biggest AI data
centers.
So--
Was that where he's raising money for his xAI data center, too?
(14:00):
Yeah.
That's-- I don't know if that's where he's raising the money
from, but that's where it's going to be built.
That's where he's going to build it.
Yeah.
It's around--
it's somewhere around Tennessee, I think.
So yeah, it seems like maybe Grok will be in the running.
Elon claims that they will be maybe an order of magnitude
(14:20):
better than GPT-4 when it's done.
So we shall see.
I mean, if he says it, it's probably true,
but the timelines may get pushed out a little bit.
He is not known to stick to timelines very well,
but he gets there eventually, though.
He does.
And I think that's OK.
I mean, I don't hold it against him.
(14:41):
And these things are hard to build.
So it's fine.
Yeah, I guess the Neuralink episode
isn't directly related to generative AI,
but it does use a lot of machine learning
to interpret the results and make sense of all of these brain
(15:01):
waves and signals that are coming in from these receptors
that are in your brain.
Some of the cool things that they talked about
was the fact that we have very limited throughput
when we're talking with human language
and to be able to shove all of the things that I have
(15:24):
in my head into your head directly
through a high bandwidth interface,
that would be super cool.
To all of the listeners out there,
I have so many thoughts in my head,
but I'm limited by language, by my ability to articulate,
and condense all of these thoughts that are in my head
into human language, and to envision a world
where I'm able to just beam all my thoughts
(15:46):
into other people's heads and vice versa.
That would be insane.
That's very cool.
Although, one thing that I'm not sure about is,
I think in my brain, I'm not the most articulate.
And then when you start speaking,
you can process your thoughts and maybe get more
(16:07):
articulate, right?
Because it's like, you have all these thoughts, right?
Maybe you could beam all the information,
but would the other person be able to understand you?
Ooh.
Because it might just be too much,
unless you had another chip in your head
that was helping you interpret it, summarizing.
It might be too much information.
(16:28):
Imagine hypothetically.
But let's just say you were speaking in front of a large group
of people.
Let's say you were speaking in front of a room of 100 people.
And then all of those 100 people just started
beaming their thoughts.
It would just be too much, right?
(16:50):
I mean, in a certain sense, it might
help you get a read on the room, because they could
tell you their thoughts, but maybe not the whole room
at once, maybe at least one on one.
Right.
But if you built that, how would you turn that off?
Like, how would you prevent somebody from--
I mean, maybe you could--
Well, it's like sharing a file over Bluetooth.
(17:11):
Maybe you have to send it and the other person has to accept it.
I guess, but then that's slowing things down, right?
No, I'm sure that would be fine.
You can still have high bandwidth, you know,
SSL communication, which is--
Yeah.
Maybe you could just put on a do not disturb mode, where no one
(17:34):
can beam their thoughts to you.
But then also, if they could read the thoughts,
could they send advertisements to your brain?
Would they know if you actually not just even looked at the ad,
but really read it, understood it, thought about it,
considered it?
I mean, that's probably what would happen.
(17:58):
Because I mean, if I'm an advertiser, like, maybe you
click on my ad, maybe you even see my ad.
But did you really see the ad?
If you know what I'm saying.
I do, I do.
That's a scary feature.
I'm sure we'll get there eventually,
because advertising takes over every single domain.
(18:19):
And it is taking over these LLMs, too.
Specifically, Perplexity is trying to inject ads
and trying to also be good citizens of the internet
and share some of the advertising revenue
with the content publishers, like news websites,
so that it doesn't get a lot of flack for just stealing data.
(18:42):
Which I think is a good thing.
I think that is the internet we all want,
where the people who are creating the content
get compensated for doing that.
And these models don't just steal all of this data.
Oh, I mean, that seems like a good thing.
I didn't realize Perplexity was doing that.
And by the way, you got like a free Perplexity subscription
(19:03):
with your Rabbit R1, right?
Sure, sure.
Free is--
Well, no.
It was included.
Included, yeah.
I honestly haven't used the Rabbit device since I got it.
It's been dead for a while.
I need to charge it.
Let me plug it in.
But the Perplexity subscription has been pretty nifty.
Mostly when I'm trying to rely on factual information
(19:26):
to do some kind of research, and I
don't trust the subjective judgments
or hallucinations of some of these models.
Yeah, I like Perplexity.
It's pretty good.
I haven't used SearchGPT, but that
would be cool to try out if and when
that becomes publicly available.
But so far, Perplexity is fantastic.
(19:48):
It's great for research.
What types of things do you research with it?
Well, right now, I've been on this health
kick, learning more about my blood sugar.
I have a continuous glucose monitor on right now,
trying to understand more about gut health, sleep, diet,
(20:11):
supplements, et cetera.
And there's so much information online.
And so much conflicting information.
Each expert has their own opinion,
and trying to condense all of this knowledge--
I'm not going to sit and read every single thing about everything;
it's so overwhelming.
So this has been useful in condensing all of that.
(20:33):
That's great.
Actually, one thing that I've been using a lot
is called Consensus.
I think you know that one, right?
Yeah, it's a research engine, right?
Yeah, exactly.
So it has a co-pilot that can help you synthesize all
of the research papers together.
So you say, oh, is LDL cholesterol bad for me?
(20:54):
And it'll say, oh, based off of these 30 papers,
and then seven of these papers were
from a highly cited journal.
And then 18 of them said it was bad for you.
Five of them said it was neutral.
And then three of them said it was good for you or whatever.
So overall, 74% say it's good, or whatever.
(21:20):
So it's kind of like an automatic meta-analysis tool.
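To make the arithmetic concrete, the aggregation step behind an answer like that boils down to labeling each paper's conclusion and reporting the tally. Here is a toy sketch with the stance labels hard-coded instead of coming from a model; the 18/5/3 split is just the example from the conversation, not real data.

```python
# Toy sketch of the tallying behind a Consensus-style summary.
# In a real tool an LLM would assign each paper a stance; here the labels
# are hard-coded just to show the percentage math.
from collections import Counter

paper_stances = ["bad"] * 18 + ["neutral"] * 5 + ["good"] * 3   # 26 labeled papers

total = len(paper_stances)
for stance, count in Counter(paper_stances).most_common():
    print(f"{stance}: {count}/{total} = {count / total:.0%}")
# bad: 18/26 = 69%, neutral: 5/26 = 19%, good: 3/26 = 12%
```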
It's been fantastic for helping me just find what is--
I mean, I shouldn't say the truth,
but what is--
- Insane.
Yeah, the consensus recommendation.
(21:41):
So that's really cool.
I'm looking at it right now.
I remember we used this to build, like, an app.
And I forget what we were using.
We were using their--
Well, scraping their website for a hackathon test project.
We weren't even scraping the website.
We're just calling their API.
(22:02):
Which wasn't specifically public.
But anyways, we just did that a couple of times
to try it out.
We're not actually using that anymore.
It's a fantastic, fantastic website, though.
Honestly, I think it's like my favorite one
for finding out like medical knowledge.
Because I feel like the problem is when you search
(22:24):
on Google for something, or not even Google,
but, like, not to single out Google.
Just like any search engine, really.
You're going to find information that is more
from these established places.
Like, oh, this is what the CDC says.
Like, this is what the World Health Organization says
about, you know, whatever.
(22:45):
Which may not necessarily be true.
I mean, that's right.
It may be true.
It may not be true.
Maybe-- I feel like often times like the recommendations
aren't like--
They're too general.
Yeah.
It's just like for the general population.
But when you want to like dig deep, go into it.
Like, you want to actually look at the research paper,
(23:05):
know, like, the mechanism of action,
and actually get something hyper-specific.
And I think consensus is like a great thing for that.
Honestly, I think this is one of like my favorite use cases
of GenAI.
It's like a simple one, right?
Of just, you know, consolidating information.
But it's super important, right?
(23:27):
Because, like, when there is a research paper,
like it might take you--
I don't know how long it takes to read a research paper.
It might take like a couple of days to like actually read it.
Like, understand it, right?
But like an AI model can read it in a fraction of a second.
And the AI model could read, you know,
(23:49):
a hundred of these papers or a thousand of these papers,
understand them, synthesize the information,
and like summarize it so that like you can understand it.
So you can get like the information that you need
from the actual papers, which I think
is just going to like really help like figure out like what's
actually like true, right?
Because I think that in the days of kind of clickbait journalism
(24:14):
where people will say like, oh, well, red wine is good.
And it's like, oh, well, actually it's not like red wine.
But it's like the resveratrol in the red wine.
And it's like, oh, well, actually resveratrol is like,
you know, debatable if it's good or not.
So it's like maybe red wine isn't good for you.
But then people just you know, read that article.
Like, oh, red wine is good for you.
And it's like, is that true?
(24:36):
It's like, well, the consensus is maybe not.
Yeah.
And it's not just medical research.
It's literally all kinds of research papers.
I just searched, you know, searched on Consensus,
is nuclear fusion possible?
And it gave me, you know, possibly at 40%, yes at 33%,
no at 22%.
(24:57):
So it was like, yeah, maybe.
And it has a bunch of citations and a bunch of reasons
for how it jumped to that conclusion.
Whoa, yeah, that's cool.
What about like something a little bit more controversial?
Like, what about, like, rent control?
Does it say anything about that?
Like does rent control work?
Are there any studies around that?
Kind of curious.
Because I always heard that it didn't work.
(25:18):
But I guess like it works for the people
who don't want to have their rent raised.
But I guess we might need to think through
how we want to phrase this question a little more.
But for does rent control work specifically,
it says no, at 67%.
Oh, whoa.
OK.
So maybe like does rent control result
(25:42):
in the average home price getting less expensive?
Let me actually take a look at Perplexity
and see how that compares.
OK.
Because I have the free subscription.
Let me see what kind of research it does, what kind
of citations it has.
It looks at Vox articles, some Forbes articles,
(26:03):
and five more articles from Reddit, The Hill,
which is a newspaper outlet.
Balanced summary.
It's just giving me like a balanced summary with arguments
for and arguments against.
And--
Oh, but it's not telling you, like, what the conclusion
is--
(26:23):
Oh.
It's probably true, probably.
Yeah.
So I mean, I think, like, if you say, oh, rent control
doesn't work, at 60%,
then that means the other 40% it does.
Yeah, that's a good question.
Maybe the 67% is the number of papers
(26:44):
that have an argument against rent control.
OK.
Yeah, it could be.
Oh, yeah.
But before I forget, speaking of health,
so I've been using this continuous glucose monitor
to learn more about how my body responds to sugar
and how my blood sugar spikes with--
(27:09):
as I go about my day, as I eat foods, as I get more or less
sleep, as my stress levels change.
And one nifty feature that this app has
is a meal logging feature where I just
take a picture of whatever I'm eating.
And it exactly classifies everything that's on my plate.
(27:31):
So what's the app?
This specific one is veri.co.
Please spell that.
V-E-R-I.
Oh, V-E-R-I, not V-E-R-Y, but V-E-R-I.
Yeah.
You know, four-letter dictionary-word domains are hard to get.
So I think they went with veri and .co.
(27:52):
I think .com domains are also very hard to get.
But so this feature--
I think it's something that a lot of people at hackathons
want to build.
Take a picture, log your food, or take a picture at a restaurant
and decompose the ingredients, see what nutrients are in it,
(28:13):
log it to something, or tell you if you're allergic to something,
and ask a bunch of questions about your food.
So this app is really cool.
I don't exactly know how they work, but I
can imagine building something like this
with Facebook's Segment Anything.
Take a picture, segment all the different looking things,
(28:35):
and classify each one of those segmented object image chunks.
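As a rough sketch of that segment-then-classify idea (a guess at the pipeline, not how the Veri app actually works): Segment Anything proposes the regions, and a CLIP model scores each cropped region against a food vocabulary. The checkpoint path, image filename, and food list are placeholders.

```python
# Hedged sketch: segment a meal photo with Segment Anything, then label each
# region with CLIP. The checkpoint path, filename, and food list are placeholders.
import cv2
import torch
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator
from transformers import CLIPModel, CLIPProcessor

FOODS = ["chicken", "pork", "black beans", "lettuce", "salsa", "guacamole", "rice"]

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")   # placeholder checkpoint
mask_generator = SamAutomaticMaskGenerator(sam)

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = cv2.cvtColor(cv2.imread("salad_bowl.jpg"), cv2.COLOR_BGR2RGB)

for mask in mask_generator.generate(image):          # one mask per region SAM finds
    x, y, w, h = (int(v) for v in mask["bbox"])      # crop the segmented region
    crop = image[y:y + h, x:x + w]
    inputs = processor(text=FOODS, images=crop, return_tensors="pt", padding=True)
    with torch.no_grad():
        probs = clip(**inputs).logits_per_image.softmax(dim=-1)[0]
    print(FOODS[int(probs.argmax())], f"{probs.max().item():.0%}")
```

The "ingredients not lumped on top of each other" observation maps to the segmentation step: each item needs to be visually separable before the classifier can label its crop.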
And I took a picture earlier today of a solid bowl that I had.
And as long as the ingredients are not lumped on top of each other,
as long as you can see each ingredient with the camera,
(28:57):
it classifies it pretty well.
It distinguishes between chicken and pork,
and different kinds of vegetables, and beans, nuts, salsa, guac.
And I love these little nifty use cases for AI.
That's pretty cool.
Have you actually noticed any--
(29:17):
have you noticed any results, like feeling better,
you have more energy?
Have you learned something that affects you negatively,
or positively?
How has this helped you?
Yeah, I mean, it's only been less than two weeks,
but I have learned a lot. A few takeaways:
(29:40):
So eating the same foods later in the day
will cause my blood sugar to spike like two to three times
more than it would in the middle of the day.
So for reference, a blood sugar spike is not a good thing, right?
It's not a good thing.
It results in more oxidation, more aging,
(30:04):
and puts you at risk for prediabetes and diabetes.
And you want your body to get just enough energy;
when you get too much glucose in your body,
it ends up being converted to fat and overloading all of the--
(30:24):
I think the mitochondria in your body,
and not having them generate energy for you.
And that is not a good thing.
And there's a lot of drawbacks of having too much glucose
in your body.
Yeah, in particular, having glucose spikes,
where you eat something sugary, and you
get this rush of glucose in your body all at once.
(30:47):
So have you found that the app has helped you find foods
that maybe cause less of a glucose spike?
Yeah, I have a lot of learnings.
So the order in which you eat your foods
can have a big impact.
So eating sugary things at the start of a meal
(31:09):
is not a good idea.
If you just start the same meal in a different order
and eat veggies first, that'll lessen the spike that sugary foods
in the same meal will have.
You can eat desserts at the end of a meal, which
is not as bad as eating desserts by themselves
as a snack in the middle of the day.
If you're more stressed out, if you haven't had enough sleep,
(31:32):
then your blood sugar spikes a little more.
Eating later in the day is not as good.
And being sedentary right after a meal is not good either.
Even just like waving your arms around
or having a very light walk for 10 minutes
can cut down this massive spike that you would have
otherwise had after a big meal.
(31:54):
Yeah, a lot of interesting insights.
And a lot of different foods have different impacts
on different people.
So this is a really cool thing.
It's not that cheap.
It's a little prohibitively expensive
to use regularly for most people, I think.
But it's interesting to try it out once.
(32:17):
Yeah.
So I used a continuous glucose monitor for a few weeks.
And I got a lot of the same info that you got.
I actually didn't realize that different meals,
or the same meal at different times,
would spike your glucose in different ways.
Because when I had the glucose monitor,
I was just trying to eat a bunch of different stuff,
(32:38):
just to see how it would affect me.
I'm gonna do the same, yeah.
But I didn't really try the same thing over and over again.
But I should.
That's a good idea.
At different times, in the morning versus at night.
I have a couple of favorite snacks.
So I like eating them all the time.
I love hummus.
I have this go-to smoothie recipe that I make every weekend.
(32:59):
I throw in a bunch of fruits, bananas, and a bunch of different things.
And that is a massive sugar bomb.
Despite how healthy I think it is, it is very sweet.
Well, I would imagine that because you're blending it,
it's probably just, like, taking all the fructose--
like, it's liquid,
like, it just goes straight to the bloodstream or whatever.
(33:20):
Immediately, directly in my veins.
Oh, you don't drink, you just inject it.
(laughing)
Yeah, I mean, the fact that it is blended,
makes it so easy to digest that my body
doesn't need to do any work to break down
all these individual ingredients.
So it's absorbed by my body a lot faster
than if I ate these ingredients by themselves.
(33:41):
Yeah, you know, I read a study a while ago
where they had two groups of mice.
One group of mice, they gave the same food,
but it was processed in some way.
I think, like, blended or something, it was like a pellet.
And the other group, they gave it as a whole piece of food.
And then they found that,
even though the calories were identical,
(34:02):
the mice that ate the same food
that was slightly more processed
just gained a lot more weight.
Yeah, I think this is an incorrect notion
that calories are the same everywhere.
Calories in, calories out was something that I used to use
when I was working out in college.
Well, I think calories in, calories out
is still like technically correct.
(34:24):
Because the thing is--
I don't know how we started getting on this topic,
but anyways, I agree that not all calories
are created equal.
They're not.
Yeah, because, like, you know, if you have, like--
I think it's protein, it takes more energy
to kind of digest that protein calorie, right?
(34:48):
Yeah, it's just, it's probably just gonna liber everything.
Yeah, so anyways, so then like, you know, technically,
calories in and calories out is true,
but like oftentimes like, you know,
if you have a set number of calories that's like highly processed,
you're just gonna get a lot less out.
(35:09):
It's so much more complicated than that.
Even, you know, think about like potatoes.
You boil a bunch of potatoes, and they're good to go,
they're hot, they're steaming, they're yummy.
If you eat them right then and there,
you'll, you know, get some amount of blood sugar spike
and get some amount of energy from those potatoes.
(35:31):
If you put them in the fridge, let them, you know,
get cold overnight and then, you know,
take it out and reheat them, eat them again,
it's gonna have a much lower caloric absorption by your body
because the way those molecules bind changes
(35:52):
after you cool them down, and these bonds
make it harder to break down some of these chains
and extract energy out of them.
So it's such a complicated science,
and each different step
that the processing of the food takes
has an impact on how it affects your body.
(36:12):
- Yeah, I even heard that, like,
if you have, like, vegetables
which have been sitting on your counter
or in a refrigerator for a while,
they lose nutrients over time.
- That's true too.
- Yeah, I mean, the more you process ingredients,
the more is lost to like oxidation or you know,
I don't know, maybe like sublimation or something.
(36:34):
So ideally you wanna like,
that's why my coffee beans, I was so annoyed,
I ordered a bunch of coffee beans from Amazon,
but they came pre-ground.
I ordered whole beans, but they came ground.
And I was so annoyed 'cause like grinding coffee beans
right before you, you know, make a brew of coffee
retains the flavor, the, you know,
(36:56):
some of the, like, all the compounds--
I forget the name, but all the things
that make coffee amazing.
- I used to have a friend who just bought like the unroasted
beans and he had a coffee roaster.
- Yeah. - And he was like,
"That's another level."
- Yeah.
Man, I feel like all this makes you feel like,
(37:18):
like how are we supposed to be healthy in like today's
modern society?
Like everything's, like, pre-packaged,
it's been sitting on the shelf for, like,
you know, months before it gets to you,
in plastic containers with, you know,
all these additives and preservatives.
So I don't know, like, I feel like
I just need to go off to,
like, a farm somewhere and then just, like,
(37:39):
start eating just food that I grew myself.
- I have an answer: with generative AI.
(laughing)
With, no, I'm serious, with some kind of an all-knowing
medical expert that's right at your fingertips,
that's able to constantly learn about your eating habits
and give you suggestions.
(37:59):
I feel like that's realistically the only way
to process all of this information.
'Cause who has the time to understand all of this
about food science, nutrition, biology,
and medicine to make the right choices to be healthy?
- You're right.
There should be somebody who makes like a,
(38:22):
like an app or something that just tells you
like what to eat and when to eat it.
And then, like, based off of your biomarkers--
like, I wonder if you could somehow bring it in,
I don't know exactly how you'd build this.
You have to have some sort of thing that measures what it can,
(38:43):
like, I don't know, it could connect to your
continuous glucose monitor, connect to your
smart ring, Apple Watch, whatever.
If it could like read your cholesterol levels,
like I don't know like ketone levels, whatever,
and then just take all that information
(39:03):
and then see if it could synthesize it, be like,
oh, like you should skip a meal or like,
hey buddy, like it's time to have
a steak and eggs for breakfast, 'cause you need some protein.
Like I think that would be really cool.
And I think you're right, like I mean,
that's really the future that needs to be there,
(39:25):
'cause I think that's like kind of Bryan Johnson's whole thing.
He's like, well, like we're gonna be run by algorithms
in the future anyways.
So it's like, might as well just, like, bow down to them.
- Yeah, so Bryan Johnson is someone who's, ah,
I don't know, I don't even know what to call him,
like not quite a fitness or health influencer,
but someone who's a professional rejuvenation athlete.
(39:48):
- Sure, sure.
- Someone who sold his company,
made a lot of money, and decided he's gonna use himself,
his body, as a guinea pig to try to live forever,
try to do exactly what it takes to operate
at optimal health, get the perfect sleep,
get the perfect nutrition to reverse the impact of aging
(40:12):
that just happens as we live and make choices in our lives
that maybe bring us joy but are suboptimal
for the functioning of our body.
And he's a little too robotic.
I think a majority of people, probably like 99% of people
wouldn't want to live the life that he does,
(40:33):
but it would be great if we can take some of the learnings
and distill that down into simple exercises
that we can do throughout our day.
- Yeah, I mean, it feels like,
I mean, maybe you could even make like,
3D printed food or something like that,
that gives you the exact nutrients that you need.
(40:53):
- No, that defeats the purpose.
So we're talking about eating whole, unprocessed,
organic, untouched by machines and, you know,
well, I don't know, I mean, I guess you could have it
either way, right?
So on the one hand, you could say,
all right, I'm gonna have this broccoli
straight from the garden.
(41:15):
Or you could say, like, well, my body needs,
like, 37% carbohydrates with, like, these micronutrients
and those micronutrients.
So it's like, you know, you could eat broccoli
or you could have a food pellet that has exactly what you need
(41:35):
that will help you live optimally with the optimum energy.
- Most muscle gain.
I mean, it won't be like natural.
- Sure.
I guess if I were to humor you,
if we had really cutting edge 3D printers
that can print at the cellular level
and we printed fibers and other things
(41:57):
to keep our microbiome healthy
and not have the same impact
as highly processed foods.
That'd be pretty cool.
- Yeah, I mean, like--
- 3D print some fibers into your steak, too.
- Sure, sure, why not?
I mean, like, you think about it, like, nature does it,
(42:18):
right?
Like, why can't we?
Like, I mean, isn't it crazy, just, like,
you put a seed in the ground and it grows?
You don't have to, like, do anything,
you just put it there, like, put some water, sun,
and then, like, soil, and then you get a plant.
It's amazing.
Like, I mean, if it can,
if a seed can do it,
Like, why can't we?
- I wouldn't say a seed does that all by itself.
(42:41):
It is like four billion years of evolution
that has led to this phenomenon being emergent.
- Well, sure, I mean, but like, you know,
you have the four billion years of evolution,
and then, like, we could use that to, like, build
the 3D printer thing, or, like, the seed,
which grows exactly what you need for Tuesday.
(43:03):
- That would be cool.
- That would be cool.
Like, all your meals, like, you just, like, grow them.
Like, imagine you just, like, put it in the ground
and, like, out comes a cow with the exact macro and
micronutrients that you need.
It would help you live forever.
And also, you could optimize for your food preferences as well.
So like, when you ate it, it would taste amazing.
(43:25):
- Actually, I don't know if cool is the right word.
That sounds like a highly controversial thing
that a lot of people might have very
conflicting opinions about.
- You think so?
Why would that be controversial?
I think that would be cool.
- Yeah, you put a seed in and get a piece of steak out of it.
The steak seed.
- Yeah, that would be cool.
Well, anyways, I know we're about out of time,
(43:47):
but if the folks at DeepMind are working on cool
protein folding solutions that can come up
with the steak seed, let us know.
- Yeah.
- All right, well, until next time.