Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Yes, hello everybody and welcome to the number one generative AI podcast in the world with
(00:09):
your hosts, Mark and Shashank.
That's us.
So, anyways, you guys are awesome.
So the podcast has actually been growing.
So if you're new, thank you for listening.
If you have been listening, thank you for listening.
(00:30):
We really appreciate you.
We've recently seen a big uptick in listeners.
So I hope you guys find this interesting.
We've been trying to really post the video podcasts lately.
But yeah, it's a bit of a slow news week this week.
Yeah.
But there are still a few things to talk about.
(00:51):
So Shashank, did you hear that TSMC is now producing four nanometer chips in Arizona?
That's pretty cool.
I know that we've been talking about that for a while.
I know, wasn't it like Foxconn tried to come into the US, but that never panned out?
(01:12):
Hopefully, this actually pans out.
I think Trump is really trying to push domestic manufacturing.
Maybe some good can come out of the new presidency and have the entire, well, more of the supply
chain for hardware built in the same country, which de-risks all these companies, because
(01:39):
TSMC is kind of in a delicate situation right next to China.
Not to mention just having one single company in one single country be responsible for so
much of the world's technology is just kind of a risky situation to be in.
Yeah, for sure.
I feel like if you were going to write a novel about how to take over the world, I would
(02:03):
definitely write the sci-fi novel.
Just put Taiwan right there in between all the superpowers, kind of a little bit defenseless
and yeah, it would make for a really interesting book.
But yeah, I think that basically, as you mentioned, the problem that the US has is even though
(02:26):
we design a lot of chips in the United States, right?
I mean, like Apple, I think it designs some of its own chips.
You have Nvidia, and I think Qualcomm does a lot of the chip design too, all these companies.
Yeah, so many companies.
Basically all the companies, they design their chips in the United States.
(02:47):
Even the AI-specific ones like Cerebras and SambaNova and Groq, they're all designing
their chips in the United States.
But pretty much all of them will use TSMC to develop the chips, right?
So manufacture it.
Yeah, it's a manufacturer of the chips.
(03:07):
So TSMC is just, you know, they're not designing the chips, but they're actually going to be
the ones who make it.
And that's part of the reason why we see on the back of the iPhone, what is it like, designed
in California, made in China?
I don't think they put the made in China part.
They leave that out.
But yes, designed in California.
(03:28):
Yeah, exactly.
But you know, it's basically made in China, right?
And so, anyways, with that being said, right?
I think that the fact that TSMC is moving some of their actual manufacturing efforts
to the United States is going to be huge for just like national security and helping the
(03:52):
US stay on top of the generative AI movement in general, right?
Although I was reading that, apparently you can't do all of it in the United States.
So apparently some of the final steps, I think it's the packaging, needs to be done in Taiwan.
So apparently they do some of the chip manufacturing in the US, then they ship it back to Taiwan
(04:15):
for the final packaging, which doesn't exactly make sense to me from the article.
But maybe there's some really fancy packaging that they need to do, put the big bow on the
box or something, right?
And then they can ship it back.
But apparently that's coming.
And apparently soon they're going to have some of the smallest and best chips in the United
(04:40):
States.
So maybe soon the iPhone will say, designed in California, made in America.
Well, I mean, just to clarify that point a little bit, Foxconn is still staying in China.
So it's still going to be a made in China phone.
But at the very least, if you look at the CPU and GPU and the processors, that at the very
(05:02):
least might be entirely made in America.
But yeah, that is kind of a weird caveat that it's going to be mostly made here, but then
just sent back to Taiwan before being sent again back to the US.
Yeah, or maybe it just goes to China from Taiwan from there, because most of the stuff is
(05:24):
assembled in China at Foxconn or somewhere else.
Yeah, but Foxconn I think has some factories in the United States.
Okay.
So for example, you know, the Lordstown Motors, that was like the electric car company.
They're, I think, either in Ohio or Pennsylvania.
Okay.
(05:44):
I remember I was driving, I was driving to Pennsylvania.
Actually, I was driving to Titusville, Pennsylvania maybe a year ago.
So Titusville, Pennsylvania as an aside, it's super interesting.
So it's not exactly related to AI, but we can try to make the parallel.
So basically Titusville, Pennsylvania is the start of the oil industry in the world basically.
(06:09):
So for those who don't know, there used to be pretty much only one oil company on Earth.
That was Standard Oil.
And Standard Oil was owned by John D. Rockefeller, who is arguably the richest guy in all of history.
Mr. Oil, yeah.
Standard Oil.
He made a lot of money and he's the reason we have antitrust laws to break up massive companies.
(06:32):
Exactly.
So super big.
And if you actually kind of trace back all of the biggest oil companies today, like I think
B.P. and Chevron, Exxon and all these super massive companies, they all used to be one company.
And that was Standard Oil.
So yeah, he was a billionaire, back when a billionaire actually was a lot of money.
(06:58):
That was a lot.
That was a lot of money.
That was a lot of money back then.
Yeah.
I mean, it's still a lot of money today.
It's still a lot.
But you know, 200 years ago, like 150 years ago, I don't know.
He was like the Elon Musk of the day.
But like more so almost.
The Elon Bezos of the day.
Yeah.
Yeah, exactly.
So, yeah, Titusville, Pennsylvania was where they started drilling for oil in the United States.
(07:25):
And it was kind of the thing that put him on the map.
Well, so he actually never drilled for the oil.
He did the refining of the oil.
So they would use trains or pipelines to bring the oil back to Cleveland.
And then Cleveland, Ohio is where he would have a lot of his manufacturing and processing
(07:48):
of oil; they'd turn it into gasoline and whatnot.
So, Generative AI, maybe it's the gasoline of the future.
Does that, does that work?
That maybe kind of, I guess it is the currency.
It is the gold, the liquid oil of the future.
(08:09):
But speaking of like oil and energy, I feel like we need to start thinking about more and
more energy to fuel all these different models.
All these companies are building massive plants.
And yeah, I think the TSMC plant will be pretty cool.
But we're going to need like a lot of energy to fuel all this stuff.
(08:30):
Yeah.
For sure.
We will.
Because I think when we were talking to Matt from Cerebras, he was mentioning that I think
one of their nodes used like 20 kilowatts of power.
I think that's what it was.
That's like one wafer.
Yeah, just one wafer.
And packaged up into a solution with networking.
(08:51):
Right.
Exactly.
And in order to train a big model or even to run, I think, Llama 405B, which is, you know,
Facebook's new LLM, you would need maybe a couple of wafers, I think, in order to run it.
Oh, since we're doing the video, we should bring over the wafer. In fact, one moment.
(09:15):
So this is not going to be an actual wafer from any of the companies that we talked about.
This is an old, defunct wafer that I got from someone here nearby who used to work at SanDisk.
So it's a memory chip company, and this is a wafer with dies of a bunch of memory chips from SanDisk.
It's probably a defective one.
They had to throw it out for some reason.
(09:37):
But if you look at the video, you can see a bunch of symmetrical little cubes.
They're all, you know, identical memory chips that will go into maybe like a USB stick or
something.
Yeah.
So you'd have something like this, right?
But Cerebras is interesting because it uses the entire wafer, right?
(09:59):
So typically what will happen is companies like Nvidia or AMD, whatever, they will go and
then they will actually make multiple chips.
So they'll go and then cut out just like a small piece of the wafer essentially and then
they'll use that for their chip.
But Cerebras, they will use the biggest possible square from the wafer.
(10:24):
And since it's so big, it uses a ton of power, which is like 20 kilowatts.
And I think that what is it?
A house is maybe, well, like a small house is maybe like two kilowatts or something like
that.
And then like probably like a medium to like large size house is like 10, 10 kilowatts.
So 20 kilowatts is like a big house.
(10:47):
It's a lot of power.
Yeah, that you would need.
So if you need a couple of those, I mean, that's like, that's a lot of power.
I think a lot of power for one node, which I assume maybe can handle a lot of queries,
but still.
Yeah.
I mean, sure.
It's really fast and everything.
So that's, so that's nice.
(11:08):
But it's just, you know, not that efficient.
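As a rough back-of-the-envelope, using the loose numbers tossed around here (roughly 20 kilowatts per wafer-scale node, a couple of nodes, and a 2 to 10 kilowatt household draw, all of which are estimates from the conversation rather than official specs), the comparison might look something like this:

```python
# Back-of-the-envelope power comparison using the rough figures from the conversation.
WAFER_NODE_KW = 20    # one wafer-scale node, per the ~20 kW figure mentioned
NUM_NODES = 2         # "a couple of wafers" to run a big model (assumption)
SMALL_HOUSE_KW = 2    # rough draw of a small house
LARGE_HOUSE_KW = 10   # rough draw of a medium-to-large house

total_kw = WAFER_NODE_KW * NUM_NODES
print(f"Total draw: {total_kw} kW")
print(f"That's about {total_kw / SMALL_HOUSE_KW:.0f} small houses "
      f"or {total_kw / LARGE_HOUSE_KW:.0f} large houses worth of power.")
```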
And also, since we have the chip right here, it was kind of interesting that, you know,
Cerebras figured out a way to handle the issue of having defective nodes in the middle.
So usually, like when Nvidia makes a chip, it uses a wafer like this.
(11:33):
For the listeners, we're looking at basically just like a frisbee.
It's about the size of a large dinner plate maybe.
And there are a bunch of squares on the wafer.
So imagine a dinner plate with like a checkerboard kind of on it.
(11:54):
Yeah.
So if you look at the checker board, we have these really advanced machines from ASML that go
in and etch all these little circuits, these intricate patterns inside each checkered pattern.
But not every single square is going to be perfect.
There's going to be a lot of defects.
So Nvidia has to deal with this problem of yield.
(12:18):
So this one wafer might have a yield of, I don't know, throwing out random numbers here at
90%.
The other one might have 80%.
The other one might have like 70%.
So I guess if we put that into numbers, right?
Let's say there are 100 of these individual processors or memory units on this wafer that you cut
(12:45):
out.
In Nvidia's case, these would be GPU units.
GPU units.
Then if there were 100 and you had a 90% yield, that would mean 10 of them are just defective.
You just need to throw them away, try again.
Yeah.
So they would kind of like block out those sections, those defective units.
(13:06):
I assume through software.
And the funny thing I just kind of like realized a few weeks ago was, that's the reason why they
have different skews of their consumer or like all the grades of GPUs that they market.
So the consumer grade GPU, the RTX 4090 used to be like the best GPU that you could buy.
(13:30):
This year at CES, they released their new series, the RTX 50 series.
It's like the 5090, 5080, 5070, and so on.
And the 5090, the top of the line GPU is just the one with the highest yield.
Apart from a lot of other enhancements, it had like more memory, more clock speed and whatnot.
(13:50):
But one of the things that distinguishes all of these different SKUs, all of these
different models, is just that they have better yields.
So like the whole GPU just has a lot of functioning parts.
And the ones that don't have a very good yield, the ones that have a lot of defective units,
they're not going to throw the whole GPU away, even though bits and pieces
(14:13):
of the sections inside may not be working.
They just turn those off and sell it as a lower performing GPU.
Although it's not, that's not the only difference.
That's not the only thing.
There's differences in the amount of memory that it has, the amount of clock speed and presumably
other things.
But this is one of the things that differentiates the different SKUs.
So if I have the new fancy Nvidia 5070, 5090, I guess what you're saying is like the 5090
(14:42):
would have the highest yield, maybe 90% yield.
Yeah, almost perfect.
It would have virtually no defects.
Right.
And then let's say I had the 5070, maybe that's only a 70% yield.
And I'm sure it probably doesn't map one to one like that.
But that might be a thing to think about.
And then say, okay, we're going to take this lower yielding chip because it's the same chip
(15:07):
maybe as like the 5090.
We're going to take the lower one and then we're going to slap some extra memory on there.
Maybe a little bit more cooling.
Give it a little bit more power or whatever it needs.
And then sell that as a different chip.
But that's kind of smart.
Good.
Economies of scale.
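To put the yield idea into a toy example, here's a minimal, purely illustrative sketch of sorting dies into SKU tiers by how many of their units come out functional. The unit count, thresholds, and tier names are made up for illustration; Nvidia's real binning criteria are far more involved and also factor in clock speed, memory, and more.

```python
import random

UNITS_PER_DIE = 100  # pretend each die has 100 compute units (illustrative only)

def functional_units(defect_rate: float) -> int:
    """Count how many units on a die survive, given a per-unit defect probability."""
    return sum(random.random() > defect_rate for _ in range(UNITS_PER_DIE))

def bin_die(good_units: int) -> str:
    """Toy binning rule: higher-yield dies become higher-tier SKUs."""
    if good_units >= 95:
        return "top-tier SKU"    # nearly everything works
    if good_units >= 85:
        return "mid-tier SKU"    # some units fused off
    if good_units >= 70:
        return "entry-tier SKU"  # more units fused off, sold cheaper
    return "scrap"               # too many defects to sell

for defect_rate in (0.02, 0.10, 0.25):
    good = functional_units(defect_rate)
    print(f"defect rate {defect_rate:.0%}: {good} good units -> {bin_die(good)}")
```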
But the thing about Cerebras is, because they can't just chop up this entire wafer into
(15:30):
maybe like dozens of different GPUs.
They have to figure out how to deal with these defective pieces in the middle of their
gigantic integrated circuit.
So they come up with a lot of interesting rerouting algorithms to send information around
(15:53):
the defective parts which no other company as far as I know seems to be doing.
Yeah, it's super interesting how they do that.
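As a toy illustration of the general idea only (Cerebras's actual fabric routing is proprietary and far more sophisticated), sending a message around defective tiles on a 2D grid can be sketched with a plain breadth-first search:

```python
from collections import deque

def route_around_defects(grid, start, goal):
    """Breadth-first search over a 2D grid of tiles, treating defective tiles
    (marked 1) as obstacles. Returns a list of (row, col) hops, or None."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(path + [(nr, nc)])
    return None  # no route exists around the defects

# A 4x4 tile grid with two defective tiles in the middle row.
wafer = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]
print(route_around_defects(wafer, (0, 0), (1, 3)))
```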
Anyways, just to switch topics, do you remember last week when we were talking about how robots will need some sort of way to get data
(16:16):
through the real world or self-play or something? Because, you know, I had
that whole thing where I think the LLMs are going to kind of peak around 100% of
a human, maybe like 110, 120 percent, but they're never going to far surpass humans, right?
So there was a paper that came out a little while ago.
It actually came out in January 2024; that's when it was first submitted.
(16:40):
But I heard about it like last week when you said it to me.
But apparently there's this team out of, I think they're out of China.
And they are making this Python framework called GXP.
And that framework, what it does, is it kind of does what we're talking about.
(17:04):
So I dub it "baby robot."
Now the actual paper name is "Growing from Exploration: a self-exploring framework
for robots based on foundation models," which is really a nice name, classic research
(17:24):
paper.
But I like baby robots.
So basically what it is, it's a framework where you have essentially a robot in either
a physical environment or a simulated environment.
And it'll just go and try to noodle around and try stuff.
(17:47):
And you can give it individual tasks to create stuff.
And then essentially that could be a way to get unlimited training data.
So I think that this will be the way, or something like this, is maybe going to be the way that
we're able to kind of break through that training data problem.
(18:09):
So I would imagine that there could be either a couple of things.
And maybe it won't be necessarily this framework.
But I could imagine in the near future we could have robots, maybe millions of robots, billions
of robots, all running scientific experiments and just trying stuff.
(18:32):
So I would imagine, maybe let them go and just start mixing chemicals together and seeing
what happens.
So I'm really fascinated by this topic because I started subscribing to this new thing called
Journal Club where they send you a new interesting journal every week.
And this was the topic for last week.
And I think about it like trying to get a robot to learn a specific task.
(19:00):
So the most popular area is self-driving for like quote unquote robotics, physical robots,
physical machines that move and do things in the real world.
So for self-driving, they've been collecting a lot of driving data through different approaches;
each company is trying to do it differently.
But essentially they're all just collecting as much data about driving as they can.
(19:24):
And there's kind of a limit to how much data you can collect because most people go on the
same roads within a certain country, city, whatever.
And then once you've mapped all that data, you're kind of out.
So once that environment changes, if like we start seeing forest fires, massive forest fires
(19:45):
that are unprecedented or like massive hurricanes that flood the entire street, what do you do
in that situation?
Or someone decides to go off road a little bit or a tree falls in the middle of the road,
which are all these long tail scenarios that don't really happen that often in the real
world?
It's kind of hard for these models to know what to do with things that haven't happened yet.
(20:08):
So that's where it kind of ties into Nvidia's announcement with
the Cosmos real-world simulator.
So it can create new kinds of environments and allow these robots to go and explore this
new environment to collect data for themselves.
(20:31):
So I guess another example is if you wanted to do some basic chores around the house,
have it fold laundry.
You can just throw it a bunch of virtual clothes and have it figure
it out itself.
Just give it a reward function.
Tell it that this is what a folded pile of clothes looks like.
(20:53):
And just keep hammering at it until it figures it out.
So would the reward function be maybe you just say, hey, this is what I want the clothes
to end up like.
So you just say, hey, these are the clothes we've got.
And then here's where I want them.
Like, here they are put away nicely, in the configuration that I would like.
(21:14):
Yeah.
So just do that.
And then also avoid damaging things.
Maybe like you could say, like if you hit a wall or something, that's like negative, negative
points.
A bunch of safeguards.
Don't make a mess.
Don't hurt yourself.
Don't hurt other people.
Don't break anything.
(21:34):
Try to be efficient.
Don't rip the clothes.
Some safeguards, but you don't actually tell it how to fold the clothes.
They'll just go and figure it out.
Yeah.
I mean, I feel like you could use that technique for pretty much everything.
Yeah.
And laundry, dishes, gardening, chemistry.
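Putting that into a minimal sketch: assuming a hypothetical simulated laundry environment that exposes a few simple signals (none of these names come from any real framework), a shaped reward with the safeguards mentioned above might look roughly like this:

```python
def laundry_reward(state):
    """Toy reward for one step of a simulated laundry-folding episode.
    `state` is assumed to be a dict of signals from a hypothetical simulator."""
    reward = 0.0
    # Main objective: how close the pile is to the target folded configuration (0..1).
    reward += 10.0 * state["fold_similarity"]
    # Safeguards: penalize collisions, damage, and mess.
    reward -= 5.0 * state["wall_collisions"]
    reward -= 20.0 * state["clothes_torn"]
    reward -= 1.0 * state["items_dropped"]
    # Encourage efficiency: small penalty per time step taken.
    reward -= 0.01 * state["steps_taken"]
    return reward

example = {"fold_similarity": 0.8, "wall_collisions": 1,
           "clothes_torn": 0, "items_dropped": 2, "steps_taken": 300}
print(laundry_reward(example))  # 8.0 - 5.0 - 0.0 - 2.0 - 3.0 = -2.0
```

The point is that you only describe the outcome you want and the things you never want, not the folding motions themselves.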
(21:55):
What is the alternative?
You get a human in maybe like a motion capture suit or record them with a high fidelity 3D
camera, which captures their pose and maps that onto a robot for training data.
But then it's kind of hard to scale.
People humans take a long time and they run at one X speed.
(22:21):
Like I can't operate at two X speed.
I can listen to maybe YouTube videos at two X speed, but even that's a stretch.
But with these virtual environments, you can have these robots operating at however fast
the GPU will allow them to run.
You can run them at, I don't know... you can get them to learn tens of thousands of regular
(22:44):
real world hours within a couple hours.
So I think the ability to scale, the ability to have them learn by themselves, cover the
long tail of edge cases.
Yeah, it's going to be fantastic.
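As a rough sketch of that scaling argument, with completely made-up numbers (the real figures depend on the simulator, the task, and the hardware):

```python
# Hypothetical numbers for how much experience a fleet of simulated robots
# can gather per wall-clock hour.
parallel_envs = 1000   # simulated environments running at once (assumption)
sim_speedup = 10       # each environment runs 10x faster than real time (assumption)

sim_hours_per_wall_hour = parallel_envs * sim_speedup
print(f"{sim_hours_per_wall_hour:,} real-world-equivalent hours per wall-clock hour")
print(f"A couple of wall-clock hours -> {2 * sim_hours_per_wall_hour:,} hours of experience")
```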
(23:05):
But yeah, have you ever read "The Bitter Lesson" by Richard Sutton?
No, what's that?
Yeah, so it's a really short essay.
And it basically kind of describes what you mentioned.
So it was written, I should've looked up when it was written, but it's a really interesting
(23:28):
thought.
So it argues that general purpose solutions will always win, essentially.
So what was the context?
Well, it's more of an overarching thing in AI research where apparently, like Waymo, when
they originally tried to develop self-driving cars, they would try to solve every little
(23:55):
last piece of the self-driving situation.
I remember they were focusing on geofencing.
They were like, only within the city limits of Phoenix is this going to work.
Right, but not just that.
It was like, oh, let's have a guy who just hard codes what cones look like.
So we have a cone guy, right?
(24:15):
So it'll be like, oh, this is a pointy cone and it's orange and this one's green.
And you just have it know what all the cones look like.
Or as opposed to teaching the car all about the cones, another alternative you could
do is just have it mimic the way a human drives in a bunch of different scenarios and then
(24:41):
just train the neural network on that.
So as opposed to trying to say, hey, let's go and figure out all the different combinations
of the way the car could drive.
You just try to mimic what a human does and then it'll just kind of infer what to do based
on the environment.
Now, I'm sure they maybe have a lot of other safeguards because, I mean, they
(25:02):
are only geofenced in particular areas.
So I'm sure they have super detailed maps and whatnot.
But at the end of the day, my understanding is that they use more of a general purpose solution
for their self driving cars as opposed to trying to go in and just say like, okay, this is
what we do in a red light.
This is what we do.
I mean, maybe you have the red light green light type thing, but it's not just like, hey,
(25:29):
let's hard code in all these rules.
It's more general purpose than that.
Yeah, that's really exciting to think about, not just with robotics, but if you expand
it just to regular LLMs too.
Why does the process of training, deploying, and using a regular LLM have to be limited
to, you know, you get training data, you pre-train it, do some reinforcement learning
(25:55):
and then use it?
Why can't it just keep exploring by itself and acquire skills, learn about the real world,
build its own internal knowledge and then keep getting better.
Yeah, I mean, I think that will be the approach for us to reach AGI.
So this particular paper that we talked about, it breaks its process down into a couple
(26:20):
of different steps.
One is, so this is specifically for robots and the real world.
So it uses vision language models, VLMs, and it breaks down its processing.
Step one is the perception and task generation module.
So it uses the vision language model to figure out where it is, what's going on and what
(26:43):
is the high level objective that it needs to do.
Then it breaks down that objective into a bunch of high level tasks.
And then the next step is the planning and execution module.
So it figures out, from the high level tasks, how do I break this down into individual sub-tasks,
(27:05):
and then, given its set of skills, whatever abilities the robot has and whatever it's been trained
on, it starts executing each of those sub-tasks.
And then at the very end, it has verification and error correction modules.
So it just makes sure that the things it's doing, it's doing right, and nothing crazy is
(27:28):
happening.
And the very last step is to reflect on what it's done, all the little experiments that it's
done along the way, and see if there are new skills that it can incorporate.
So I feel like we can incorporate this into an LLM too.
And as opposed to having application builders use chains and hard-code a bunch of different
(27:52):
skills and agent behavior, just have this run wild, maybe have it observe your behavior
and learn by itself.
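To make that module breakdown concrete, here's a heavily simplified sketch of the loop in Python. The `robot`, `vlm`, and `skills` objects and every method on them are hypothetical stand-ins, not the paper's actual API; the sketch just mirrors the perception, planning, execution, verification, and reflection stages described above.

```python
def explore_episode(robot, vlm, skills, objective=None):
    """One simplified exploration episode. `robot`, `vlm`, and `skills` are
    hypothetical stand-ins for a robot interface, a vision-language model,
    and a library of learned skills."""
    # 1. Perception and task generation: look at the scene, pick an objective.
    scene = vlm.describe(robot.camera_image())
    objective = objective or vlm.propose_objective(scene)
    tasks = vlm.decompose(objective, scene)

    # 2. Planning and execution: break tasks into sub-tasks the skills can handle.
    for task in tasks:
        for subtask in vlm.plan(task, skills.available()):
            result = robot.execute(skills.get(subtask))

            # 3. Verification and error correction: check the outcome, retry if needed.
            if not vlm.verify(subtask, robot.camera_image()):
                robot.execute(skills.get(vlm.correct(subtask, result)))

    # 4. Reflection: mine the episode for new skills worth keeping.
    for new_skill in vlm.reflect(robot.episode_log()):
        skills.add(new_skill)
```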
I feel like it needs a different name because that sounds like more than an LLM, right?
Because if an LLM is just taking text input and then trying to figure out the next token
(28:17):
that makes the most sense, right?
Because that's basically an LLM; it's just like the large language model from ChatGPT.
It just says, okay, based off of this, everything that I know that comes before, here's the next
best word or the next best token.
It'll do that, right?
But this, it feels like, it's not just like words or tokens, but it's more than that,
(28:38):
right?
Because it takes in vision, and maybe it can all be vision, but yeah.
But it takes in more, I mean, maybe it'll all boil down to tokens, but it's not like really
like a language model anymore.
It's like, you could have like a large world model, something like that, like a reality
(29:02):
model or something potentially, yeah.
But no, I was thinking even within the constraints of just a purely text model, like even the O3 models,
like the reasoning models, they're kind of cool, but it's still somewhat limited.
So there are things that it just cannot do. Yeah, so by the way, the O3 model is the most
(29:28):
advanced model that hasn't even been released yet from OpenAI.
So we were talking about it a couple of episodes ago.
It did really well.
So it is basically a ChatGPT that is able to reason and think for itself.
So as opposed to doing just one singular output, it will go and then think about what it's
(29:56):
going to respond.
It might think for a while and then give a response back.
So this kind of self-reflection that it's able to do and maybe simulate out a bunch of
different answers, turns out to be really effective.
So basically, as opposed to just giving one response, it might give a million
responses all at once and then figure out which response is the best, maybe by having
(30:22):
some sort of self-reflection.
And that's the O3 model.
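Nobody outside OpenAI knows exactly how the O3 models work internally, but the "generate many candidates, then pick the best" idea can be sketched generically. Here `generate` and `score` are hypothetical callables standing in for a sampling call and a self-evaluation call to whatever model you like:

```python
import random

def best_of_n(prompt, generate, score, n=16):
    """Generic best-of-N sampling sketch: draw several candidate answers,
    score each one (e.g. with a judge model or a self-reflection prompt),
    and return the highest-scoring candidate."""
    candidates = [generate(prompt) for _ in range(n)]
    scored = [(score(prompt, answer), answer) for answer in candidates]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[0][1]

# Toy stand-ins so the sketch actually runs end to end.
answers = ["guess A", "guess BB", "guess CCC"]
best = best_of_n(
    "What is 2 + 2?",
    generate=lambda prompt: random.choice(answers),
    score=lambda prompt, answer: len(answer),  # silly scoring rule, for demonstration only
    n=8,
)
print(best)
```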
So anyways, continue.
Wait, what was I saying?
Oh yeah, so I think to get the O3 model to solve any kind of task, even when it hits a roadblock,
(30:42):
I think it would be cool to give these models the ability to learn and update their weights
in a live, real time training environment.
So when I was listening to the ARC-AGI challenge and François Chollet, the--
By the way, ARC-AGI is a benchmark, which is kind of like an IQ test for the LLMs.
(31:10):
Yeah, so it was an IQ test created maybe like last year because the creator of that test
realized that all of the traditional benchmarks that we had kind of had limitations, and they
were all being gamed, and anytime you create a benchmark, you get something
(31:32):
that just beats it, maybe through brute force, maybe through like hacking it, finding
loopholes, whatever.
So he just wanted to create like a solid thorough benchmark that could encapsulate what
it means to be intelligent.
And this is a constant work in progress.
This is by no means like a job that's done, they'll constantly keep making the next iteration
(31:54):
of their AGI test, but this test was like a spatial IQ test where you had certain patterns
of images that lead from one sequence into another and then you see a new pattern and you
have to match the start to the end.
So one of the theories that François Chollet, the creator of this prize, mentioned was that
(32:22):
he's starting to see these models have some kind of like live training process where each
example that it sees while it's taking the test, that example is being stored in memory,
is being used to fine tune that model so that the next answer that it gives is using all
(32:43):
the learnings from all the previous examples that it's seen.
So maybe if you had a model that is always learning, it's not fixed in that specific snapshot
that you get from that API, maybe running locally or maybe using some kind of persistent
memory or something, like the memory feature that ChatGPT has from OpenAI.
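That test-time training idea can be sketched generically too, though this is only an illustration of the concept, not how any particular lab implements it. Assume `model` is some fine-tunable model object with hypothetical `finetune` and `predict` methods:

```python
def solve_with_test_time_training(model, demonstrations, test_input):
    """Illustrative test-time training loop: before answering, briefly fine-tune
    the model on the few demonstration pairs that ship with the puzzle itself.
    `model.finetune` and `model.predict` are hypothetical methods."""
    # Each ARC-style task comes with a handful of input/output example grids.
    for example_input, example_output in demonstrations:
        model.finetune(example_input, example_output, steps=10)
    # Only after absorbing those examples does the model attempt the held-out input.
    return model.predict(test_input)
```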
(33:05):
And, you know, if it's constantly learning and it has the freedom to maybe explore a little
bit, try different things, and like you mentioned, spin off like a million different versions
of itself, maybe more realistically, a dozen different versions of itself, and see which version
(33:26):
of itself has some interesting intuitions and then go deeper in that tangent.
Well, I don't know, I think that, depending on the type of problem that we're looking
at, it might be a billion iterations that you're looking at, right?
So a little bit of a story time.
(33:48):
I have a perpetually kind of stuffy nose.
I've been working on it and I've been trying to figure out, you know, why is it so stuffy?
And I've gone to the doctor, and I just got a CT scan of my sinuses yesterday.
And I was trying to figure out, you know, what does the CT scan mean, right?
(34:09):
So, because thanks American Healthcare System, I wasn't able to get an ear nose and throat
person to look at this.
Well, I mean, I could find another radiologist or something to maybe look at it, but I really
want like the ear nose and throat person to look at it and tell me what's wrong.
And I gave, I was just like, all right, let me just try this.
(34:29):
And I gave the images to Claude 3.5 Sonnet and I had it interpret the images, and
it seemed pretty good.
Now, I'm not a radiologist, so I don't, I can't vouch for how good the interpretation was,
(34:51):
but it knew from the image that it was looking at my sinuses.
It said, oh, it looks like you have a slightly deviated septum, which apparently means like
your nose is like a little bit crooked.
So like, one nostril is, wait, let me see.
Yeah, can you see the, can you see it?
Yeah, it's on the video version.
(35:12):
He's looking at my nose.
But yeah.
Is it leaning to your left a little bit?
Yeah, apparently it's leaning in one way a little bit.
I can see that.
A little bit.
Yeah.
So maybe it needs to be straightened out a bit.
But I also use, if you guys have a hard time breathing, these Breathe Right strips are really awesome.
So there are things that can kind of like pull open the nostrils a little bit and help
(35:32):
you breathe a little bit easier.
And then it also said, oh, I noticed a little bit of gray in the image because I, it's like,
well, I don't know, I have enough of it.
But apparently, like gray can indicate like some mucus build up or some additional inflammation
of the nose.
So with that being said, it said like, yeah, it looks like this could be due to all these
(35:56):
different reasons.
But, you know, obviously check with your ENT.
I was like, well, I can't talk to them so much.
So, you know, you are the next best thing.
So I said, hey, my plan is to go and then use Flonase and then also talk to an
allergy doctor to see if I can try to figure out, solve the allergies.
(36:19):
Because apparently one of the big causes for inflammation is when you have allergies,
the allergen, like the buildup of pollen or whatever; your body has an
allergic reaction that causes inflammation of the nose and then it's hard to breathe.
And then if that happens enough, you get nasal polyps, which I kind of think of as sort
(36:41):
of like a nose acne on the inside.
And then they make it harder to breathe.
So oftentimes your ear, nose, and throat doctor will do this surgery, where they kind of
like cut all the polyps out.
But what they'll typically do is say, hey, why don't you take something like Flonase,
which is like a steroid for your nose, that you can just get at like a Walgreens or something.
And then you spray it in there and then it will start to kind of eventually
(37:06):
shrink the polyps or kind of like lower the inflammation.
And then you could do a double whammy by going to an allergy doctor and starting
to do the thing where you reduce the allergies.
So apparently what they have now is these allergy drops.
What you can do is like you do like a daily drop of the thing that you're allergic to.
(37:27):
And then that's actually a lot safer than actually getting the injections, because apparently...
Like a vaccine?
No.
So apparently, in order to combat allergies, what they've done a lot of times in
the past, the gold standard for allergies, is to do a skin test, it's called
a prick test, where they will have maybe 50 things that you might be allergic to.
(37:52):
And then they'll just kind of give you a little of each thing that you might be allergic to, put
it on your arm or your back or your shoulder or something.
And then if it kind of swells up, they're like, oh, you're allergic to that.
So then in order to kind of fix that, what they used to do in the past is you go to the
allergy doctor and then they would just give you the thing that you're allergic to in larger and larger doses.
(38:15):
So it starts sort of really small, and then every week or every couple weeks you'd go
and then they'd just give you slightly larger doses.
So you can build up immunity to it?
Exactly.
So you can build up the immunity.
Essentially like a vaccine.
Yeah, kind of.
Yeah.
Give you the thing that you can't fight off in larger and larger doses so your body learns
(38:38):
how to fight it itself.
Exactly.
But you know, that's a big hassle to have to go to the allergy doctor every time.
So apparently what they've started having is these sublingual drops, which is just like an eyedropper-type
thing of the thing that you're allergic to; you literally put like one drop under your tongue.
And then you just do that every day.
(38:59):
You do that for a few years.
And then you don't have to go to the doctor for a few years.
Well, yeah, I mean, because apparently if you do the thing for the allergy doctor, that
takes a few years as well, a few months a week.
So I mean, it's just kind of like one extra thing that you would do in order to help with the allergies.
So anyways, bringing this back, right?
(39:20):
I think that these types of things, a doctor is going to be able to interpret, right?
But I think that over time, what will happen is you will have the AIs trying to do the experiments
on maybe simulated environments of humans or maybe you would have the AIs running experiments.
(39:50):
And I don't know, maybe looking at cadavers and cutting up bodies and figuring out what, how
things connect to each other.
And they might have a much better understanding about what's going on in our bodies.
So then maybe in the future, we won't be asking the doctor.
And honestly, even right now, I'm not asking the doctor.
I'm asking Claude because it's available and it could give me an answer that I'm looking
(40:15):
for as opposed to, you know, need to wait until March.
So for reference, it's January.
So they said, oh yeah, we can't get you until March, which is ridiculous.
This is nuts.
It's really like some black mirror type alternative reality.
Like two years ago, if you were told you could just give this website your CT scan, and it's
(40:38):
just going to tell you exactly what's wrong with you, that is nuts.
Yeah, so I'm here and ready for it.
And let's bring on the robot doctors.
Well, I think to pump the brakes a little bit, we all know that these LLMs have a remarkable
ability to hallucinate.
So it could just be pulling all of this out of its rear end and we would have no
(41:00):
idea.
Mark is not a medical professional.
We would definitely need to fact check all of this before making any major decisions.
Well, so to be fair, and actually to back up what I've learned, there was, I actually did
go to a doctor, not about this, because I just got the CT scans today, but I did do some
(41:24):
research.
But both on me and then also for some friends and whatnot who were asking me for advice.
And I did a little bit of research on ChatGPT or Anthropic's Claude, and then I went to
the doctor and I was able to diagnose some really obscure things and, yeah.
(41:45):
Wow, you really did some good research.
The doctor did?
The doctor.
Wow.
Yeah, there was a thing, I don't want to get into it, but there was something on like a
somebody's like gall bladder and I didn't know what a gall bladder was.
But somebody said like, yeah, I have like pain in my upper right hand, skulled bile.
(42:11):
Yeah, it does.
I'll see if I just stuff in your stomach.
Exactly, exactly.
So it's like the thing that.
You're going out my high school of biology.
And apparently if you have like some pain in your upper right hand shoulder that doesn't
go away after you kind of move your shoulder around, that could mean that you
(42:32):
have gall bladder pain.
And that might mean that you have some bile that is like stuck in your gall bladder.
And oftentimes that can come from like eating fatty foods or something, having
a little inflammation there.
So apparently there is some surgery to help with that, but who wants to do surgery.
(42:54):
So what you could hypothetically do is you could just eat better and then that might like
help reduce the inflammation.
And then, because what you don't want is these things called gallstones, and
then the gallstones get stuck in there.
And then it's just super painful.
So I figured out that somebody had some like gall bladder issues.
(43:15):
And then because of a shoulder pain that they had.
Yes.
That is insanity.
Exactly.
And it wasn't me that figured it out.
It was the AI.
So then the doctor's like, oh yeah, wow, I don't know how you figured that out.
But here's the thing, I didn't.
(43:37):
So you know, I say bring on the AI doctors.
And then we'll just have the human doctors just make sure we don't kill ourselves.
That sounds like a good caveat to put right at the end.
But yeah, I mean, honestly, I use AI as like a first step for any kind of issue that
I face, medical, non-medical, technical, emotional, social, you know, interpersonal, whatever it may
(44:03):
be.
So not a bad approach.
Yeah.
So anyways, I think we're about out of time.
But this was a good conversation and, guys, we'll catch you in the next one.
But before you go, before you go, one announcement.
Next week, so a week from today, on January 23rd, 2025, depending on when you listen to this,
(44:28):
in Palo Alto, California, there will be an event with Sentient AI.
We mentioned it last time, but Sentient is doing some really cool stuff.
They are building community-driven LLMs and machine learning.
So they're trying to find a way to make it so that it's not just going to be the big boys
that can make the models, but maybe together as a community, we will be able to make LLMs.
(44:52):
So they're going to be giving a presentation.
They're going to have a handful of the folks from their company there to present and talk
and just chat with you guys.
So there aren't very many spots left.
I think there are maybe like 30-ish spots left, 30-something spots left.
So if you're interested, make sure you RSVP because that's it.
(45:18):
I mean, it's free.
There'll be like food and stuff there, but yeah, the space is limited.
So we can't fit that many more people.
So if you're interested, sign up because yeah, the space is running out.
So yeah, we'll leave it at that.
It'll be a great opportunity to network, meet other people working in this space.
Way smarter than both of us.
(45:39):
We have some of the smartest people from Silicon Valley, engineers, execs, VPs, CTOs
of massive companies, very fascinating projects that these people are working on.
So definitely come check it out and you'll definitely learn something.
Yeah, awesome.
So we'll catch you in the next one.