Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
- Great, thanks, Shashank.
(00:02):
It's exciting to be alive again.
So we had a little bit of a hiatus.
There was a bit of a gap.
Some stuff happened.
I got kinda sick.
- That's a medical pun, thanks.
- So it was tough for us to get recording, but we're back.
- And it was also a slow news week, the last few weeks.
(00:24):
- I mean, you guys didn't miss too much.
- Yeah, although,
don't worry, we'll fill you in on everything that happened.
- Yeah, this week has been exciting.
- Super exciting.
So like, I don't know, there's so much to talk about.
Like Google just announced like a whole slew of AI products,
a bunch of products, and they kind of leaned into
(00:46):
memeing themselves about how many times they mentioned
the word AI.
They had an LLM count, how many times they mentioned AI
in that whole keynote.
And I think it was like 100 something times.
- Oh, really? Is that what they've been saying?
- Yeah, 'cause I think that in the past,
I saw like a few YouTube videos of people making montage edits
of them just saying like, AI, AI, AI, AI, AI.
(01:11):
And, well, I mean, it's kind of funny that they have
like a sense of humor to realize that, you know,
they are going hard, all in on GenAI,
but not even just GenAI, just like all AI.
It is all AI permeating every different vertical.
One interesting announcement from last week, you know,
(01:32):
if we had been podcasting, that I wanted to talk about
was AlphaFold.
- Yeah, AlphaFold.
- I think that was the first breakthrough
in real-world science that DeepMind had come up with.
'Cause before that, they had been focusing on game engines,
(01:53):
trying to beat the game of Go,
I think building a StarCraft player.
I think they beat chess.
It might have been DeepMind or maybe like IBM's Deep Blue.
I forget which one, but this was the first time
they applied a large model to tackling
something in the scientific field, specifically protein folding.
(02:16):
And that was several years ago with AlphaFold 1.
And then we got two, which was even better.
And now we're on three.
- Yeah, so I only kind of like briefly parsed that announcement.
So I'm not like an expert on that,
but is it where they were able to fully simulate a cell?
- Close.
(02:39):
So from my understanding, protein folding
was an important problem in the pharmaceutical industry
where they have to figure out exactly
what kind of protein would bind to what kind of receptor
to cure a specific kind of ailment.
And to figure out the physical structure of a protein
(03:03):
based on its chemical structure was actually really hard.
It took a brute force amount of computation
to try every different orientation of these molecules.
But then boom, they were able to simulate that.
And similar to how large language models
came up with like emergent properties
(03:24):
as they got bigger and bigger.
From just simple autocomplete with a few tokens at a time
to seemingly pretty good reasoning capabilities.
That was kind of transferred over to AlphaFold 3
where it had these emergent capabilities
where beyond just figuring out the protein structure,
(03:47):
it was able to figure out the protein's interaction
with other parts of your biology.
- Okay, because I'm not like an expert
on, like, how the cell works.
'Cause I think the cells within the body are pretty complex,
I guess you would say, right?
(04:08):
'Cause like, I mean, in my mind,
like getting some intuition to simulate just a single cell
is gonna help us simulate potentially like an entire body, right?
Especially 'cause like if you use AI to simulate one,
then like you can simulate how they interact with each other.
(04:30):
And also, like, I know a lot of traditional simulation
is like super resource intensive.
And there's a lot of like brute force operations.
You need a lot of computers.
But I know physics simulations--
- Physics simulations, yeah.
Like they have some of the biggest computers
in the world doing these massive physics simulations, right?
(04:53):
So my intuition would be that if we could use AI,
it might be like orders of magnitude less compute
for very similar results that you get
for one of the larger physics simulations.
Like I think it was like one of the big computers
they had in like Japan or something like that
or the US or whatever, one of those computers
(05:16):
that would just simulate like things like,
I don't know if it would be like cells in the body,
but it would be like weather patterns and whatnot.
And I've heard that like, you know, when you use AI,
it can be way more efficient to simulate the weather, right?
So I would imagine that like if you're able to simulate
individual cells more efficiently,
(05:37):
that might be like really beneficial
for medical research going forward.
- Yeah, apart from, you know, high level end goal
of simulating the entire body,
I think just simulating one cell in and of itself
seems really useful in pharma again,
'cause if you're testing a bunch of drugs,
that affect a certain organ that all have
(05:57):
kind of a uniform distribution of a certain kind of cell,
then you wanna kind of simulate
how a bunch of different variations of drugs
will interact with this one organ,
which has these uniform cells.
So I think like that in itself would be very useful.
- Yeah, that's right, that's right.
Like I don't know enough about like drugs or science
(06:18):
to evaluate the idea. - No idea.
- Yeah, like what that is.
- It's probably good, but that's high-level medical science.
- Yeah, conjecture.
Basically guys, plan your year-2400 vacation trips
because we're gonna make it,
we're gonna make it to then because
I think we've finally reached escape velocity.
(06:39):
- It seems really close.
- Yeah, have you heard the concept of escape velocity?
- Live long enough to live forever, right?
- Exactly, yeah, so like the idea is, you know,
if every year, let's say, science gets better...
how do I explain this?
So like, I mean, that phrase is strong,
'cause it captures it perfectly: live long enough to live forever.
- So eventually we're hoping that someone's gonna discover
(07:01):
a cure for death.
But you know, that's not gonna happen maybe all at once.
I think in the next 10, 20 years
we'll come up with better cures to increase the lifespan
from like 80 to maybe 120.
And then another 50 years from now,
it'll go from 120 to 160.
And who knows what's gonna happen in another 50, 60, 80 years.
(07:22):
So it's gonna be progressively increasing.
- Right, so as long as we don't like die before then,
then there's a chance that we could just like,
indefinitely live due to like advances in medical technology.
So let's just stay alive.
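To make the escape-velocity idea concrete, here is a minimal sketch with purely illustrative numbers (not from the episode): if remaining life expectancy grows by more than one year per calendar year, your projected age at death keeps receding.

```python
# Illustrative sketch of "longevity escape velocity" (made-up numbers).
# If medicine adds more than one year of remaining life expectancy per
# calendar year, your projected age at death keeps moving away from you.

def years_remaining(current_age: float, base_expectancy: float,
                    gain_per_year: float, horizon: int) -> float:
    """Simulate remaining years, adding `gain_per_year` of expectancy annually."""
    expectancy = base_expectancy
    age = current_age
    for _ in range(horizon):
        if age >= expectancy:
            return 0.0  # expectancy was overtaken before the gains caught up
        age += 1
        expectancy += gain_per_year
    return expectancy - age

# Gains below 1 year/year: the gap eventually closes.
print(years_remaining(current_age=35, base_expectancy=80, gain_per_year=0.5, horizon=200))
# Gains above 1 year/year: the gap keeps widening ("escape velocity").
print(years_remaining(current_age=35, base_expectancy=80, gain_per_year=1.2, horizon=200))
```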
- That's an exciting, scary, interesting future.
I actually saw, I just read the headline of this,
(07:45):
but NFX, which is a really cool VC fund
that focuses on network effects,
wrote an article about really focusing on longevity,
investing in a bunch of startups and companies, organizations,
whatever, to usher in more innovation in longevity.
'Cause it seems feasible to do a better job of tracking
(08:09):
all these micro interactions
between different organs in your body,
how different foods that we ingest affect us,
different lifestyle habits,
all of these fitness trackers, wearables,
all of this data to try to take that
and give you better insights to try to live longer.
(08:31):
So they're focusing on helping people
live healthier and more fulfilled lives,
but also in kind of like a selfish capitalist,
East Coast, Wall Street focused way,
the longer you live, the more compound interest you have,
(08:51):
the richer you get, and that's kind of their mentality.
I mean, makes sense to me.
I mean, look at Warren Buffett.
He got pretty wealthy by just being old.
Yeah, just by being old, really.
I mean, not just being old,
but he made some good investments,
but if you look at his net worth over time,
he definitely is aware of the effects of compound interest.
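As a rough worked example of the compounding point (illustrative principal and rate, not anyone's actual portfolio): at an assumed 7% annual return, money roughly doubles every decade, so a few extra decades of life multiply the final amount many times over.

```python
# Compound interest sketch: the same portfolio held for longer (made-up numbers).
principal = 100_000          # starting amount, arbitrary
rate = 0.07                  # assumed 7% annual return

for years in (30, 50, 80):   # a "normal" horizon vs. a much longer life
    value = principal * (1 + rate) ** years
    print(f"{years} years: ${value:,.0f}")
# Roughly: 30 years ~ $761k, 50 years ~ $2.9M, 80 years ~ $22.4M
```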
(09:16):
So anyways, to take a step back to what you mentioned,
you were talking about longevity and how it relates to AI.
So one thing I've been kind of playing with a lot recently
is blood glucose monitoring.
So your blood--
- You love the patch?
- Not on me right now,
because they only last for like a couple weeks,
(09:36):
I think it's like two weeks,
but basically like these blood glucose patches do
what typically like diabetics would go and check,
where they would prick your finger,
and then you get a drop of blood,
and then you look at blood sugar or blood glucose.
So basically just like the amount of sugar in your blood.
(09:56):
And why is that important?
- Yeah, so I'm definitely not an expert here on this,
but my understanding is that when you have like a high blood sugar,
your insulin will typically spike during that time.
(10:18):
So like the insulin will go and eat all the glucose
in your blood or all the sugar in your blood
until like it goes back down and stabilizes.
'Cause like you can't just have all of that sugar in your blood,
like your body needs to like use it.
So like the insulin would be the thing
that like actually uses the sugar in your blood.
(10:43):
So like diabetics, they have a hard time
actually making insulin.
So they have to like potentially inject insulin themselves.
So what normally would be the role of,
I think it's the pancreas, right?
- Sounds like it.
- I think it's the pancreas.
Yeah, the pancreas would go and then generate the insulin
and then eat the sugar in your blood.
Like that's just like a natural process.
(11:07):
Like diabetics, they can't do that
for whatever reasons they have to go and supplement.
But it turns out that when your blood sugar spikes
and like you then get like an insulin spike,
there's a bunch of like negative effects that happen
with like a high insulin spike.
(11:27):
So and also like there's the effect of the blood sugar spike itself, right?
So like, forget the long term effects for a second.
Like I mean long term, like it's not really good.
Like I mean that could develop into type two diabetes.
That could cause like a range of metabolic issues.
But like also you just like feel tired
(11:48):
when you have like a blood sugar spike.
So there's an app that I've been using called Levels,
and not sponsored by the way.
I mean like if they want to sponsor us,
like I'm happy to, but yeah,
we don't make any money off this podcast just to be clear.
- There's a lot of competitors for continuous
glucose monitors.
It's taken off in the fitness and pro athlete space.
(12:11):
- Yeah, yeah.
So like with that monitor.
So basically like what it can do is
it can show your glucose over time.
So it'll take like continual readings.
So you can see like how the food affects you.
So I've been going in and then as I eat food,
I'll record it right as I eat.
(12:32):
So they have like this thing in the app
where you can like take a picture of the food
and then it'll use AI to try to figure out what you're eating.
- Oh really?
- Yeah, I mean it's kind of terrible
like to identify like exactly what you're eating.
So you know, they're trying, so I'll give them that.
You know, to bring AI back into the podcast,
it's the worst it's ever gonna be, right?
(12:54):
- I know who knows.
So anyways, I don't know what model they're using.
So there's a lot of like,
I have to enter a lot of things manually.
But the whole point is to get like your glucose reading.
So like if your glucose spikes a lot,
you feel terrible.
But then like there are certain foods that like your blood
glucose will like kind of rise gradually
(13:17):
or like, you know, stay stable.
So my wife and I, we both got the glucose monitors
and we were comparing like how the glucose spikes
would be for two of us.
- With the same diet.
- Yeah, yeah.
So like it was really funny.
So like, you know, she's, you know, Asian, she's Japanese
and you know, I'm American, right?
(13:38):
So I think that like--
- Caucasian.
- Sure.
Yeah, like I think like, ancestrally Polish.
- Okay.
- So like, I guess like Eastern Europe.
Anyway, so like--
- Your blood doesn't care about what nationality you are.
- Well, ethnicity.
- Yeah, yeah, sure, yeah, that's fair.
So we both had like a meal where I ate like,
(14:00):
I think like four potatoes or something like that.
Like just regular like baked potatoes, like nothing special.
And like she ate like one potato.
Now her blood sugar spiked like quite a bit
from the potato, whereas mine after eating like way more potatoes,
like was more of like a gradual spike.
So it made me think like, oh wow, like this can really change
(14:23):
based off of like, you know, your ethnicity
and like your background like so like, you know,
she processes potatoes much differently than me.
- I'm sure.
- Just like with rice.
- Oh my god, for me when I eat rice,
it spikes like crazy.
- That's what I was saying.
- Yeah.
- So and like her, not so much.
- I'm not surprised.
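To make that comparison concrete, here is a minimal sketch of the kind of scoring a CGM app might do on readings like these; the data and thresholds below are invented for illustration, not our actual readings or Levels' algorithm.

```python
# Hypothetical CGM post-meal analysis: score the glucose rise after a meal.
# Readings are (minutes_after_meal, mg/dL); all numbers are made up.

def meal_score(readings: list[tuple[int, float]]) -> dict:
    baseline = readings[0][1]                     # reading at mealtime
    peak = max(value for _, value in readings)
    delta = peak - baseline
    # Crude classification of the response shape.
    label = "stable" if delta < 30 else "gradual rise" if delta < 60 else "spike"
    return {"baseline": baseline, "peak": peak, "delta": delta, "label": label}

potatoes = [(0, 95), (30, 118), (60, 135), (90, 112), (120, 98)]     # gradual rise
white_rice = [(0, 92), (30, 165), (60, 180), (90, 140), (120, 105)]  # sharp spike
print(meal_score(potatoes))
print(meal_score(white_rice))
```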
(14:45):
I had a similar discussion with my doctor
and people talk about the paleo diet,
just eat what, like, the old ancient cavemen ate.
But specifically, I think you gotta eat what your ethnic
ancestors ate because that's what your body is used to digesting.
And that's the kind of microbiome that we inherited
(15:08):
from our parents, living in physical proximity to those people.
And that's what we're naturally good at.
- Yeah, so I don't know.
For me, I thought it was just super interesting
because you know, typically like there's,
I think universal diet or like health advice,
which I think is, you know, maybe true for most people, right?
Like, you know, get plenty of sleep, avoid processed food.
(15:32):
- Avoid alcohol, I'm just saying.
- Yeah, like that's all like bad, right?
But I think that like when it comes to like,
you know, what's good, that is maybe a lot more
individualistic, especially when it comes to food.
And I think that like, you know, things like levels
like these glucose monitors or you know,
(15:52):
some of these fitness trackers can help us figure out,
like what's actually good for us as an individual,
as opposed to like, you know, these population-wide studies,
which may or may not be useful.
So anyways, just food for thought there.
- So you mentioned they have like an app
that helps you take a picture of your food and some AI
(16:14):
that analyzes or automatically categorizes the meal.
- Yeah.
You notice any other AI features, maybe connecting that
to your 23andMe genetic report and maybe analyzing
or suggesting what kind of foods would be best suited
for your ethnic or genetic makeup?
- Yeah, they don't, but maybe we should build it.
(16:36):
That's like a cool app idea.
Yeah, I mean, I don't know, I would buy that.
It seems like there's a lot of room for innovation.
'Cause there's so much data.
- Yeah, that's the thing, right?
Like, I mean, 23andMe with like continuous glucose monitors,
like if you knew like where you were from,
like it can recommend like certain foods
and like medicines for you.
(16:57):
- Diet, exercise habits.
If you include like how different levels of exercise,
high intensity cardio, low intensity weights,
all of this affects your health and vitals.
- Yeah, yeah.
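Purely as a sketch of the app idea being floated here (all names, foods, and numbers are hypothetical): rank foods per person from their own measured glucose responses, with a genetics or ancestry report as another signal that could be layered on top.

```python
# Hypothetical sketch of the "23andMe + CGM" idea: rank foods for a person
# by their own measured glucose responses. All data here is invented.

from statistics import mean

# Per-person, per-food glucose deltas (mg/dL above baseline), logged over time.
responses = {
    "me":   {"potato": [35, 42, 38], "white_rice": [85, 90], "oats": [25, 30]},
    "wife": {"potato": [70, 65],     "white_rice": [40, 38], "oats": [28, 26]},
}

def ranked_foods(person: str) -> list[tuple[str, float]]:
    """Foods sorted from gentlest to sharpest average glucose response."""
    avg = {food: mean(deltas) for food, deltas in responses[person].items()}
    return sorted(avg.items(), key=lambda item: item[1])

print(ranked_foods("me"))    # oats, potato, white_rice
print(ranked_foods("wife"))  # oats, white_rice, potato
```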
So anyways, enough about like this topic.
We never even got to the other big announcements.
(17:20):
And they're very big.
Like what should we start with?
The king of the hill, GPT-4o?
- I mean, we can, we kind of buried the lead here a little bit.
- Let's do it, let's do it.
- Yeah, so all right.
If you've not been living under a rock,
you've probably heard about OpenAI's announcement,
(17:44):
which happened to be like right after Google's event.
Or actually, right before, yeah.
So right around the same time.
- I'm sure it was very intentional.
- Yeah. (laughs)
Like in a little bit of sibling rivalry there, I like it.
- So big, yeah.
- Yeah, so if you guys have ever seen the movie,
I don't know, just metaphorically,
but Shashank, have you ever seen the movie "Her"?
- Yeah.
(18:05):
- I liked it.
- That was a good movie.
- I mean, I actually, I love Joaquin Phoenix.
- He's a very method actor.
He really immerses himself in the character and the role.
And it was very kind of like a little bit of a dystopian,
depressing, down-on-his-luck, lonely, single middle-aged man story.
It was kind of depressing, but very evocative.
(18:29):
- Yeah.
So for those who haven't seen it,
basically, "Her" is a movie that was made.
I think it was like 2012 or 2013, something like that.
It was about 10 years ago.
And it was about a guy who falls in love
with his computer.
And that was around the time when a voice assistant
(18:52):
was, would have been equated with Siri.
I think Siri was the only popular voice assistant
of the time.
- It might have been the only voice assistant at that time.
- I don't think Google had anything at that time.
- You know, maybe Alexa had come out at that point.
I think that was just when Alexa started coming out.
- Still, Siri was the most popular voice assistant
at that time.
- Yeah.
And then like the iPhone was still around.
(19:14):
So like, you know, phones were a similar form factor
to the ones today.
So like in the movie, you see the guy
and he's like holding out his phone in front of his face
and you know, he's having conversations
with the, I think, do you remember who the,
but the person who played the voice?
It was some like actress, really famous.
(19:36):
I don't remember, Scarlett Johansson or something.
- Amy Adams.
- Is that it?
That doesn't sound right.
I think there was somebody else.
- No, I think she might have been the actual person.
- Oh really?
- Oh, okay.
- Well, whatever, whoever it is,
maybe it was Amy Adams.
I thought it was somebody else, but that's okay.
(20:00):
- Scarlett Johansson.
- Yeah.
- Okay.
- Scarlett Johansson.
- Yeah.
- That sounds right.
So she voiced like the computer.
So like basically the actor, like he falls in love
with basically, think like ChatGPT,
and he like talks to it and you know,
(20:22):
that they love each other and it's a super weird
dystopian movie because like, you know,
he's not like actually like talking to people,
but it's a computer, but you know,
it makes him feel happy.
And a lot of people start like dating their operating system.
- Super weird, but anyways, yeah.
OpenAI, they built that.
- It's an app that was breaking up.
- So whereas it was like an episode of Black Mirror,
(20:45):
I guess now life has turned into an episode of Black Mirror.
So we'll see where this one goes.
- Yeah, so what is it?
From my perception, I don't think it is that much
of a fundamental research breakthrough in the model.
I think it's more them packaging it up into a nice UX
(21:07):
to combine all of these different modalities,
make it more efficient, allow you to interact
with it continuously rather than, you know,
asking a question, having it respond,
waiting for it to respond completely
and then asking the follow-up question,
without being able to interrupt it.
It's more natural where it's talking, you interrupt it,
(21:31):
asking a question and then like quickly responds
to whatever you've asked recently.
And maybe to backtrack and clarify a little bit.
The one technical breakthrough that they did make
is that, where before, I think they were taking your voice
request, trying to transcribe it into text,
feeding that into the GPT-4 model,
(21:52):
then coming up with the response,
parsing that back into like a text-to-speech,
you know, response.
I think now it's all handled natively by the model
where it just gets the audio input from the user
and spits out an audio output.
So in other words, it's like before
(22:14):
if you want to do anything like this,
it'd be like, okay, we need to take the speech
that I'm talking and then convert that to text.
So speech to text, that is a solved problem.
We don't have to do that.
And then they would take the text, feed that into the model
and then the model would give you text as a response
and then they would take one of these things
(22:36):
that can make the text back to speech.
So speech to text, text to speech.
So that would be, I guess, converting speech to text
as one step, then text to text,
when you ask the question, that's another step.
- That's the model, right?
- Yeah, and then the text back to speech,
that's the third step.
So that's three steps that the pipeline has to make.
(22:57):
So each step takes time.
But now it's just like voice to voice.
So like, as opposed to three steps,
like the model knows how to just process voice
like immediately.
So it's just like only one step versus three steps.
So like, you know, if you eliminate steps,
you speed things up.
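Roughly, the two approaches being described look like this in code; every function here is a hypothetical stand-in, not OpenAI's actual API. The old path chains three models and flattens tone and emotion to text along the way; the new path is a single audio-native model.

```python
# Sketch of cascaded vs. native voice pipelines. All functions here are
# hypothetical stand-ins for models, not real OpenAI API calls.

def speech_to_text(audio: str) -> str:
    return f"transcript({audio})"        # stand-in for an ASR model

def llm_respond(text: str) -> str:
    return f"reply({text})"              # stand-in for a text-in, text-out LLM

def text_to_speech(text: str) -> str:
    return f"audio({text})"              # stand-in for a TTS model

def audio_model_respond(audio: str) -> str:
    return f"audio_reply({audio})"       # stand-in for an audio-native model

def cascaded_assistant(audio_in: str) -> str:
    """Three hops: tone and emotion are lost when everything becomes text."""
    text_in = speech_to_text(audio_in)
    text_out = llm_respond(text_in)
    return text_to_speech(text_out)

def native_assistant(audio_in: str) -> str:
    """One hop: the model consumes and emits audio directly, so lower latency."""
    return audio_model_respond(audio_in)

print(cascaded_assistant("user_voice"))
print(native_assistant("user_voice"))
```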
(23:18):
Also, like, as the model is trained on voice,
like it can be trained on like emotion too.
So like, I asked it when I was talking with it.
So like, it's out there.
And I've spent a lot of time talking with it.
And I actually had it help me practice Japanese.
So it would go and then like switch between talking in English,
(23:39):
talking in Japanese.
And then I was like, hey, like, ChatGPT,
that's too fast.
Can you slow it down?
And then it said, yeah.
And then it started talking slower.
And then it started speaking Japanese.
So like, I mean, that's cool, right?
Like to be able to, you know, talk a little bit slower.
I mean, that like really helps for language learning.
Yeah.
(23:59):
I, you know, now that you mentioned that I was watching,
not the announcement, but one of the demo videos of them
showcasing GPT-4o.
And I think it was Greg Brockman, one of the co-founders.
He was trying to get the model to speak faster.
And he was like, no, no, speak a little slower.
And then it started speaking slowly.
He's like, no, no, actually go a little bit faster.
(24:20):
And then in response, the model went like, OK.
[LAUGHTER]
It seemed to get a little frustrated and showed that
as an emotion while communicating with the user.
So I don't think it's just faster.
I feel like each time you convert one modality
to another modality, you're losing information.
(24:41):
You're losing the subtleties of emotion,
like you mentioned, tone, other subtle context cues
that may be there in voice.
And you can only imagine them just focusing
on video also, because I think video is part of the demo too.
I don't think that's--
Yeah.
So the video is not out yet.
(25:02):
But they have videos of people using the video feature,
where basically, just like, you can FaceTime with it.
Where it doesn't have a body, but it can see you.
So you can hold your phone up.
You can show it around you.
It can see what you're wearing.
It can see what's behind you.
It can give comments on that.
And it can process that all in real time.
(25:23):
And you can also interrupt it.
Computationally, that just blows my mind.
I don't know how they're handling this.
I feel like they're taking some shortcuts with video.
But even putting video aside, just text alone
seems really impressive.
Oh, yeah.
Voice alone seems very impressive.
Yeah, yeah, for sure.
Incredibly impressive.
And I mean, what do you think?
What do you think has resulted in this innovation?
(25:46):
Do you think it's maybe from the advances in the Nvidia
architectures?
Do you think it's maybe like software, a bit of both?
What do you think?
What do you think is really driving these improvements?
Well, I think it's the fact that what they did today
(26:06):
is possible only because of the software changes,
because they had to re-architect this somehow
to natively understand voice.
But I think how they were able to get it so cheap
and make it free for all their users is definitely, definitely
because of Nvidia's new architecture, all those new GPUs,
(26:27):
and the server architecture and the massive supercomputers
that they just got.
They mentioned that in the announcement.
Thanks to Nvidia, we were able to do this,
because it is, I don't know, like, a two, three X improvement.
And they're passing that on to the consumers.
I mean, not all of the two to three X improvement,
but they said they're making it 50% cheaper
(26:50):
to users and the API customers.
It's hard to compete for everybody else.
No.
Google has similar announcements.
We can talk about that later.
But they announced new chips, their new TPU,
I think version six architecture or something, which
is four to five X faster, which probably coincides
(27:14):
with the drop in prices and lower API costs.
Well, so I see what you're saying there.
I guess my point is it's hard to compete,
because not everybody can just go out and roll out
like new hardware.
We're not going to be able to do it right here.
(27:36):
Yeah, it's only like the big boys that are able to compete.
And it moves so fast.
I mean, it was just like a couple of weeks ago,
we were talking about like, oh, Llama 3, new king in town.
This is great.
And now it's like--
Open source king.
Well, I think they're still the best in open source.
Yes, probably.
(27:59):
But I think it's like, I feel like the king in town changes
every week.
Yeah.
And it's like, we were talking about Llama.
And now we're talking about back to Open AI.
But like, right before that, it was like Google.
Or I don't know.
It's like--
Yeah.
It's wild.
I guess speaking of Google--
(28:21):
All right, yeah.
Shortly after Open AI, or I mean, the Google I/O event
had been planned for a while.
It happens the same time every year.
Did you go to that?
No, I didn't.
As a Googler, you don't automatically get tickets.
It's a massive event.
There's a lot of us.
So you need to apply.
(28:41):
You get a lottery.
And then you go, I didn't bother applying,
because I was kind of swamped with work.
I see.
It seems like a lively environment to be in.
And I'd like to check it out next time.
That's fun.
Yeah.
I'd like to go.
If you can get any extra guest passes,
I'll see what I can do next time.
All right.
But they probably have other events coming up too.
We'll see.
Sure.
(29:02):
Yeah.
But yeah, so I/O happened this week.
And it was very similar to OpenAI's recent announcements.
They came up with something similar to GPT-4o, Project
Astra, where it's like--
I think it's an integration with Google Lens,
(29:24):
where you can point your phone around, have a camera,
and the LLM sees what you're seeing.
I'm not sure exactly how it's architected,
how native that video and voice input is--
maybe there's a few levels of indirection.
Or maybe it's native, because Google
has been working on multimodal stuff.
(29:45):
But yeah, you can do similar things with Google's Astra.
And they also came up with a competitor
to Sora, the text to video model.
Yeah.
I saw a couple demos.
It's pretty impressive if you hadn't seen Sora.
And I think it's not as good as Sora.
(30:07):
So awesome.
I mean, I think that is probably true.
But the thing with Sora is they haven't released it yet.
And they have just--
You haven't released yours either.
Well, I mean, the thing is with Sora,
they've cherry-picked the examples.
Kind of.
But I thought Sam Altman was taking requests for prompts
(30:29):
on Twitter, and he was just firing off responses.
I mean, you could decide which--
or like, which things you respond to.
I guess, I guess.
Sure.
So I mean, it's still very impressive.
Vetted by their CEO.
So I mean, not to say I doubt them.
I mean, OpenAI is ridiculously impressive,
and I wouldn't put it past them.
(30:49):
But the fact is that it was announced a while ago,
and I still haven't been able to use it.
I mean, I think that was part of the sibling rivalry
between OpenAI and Google.
Google was planning on announcing--
or Google announced the Gemini 1.5 Pro
with a ridiculously long 1 million token context window.
(31:10):
And then boom, right after, we had the Sora announcement.
It was not a fully fleshed out product.
It was not publicly available.
But they had to make that announcement.
Yeah.
Yeah.
That's true.
That's true.
So you would think that maybe in the future, maybe Google
will come up with their own like GPT-4o competitor.
(31:32):
We have it.
Project Astra.
No, but like, I mean, launch it?
Did they launch Project Astra?
Because I can use GPT-4o today.
They've shared their vision.
OK.
Yeah.
A lot of this is being rolled out slowly.
I assume these things are ridiculously
computationally intensive.
(31:53):
Even GPT-4o hasn't-- GPT-4o,
they haven't released all the features that they announced.
That's true.
They're slowly rolling it out.
Because I would assume video transcoding and understanding
would be very, very expensive for them.
Yeah, that's wild.
Like, I mean, I feel like that's a lot of bragging rights
(32:14):
at that point.
Like, what do you think is next?
I mean, like, where do they go from here?
Like, maybe like, it starts to have like sort of avatar body.
And then like, you can interact with it or something.
Can interact with the world?
I don't know.
I think just understanding the video better.
Because now it seems pretty simplistic.
(32:36):
Maybe cancer detection or taking a camera out in a factory
line and doing like QA on very high-tech assembly.
Or I don't know, just fine tuning these vision models
(32:57):
in specific domains and replacing humans one by one.
Do you think that these models may be general purpose
enough in the future to maybe like do tasks like driving a car?
I mean, is that possible already?
Yeah, but like, it's not an LLM, I think.
It is a vision transformer.
(33:18):
I think Tesla switched to the transformer model a couple
months ago.
Oh, did they?
For their full self driving, yeah.
Before that they had a different, older model, which
is something built on top of a CNN, which
is the old way of doing vision.
But they switched recently.
It got worse for a little bit.
And then they worked out the kinks.
(33:38):
I mean, it's pretty good.
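For context on the CNN-versus-transformer distinction, here is a minimal, generic sketch of the vision transformer idea (not Tesla's actual stack, and shapes only, no trained weights): the image is cut into patches that become tokens for a transformer, instead of being passed through convolutional filters.

```python
# Minimal sketch of the core ViT idea vs. a CNN: instead of sliding small
# convolution filters over the image, split it into patches and treat each
# patch as a token for a transformer. Toy shapes only; no trained weights.

import numpy as np

image = np.random.rand(224, 224, 3)   # H x W x RGB
patch = 16                            # 16x16 patches, as in the original ViT paper

# Split into non-overlapping patches and flatten each into a vector ("token").
h_patches, w_patches = image.shape[0] // patch, image.shape[1] // patch
tokens = (
    image.reshape(h_patches, patch, w_patches, patch, 3)
         .transpose(0, 2, 1, 3, 4)
         .reshape(h_patches * w_patches, patch * patch * 3)
)
print(tokens.shape)   # (196, 768): 196 patch tokens, each a 768-dim vector

# A linear projection maps each token to the model dimension; a standard
# transformer (self-attention) then processes the token sequence, in contrast
# to a CNN's hierarchy of local convolutions and pooling.
embed = tokens @ np.random.rand(patch * patch * 3, 512)   # toy projection
print(embed.shape)    # (196, 512)
```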
Huh.
OK.
Well, then that answers that question.
Yeah.
Right.
Yeah.
So anyways, I think we're running out of time.
So I think we should probably call it here.
Unless there's anything else you want to talk about.
I mean, there's tons.
But those were the big announcements.
(33:59):
OK.
Well, anyways, thanks for listening.
We'll see you in the next one.
Until next week.
Bye folks.