
January 23, 2025 55 mins

In this episode, Chuck Cook dives into Tesla’s latest Full Self-Driving (FSD) update, version 13.2.5, as he tests its capabilities in the AI4 Model Y. From unprotected left turns to school zones and parking challenges, Chuck offers detailed insights into how Tesla’s AI is evolving. He also discusses the role of memory layers, crowdsourced data, and the potential of dynamic mapping in revolutionizing autonomous driving.

Tune in as Chuck shares his firsthand experience with the system’s smooth highway maneuvers, nuanced navigation decisions, and areas still needing improvement—like recognizing thin parking chains or handling emergency vehicles. He even throws in some reflections on the future of LLMs (Large Language Models) and their potential integration into Tesla’s vision system.

Whether you’re a Tesla enthusiast or just curious about the cutting edge of AI in vehicles, this episode is packed with firsthand impressions, expert analysis, and a touch of optimism about what’s next for autonomous driving.

🎧 Listen now on Spotify, Apple Podcasts, or your favorite podcast app—just search for “Chuck Cook”!

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Well, hey everybody, it's Chuck.

(00:07):
Welcome back to the Model Y.
Well, we got a new software version last night and I tell you what, the Tesla AI team has
been working pretty hard because these software updates are now coming simultaneously.
The AI4 Model Y, this car, and the Cybertruck both got this build number 2024.45.32.5 as FSD

(00:28):
13.2.5.
And those of you out there on Hardware 3 probably saw the exact same build number come out with
Hardware 3 version 12.6.2.
So they're kind of aligning the releases on these builds, probably wrapping up the end
of the 2024 build cycle.
We're going to start seeing 2025 soon, I imagine, on some of these build numbers, but we're

(00:50):
all kind of harmonized a little bit.
Now those build numbers probably mean the feature set.
So the things that are non-FSD related are all going to be the same.
Holiday releases and things like that hopefully aren't going to be fractured across different
build numbers in the future.
But in any case, I put a poll out last night and most of you said you wanted to see the
Model Y first.
So we're going to do the first impressions drive on both the Model Y and the Cybertruck.

(01:14):
The release notes are identical to 13.2.2, which is what this vehicle was on.
The Cybertruck was on 13.2.4, but now they're both on 13.2.5.
The release notes are the same for the Model Y as they were before, and they're the same
for the Cybertruck as they were before.
So we still do not have all of those future capabilities about increased model scaling,

(01:37):
context length, and all of those things are still in the upcoming improvements.
Just the top section here is all the same.
So I don't really know what to expect on this version to have changed, but I figured we'd
get out here and do it.
It is Florida cold today.
It is 34 degrees, you can see in the upper right-hand corner of the screen.
So there's no ice.

(01:58):
There's no snow, of course, unlike out in the panhandle where all the snow happened.
But here we are on the Model Y.
We have the press button to drive.
We don't have this in the Cybertruck yet.
And as you see, all I did was press the button.
I am parked over here near the curb: object in path detected, take over to proceed.
There is no object in the path.
It's the curb over here on the right side.
I'll just kind of show you.

(02:18):
I'm not on the curb.
You can see there's just grass right there in the little gutter area.
I'm not touching anything.
It does say takeover to proceed.
I'm going to leave that camera up.
The steering wheel is turning away.
It feels like it might get brave enough to try something.
It should not be this afraid of proceeding.
It's on a paved surface.
So this vision system has to get better when it knows it's on a paved surface and not freaking

(02:43):
out.
And there it goes.
I don't know what to say there other than the vision system continues to be a little
bit skittish around grassy areas.
Nice smooth stop there.
Trash cans on the side of the road.
You can see the occupancy network there is very busy.
And it's going around those items there.

(03:05):
Okay.
So, our unprotected left. It is, you know, after rush hour; we might have a little bit of traffic
here to contend with, but we'll see. Looking for the question mark, looking for behaviors.
Wide open on the left.
So it should just go. And it's not wide open on the right, so it cannot go. Is it going to stop? It stopped, but that car kind of went around

(03:25):
me, and now it's jumping out into the lane, and I'm throwing this up on the screen.
It jumped out and did not accelerate like I would have liked.
There was enough gap back there.
That car had to slow down.
This car in front of me jumped over.
So it was a fine unprotected left, but I really kind of want that question mark back if you're
going to go to the median and pause.

(03:47):
It just gives you more room, gives you more time, more options and a better angle to accelerate
from.
Okay.
Let's try one more unprotected left and then we'll continue our drive.
Stick with me.
Okay.
Here we go in our second unprotected left.
Let's see if we can get any traffic to deal with here.
There's our full stop.
It needs to creep, and we're creeping, we're creeping, and we cannot go yet.

(04:11):
It's going to be a large gap from the left and now it's going to go, but it can't go
from the right yet.
So see how it's going perpendicular and it needs to wait.
That car got out of our way, but it's waiting and the left lane is clear and then we're
proceeding.
So that unprotected left was fine also.

(04:31):
I'm just going to tell you, they have changed something with the behavior on the unprotected
left.
It is, it's less aggressive.
The question mark maneuver is not there.
It's not got the confidence it used to have.
It's like they kind of smoothed it out a little bit.
I don't like the crossing the first lane and kind of hanging out and then it felt more

(04:54):
like a Cybertruck unprotected left.
So I can't complain because it did it, but it didn't feel like it did when they had solved it earlier,
in, you know, 13.1 and 13.2.
It felt much more lethargic and less confident, I guess is the right word.
And when you put yourself in the median at the right angle, you're safe if something

(05:17):
else happens, and it's not doing that at all anymore on either the Cybertruck or the Model
Y.
So just food for thought.
Unprotected lefts continue to be challenging.
It did both of them, so we can't say it can't complete the maneuver, but that's where that
is.
Okay, we have a U-turn programmed here, just kind of showing you there on the screen in the

(05:37):
upper right-hand corner.
The forward facing cameras now have to find a gap, calculate a radius of turn and confidently
execute that radius of turn.
I now have a car behind me; interestingly, that should not affect the decision at all,
just to kind of note. And now, here's the gap, and it does it perfectly.
Little bit of a pre-roll there, nice hard turn, nice confident smooth.

(05:58):
That was really good.
So I wouldn't have changed that at all.
I want the Cybertruck to have that confidence.
And if you notice, look at the trajectory on the car when it's doing the unprotected
left.
If it's a solid blue without a lot of deviations in it, that is the confidence I'm talking
about.
It knows where it's going and it's able to execute it smoothly with the right amount

(06:18):
of speed.
If it's constantly having to recalculate its radius and it's unsure if it's going to complete
the maneuver, that's where you get into those slow turns where, against high speed cross traffic,
you lose time.
And of course, time is important when you're making decisions.
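Since the radius-of-turn idea comes up here and again later at the cul-de-sac, here is a minimal geometry sketch of that decision; it uses the common bicycle/Ackermann approximation, and the wheelbase, steering limit, and road width are made-up numbers, not anything from Tesla.

```python
# Minimal geometry sketch (assumed numbers) of the "radius of turn" decision:
# can the car U-turn in one sweep, or does it need a multi-point turn?
import math

wheelbase_m = 2.9          # roughly Model Y class (assumed)
max_steer_deg = 35.0       # assumed front-wheel steering limit

min_radius_m = wheelbase_m / math.tan(math.radians(max_steer_deg))
print(f"min turning radius ~ {min_radius_m:.1f} m")   # ~4.1 m at the rear axle

road_width_m = 12.0        # assumed curb-to-curb width available
# The swept diameter must fit in the road; the body sweeps wider than the
# rear axle, so a real planner would add margin on top of this.
if 2 * min_radius_m <= road_width_m:
    print("single-sweep U-turn is feasible")
else:
    print("needs a multi-point (three-point) turn")
```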
Got a little yellow light there, so we'll have a little bit of a pause here to bring
up a few things.
First of all, thank you all for watching.

(06:41):
Thanks for all the feedback I've been getting over the last few weeks.
I am changing up my format a little bit to kind of continue to talk about that.
I'm syndicating the audio portion of my videos via podcast feed now.
So if you're interested in just listening to audio only, many of you have given me the
support to do this.
If you just like to listen to kind of the banter and see how it's going while you're going

(07:04):
about your day, just type in Chuck Cook in any of the podcast media outlets, Apple Podcasts,
Spotify, Pocket Casts, it's on all of them.
Just type in Chuck Cook.
And I say just type in my name because you'll see my mug show up most likely.
If I decide to change the name of the podcast or anything like that, I don't want to put
out a bad name or anything like that that I end up changing.

(07:27):
And I have changed the name already a couple of times.
I was going with some longer, more confusing specifics, but for now we're just going to
call it by my name and you'll see my face show up with the same profile icon I use on
X and YouTube.
Yeah, so let me know any feedback from that.
I'm not really going to try to cater the conversation to audio only.

(07:49):
So if you are watching this on video, you do see a lot of the screen and the other things
that I have in the car and might be pointing at and talking to; if you're audio only,
you just kind of have to get the gist of it and feel how it's going.
This is going to be the typical Memorial Park first impressions drive, including some highway

(08:09):
time that I've been doing.
So we should be able to compare apples to apples on this from previous versions.
Now interestingly, we do have to take a left turn up here in 0.3 miles and I would be over
already, and it's already kind of gotten itself into a situation, with a car on its quarter
panel, while it needs to get over.
Blinkers on and it's going to have room and time to do it, but this planner and earlier

(08:34):
lane changes.
I don't want to say it continues to be a problem, because the car usually finds a way of doing
it, but some of the situations it gets itself into by not going earlier force it to kind of wedge
itself into traffic and possibly even stop traffic.

(08:54):
It happens more in the Cybertruck than the Model Y for sure.
The Cybertruck gets itself boxed out by a semi or something like that and when it gets boxed
out, the planner works pretty hard at solving the problem before it gives up and reroutes
and that's where it kind of gets dangerous.
Here we are in a forward-facing unprotected left with a flashing yellow, and then we're

(09:17):
crossing some railroad tracks here.
Still no symbology on the screen for railroad tracks, but it went over that at a fine speed.
I do think it recognizes the railroad tracks somewhat as a speed bump or something like
that.
I don't think it sees them as like it should with the right symbology and I say that because

(09:37):
stopping on the tracks or too close to the crossbars continues to be something that I
worry about and have seen varied performance on.
That was a good forward creep there for visibility.
Here's our first speed bump.
Sees it clear as day, all the way down to 10 miles an hour there.
If you notice, this is my narrow road on Birkenhead, and we definitely don't have a curb

(10:01):
on this one.
There's the grassy area, and I got the cameras up, if you're listening, just to kind of see to
the right.
It looks like it's giving it a nice, good six inches.
Another good stop.
It needs to creep just a little bit and then proceed.
Yeah.
Good.
No problems there.
Coming up on another speed bump, we're at 23 right now in a 30 mile an hour zone and

(10:26):
there's the slowdown.
Got all the way down to 10 again on that one.
Good.
There's that black cat.
That's the same black cat that I almost crossed paths with in the Cybertruck that one time.
Good stop here.
Now this is a divided median.
It should proceed completely, but I've seen it stop before.

(10:48):
Okay.
It just kind of cautiously continued to go.
It was almost like I could feel it looking left and right as it's kind of continuing to creep
across that double divided street.
One more speed bump and it looks like it's stopping just fine for that.
I'm going to go ahead and take those cameras off.
And now we're coming up on our suicide lane turn.

(11:10):
It's an unprotected left turn with a suicide lane that I've seen it use as a resting
spot before, and other times it goes straight through it.
So it just kind of depends on what the traffic situation is here.
Okay.
We definitely have some traffic from the left and okay.
Now we're creeping.
I thought it was going to go, but now we're creeping.
You see, we have two cars in the suicide lane there.

(11:33):
Okay.
Now it's going.
Now it's going to have to use the suicide lane to wait because it can't go there.
Nice.
So it is using the suicide lane still.
There's the cameras to kind of see and then jumping out of traffic.
That was a really good suicide lane turn.
All right.
Tesla AI, I like that.
So you're using the suicide lane as a wait there.
You did it at a better angle than you even did my unprotected left-hand turn.

(11:54):
So that means you have the concept of waiting in the model, which is exactly what you need
to have.
All right.
I haven't been busted for not paying attention to the road yet.
And I don't know if that's because I've been doing a better job looking ahead or if the

(12:14):
attention monitoring has been tweaked a little bit.
It seems like the amount of fiddling with the screen I do normally gets me busted a little
bit.
I'm trying to kind of keep an eye on that just to notice if anything has changed there.
I don't know... if I just look down for a second with my hat on and count for a few seconds,
I'm definitely not looking at the road.
And there's attention monitoring available.

(12:36):
Oh, it even says the view of the driver is obscured by the hat.
That might be new.
Or at least I normally don't get the hat warning.
I'm looking, it's asking me for a steering wheel.
I'm trying to use my eyes.
I might have, there's my attention on the steering wheel there just to get that back.
You see, I don't have the green light up there.
That green light means it's not using the camera yet.

(12:58):
So the camera kind of gave it a pause.
Oh, good lane change there.
It thought that car was going to stay in the closest lane, but it didn't.
That car jumped out in front of us.
It got out of the way so that car could join us.
And then let's see what happens here.
We're in standard mode, vision monitoring is active, and we actually are in a school zone
timing window.

(13:19):
So I am going to get in the right lane.
I'm doing this so that car isn't my lead car as I go into this speed
limit zone here.
So I'm 40 miles an hour.
I have a flashing yellow light.
It's going around that puddle and it's slowing down.
Ah, see it jumped over here behind this car.

(13:42):
Okay.
Oh, look at this.
So it stopped because they were crossing the road.
So we're in a school zone.
We have a crossing guard out there with a stop sign.
It stopped.
I don't think it's... well, hey, it did exactly what it's supposed to do.
I'm just trying to figure out if it thought they were jaywalkers or anything like that.

(14:03):
And she's going back and it did a nice job with that crosswalk in a school zone.
And there's the end school zone sign right there.
So if it was reading signs or if it had that metadata in the map, it would know I'm out
of the school zone and I'm going 33 again.
So as we continue to test these school zones, that was a really good job.
Tesla AI.
I don't know if it was accidental that it all played out that way and that crossing guard

(14:26):
was acting like a pedestrian that you're waiting for, but either way, the car handled it just
fine.
That was really, really good.
The flashing yellow light is something it's got a feed off of.
But I don't know, let's have a little conversation about school zones and other signs that have
metadata that are perhaps local.

(14:46):
You know, some places you have a flashing yellow light, some places you just have a
sign with a crosswalk or sometimes you have these bright yellow signs that's got the picture
of the parent and the child crossing the road.
That's just a school crossing, but it's not a school zone.
Do we think that these signs are going to be read by a model?
Because I don't know that globally we can have the metadata embedded in every map.

(15:11):
That's depending on maps.
And I've been thinking about this a little bit more, especially as some of these new
AI models are coming out and they're getting cheaper and cheaper and cheaper.
For those of you that keep up with this industry, DeepSeek R1 came out this week and just changed
the game on the performance of an open source large language model that is comparable to
the o1 model by OpenAI.

(15:35):
And if the Chinese are able to get these models, put them in open source and they have this
kind of performance, now granted, it's a 671 billion parameter model that has about 450
gigabytes of model size.
So you're not going to download this model to your Mac and run it in Ollama or something
similar to that.
It's going to require GPUs.
It's going to require some hosted servers more than likely to run that full model.
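For a rough sense of why a model this size needs hosted GPUs, here is a back-of-envelope weight-size calculation; the 671-billion-parameter figure comes from the discussion above, while the precision choices are illustrative, not DeepSeek's exact packaging.

```python
# Back-of-envelope: why a 671B-parameter model needs server-class hardware.
PARAMS = 671e9

for name, bits in [("FP16", 16), ("FP8", 8), ("4-bit", 4)]:
    gigabytes = PARAMS * bits / 8 / 1e9   # params * bytes-per-param
    print(f"{name}: ~{gigabytes:,.0f} GB of weights")

# FP16 ~1,342 GB, FP8 ~671 GB, 4-bit ~336 GB: even aggressively
# quantized, the full model is far beyond a single consumer machine.
```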

(15:57):
Now there are distilled models that are much smaller.
That'll probably give you good performance, but not the same performance as what we're perhaps
starting to call these top-end models.
Now I say that only because as these models get cheaper and cheaper, and I'm saying cheaper
to use via the API or the subscriptions coming down, I believe that the DeepSeek model is
undercutting the OpenAI API cost.

(16:20):
What was like $5 is now like 20 cents.
It's something on that order of magnitude.
If we're to say, oh, we're going to read signs with Grok, okay, that's fine.
But what I don't know yet, and maybe some of you will comment in the conversation below, is whether
the Grok large language model will be able to read and infer signs quickly enough to give the data

(16:43):
to the planner to act on.
I think that's where we're in this difference, right?
I can upload an image and it's going to read the image and it's going to say, oh, it says
this, it's going to have a conversation about it.
The reasoning models think about it even more.
We don't have that amount of time from when something approaches as a sign to give it
to the planner or even the vision model to act on.

(17:04):
We've got to get that inference time down to a level that a large language model can't quite
hit yet, right?
So I do think that Grok will be implemented as a replacement for our voice activation
system in here.
I do think that is coming.
I think that because we can just have a conversation with the car basically and do web searches

(17:28):
and all of those sort of things.
But to read signs using a large language model might be something where we're not quite there yet
on the technology, the model size, the model speeds, and everything.
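To put rough numbers on that inference-time concern, here is a minimal latency-budget sketch; the speed, sign-legibility distance, and decision distance are all assumed values, not measurements from this drive.

```python
# Minimal latency-budget sketch for reading a sign in time to act on it.
# Every number here is an assumption chosen for illustration.
MPH_TO_MS = 0.44704

speed_ms = 45 * MPH_TO_MS     # ego speed: 45 mph ~ 20 m/s
legible_at_m = 80.0           # assume the sign is readable from 80 m out
decide_by_m = 30.0            # assume the planner wants an answer 30 m out

budget_s = (legible_at_m - decide_by_m) / speed_ms
print(f"time budget: {budget_s:.1f} s")   # ~2.5 s

# A multi-second vision-language round trip blows this budget entirely;
# that is the inference-time gap being described above.
```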
It's trying to get over to this right lane because we're going to be joining the highway here
in 0.3 miles.
I liked that early lane change, right?
So we still got a traffic signal ahead.

(17:48):
It could have waited, but no, it was getting over to the right side here for this merge,
which I think is exactly what it should have done.
So I would say that did a better lane change than that first one with that flashing yellow.
Anyway, coming back to the model size to be able to read signs: somehow we're going to
have to do it.

(18:09):
To me, it's got to be some sort of crowdsourcing-the-planet kind of a thing.
And it's in the release notes now for going around road closed situations.
So the fleet is already apparently communicating: hey, if a road's closed, I'm going to tell
the rest of the fleet the road's closed, in a Waze kind of auto-reporting format.

(18:29):
Somehow we've got to get a map layer that is constantly being updated by the fleet with
things that perhaps do take more time to infer than we can do in real time.
So let's say that school zone was up there for the very first time.
Let's say the car blew through the school zone because it didn't know how to read the
sign yet.
But if it collected the data and then read the sign and inserted that in the layer, at

(18:52):
least the very next car would benefit from that data.
And very quickly, if they could come up with a system of mapping the world with Teslas,
at least in a Waze sort of crowdsourced-data way, at least that would be there.
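Here is a minimal sketch of that fleet-learning loop as described; every name and structure below is hypothetical, standing in for whatever Tesla's actual pipeline might be.

```python
# Hypothetical sketch of the school-zone fleet-learning loop above;
# none of these names reflect Tesla's real systems.
from dataclasses import dataclass

@dataclass
class MapAnnotation:
    location: tuple   # (lat, lon)
    kind: str         # e.g. "school_zone_start"
    source: str       # which car reported it

shared_layer: list = []   # stands in for a fleet-wide map service

def slow_offline_reader(image_blob) -> str:
    """Offline OCR/inference, where seconds of latency are acceptable."""
    return "school_zone_start"   # placeholder result

def on_unreadable_sign(latlon, image_blob):
    """First car couldn't classify the sign in real time: log it anyway."""
    kind = slow_offline_reader(image_blob)
    shared_layer.append(MapAnnotation(latlon, kind, source="car_001"))

# The next car queries shared_layer before reaching the sign, so it gets
# the school-zone metadata even though it never read the sign itself.
```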
Okay, just a quick chat here.
We do have a Road Ranger here with his flashing lights.

(19:13):
Just want to show you there was an accident here.
There is no sort of indication that that was an emergency vehicle with the flashing lights.
Didn't see anything there visually showing it.
It did give them room.
It gave them plenty of room to go around, even without a lane change being possible.
That is also something we're expecting to see more is using audio to detect emergency

(19:33):
vehicles.
So we'll have to keep an eye out for that.
But as we're here on 295, we're in 65-mile-an-hour traffic.
I'm in the standard mode for the purposes of the video.
And it's kind of staying in the right lane.
I need to edit my trip again just so I don't mess it up.
I'm going to change it here in just a second.

(19:53):
I don't want it to jump off on any of these exits here.
But it's staying in the right lane.
It's not jumping out into the middle lane.
It seems to be happy going 69 in a 65, which is perfectly fine for me.
I'm liking the way this is doing it.
All right, I went ahead and modified my route to Memorial Park and it's jumping up on the

(20:14):
highway.
Great.
See how I'm not getting busted by messing with the screen there?
It feels like they added a little bit more time to the attention monitoring, which is
great.
I needed just a little bit more, not for just the screens.
It's just in everyday life: changing a song, selecting a playlist, things that we're being
forced to do with a screen.

(20:35):
It was just kind of always hitting me a little bit.
I've done most of my drives recently in the Cybertruck because I spend more time in that
vehicle than this one.
And it's pretty naggy.
And sometimes I end up using the steering wheel to correct it.
So I think the iteration on that attention monitoring is welcome. We had a police vehicle on the other

(20:58):
side, and I heard the siren as a human.
I did not hear it as a human until it was right here at about my 10 o'clock position;
with the way the windows obscure sound,
you can't always hear an approaching vehicle until it's right on you.
It'll be interesting to see if the Tesla has a better hearing system than a human.
Okay, got cars kind of jockeying for position here.

(21:20):
We're taking a right hand turn, 0.3 miles.
It's a two lane exit and it needs to prefer the left lane, hopefully, because we're going
to be joining to the left in a fork.
So I like to kind of watch these kind of advanced navigation decisions where it takes the correct
exit but it's also in the correct lane for an upcoming fork.

(21:41):
You know, John Gibbs, Dr. Know-it-all had a recent video where he talked about some papers
that were released recently that talked about memory.
And John, if you're watching this video, shout out to you.
You did a nice job on that.
It's a very technical subject, but I do think that you hit upon a very important subject

(22:02):
that made me think a little bit about how the localization of these cars needs to have
that memory so that the car isn't a first time driver every time it drives.
And I'm taking those words from Dr. Know-it-all's video.
I agree with your analysis that every time we drive, we have the basic driving skills

(22:23):
from the model, but we don't have the local knowledge of a local driver, the lane you
need to be in, the upcoming fork that could, you know, get you in a bad situation if there's
a traffic queue that doesn't allow you to get in it.
Those are local things that happen that create problems for the planner to have to solve.
You know, so if it knows it's got two lanes and it's going to keep using both of them,

(22:46):
but then there's a long, long queue for people turning to the right.
If you're going all the way to the front and wedging yourself over, while some people may
say that's the way we should be driving, I agree in many parts of the world we do drive
like that.
But in some parts of the world where you have local knowledge, they do set up the queue appropriately,
and well, I shouldn't even use the word appropriately, because you could debate whether or not that

(23:10):
is the right way to do it.
But it does create some odd situations where the car needs to kind of blend in a little
bit more as a human of a local area.
And maybe the queueing and the merging isn't the best example because that obviously has
a very good argument that we should be using both lanes and then zipper merging at the very
end.

(23:30):
But you know, other situations where like let's use that example of that fork back there,
it needed to be in the left lane, and it was, so that it was in the left lane for the fork
that was upcoming.
And then because it was in the correct lane, it did not have to perhaps stop traffic to
get over.
It did not have to slow down unnecessarily to get over, which could have caused a rear-ending

(23:54):
type of accident or something similar to that.
Using the appropriate speed on off-ramps and highways is pretty important to blend into
the flow of traffic so that other people that might not have the same attention don't run
into you from behind.
Hopefully, that made sense.
And that's kind of this permanence.
But John did a good job talking about the underlying technology that this paper was documenting

(24:20):
is: how does a model learn, in real time, the things that it needs to remember?
And the ultimate trick that they were documenting was surprise.
If the model finds itself in a situation that completely surprises what it expected, it
can take that away and say, we need to remember that.

(24:41):
You know, if something just didn't work and all of a sudden it's surprised and has to
solve a problem, maybe that would be where an intervention might happen from a driver.
Maybe that would be a disengagement that, you know, was a safety critical disengagement.
Or maybe that was just something where the model itself found itself in a situation that
was not what it expected and it had to deviate its plan.

(25:04):
Maybe the planner had to reroute because of something.
Maybe this is how the closed roads are going to work.
You know, if I'm going up to a road and I plan on going straight, but there's a big sign
that won't let the vision system go through it and then the map has to wrap around it,
you could argue that the model was surprised at the fact that it didn't work.
Maybe take that little token away, token might not be the right word, that event away and

(25:27):
somehow put it in a layer that is persistent.
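Here is one way that surprise-triggered write could look; the metric, threshold, and field names are all illustrative guesses, not anything from the paper or from Tesla.

```python
# Illustrative guess at a surprise-triggered memory write.
import math, time

persistent_layer = []   # stands in for the localized memory layer

def surprise(planned_xy, actual_xy) -> float:
    """How far the executed position deviated from the planned one."""
    return math.dist(planned_xy, actual_xy)

def maybe_remember(location, planned_xy, actual_xy, threshold_m=3.0):
    s = surprise(planned_xy, actual_xy)
    if s > threshold_m:                 # plan deviated badly: remember it
        persistent_layer.append({
            "location": location,
            "surprise_m": s,
            "timestamp": time.time(),   # lets the layer expire or revalidate
        })
```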
And that persistent layer needs to be either localized or distributed and relocalized.
So it, you know, either needs to go to the fleet so it's always available, or become part of
a dynamic layer.
And I say dynamic because that road won't always be closed.

(25:48):
You know, we need to continue to evaluate whether or not that road opens back up and
that could be time of day, that could be after the job is done, many different things.
Somehow we have to create a dynamic model and the car needs to do most of the work, not
a supervisor, right?
We can't have the robotaxi, you know, have a passenger hitting a button, you

(26:09):
know, leaving voice notes and saying, why didn't that work?
The car has to create some sort of model or we're going to have to go another way with
high definition maps like Waymo is doing.
And those high definition maps are going to have to be done by employees and people
or auto labelers or teams and reviewers, something like that.
And I just don't see that as scalable for the whole system.

(26:31):
It might have to be part of a forever system, meaning there might always have to be someone
working on maps, but the car needs to be what's queuing the data.
And I think, you know, I go on long drives, and let's use Waze as another example.
You know, you might not add data to the Waze network every single time that you see something,

(26:51):
but you might add one or two.
You might add that, yes, the police is still there.
You hit the thumbs up one time.
But if there's a car pulled over to the side of the road, are you adding every single one
of those?
Well, maybe you're not.
Maybe you're a thumbs up person that acknowledges that it's still there.
But someone had to enter that data for the first time to say, hey, there's a hazard on
the road ahead.
There's a piece of tire rubber, for example, in the middle lane, or there's a broken down

(27:15):
car on the right side of the road.
Somebody has to input that data into Waze.
The car should be able to get to the point that it can aggregate that data and get smarter
over time.
You know, the first time a Tesla drives by a stopped car on the road, it might not know
exactly what it is.
It just knows there's a hazard.
But as soon as two cars, three cars, 10 cars, 20 cars drive by that incident, it might

(27:39):
have enough context to understand what it is and appropriately label it for the rest
of the vehicle fleet to understand.
I think that is a critical capability that's going to help the planner get better over
time to help the car see the unseen and be superhuman in a way that it knows about hazards
and things up ahead before a human ever could without using a tool like Waze, like

(28:06):
many of us have become accustomed to using, with that crowdsourced data.
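A small sketch of that confirmation-count idea: one report creates a tentative hazard, repeat confirmations promote it, and silence lets it expire (since the road won't always be closed). The thresholds and TTL are invented for illustration.

```python
# Invented thresholds/TTL illustrating Waze-style hazard aggregation.
import time

hazards = {}   # keyed by a coarse (lat, lon) cell

def report(cell, label):
    h = hazards.setdefault(cell, {"label": label, "confirms": 0,
                                  "last_seen": 0.0})
    h["confirms"] += 1
    h["last_seen"] = time.time()

def fleet_view(cell, min_confirms=3, ttl_s=3600):
    """Only surface hazards that enough cars confirmed recently."""
    h = hazards.get(cell)
    if h is None or time.time() - h["last_seen"] > ttl_s:
        return None   # stale: assume the hazard may be gone
    return h["label"] if h["confirms"] >= min_confirms else None
```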
All right.
That was a nice little chat on the highway mode.
Hopefully that made some sense.
I'd love some comments.
And John, if you do see this, let me know if I characterized you correctly in what you
were saying at a high level.
Now, I do want to say, pull over here to the right.

(28:27):
Look at this.
It almost got over there, saw this broken-down vehicle, gave it room without changing lanes.
That was a dynamic situation that it probably should have even been over in that middle
lane.
I saw that way ahead of time, but it got in the right lane to take this exit.
So its early lane change put it in a situation of having to do a weave around a stopped vehicle

(28:48):
on the highway.
It did it safely, but that was a scenario where a full lane change might have been appropriate.
And it didn't do one, even though there was no one on its quarter panel.
It just hugged that white line to give it room.
It was obscured by that van in the front for a second.
And you might want to go back and replay that clip and watch it.
There was a lot of data that just happened there.

(29:10):
The car did it perfectly safe, but there's probably a little bit of room for improvement
on how it could have perhaps gotten into the middle lane.
But it had two or three factors playing into it.
It even felt like it was going to take that exit to cross the bridge for a second.
If you look at that video clip very carefully, it actually corrected back.
But I think it was that van in front that it was kind of following as a lead car.

(29:34):
And then it realized, no, I'm going straight.
He's turning right.
Oh, and now we have a car to go around.
I've given you a clap there, Tesla, so I give you some credit for some really good maneuvers.
But there also might be a lot of data there for you to look at to see if you'd like to
do it exactly that way in the future.
All right, we're done with highway mode.
And now we're going over to Memorial Park with some perhaps slower traffic.

(29:57):
We are using the speed profiles here down below, you know, 50 miles an hour.
Let me comment on that a little bit.
Those of you on Hardware 3, God bless you for being as patient as you have.
But now you're going to start to see 12.6.2 roll out.
And if you're watching these videos to kind of see how it's evolving, you are going to
notice some differences on Hardware 3 with speed profiles.

(30:18):
When speed profiles came out for the very first time, even on Hardware 4, we only had
them on the highway.
And then the next version came out and we had them on city streets above 50 miles an
hour.
And then the next version came out and we had them below 50 miles an hour.
It looks like on Hardware 3, you're still at that above 50 mile an hour city street
speed profile mode.

(30:40):
So take that into consideration.
That's your limitation at the moment.
So those highways that have 50 mile an hour speed limits, you'll see the speed profiles
show up.
But if it bumps down to 45, you're going to lose it.
And it's going to revert to that auto max mode that we had before.
I'm not exactly sure why Tesla implemented it this way.
It obviously must have something to do with the model size.

(31:03):
And squeezing everything into the Hardware 3 model so that it gets the same level of
performance.
And that's all I can say at this point.
I don't think it's because they can't make it work in city streets.
I think it's probably just how they're distilling this model down to get the most bang for the
buck in performance out of the vehicle.
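The availability rules described here can be written out as a tiny function; to be clear, this is just the behavior as observed in the video restated as code, not Tesla's implementation.

```python
# Not Tesla's code: the speed-profile availability rules, as observed.
def speed_profile_available(hardware: int, on_highway: bool,
                            speed_limit_mph: int) -> bool:
    if hardware >= 4:
        return True   # HW4: highway plus all city streets
    # HW3 on 12.6.2, as observed: highway, plus city streets only
    # when the perceived speed limit is 50 mph or above.
    return on_highway or speed_limit_mph >= 50

assert speed_profile_available(3, on_highway=False, speed_limit_mph=50)
assert not speed_profile_available(3, on_highway=False, speed_limit_mph=45)
```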

(31:24):
Really good lane changes here.
Smooth as it could be.
No problems on this.
Very low traffic too.
So it's kind of doing it all without much to work around.
We're going to be coming up on Memorial Park, where I get the parking scenario, and hopefully

(31:45):
we'll be able to give another test to a back-in parking scenario with that thin chain that
has been giving us problems over time.
I haven't noticed any pre rolling here.
I just noticed the walk signs were doing the countdown.
Those countdowns can kind of give you the precursor to the light turning green.

(32:05):
I can't see the opposing lights to see it.
There is one green showing right there.
That little green showing turning red is what some people have speculated the car is actually
acting on,
expecting its light to turn green.
But I didn't get any pre roll at all there.
That was very smooth.

(32:28):
And on time.
I feel the acceleration curves on the Model Y are much better than the Cybertruck.
I'm really looking forward to the first impressions drive in the Cybertruck,
because my complaint on the Cybertruck has been the sluggishness,
the acceleration curves when it commits.
It's just too slow.
And it's creating rear-end scenarios.

(32:49):
That's the confidence.
We've got to do kind of a cross-the-traffic maneuver here.
Low speed.
No problem doing this because of the speeds.
There's a tiny little gap here.
Is it going to force itself or is it going to wait?
Oh, it's going to go.
Okay, wow.
Very good.
It did it just fine.
At these slow speeds, you can make decisions at that kind of rate.

(33:11):
And obviously you're forcing yourself out there.
You might make them slow down a little bit.
But that's fine at 25 miles an hour.
I could do that all day long.
Okay, we do have our POI coming up.
So it probably is going to end the drive.
And we got a great scenario here.
These parking spots are blocked off.
You can see them again.
See that thin chain?
That's what I'm talking about,
the thing that Tesla hasn't been able to see.

(33:33):
Let's see if it ends in its spot here.
I'm going to see if it puts itself in park.
Okay.
And it did put itself in park.
Okay, good.
So I want to see... okay, I put it in drive; there's the parking spot.
I'm going to hit P. Oh, interesting.

(33:54):
Okay, there it is.
The car says Auto Park is ready.
It cannot do this.
I'm just showing you that the Tesla cannot see this thin chain yet.
I'm just kind of, I mean, I'd love for it to stop itself, but okay.
It was committing.
So you see, it can't see these thin chains.

(34:15):
It would have damaged the car there.
It was going into that spot.
This stack, I will say, I don't think is the same stack as this one here.
So, like, I'm going to go ahead and continue our drive by pressing and holding.
This stack here, I feel, is smarter than the parking stack.
I think when you go into that parking mode, it's kind of changing its brain.

(34:39):
It's got its own, it might even be written in C. I don't know.
But it isn't the same brain as this full self driving stack.
Correct me if I'm wrong.
If anybody knows for sure, I know it's end to end, but it's at least shifting programs
when it goes into that mode.
And you can tell when it does pull over to park, you know, at the end of a drive in situations

(35:01):
like that, it does it differently than when you're in the parking mode.
Okay, enough on that.
Now it did pick, I might do a disengage.
I don't want it to go that way.
But you know what?
It's going to go down this alley instead of going up to my favorite oak tree sign.
Look at this.
This is the route it chose to go where we're going.

(35:23):
But look, it kind of chose an alley.
And now it's navigating down the alley.
If another car came, we'd have to kind of fight each other for space.
Huge bumps in the road.
It's going eight miles an hour.
It's just, this isn't what the planner should have chosen.
It should have stayed out on the streets, but somehow this is what it chose.

(35:44):
It's got to go around a blind curve here.
It's got a good speed.
There's a car parked there.
There's no problem because there's no oncoming traffic.
And I have seen some great clips coming out of California.
AI DRIVR... little truck bumper here.
It slowed down for that.
It did it just fine.
It just should have stayed on the streets.
AI DRIVR and Edge Case, for one.

(36:09):
A few folks have made some really good clips of the vehicle kind of dodging traffic, making
the room three point turning to get itself out of bad situations and stuff like that.
So the car can do it.
And we'll just kind of have to see.
Now, I do have my cul-de-sac as my next point up here, which is where it's going.
And I leave that cul-de-sac in there for the three point turn scenario.

(36:32):
If I can actually create a radius of turn that requires it to do a three point.
If it's smart enough, it'll give itself a wide turn.
But okay, nice.
Okay, nice.
Wow.
Nice, slow, smooth commit.
Okay.
Now it's in the wrong lane.
See how it's using this gore section here?
I marked that with the report button just to give them the data without disengaging.

(36:54):
It was treating that gore section like a suicide lane when it
shouldn't have driven through the hashed yellow spots.
I don't think that it did that intentionally.
It just treated it as if it was a regular lane.
Okay.
Let me edit the trip here.
And I'm editing the trip at the last second to hopefully create a three point turn scenario

(37:20):
that it has to back up.
All right.
I'm hitting done.
I think it's got enough radius of turn here.
Is it going to stop?
It's not going to stop.
No, it did it.
I know it can do a three-point turn, and the fact that the speed profile isn't showing up here

(37:42):
means that it probably would have done just fine.
But it can do a three-point turn.
I just didn't force this scenario and I don't want to go ahead and stop the car to create
it.
I know it can do it, but that's what that turn is about:
doing a U-turn, figuring out the radius, continuing if it can.
And it did it just fine.
Our next one is one we've got up here at Five Points.
See where I'm going.

(38:04):
This is where I've gotten it into trouble before because of the planner; this one, I am
forcing it into a bad situation.
It treats that little circle up there like a traffic circle.
You see what I got up here, but it's not really a traffic circle.
There's a little flashing yellow light in the middle of the road.
So I'm being unfair to the ego.

(38:25):
Sorry, ego.
I'm just going to apologize to you now.
I'm going to change the route late so that you have to figure it out after you've probably
already committed to the right.
And that's what I'm going to do here.
If you come up here, you got one more stop sign to go.
All right, here we go.
And doesn't have a blinker on.

(38:46):
What is it doing?
Is it going to say that I'm going to go ahead and hit done?
Interesting.
It, okay.
It went straight.
And now it's got to reroute, see what it did?
It rerouted in a way that it didn't try to follow the planner that time.
I don't know if you guys added data to that or not, but the planner definitely had it

(39:09):
going around to the right to get to the spot.
Instead, it proceeded straight, very smoothly.
And then, okay.
Nice job, Tesla.
I don't know if you added data to that specifically or if it just saw it differently that time.

(39:30):
That is a horrible intersection.
It's not even really fair to try to trick it with a map.
If I just had it try to go through the intersection, it would do it just fine.
Okay.
So it's doing a left turn onto College Street, which is an interesting choice.
Let me think.
It's going to go straight down this road all the way, and it's going to jump out on Roosevelt

(39:52):
there.
Yeah, let's see how this does.
This is a route we haven't ever driven before, so there may be something unexpected that
we can talk about.
All right.
So I don't know if I'm going to have time to drive the Cybertruck today.
And if I do drive it, I probably am not going to have time to edit both of these videos.

(40:14):
I don't know if you guys realize how many streams of video there are in here.
If I got them all to render, I've got the screen video.
I've got the canvas that I have to render as a video and record it.
I've got the camera on top, a 360 that I have to manage in Insta360 before I render
it into the edit.
I've got the internal camera that's got its own audio stream.

(40:38):
It's kind of a lot.
And then occasionally I throw the drone in there, and then the drone's got to be synchronized
and layered on top of it.
So it takes me quite a bit of time to get these out to you guys.
And I hopefully understand that render times on a one hour long video can kind of tax a
computer for sure.
Okay.
So here we are on College.
We have a tree company over here.

(41:01):
It rendered all of it fine.
It put occupancy network items right where the cones were.
Parked traffic over on the right.
It's kind of weaving around.
There's definitely no lane markings on this road.
So it doesn't have a center line.
It's just kind of finding its bias over to the right.
I have no problem with what it's doing here.
This is a great, great scenario.

(41:21):
The traffic coming up from the right has a stop; we're proceeding.
Yeah.
This is good.
You know, it's not overly hugging close to these cars.
It's doing a nice smooth.
This has all been standard since we went end to end.
The behavior around other vehicles has just been very smooth and magical.

(41:42):
Okay.
We got a car backing out up front, and now he sees us, but is he going to assert it
or is he going to wait?
Now the car is slowing down.
Okay.
See that?
That was a really good behavior.
The car saw him sticking out, slowed down but did not stop, allowed his vector to continue,
and we went around and through it.
Nice job.
So that was a really good clip too.

(42:03):
Very human like.
Okay.
Now, workers here, they've got a dump trailer; they're working around, moving it. Plenty of
room, even used the blinker there for a smidge to go widely around them.
Plenty of room, nice speed, nothing scary there.
Might have to remember this college street road here.

(42:25):
Okay.
We get a little bit of a pinch point here.
You see this?
That vehicle is having to use the parking spaces.
I had to go around those cars.
So if there were cars in every one of these parking spaces, I wouldn't be able to hog
the center of the road here.
But if you notice, the map has put a white line where those parking spaces are as if
it was the road boundary.

(42:45):
Yet, I wonder if it would have encroached inside of that boundary had it needed to give
another car room.
I don't know.
Okay.
So now we're coming up at the end of College and we're going to jump on.
Now I'm going to show you, I was wondering how this was going to play out.
And we've given ourselves an interesting scenario jumping onto Roosevelt here.

(43:08):
And it's at this intersection where it's got to do a left turn, then it's going to merge
into traffic.
The reason is that merge, depending on the light, has some very high speed traffic that
it's going to have to merge with.
And I want to see the right acceleration into that high speed traffic if it doesn't end
up rerouting up here.
It's a pretty good accidental scenario that popped up.

(43:30):
I don't know.
To me, this car feels the same as 13.2.2.
I do think the attention monitoring has been tweaked a little bit.
It's just so hard when the car drives flawlessly to critique it.
Because it's doing it safely in the traffic scenarios it's getting.
I mean, obviously we can complain about the parking, not working with the chain and stuff

(43:55):
like that.
That's auto park, right?
And you could even argue that the parking destination is still an unsupported feature.
And I forced it into that park mode.
I selected the P and said park.
I did have some of the decision making there, right?
Maybe that's on me for saying, yeah, you told it to park across the chain and it was going

(44:15):
to do it.
So this intersection, I don't know if you're able to see this in all the cameras.
It is crazy time with lights.
And every light's pointing one direction or another and is either relevant to the ego or not.
You can see this highway to the left is crossing up here to the right at an angle.

(44:36):
I'm going straight, but see how we're rendering it up here.
Straight ahead there's even another crossing where those vehicles up there are crossing,
and that's got another light.
And these lights aren't even being represented right, because see, the closest lights
are for me in this rendering.
The second two lights, where I can see a glimmer of red, are for that angled road to the left.

(44:58):
Now I think what's going to happen here is I'm going to get the green.
It's going to proceed and it's going to have to realize it's running through those reds
that it's mapping.
You got a slight angle on that one render that just showed up.
If you go right there, you see it?
It's rendering one of them at an angle now and now it's flipping back and forth.

(45:18):
So there you can see the model trying to show these lights at the right angle.
There's our green light.
You see those reds are still rendered, but it's doing it just fine.
So it's kind of complex, and you have to give the model credit for acting appropriately
with all of these indications.
It does give me confidence that it understands the situation better than it's able to render

(45:41):
it.
It's just that rendering precision is different from its perception of the actual environment.
Okay, so now we're taking a left up here and then we're going to merge onto this traffic.
This is one of two ways to solve this.
You either have to go straight like these cars are doing.
And you see there's a do-not-enter there, so I can't go that way because of those cars, but

(46:03):
now I'm going to the left here, with a stop sign, to merge onto Roosevelt.
Now this is the exact same road my unprotected left hand turn is on, but just a few miles
prior.
But you see to the left here, I can't see.
So it's got to do a creep and I got an easy one.
So after this truck, it's a gimme, but this traffic could be moving very quickly.

(46:23):
And you see, now I got it; it did it just fine in this traffic load.
It did it just fine.
And I don't think it would have done it incorrectly, but the creeping and crossing high speed cross
traffic is still I think the most dangerous maneuver for an autonomous vehicle because
it has the least reaction time of all maneuvers and it's really using only one camera

(46:48):
in most scenarios.
Sometimes the repeater camera can be a help, but you know, I don't have a bumper camera
on this car.
But hey, the new Model Y is coming out with the bumper camera.
It's being released in China first and boy, will it be interesting to see if the bumper
camera starts to become part of FSD's use case.
Even if it's just for parking, that's fine.

(47:10):
You know, but we need some better cameras looking to the sides to assist this B-pillar camera
with its perception of high speed cross traffic.
In my opinion, and I know that's beating a dead horse, you guys are tired of hearing it, but
I say it because I feel it.
But there's really not much left for this vehicle to do that it can't do flawlessly.

(47:35):
You know, maybe we get a traffic scenario or a lane change or something unexpected happen.
But once you're on the main part of a road, the system has no problems.
City streets, highway, you know, for those of you who've listened to me a long time,
I like to use this bell curve analogy.
You know, the FSD system is just rock solid in the center of the bell curve when you're

(47:59):
out in the driving environment driving.
You know, the weird little inconsistencies seem to show up getting onto the road or getting
off the road at the terminal ends of the drive.
Right.
We still don't have a true park and on park scenario, you know, that works every single
time.
We still don't have a perfect unprotected left hand turn scenario that works every single

(48:20):
time.
But once you get out into the flow of traffic, it's really, really good at managing whatever
you throw at it.
The speed profiles were a game changer. Those of you on Hardware 3 that are getting speed
profiles for the first time, here's a really good example.
We're in a 45, you would not have standard lit up right now.

(48:41):
You're not going to have this speed profile at 45.
As soon as that speed limit, the perceived speed limit goes to 50, you'll see your profile
show up.
And another thing for you, Hardware 3 folks that are just getting these speed profiles
for the first time, I guess it's not relevant to you right now because it doesn't work
below 50.
The car will not do a three point turn or any sort of reversing maneuver if there is

(49:05):
a speed profile showing.
That's a little bit of how they've been implemented.
So if you're on a Hardware 4 car and you have these speed profiles and
you're creating a drive, and maybe you've got an intermediate destination, if you still
have that profile showing, it won't do a three-point-turn maneuver.
It will not allow it to reverse, I guess, is the right way to say it, because it might not always

(49:28):
be a three-point turn, but it will not allow the car to go into reverse.
We'll have to see if that changes.
I don't think it's going to change to where it'll do it with the speed profile, but I
do think that the speed profiles will get better at disappearing at the appropriate
times so that you can allow the vehicle to do the reversing that it might need to do

(49:49):
for the situation it gets itself in.
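That reversing rule, again as observed in the video rather than anything official, comes down to a one-line gate.

```python
# As observed, not anything official: while a speed profile is displayed,
# the car will not enter reverse (no three-point turns or other backing
# maneuvers).
def reverse_allowed(profile_showing: bool) -> bool:
    return not profile_showing

assert reverse_allowed(profile_showing=False)      # reversing possible
assert not reverse_allowed(profile_showing=True)   # reversing blocked
```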
So what do you think?
Any changes?
Did anything feel different to you guys watching it?
I was clapping my hands a lot.
I do know that.
There were a lot of scenarios that the car handled really, really well today that I perhaps
hadn't seen evolve that way before.

(50:12):
I was really happy with the crossing guard.
I've never actually had that happen before.
It did the right thing, but I don't know whether it did the right thing because it was a crossing
guard or just because it was a VRU, a vulnerable road user, crossing the road in a jaywalking kind of
fashion, which the car has obviously been very good at managing; it doesn't even matter.
Maybe the way it handles VRUs covers that use case.

(50:37):
We talked a lot about the reading of signs and the size of the models and those sort
of things earlier in the drive.
There's a lot to think about there about whether or not reading signs is here.
A big one is the no turn on red, and the conditional no turn on red.
I saw one the other day that was like, when you've got two right lanes, it's no turn

(50:58):
on red unless you're in the right lane.
I don't know if the car knows how to do that yet.
That was at a situation down off of Interstate 95 earlier this week.
But then we saw some scenarios in the highway where it did some good planning, but then
also giving some room to some disabled vehicles on the side of the road.

(51:19):
It did try to back up into a parking chain, but that was after I told it to park and I
hit the start button.
I had the control there, but it definitely did not render it, stop, emergency stop, or do
anything with that thin chain.
I still think we have a vulnerability there with very thin chains that are blocking our
way.

(51:42):
It managed a dynamic situation at Five Points and found us a new route on College Road.
It did everything just fine.
Getting us here.
It could have been a robotaxi today.
I can see the future coming where robotaxi will be possible.
I don't think it will be possible everywhere in every situation, but if Tesla can truly

(52:03):
figure out how to crowdsource the data the car is inferring itself and give that data
back to the fleet, that is going to be a game-changing technology.
That will be something that Tesla gets to write a paper about.
That will be something that Tesla is contributing to the community that is working on this problem
if in fact they are the first ones to solve it.

(52:24):
I do think that the memory layer that John Gibbs, Dr. Know-it-all, talked about earlier
this week with that paper is an important part of localization of data that we need
to think through a little bit more and maybe we will have some more conversations on that
in the future.
If the car sees something, it needs to remember it if it ever surprises the model.

(52:45):
That is how we learn.
Being a local driver here in Jacksonville at my age, I have a lot of local knowledge
in my head.
More than just how to drive.
Some of it is local demographics, maybe demographics is not the right word, but local information
about how turn lanes are used, about how traffic flow manifests itself, about how people behave

(53:09):
if you try to cut them off.
All of those are things that can be localized.
As we come up on this unprotected left turn... well, this is a forward-facing, protected left
turn into traffic.
I have seen the car handle it two different ways.
Right now it is at the stop line, which means it could be waiting on the green arrow.
But if it ever starts to creep out into the middle of the road, it is sort of taking on

(53:32):
the turn whether the light changes or not.
It is interesting to watch how it is doing it in this scenario.
I guess with all the traffic, it is just like, yeah, I am going to wait here.
But if it starts to see some gaps, you watch.
It will creep up.
Here is probably the first one we are going to get.
Is it going to go for this right now?
Or is that, see how it is creeping?

(53:52):
It is creeping, it is creeping, and it is going to take it.
It did it just fine.
In the old days, I would not have liked that gap it chose.
I was not confident that the car was going to continue and give the right acceleration
to get all the way through it.
But it did it just fine.
I now have the confidence to know how the model Y behaves.
In the Cybertruck right now, I probably would have been worried about that commit.

(54:13):
Because the Cybertruck is committing with a very lethargic and sluggish kind of a commit
that has not earned my trust yet.
It has gotten me into a few binds, you could argue.
I think that it is important to talk about all of them together because it is interesting
to see how the models are evolving as the hardware on the vehicles allows it to evolve.

(54:37):
Anyway, that is it for today.
If you are still with me, you have been with me almost an hour.
Tell me how many minutes you have been watching.
I love seeing the comments from you guys that make it all the way to the end of the
video.
Even if you fast forward or listen to it at double time.
Thanks for sticking with me.
If you are interested in listening to this audio only, look for my podcast.
Just throw Chuck Cook in your podcast app.
Subscribe to that and then I will pop up in your feed and you can listen to these while

(55:00):
you are driving or something like that.
I am trying to build that subscriber base.
Spread the word to those that you think would enjoy this content.
Of course, if you are watching on X as a subscriber or a supporter, thank you for your support.
I appreciate you guys there.
Anyway, this is 13.2.5 and the AI4 Model Y. We will see you guys next time.

(55:22):
Have a great day everybody.