
December 9, 2025 60 mins

Daniel and Kelly explain how physics predicts the future, rain or shine, and all of the incredible science involved.

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:07):
When I moved to southern California, I felt this immediate,
immense relief, not just because I was free of the
tyranny of outside clothing, but because I was released from
the anxiety of not knowing if the weather was going
to ruin my plans. Were you planning an outdoor birthday
party for your toddler, No need to make backup plans
just in case it rains. Do you need to drive

(00:28):
a few hours away, No problem. You don't have to
worry that a snowstorm might make the roads impassable because
I could predict the weather myself since it was the
same every single day. But not all of us are
lucky enough to live in such calm climes, so it's
still very important that we try to anticipate storms so
that the less fortunate among us can be prepared. It's

(00:50):
not often described as important physics, but predicting the weather
is one of physics' great success stories. John Martin, professor
of atmospheric and oceanic sciences, told me that weather predictions are
quote the most unheralded scientific advance of the second half
of the twentieth century. If you keep score every day,
I can't believe how well we predict the weather three
to five days in advance. In thirty years, we've gone

(01:13):
from predictions from one to two days to now five
to seven days. We have made unbelievable progress. So how
does that all work? What is the physics underlying the weather,
why has it gotten better? And what can we expect
in the future? I talked to Professor Martin and my
good friend Professor Jane Baldwin here at UC Irvine about
how the weather all works. So we'll dig into all

(01:35):
of that in today's episode, dedicated to all of y'all
who still experience regular weather. Welcome to Daniel and Kelly's
extraordinarily sunny universe.

Speaker 2 (01:58):
Hello, I'm Kelly Weinersmith. I study parasites and space, and I
love rainy days.

Speaker 3 (02:05):
Hi.

Speaker 1 (02:06):
I'm Daniel. I'm a particle physicist, and I can predict
the weather in California for the next one hundred years
with my eyes closed.

Speaker 2 (02:13):
How boring, how massively dull.

Speaker 1 (02:16):
How wonderfully, delightfully, predictably, reliably boring.

Speaker 2 (02:20):
Oh, you know, one of my favorite weather moments, I
have to admit, was a southern California morning. So I
was a visiting scholar at the University of California, Santa
Barbara for a little while, and I had an office
that was like right out on the ocean. Was amazing.
And when I was driving in one day, there was
just a little bit of water on the ground and
the car tires were kicking up a little bit of

(02:42):
a spray, and there were literally rainbows following all of
the cars into school. And then I got out of
the car and the rain had stopped, and there was
a rainbow over the ocean and there were hummingbirds and
it was like a Disney movie scene. I expected like
a bunny to hop out and be like, can I
help you with anything? Anyway, it was. It was kind

(03:02):
of magical. I'll give you that.

Speaker 1 (03:03):
California is heaven. Yes, what happens when you die in
Virginia is you end up in California.

Speaker 2 (03:08):
You do know that not all of California is southern California.

Speaker 1 (03:12):
I mean all of real California.

Speaker 2 (03:13):
Oh, I see, because northern California's got some weather.

Speaker 1 (03:18):
You're absolutely right. In fact, I heard Katrina say something
really insightful the other day. You know, she's from northern California.
But now we've lived in southern California for quite a while,
and she said to somebody that she's now a complete
Californian because she's lived in both northern and southern California.
And I was like, Oh, that's cool. She's like accepted
southern California, which is hard for Northern Californians, I'm aware. Yes,

(03:41):
not everything is Southern California unfortunately.

Speaker 2 (03:45):
Oh I really like the variability Virginia weather is amazing
for me. But so my question for you is, what
is the worst weather situation that you've experienced?

Speaker 1 (03:53):
Great question. I was on the East Coast, of course,
doing a college tour with my son, and we
were in Massachusetts. I think we were visiting Amherst or
maybe it was Williams, I don't remember. And there was
some freak tornado which tore up a bunch of trees
and knocked down a bunch of power lines and there
was no power in the whole town for like almost

(04:16):
half a day. It was crazy and the winds were
insane and it felt a little scary, like we saw
like huge branches flying by the window.

Speaker 2 (04:26):
Yeah, yep.

Speaker 1 (04:28):
And he didn't end up going to school there.

Speaker 2 (04:31):
Yeah, I get that. I get that. So we lived
in Alabama, Tuscaloosa, and we moved there pretty soon after
that giant tornado that like made the news, and you
could see the path of the tornado because like, you know,
you'd be driving through an area with lots of like
you know, Starbucks, Panera, lots of stores or whatever, and
then suddenly there would be like an opening

(04:52):
in between all of the stores with nothing, and like
the tornado had just gone through there and just absolutely
picked up and thrown everything that was in there, and
even after they cleaned it out, there were still, you know,
you could tell where the tornado had gone. And we
were also in Houston during some pretty bad storms and
we had the kids and our dog and our cats
in a little hallway in the interior of the house

(05:14):
and my in laws were visiting, and my mother in
law was so sweet. She like looked around and she
she was trying to see, you know, who could get
hurt and how, And she gave her glasses to Zach
in case there was any like flying glass and she
just insisted that he have her glasses. And I was
like in that moment, I was like, gosh, you are
the sweetest person in the whole world, Like you are

(05:35):
thinking about the tiny little things you could do to
help the people around you, and anyway, she's she's the best.

Speaker 1 (05:40):
Yeah, but we've all been caught in surprise weather, right.
I remember going backpacking in Arkansas one time and being
caught in a snowstorm and the temperatures dropped into the
teens and we weren't one hundred percent sure we were
going to make it. Oh, and everybody's been, like, you know,
caught in a snowstorm or a rainstorm or in a
heat wave. Right, And these things are exciting, they can

(06:00):
be dramatic, but they can also be very dangerous, right.

Speaker 2 (06:02):
Yeah.

Speaker 1 (06:03):
People die in these crazy weather storms, and so it's
valuable to be able to know in advance what's going
to happen, not just so you can plan your picnics,
but also you can survive the increasingly dramatic weather that
we're all facing as the planet warms.

Speaker 2 (06:17):
Yeah, that's right, more severe weather is becoming more common.
And so today we're going to talk about how good
we are at making predictions and how we go about
making those predictions exactly.

Speaker 1 (06:27):
And I wanted to pull back the curtain on like
the science of this, how does this actually happen, What
are we doing, why is it hard? What are the challenges?
What improvements might we be seeing in the next five
or ten years. What problems are just fundamentally impossible and
might never be solved. And so today we're going to
dig into science of all that. But before we explain
to you how the experts do it, I was wondering

(06:49):
what everybody knew about how weather predictions happen. How do
those numbers end up on your phone? So I went
out there to ask our listeners what they knew about
how we predict the weather. If you would like to
answer these kind of questions for a future episode, don't
be shy, write to us: questions at danielandkelly dot org.
We will send you fun questions every week in your inbox.

(07:10):
In the meantime, think about it for a minute. What
do you know about how we predict the weather? Here's
what our listeners had to say. Sophisticated computer models, which,
with an understanding of chaos theory, allow us to understand
the limitations.

Speaker 4 (07:27):
Predicting the weather is like quantum particles.

Speaker 1 (07:30):
There are many probabilities, but it is not known until
it is observed. Meteorologists they look at the current weather,
and they try to predict it by looking at the
moving clouds and all of.

Speaker 3 (07:43):
That, by measuring wind velocity and atmospheric pressure and maybe
modeling these data in supercomputers.

Speaker 1 (07:57):
When a cow lies down in the field, it's going
to rain, and when my knee aches, it's gonna snow.

Speaker 4 (08:03):
Running multiple models.

Speaker 3 (08:05):
Big computers, really really big computers.

Speaker 4 (08:10):
Feed that data to complicated models that run on very
powerful supercomputers.

Speaker 1 (08:14):
I'd say, with surface measurements, satellite information and sophisticated
models and perhaps even artificial intelligence.

Speaker 4 (08:24):
Observations taken by ships, planes, ground stations, satellites combined
with models built by really really smart people that run
on some of the fastest computers that humans have ever built.

Speaker 1 (08:38):
There are sophisticated models that use a wide range of
observational and predictive inputs.

Speaker 4 (08:43):
By observing weather patterns and the types of weather those
patterns tend to bring.

Speaker 2 (08:48):
So I don't know if there's actually like scientific evidence
that sometimes knees will ache if like a stormfront is
coming through. But I have to admit that there's a
part of me that really hopes that if I get
arthritis when I'm older, I do have the ability
to tell when the weather's coming in, because I'll feel
like I'm really intimately connected to my environment. Oh, the
knees acted up again. Storm's coming, get the goats

(09:09):
in the barn.

Speaker 1 (09:10):
I think that really shows your fundamental optimistic nature, Kelly,
because you're like, oh, forget arthritis. There'll be a
silver lining. I can predict the weather.

Speaker 2 (09:18):
You know, life is easier when you try to see
the silver lining.

Speaker 1 (09:22):
That's wonderful.

Speaker 2 (09:23):
But our audience had great answers, and they were, you know,
a lot of them said, you know exactly the right thing,
which is you've got to have data. Those are the
observations and you feed them into computers.

Speaker 1 (09:34):
Yeah, essentially, and that's the big picture, not just of
weather prediction but any kind of prediction. There are two
fundamental ingredients to how you make a prediction. There's the
models and then there's the data. So let's take those
each in turn. When we say the models, we mean
like we're running a computer simulation or you're calculating things

(09:54):
on paper. Fundamentally, this is encoding the rules of the
system what the future can be given what the past was.
And this doesn't have to be some really complicated thing
like the weather over Istanbul. Think about a much
simpler situation, like you're tossing a ball in your backyard.
You want to know where does it go? Well, the
laws of physics predict the future, right, this is the model.

(10:15):
These are the rules that tell you how the past
becomes the future. Right. And in this case it's simple.
It's a parabola. It flies through the air. Things to
keep in mind here, though, is that a model like
this is always approximate. If I use F equals ma
and just account for gravity, ignoring air resistance, when
I'm describing the ball, I'm going to get a quick answer,

(10:35):
and it's gonna be pretty good. It's not going to
be exactly bang on correct. It can't account for everything,
all the little wind gusts and the air resistance and
the slight change in humidity and maybe the spin on
the ball. My model ignores some details, and that's crucial. Right.
If I included every single particle in the backyard, I
would never get a calculation. So in order to make

(10:55):
this tractable, I got to simplify the problem. I got
to pull out the things that are important and ignore
the things I think are unimportant. Because I don't think
they're going to make a big enough difference in the answer.
And this is where the juice is. This is what
physics is, right. Physics is taking the universe and simplifying
it into a model that represents the bits you're excited about,
the bits you think are interesting and relevant, and then

(11:17):
you use those rules and manipulate it. That's your model
of the universe, and the model gives you an answer,
and hopefully, if the model is close enough to your
description of the universe, the answer you get from the
model is similar to the answer in the actual universe.
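
To make that concrete, here is a minimal sketch of the gravity-only ball model in Python; the numbers are invented, and air resistance, wind, and spin are deliberately ignored, just as described:

```python
# Minimal sketch: predicting a ball toss with F = ma, gravity only.
# The "model" is a parabola; everything else is approximated away.
g = 9.81  # gravitational acceleration, m/s^2

def ball_position(x0, y0, vx, vy, t):
    """Position at time t, given the initial position and velocity (the data)."""
    x = x0 + vx * t
    y = y0 + vy * t - 0.5 * g * t**2
    return x, y

# Example: toss from shoulder height, 3 m/s forward and 5 m/s up.
print(ball_position(0.0, 1.5, 3.0, 5.0, t=0.5))  # ~(1.5, 2.77)
```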

Speaker 2 (11:31):
So one thing I think that's amazing is that something
as simple as throwing a ball up in the air
and then seeing where it lands is something we can't
completely model because there's so many complicating things. And now
you're talking about weather, which is so much more complicated
and requires so many more inputs. And of course you
can update your model. So you know, if you threw
the ball in the air and you were like, you
know what, it's a windy day, I absolutely need to

(11:52):
add wind. Now you've learned something, you add wind, and
so you know, it's an iterative process where you keep
trying to say what is important and do I
need to include it? And does it make my predictions better?
But I also will note that you put predicting weather
under the physics umbrella. You think you guys get to
claim weather predictions.

Speaker 1 (12:13):
I mean, we're not using economics to predict the weather.
What else is in the running for taking credit for
predicting the weather? Is it chemistry?

Speaker 2 (12:22):
I feel like that also is some ecology, you know,
like because you're tracking.

Speaker 1 (12:26):
Like cowfarts or something.

Speaker 2 (12:28):
No, no, you like, you know, a.

Speaker 1 (12:29):
Farts play a role, actually.

Speaker 2 (12:33):
So do you think that NOAA has cow farts in their weather prediction models?

Speaker 1 (12:38):
I think the climate models do include bovine methane emissions. Yes,
so not the daily predictions, but the bigger trends. Yes,
cow farts do help determine the future of our planet.

Speaker 2 (12:48):
Amazing.

Speaker 1 (12:49):
I want to go back to the point you made earlier.
You're exactly right that we're always approximating, and not just
when we're doing the weather, not just when we're tossing balls, always,
every single time, every model is approximation. There's this famous phrase.
I can't remember who said it: all models are wrong,
some of them are useful, even our description of like
the fundamental particles in the universe. As far as we know,

(13:11):
these are approximations. Every bit of science we have has
boundaries of where it's relevant because there are approximations made
when we construct those models. Everything, literally everything. We have
no piece of science that isn't an approximation of the universe.
Maybe one day we have a theory of everything, and
it's beautiful and we can do exact calculations on very

(13:34):
very simple situations. But we're not there. We may never
be there, and even if we are there, it will
be totally impractical for anything useful. Like you couldn't use
string theory to predict the path of a hurricane because
the complexity would be insane, Right, how many strings are
you modeling? The amount of computation required to do it
exactly would be impossible. So it's always an approximation. It's

(13:57):
just a question of which approximations. That's where the science
comes in, like which ones are important. Having a nose
for what to approximate and what not to approximate, that's
what helps some scientists make more progress than others.

Speaker 2 (14:09):
Yeah, And I think another thing to just sort of
note is that because this is a human endeavor. Sometimes
you're limited by what you can afford to get data
on you know, like maybe you do want to know
how much cows are farting, but in order to get
that data, you would need seventy billion dollars so that
farmers could attach sensors to the rear end of every cow.

(14:30):
And so, like, you know, sometimes you know there's data
you want, but you can't get it because there's not
enough money or it's not possible. Maybe one day you
can get it, Maybe those sensors will become cheap.

Speaker 1 (14:39):
Is seventy billion dollars your like a fantastical number for
some like absurd amount of money for a science experiment?

Speaker 2 (14:44):
Yeah, I guess that's wow. Yeah what is yours? I
guess you're a physicist, so it's going to be like.

Speaker 1 (14:49):
Well, that's embarrassing because our next project is one hundred
billion dollars, So we're like already above the Kelly threshold
for like absurd amounts of money.

Speaker 2 (14:58):
But wait, like, okay, but that's not like your personal project.
That's like LHC or like a new particle collider or something.

Speaker 1 (15:04):
Right, Yeah, the new next particle collider budget is about
one hundred billion, Yes exactly, so more than a planet
wide cow farts sensor network.

Speaker 2 (15:12):
Well, you guys better make some really important discoveries with
that money. Otherwise I'm disappointed because I want to know
what's happening with the cow farts.

Speaker 1 (15:20):
But you bring up another point, which is the data.
So models are useful. They're a system that tells
us how the past becomes the future, but you also
need some data so you know which past you had. Right,
models describe essentially any possible universe. The rules determine which
set of universes we might live in, but the data
constrain it. It tells us which past we had. So

(15:42):
the rules tell you how the past becomes the future,
but you need to know which past we were in
so we know which future will have. So in our
ball tossing analogy, there's lots of different ways I could
toss a ball. I could toss it high or low, or
fast or slow, or east or west. The rules connect
the initial conditions, the data about the past, to the future.
But you need to know where did I throw the ball.

(16:05):
So if I'm writing a simulation of that ball toss,
I've got to encode the laws of physics. But
then I need a data point. I need to say
the ball was here and it was moving in this
direction at this velocity. Then I can predict the future.
Without that, it's useless. Right, So you need these two components.
You need the models, plus you need the data, and
then you need more data. Say I'm predicting the ball toss,

(16:27):
I want to check in halfway and say, hey, is
my model correct? Does it need an adjustment? A way to
improve your modeling is to shorten the prediction time, to
say I'm not going to predict the whole path. I'm
going to predict the second and then I'm going to
take a measurement and if it's off, I'm going to
correct it so that if my model has veered off
from reality, it doesn't get further off. And so the
more data you have, the better your model is going

(16:49):
to be. So you need these two elements dancing together,
the models and the data.
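
A toy version of that predict-measure-correct dance, sketched in Python; the model, the blending weight, and the sensor readings here are all invented for illustration:

```python
# Toy predict-measure-correct loop: step the model forward a little,
# then nudge the predicted state toward a fresh measurement so the
# simulation can't drift too far from reality.
def step_model(height, velocity, dt=0.1):
    # gravity-only ballistic update: the "model" stepping the past forward
    return height + velocity * dt, velocity - 9.81 * dt

height, velocity = 1.5, 5.0                     # initial data
for measured_height in [1.96, 2.32, 2.58]:      # hypothetical sensor readings
    height, velocity = step_model(height, velocity)
    height += 0.5 * (measured_height - height)  # nudge halfway toward the data
```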

Speaker 2 (16:54):
Yeah, and I checked my weather app today and the
prediction for tomorrow had changed, and so I'm guessing we
do this same thing with weather. We update. So I
think we should talk in a second about what kinds
of data we collect to help us inform models. But
I guess my first question is we've talked about models
in general. How long have we been trying to model weather?

(17:14):
Since Aristotle, probably?

Speaker 1 (17:18):
People have had some crazy ideas about the weather for
thousands of years. The first real weather models were conceived
of in the nineteen twenties. And remember we didn't have
computers really until the fifties or so, so this was
like a conception and somebody did a proof-of-principle prediction.
They tried to predict the weather six hours later. They
took a bunch of measurements and said, let's try to

(17:39):
do some calculations. We have an early model. That calculation
took six weeks.

Speaker 2 (17:45):
So not helpful.

Speaker 1 (17:46):
Not helpful exactly, but they did it and it wasn't terrible,
and they sort of proved like, hey, you know, if
you could do this calculation more quickly, then maybe you
could even know the weather in advance. Oh my gosh,
what an idea. Right, Yeah, it wasn't until the nineteen
fifties that we had the first computing models to do
these calculations. So we can make predictions in time shorter

(18:07):
than the prediction period. You could have enough data and
run your model and get an answer before the universe
revealed it, right. That's a prediction instead of a
postdiction.

Speaker 2 (18:18):
That's better.

Speaker 1 (18:18):
So we've been doing this for decades and the last
you know, seventy years or so have been improving the
models and improving the data.

Speaker 2 (18:26):
Man, it's exciting to think that we, you know, we're
going from slide rules to make these predictions to massive supercomputers.
I'm appreciating my weather apps a bit more.

Speaker 1 (18:37):
And also, like, six weeks sounds ridiculous. I don't know
that I could do that in six weeks. Oh, it's
an amazing calculation. And think about like not just the ideas,
but all the grunt work doing those calculations and the
human error that's possible. Like, it's amazing they did it
in six weeks, you know, So don't laugh at that.

Speaker 2 (18:54):
Absolutely, So we've been doing this since the nineteen fifties.
Let's talk about what kind of data we collect to
inform these models when we get back from the break.

(19:22):
All right, and we are back, So now we're going
to talk about the kinds of data that we use
to make weather predictions. And I'm gonna bet it involves satellites.

Speaker 1 (19:33):
Always going with space first, right, yep, yep. It does
involve satellites, but there's an amazing, incredible variety of data
sources we have to understand the weather. And yet still
it's not nearly enough. Right as you'll hear, our weather
prediction would be so much better if we had more data.
We're really limited by the data. But we have lots

(19:53):
of different kinds. We have weather stations on the surface
and so a lot of these are called like automatic
weather stations that are scattered across the country. They're just
basically a bunch of sensors with a battery and like
either a wind turbine or a solar panel to get power,
and they measure things like temperature and pressure and wind
speed and precipitation, just the raw measurements you need to know,

(20:16):
like what's going on out there, what is the state
of the weather right now, because again, if you want
to predict the future weather, you've got to know what's
going on right now.

Speaker 2 (20:26):
So is this like a citizen science thing where like
I could purchase one of these weather stations and hook
it into what's happening at like the national level.

Speaker 1 (20:33):
Yes and no. So there are a few sort of
official stations. There's a bunch of different networks. The highest
quality ones. There's like ten thousand of these scattered around
the earth, and they're operated by weather services and government agencies.
But there's a bigger network of like quarter million of
these things. Some of these are personal weather stations that yeah,
people just build and publish the data. And there's an

(20:55):
amazing network called CoCoRaHS, the Community Collaborative Rain,
Hail and Snow network. Wow. You can just like build
your own device and add it to the network and contribute,
and I think that's super awesome because it's definitely limited
by the data we have. One problem is that these

(21:16):
things tend to be where the people are, Like we
have a few, you know, top of Mount Washington or whatever,
but mostly these things are put up by people where
people are near, and so like there's lots in India,
for example, but very few across Siberia, And often the
best ones are at places like airports. Airports really need
to know the weather, so they have excellent weather stations. But

(21:36):
like the weather at LaGuardia is not the same as
the weather in Manhattan, and so often the airport weather
stations are very very precise and used heavily in the models,
but they're not giving you the measurements exactly where you
want them to be.

Speaker 2 (21:49):
Okay, So is that a problem for just the people
who are in areas where there's not enough weather detectors
or is that a problem for all of us, because
what's happened in Siberia is important to what's happening in India.

Speaker 1 (22:03):
Yeah, what happens in Siberia doesn't stay in Siberia. Unfortunately.
It contributes to uncertainty and error across the model. And
the Earth is one big system, which is why you
can't just be like, I'm only going to predict the
weather Manhattan. I only need to think about Manhattan. You
need to model the whole planet in order to get
the weather in Manhattan. So yeah, absolutely, And that's why

(22:23):
we have lots of different kinds of sensors, not just
these automatic weather stations. We also have things like weather radar,
and you might have seen these on your local weather channel.
Like, let's look at the Doppler. This measures precipitation.
It also measures the velocity of those rain drops. And
this is a really cool story because it comes out
of World War Two. It's another example of like reusing

(22:44):
military technology and infrastructure after World War two to do
some science.

Speaker 2 (22:48):
Thank you war Oh boy, hot take pull it back.

Speaker 1 (22:55):
Well, there you are again finding the silver lining. Tens
of millions of people died, but we have better weather predictions.
So the way radar works is that it sends these
pulses of microwave radiation. The wavelengths are like one to
ten centimeters, that's the microwave region. And it sends a
pulse for like a microsecond, and then it listens for
return signals. So like it sends this pulse and rain

(23:18):
drops will reflect, so it gets the signal back and
it listens for like a few milliseconds, and then it
sends another pulse, and so it can tell where the
clouds are, and it can tell the velocity of those
clouds by the change in frequency. This is the Doppler effect, right,
And this is exactly the same effect as like stars
are moving away from you, so their light is red

(23:39):
shifted when the radar pulse comes back. If the frequency
is shifted, you can tell which direction that rain drop
is moving.
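
A worked example of that frequency-shift relation, assuming the standard two-way radar form (the shift doubles because the pulse travels out and back); the measured shift here is invented:

```python
# Radial velocity from a radar Doppler shift.
# Two-way pulse: delta_f = 2 * v * f / c, so v = c * delta_f / (2 * f).
c = 3.0e8        # speed of light, m/s
f = 3.0e9        # carrier frequency for a ~10 cm wavelength, Hz
delta_f = 200.0  # hypothetical measured frequency shift, Hz

v = c * delta_f / (2 * f)
print(v)  # 10.0 m/s toward the radar (a positive shift)
```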

Speaker 2 (23:47):
So that sounds complicated because like there's not just one
rain drop out there, there's a bunch and so I
can imagine like your pulse getting lost as it bounces
off of multiple rain drops and doesn't make it back
to you. What am I missing? This sounds hard.

Speaker 1 (24:01):
No, it is hard, But you're not detecting individual rain drops.
You're detecting clouds mostly like which direction is this cloud going?
And you know, initially this was a problem because in
World War Two, radar operators were trying to use radar
to discover like enemy planes, and they noticed, like man,
clouds are getting in the way. And then other folks
were like, oh wait, you can use radar to see clouds. Awesome,

(24:21):
and so then after World War Two they
started using this to measure the velocity of clouds and
to see them. And there's this moment in like nineteen
sixty one when Hurricane Carla was approaching the coast of
Texas and Dan Rather went down there to a weather
station and they were using radar to see the clouds
and to see their direction, and he had them draw

(24:42):
like the coast of Texas over this image of the
hurricane that showed everybody like, wow, this is a massive
hurricane moving fast towards the shore, and probably saved thousands
of lives because he publicized this like incoming storm much
more rapidly than we could otherwise without this kind of technology.

Speaker 3 (24:59):
Whoa.

Speaker 1 (25:00):
Yeah, this weather radar is really helpful.

Speaker 2 (25:02):
Do you think it still has the same effect, or
do you think people are just kind of like, oh,
there's hurricanes, I've seen them before. They get big, and
they don't always leave.

Speaker 1 (25:09):
People don't always leave. There's always somebody who's going to
ride out the storm, right, Yeah. And I don't know
with the psychology there, but at least now we can
inform people further in advance and let them know where
these things are likely to go. But there's still always uncertainty,
and we'll talk about that in a minute. You don't
just have one weather prediction. You have an ensemble. You
have an envelope of predictions because you don't have perfect

(25:31):
data and you don't have a perfect model, and so
often what you do is you vary your data a
little bit within the uncertainties and run the model again,
and then you get a different prediction. And I'll give
you a sense of the spread of the possible outcomes.
So you might see when there's like a hurricane approaching
the coast of Florida. They have a bunch of possible trajectories.
Those are all like different runs of the weather model,

(25:51):
assuming different initial conditions. Because we have uncertainty, we don't
have perfect data.
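
A sketch of that ensemble idea: perturb the uncertain initial conditions, rerun the same toy model a hundred times, and read off the spread. The model and the numbers are stand-ins, not a real forecast system:

```python
import random

def toy_forecast(pressure_hpa, steps=10):
    # stand-in for a real weather model: an update rule plus model noise
    for _ in range(steps):
        pressure_hpa += 0.1 * (1000.0 - pressure_hpa) + random.gauss(0, 0.2)
    return pressure_hpa

# 100 members, each starting from slightly perturbed initial conditions
members = sorted(toy_forecast(1005.0 + random.gauss(0, 1.0))
                 for _ in range(100))
print(members[5], members[94])  # a rough 90% envelope of outcomes
```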

Speaker 2 (25:55):
I personally really enjoy learning about the uncertainty in life
in general. And whenever I look at those I have
this weird feeling of security, like, yeah, like they figured
it out and they know what the errors are. We're good,
we know what to avoid. Maybe that's maybe that's a
little bit giving it a little too much credit, but
it's still amazing.

Speaker 1 (26:14):
And another really important source of uncertainty in our models
is what's happening in the ocean, like how hot is it,
how cold is it, how things circulating, all this kind
of stuff, and so we need data about the ocean.
But not a lot of people live in the ocean,
so we don't have like these automatic weather stations, but
we do have buoys. These are like floating weather stations,

(26:35):
and around the world there's a couple of thousand of these,
depending on the type, that have these like temperature sensors
on the surface. But we also have this hilarious data
from what's going on deeper in the ocean that historically
has come from people on ships taking a bucket, dropping
it into the ocean, pulling it up, and then measuring

(26:57):
the temperature of the water. And it's like really that
lo-fi. But for many years that's all we had.
We had like no other way reliably to know how
cold is it in the ocean. And this is an
example of like it's not just data. You need to
take data and interpret it and clean it and correct it.
And I spoke to an expert here at UC Irvine,
Jane Baldwin, who told me that like you had to

(27:19):
correct for like how long the bucket was out of
the water before they dunked the thermometer in it, and
how Japanese ships and US ships used a different bucket
and it had different effects, and like you got to
really know, you got to be an expert and how
this data was taken and what it really means.

Speaker 2 (27:34):
Yeah, So for a while I was doing some water
quality work and we had this like tube and you
would put the tube underwater and then you'd sort of
press a button and like caps would pop into place
on both sides of the tube, and then you could
lift it up and so you could get a water
sample from specifically different depths, and it was it was
always kind of fun to use that device.

Speaker 1 (27:52):
Yeah, and you might think, like that's ridiculous, what a
silly system, And it's a little bit silly, but if
it's the only data you have, it's better than no data. Yeah, right,
as long as you understand the uncertainties in it. And
my friend Jane was telling me that misunderstanding this data
might be a cause of some weird pauses in global
warming trends, that it could just be like a misinterpretation

(28:13):
of this ship bucket dunk data.

Speaker 2 (28:16):
I know, it's so lo-fi.

Speaker 1 (28:20):
These days. We have these cool robotic floats that like
float on the surface of the ocean and then dive
down up to two thousand meters measure things down in
the ocean, and then come back up and beam it
to satellites or whatever. So we're getting better obviously, yes,
But you know what's really valuable is longitudinal data. Like
you want data as far back as you can so

(28:41):
you can understand bigger trends. So you can't just like say, oh,
that ship bucket dunk data is ridiculous, let's ignore it.
It's the only data you have for like thirty years
and so trends in that data do tell you something
very cool.

Speaker 2 (28:53):
Okay, so now we've gone down deep, how do we
get data from up high? Yeah?

Speaker 1 (28:58):
Because the weather's not just at the surface, right, And
the weather folks call the surface the two meter level
because they want to measure the temperature not on the
ground literally, but like two meters up like where your
head is, essentially. But they also need to know what's
going on even further, so we take measurements in the
upper atmosphere. We use weather balloons, and these are literally
what you imagine. You put like a bunch of helium

(29:20):
in a balloon and you put a weather station on
it that can measure altitude, pressure, temperature, humidity, wind speed, et cetera.
And you just let it go and it rises because
helium rises, and as it goes up, the balloon expands
because the pressure in the upper atmosphere is less, and
eventually it pops and then the thing comes back down.
So these are like one time uses, right, and they

(29:42):
can go up like twenty kilometers.
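
Roughly why the balloon swells, sketched with Boyle's law (p1 V1 = p2 V2); this ignores the temperature drop with altitude, so treat it as a back-of-envelope only:

```python
# Why the balloon expands as it rises: pressure drops, so volume grows.
p_launch = 1013.0  # surface pressure, hPa
p_20km = 55.0      # roughly the pressure near 20 km altitude, hPa
v_launch = 3.0     # helium volume at launch, m^3 (illustrative)

v_at_20km = v_launch * p_launch / p_20km
print(v_at_20km)   # ~55 m^3: the balloon keeps swelling until it bursts
```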

Speaker 2 (29:44):
When I visited Saint Catherine's University in Minnesota to give
a talk, they had a special day where they launched
a weather balloon, like you know for my visit and
you know, did a demonstration for all the students and
it was the coolest thing ever.

Speaker 1 (29:57):
Super cool. Right, These are amazing experiments. And I know
people who do physics experiments on balloons where they like
go to the Antarctic and they let up a balloon and
it floats in the atmosphere for like up to a
month or something, and like, wow, that's really brave work
because you spent like four years building this instrument and
then you're putting it on a balloon and to the
atmosphere and sometimes it's just gone, like it just disappears

(30:20):
and you lose your whole thesis. And this seems like
kind of bespoke, right, and it is. There's like a
couple hundred launches per day in the United States, but
it's not reliable. It's not like the place you've visited.
They do exactly the same balloon launch every single day
at the same time, right, which is the most useful
thing for a weather model. It's like reliable data and
we don't have a lot of them. But again, this

(30:42):
helps you probe the upper atmosphere. We don't have many
ways to measure the temperature in the upper atmosphere. This
is a really powerful one.

Speaker 2 (30:49):
Do we also use planes.

Speaker 1 (30:51):
We do use planes because every airplane you've been on
has really valuable information about weather because it samples from
the two meter level up to like thirty thousand feet.
And aircraft, of course, have sensors to measure wind speed
and temperature and all this kind of stuff. So every
commercial airplane has these sensors, collects this data and then
sells it to the government. NOAA buys this data because

(31:14):
there's so many flights, Like look at a map of
airplane flights for a single day in the United States.
There are so many flights they crisscross the country, and
it's incredibly valuable data. And this is usually very high
quality data because it's very important for these planes to
understand the weather.

Speaker 2 (31:30):
I had no idea NOAA was getting access to all
of that data. That's super cool.

Speaker 1 (31:34):
It's super cool. Basically, any way you can imagine to
learn the state of the weather somewhere on Earth, somebody's
doing it, because the more data we have, the better
these models get. But then of course we can go
all the way up to space, right because there are
places where there are no automatic weather stations and there
are no buoys and there are no airplane flights, yet
they still contribute to the weather prediction in Kansas or

(31:57):
in Mexico City or whatever. So we have satellites, and
since about nineteen seventy nine we've had weather satellites. We
of course had satellites earlier than that, but none devoted
to like gathering weather data, and they primarily cover things
like storm systems and cloud patterns. They can tell you
where the snow is. They can also tell you like
where wildfires are, which is an important part of the weather.

Speaker 2 (32:20):
Yeah.

Speaker 1 (32:21):
Can't directly measure like what is the temperature in Houston
right now, but they can make indirect measurements like, for example,
they can measure the amount of infrared radiation from the surface,
and that is connected to the temperature, but it's actually
connected to the temperature of the surface, not the two
meter level. Right, So, like how hot is the blacktop

(32:41):
in Houston right now? That's what your satellite is telling you,
and you have to use that to infer how hot
is it two meters above the blacktop in Houston, which
is what you actually want to know.
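
A sketch of that inference using the Stefan-Boltzmann law, j = sigma T^4, assuming a perfect black body and no atmospheric correction; real satellite retrievals are far more involved:

```python
# Infer a surface temperature from measured infrared radiance by
# inverting j = sigma * T^4 (big simplification: perfect black body).
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def surface_temperature(radiance_w_m2):
    return (radiance_w_m2 / SIGMA) ** 0.25

print(surface_temperature(450.0))  # ~298.5 K: the blacktop, not the 2 m air
```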

Speaker 2 (32:51):
That sounds hard, it's hard.

Speaker 1 (32:53):
Yeah, exactly. And so we also don't have a lot
of satellites because they're expensive. There's something like twenty satellites
split between geostationary and polar orbits. Eight of them are
operated by NOAA. But there's a bunch out there. But
my friend the climate scientist says that we might be
on the verge of having a lot more data because
launching stuff into space is cheaper, and now we

(33:14):
can do like small satellites, CubeSats. These might give us
more data, not as high quality as like the dedicated
you know, super nerd designed billion dollar satellites. On the
other hand, we don't know what the future holds for
like supporting and operating these satellites. This requires money to
fund these things and have people interpreting these things. We

(33:35):
don't know how long the government is going to continue
to support it. They could just like unfund this stuff
or turn off weather stations. And you know it's more
than just like, oh, we turned it off for a year.
Having continuous records is super important for these models for
predicting the immediate weather, but also for the long term
climate models, which are essentially an average of the weather,

(33:56):
and so even turning it off briefly, could be very
damaging for our abilities to do long term predictions.

Speaker 2 (34:02):
And I'm kind of blown away by the fact that
we only have twenty satellites. I guess I had assumed,
since you know, there's like five thousand satellites up there
or something. I guess most of them are dedicated to
like beaming cat videos to us from anywhere in the world.
But like weather seems so important, you know, for farmers,
for like people who are traveling, just like for everything.

Speaker 1 (34:21):
Yeah, that's true, but the satellites don't give you a
direct measurement of what you're most interested in. They're essentially
like really good for filling in the gaps or places
where you have no other measurements. So yeah, it would
be great, but they're also super expensive, so you'll hear
at the end, I asked one of the climate scientists
I spoke to, like, if you had a billion dollars,
what would you spend it on? And satellites is not
their top priority.

Speaker 2 (34:42):
Huh okay, all right, so maybe twenty is the right number.

Speaker 1 (34:47):
So you have all these different kinds of data. You
have automatic weather stations, you have radar, you have buoys,
you have ship bucket data, you have weather balloons, aircraft,
you have satellites. What you need for your model are
the initial conditions. What you need for your model is
a set of what is the temperature and the pressure
and the humidity everywhere on the planet right now, so
that I can run it and predict it in the future.

(35:08):
And it's not a trivial step from like here I
have all this data to what are the initial conditions?
Because the data can disagree, right you have multiple measurements,
sometimes nearby, using different kinds of sensors. How do you
incorporate that, How do you clean this data, how do
you decide what to use? How do you merge all
of this into your best prediction? And so there's a

(35:29):
lot of work in this area. It's called data assimilation:
running sort of mini models of fluid dynamics to do
like physics-informed interpolations between the places where you don't
have measurements, and to factor in the various uncertainties from
the various different kinds of measurements. So sometimes you like
back up the model a little bit and feed in

(35:49):
some data and then use it to predict the current
initial conditions before you go to your full model, and
then you do what we talked about earlier, which is ensembling.
You say, well, here's my best guess for the weather,
like right now, before we even run the model. But
I'm going to make like one hundred versions of it,
and each one I'm going to tweak my assumptions a
little bit. So I get an envelope where I hope

(36:10):
reality somehow is described by one of these models, or
is near one of these models, or the spread in
these models describes my uncertainty in the state of the
initial conditions. We haven't even done any predictions yet. This
is just like measuring what's happening.
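
One tiny piece of that data-assimilation machinery, sketched as inverse-variance weighting of two disagreeing measurements; real schemes are vastly more elaborate, and the numbers here are invented:

```python
# Merge two disagreeing measurements of the same quantity, weighting
# each by 1/variance: less uncertain data counts for more.
def merge(x1, sigma1, x2, sigma2):
    w1, w2 = 1.0 / sigma1**2, 1.0 / sigma2**2
    best = (w1 * x1 + w2 * x2) / (w1 + w2)
    uncertainty = (w1 + w2) ** -0.5
    return best, uncertainty

# Airport station: 21.0 C +/- 0.5; nearby hobby station: 23.0 C +/- 2.0
print(merge(21.0, 0.5, 23.0, 2.0))  # ~(21.12 C, 0.49): the airport dominates
```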

Speaker 2 (36:23):
Now right well, and what's so stressful for me
thinking about this is like your data are coming in constantly,
and so it's not like you do this once and
then you're like, okay, good, now we will project. It's
like every second more data are coming in. So this
has to be like a constant process that's happening over
and over again. And integrating the information into bigger models.
It's amazing.

Speaker 1 (36:44):
And yeah, and we haven't even talked about how the
models work.

Speaker 2 (36:47):
And so let's take a break and when we get back,
we'll talk about how those models work. All right, So

(37:10):
now we have all of this data and you've got
it into an ensemble and you sort of maybe know
what's happening right now plus some uncertainty. How do you
now predict what's going to happen next?

Speaker 1 (37:22):
Yeah, so simple. You just break out your pencil and
paper and you do a bunch of string theory calculations
and that's it. Right, you just, like, physics it into
the future.

Speaker 2 (37:30):
Finally, string theory is useful.

Speaker 1 (37:33):
Yeah, unfortunately not, as we said earlier, like you can't
describe everything. You have to make assumptions about what you're
going to calculate and what you're going to simplify. Otherwise
you're never going to be able to make a prediction, right,
or your predictions are going to be done in a
thousand years for the weather that's happening in an hour,
and that's not useful. And so it's always a question
of how to judiciously make those assumptions. So the current

(37:56):
state of the art for weather modeling has basically two
big pieces. One is directly model the atmosphere itself as
if it's a big fluid. So you use like navea
Stokes equations and think about how it flows and how
temperature moves through it. That's the dynamical core of the model.
But there's a bunch of stuff that influences the atmosphere

(38:17):
that you don't explicitly include in the model. The clouds,
the convection, the ocean, the radiation, the surface temperature, all
this kind of stuff. Your model doesn't explicitly include that stuff.
We don't have like a complete model of the ocean
or the clouds, etc. And so we have like various
inputs to this core piece that they call parameterizations that

(38:38):
like capture the big picture effects of all these pieces
that are not included directly in our model but are
influencing us.

Speaker 2 (38:47):
So I feel like this is a question where you
think to yourself, am I about to ask a really
stupid question? But I'm going to move forward because that's
my job in this podcast. The atmosphere is not a
fluid though, right? So, like, what, why are we
doing this? Are we modeling it as a fluid
because we just can't model it as something else because

(39:10):
it's too complicated and fluids are a simplification? Or, Daniel,
I don't think the atmosphere is a fluid.

Speaker 1 (39:18):
Well, it depends on what you mean by a fluid,
and you know, when it comes to like how things
flow and pressure, et cetera, the fluid dynamic equations do
describe the atmosphere. And so you know, fluid doesn't mean liquid, right,
Fluid is about how things flow and move. Right. So,
for example, like the mantle of the Earth is a fluid.

(39:38):
It flows. It's not a liquid, right, It's this weird
solidy kind of state and it moves, but it also flows,
and so you can describe it and it has convection.
You can describe it with fluid equations. And so the
Navier-Stokes equations are these famous equations that describe
fluid dynamics and they're pretty good at modeling the atmosphere.
They're not perfect, right, They're not perfect, but they're pretty

(40:01):
good at it. So, yeah, I think fluid is not
a liquid. It's just things that flow.

Speaker 2 (40:05):
Okay, in my head, fluid is synonymous with liquid. And
so I have learned something today that will probably help
me not look silly in the future. That's great.

Speaker 1 (40:13):
No, it was a great question. And so this is
the big picture. You have the dynamical core, and then
you have these parameterizations and we'll dig into that and
we'll describe sort of the US approach to it. But
there are three sort of major weather communities. There's the US,
the UK, and the Japanese, and they have slightly different approaches,
which is good because you know, different predictions can crosscheck

(40:35):
each other. But then some people think it's bad because hey,
let's pool all of our resources and make one big
global model, and that's awesome, but then you only have
the one and you're not sure. Maybe it's all wrong.
There's a lot of debate about, you know, how to
deal with global questions and global resources.

Speaker 2 (40:49):
But who's the best.

Speaker 1 (40:53):
Oh, I'll give you a ranking at the end. Okay,
all right, So the dynamical core, Right, think of the atmosphere.
We're going to treat the atmosphere basically like a spherical cow. Right,
It is a spherical fluid, right, the atmosphere. Yeah, it's
a thin shell around the Earth, and you know the
temperature and the pressure, and then you can describe how

(41:13):
it's going to flow, how the temperature and pressure are
going to change using the Navier-Stokes equations. So Navier-Stokes
is a set of really gnarly equations. They're nonlinear
partial differential equations. A differential equation is one where like
the value depends on how quickly it's changing. For example,
like in ecology, you have differential equations that describe like predator

(41:36):
and prey. Right, these two things are coupled, and so
these are nonlinear partial differential equations, which means like that
depends on things squared or cubed. All that to say,
they're very, very difficult to solve. In fact, differential equations
in general are hard to solve. If you've taken a
differential equations class, it's basically like differential equations are not
solvable except for these four examples that we have answers

(41:58):
to and we know how to solve them, and so
you just got to memorize those. It's a little bit
like chemistry.
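
For reference, one common incompressible form of those equations; operational weather cores solve more general compressible versions, so this is just the flavor:

```latex
\rho\left(\frac{\partial \mathbf{u}}{\partial t}
        + \mathbf{u}\cdot\nabla\mathbf{u}\right)
    = -\nabla p + \mu \nabla^{2}\mathbf{u} + \mathbf{f},
\qquad \nabla\cdot\mathbf{u} = 0
```

The u-dot-grad-u term is the nonlinear piece mentioned above: velocity multiplying its own derivatives, which is a big part of why these are so hard to solve.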

Speaker 2 (42:04):
I gotta say, oh no.

Speaker 1 (42:05):
It's mostly unsolved, right. And the Navier-Stokes equations, we've
known about them for like two hundred years. They were
initially developed to try to answer these questions about like
how do things flow and how does momentum and mass
flow through pipes, et cetera. Essentially, people took Newton's second
law, F equals ma, and applied it to fluids and then

(42:25):
added terms for like stress and pressure and viscosity. And
it's like a real triumph that we can describe this
at all. But calculationally it's a real bear. You can't
like sit down and derive a solution and say, here's
my pressure and temperature. Let me crunch it through the
Navier-Stokes equations and have it give me a formula. It's
all numerical approximations, which means it takes a lot of

(42:47):
computing to go from now to one second from now
or two seconds from now, and that computing means approximating things.
You're like doing numerical derivatives instead of exact analytical derivatives.
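
What "numerical derivatives instead of exact analytical derivatives" means in practice, as a minimal sketch:

```python
def derivative(f, x, h=1e-5):
    # central finite difference: approximate f'(x) from nearby samples
    return (f(x + h) - f(x - h)) / (2 * h)

height = lambda t: 5.0 * t - 4.9 * t**2  # analytic ball-toss formula
print(derivative(height, 1.0))           # ~ -4.8; the exact answer is 5 - 9.8t
```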

Speaker 2 (42:59):
Okay, so some of that got pretty complicated, But what
I guess what I want to know is when this
is all done, I feel like, if we are trying
to model fluids, does this just tell us that, like,
the wind is now over here going this fast but
before it was over there? And how many are we
going to get to like how you get from that
to like and it's raining, Because that seems like a
different problem sort of than how fluid is moving around.

Speaker 1 (43:22):
Yeah, so there's a couple of things to know there.
You're exactly right. It takes the current conditions and tries
to predict the future conditions. And those conditions are pressure
and temperature, wind speed, humidity, right, these kinds of things.
But because we're solving this numerically, we can't solve it everywhere.
If you have a formula, an analytic description you can
write down, for like, where is my ball as I've

(43:43):
thrown it? I can write that down on a piece
of paper a formula. I can tell you where the
ball is at any point in time. You ask me
any point literally any value of T, I could plug
it into my formula and give you an answer. A formula
like that is called an analytic description. But if I don't have one,
if all I have is a numerical estimate, then I've
made a grid. I've said I'm going to sample it
at time one, time two, time three, time four. I'm

(44:04):
going to make an estimate at those times, and I
don't have an answer everywhere. And that's the situation we
have with weather: they put a grid on the
planet and they estimate what's going to be the weather, temperature,
et cetera, in a grid of points, not everywhere over
the planet. And you might think, oh, I bet that
grid's pretty small, right, maybe they measure down to the
centimeter or something. No, the grid sizes are like ten

(44:28):
kilometer cubes.
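
The grid idea in one dimension, sketched with a simple Euler step; real dynamical cores use fancier schemes, but the spirit is the same:

```python
# No formula valid at every instant: just estimates at discrete steps,
# each computed from the previous one.
dt = 0.1                     # time step; weather models use tens of minutes
height, velocity = 1.5, 5.0  # state on the "grid" at t = 0
samples = []
for _ in range(10):
    height += velocity * dt  # advance position with the current velocity
    velocity -= 9.81 * dt    # advance velocity with gravity
    samples.append(height)
# samples holds height only at t = 0.1, 0.2, ...; between them, no answer.
```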

Speaker 2 (44:29):
What, Yes, that's too big.

Speaker 1 (44:31):
It's too big, right. They are averaging the temperature and
the humidity over cubes of atmosphere ten kilometers on a side,
it's crazy and I think that's way too big. On
the other hand, that's still a lot of cubes, right,
Like the atmosphere is a lot of ten kilometer sized cubes.
And then the time steps are tens of minutes, right,

(44:52):
And this is awesome that we can even do this.
It requires massive supercomputers. We'll talk about it in a minute.
But the problem is that it ignores a lot of
little details like how big is a cloud? Usually they're
like a kilometer or less. And so you're missing out
on a lot of stuff by making your grid. Anything
that happens that's subgrid, that's crucial and important but

(45:15)
smaller than the size of your grid, is not being
described by your model. But your question was like, is
this directly outputting? Like, hey, it's going to rain on
Kelly's picnic. In a sense, yes, the direct outputs are
things like temperature, pressure, humidity, and those are enough to
tell you like, okay, it's going to rain because the
pressure and humidity are above some threshold or whatever. So

(45:35):
it's not directly outputting like three centimeters of snow. There's
another step you have to take after that, but it
feeds into that. So those are the inputs you need
to the next step, which says how much snow is
going to fall?
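
A back-of-envelope count of what a ten-kilometer global grid implies, under crude assumptions (a uniform grid and only five vertical layers; real models use many more, thinner levels):

```python
earth_surface_km2 = 5.1e8  # Earth's surface area
column_area_km2 = 10 * 10  # one grid column, 10 km on a side
vertical_layers = 5        # crude: ~50 km of atmosphere in 10 km slabs

cells = earth_surface_km2 / column_area_km2 * vertical_layers
print(f"{cells:.2e}")  # ~2.55e7 cells, each updated every few minutes
```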

Speaker 2 (45:46):
Okay, So let me see if I can do a
super simplified version of this. You get all of the
data that you have about a square in the grid,
and you do the best job you can to sort
of summarize it and ensemble it, and then you put
it in the model that runs through the equations. Then
does the information from the surrounding grids feed into your

(46:07):
grid as well, because you would, okay, because you would
expect there to be similarities between closely related squares in
the grid.

Speaker 1 (46:13):
Yeah, you can't solve one grid at a time. You
have to solve all the grids. The grids touch each
other and influence each other, and wind flows, right, right,
And that's why Siberia affects Manhattan over time because you've
propagated these things from grid cell to grid cell.

Speaker 2 (46:28):
Absolutely, So does Siberia have bigger grid cells or just
the same number of small grid cells each with poorer
data in them.

Speaker 1 (46:36):
Yeah, great questions. So some of these models are adaptive, right,
they have bigger grid cells where we have more uncertainty,
and smaller ones where we have more data. The most precise ones
are the UK supercomputers. They go down to two kilometers
in some cases. Some of them are like fixed grids,
and some of them are adaptive exactly. It depends a
little bit on the model. But you know, there's lots

(46:58):
of details that are not described here, and these are
called the parameterizations, like especially subgrid stuff and exchanges with
other parts of the system. They're not just the fluid.
And one important thing are the clouds. Like you cannot
model every individual cloud because clouds are smaller than your
grid size. We do not have the compute to do that.
People have tried, and you can like do dedicated runs

(47:19):
on subsets to try to resolve clouds, but then you
don't have enough computing to do like ensembles. So you
can be like one prediction you're like, well, here's a prediction,
but I don't know what the uncertainties are on it
at all. And so instead what you tend to do
is parameterize the bulk outcomes, you know, the vapor, the clouds,
the liquid, the ice, the rain, the snow, etc. The condensation,

(47:40):
all this kind of stuff. You try to like grab
all that average over what's happening in that grid cell
and use that to inform your Navier-Stokes equations. So
things you're not explicitly modeling, you're sort of like averaging
over You're losing all the details and saying like, well,
on average, this is going to be the effect of
clouds on my grid cell.
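
A cartoon of what a parameterization scheme does: map a grid-cell average (humidity here) to a bulk effect (cloud cover dimming sunlight). The functional form and the numbers are entirely made up:

```python
def cloud_fraction(relative_humidity):
    # made-up scheme: no cloud below 80% RH, linear ramp up to full cover
    return max(0.0, min(1.0, (relative_humidity - 0.8) / 0.2))

def solar_flux_at_surface(clear_sky_flux, relative_humidity):
    # clouds reflect some sunlight; scale the clear-sky flux by cover
    return clear_sky_flux * (1.0 - 0.5 * cloud_fraction(relative_humidity))

print(solar_flux_at_surface(800.0, 0.9))  # 600.0 W/m^2 for this grid cell
```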

Speaker 2 (48:00):
Do you think that as Oh, well, I was going
to say, do you think that as we continue to
have more and more computing power and more and more supercomputers,
at some point we'll be doing better here? But we
were just talking about how More's law. We've maybe hit
the end of that. So are we like, is this
as good as wherever we're going to get at it?
This is probably an end of the podcast question, but
I'm thinking it right now.

Speaker 1 (48:22):
No, I think that there's lots of possibilities for making
this faster and more efficient, and not just wait till
computers get faster. Okay, there's definitely clever ideas and we'll
get there. Yeah, But there are lots of parts of
the weather that are not directly described in the dynamical core,
and not just the clouds, but also things like convection,
like vertical transport of heat. You know, especially there is

(48:44):
complex boundary mixing near the surface, like the lowest kilometer
or so of the atmosphere, where you have like heat
from the surface and turbulent momentum exchanges as wind is
like hitting mountains and stuff. These things. You can't model
all of those details, and so you have like parameterization
schemes that model the turbulence and the boundary level mixings.

(49:06):
There's radiation from the surface also, right that changes from
day to night. You have models of vegetation and snow
how those things couple. But then the biggest one is
the ocean. Right, Like we would love for our models
to include also a Navier-Stokes simulation of the whole ocean, right,
might as well do that because the ocean it plays

(49:27):
a big role. But we don't have the compute for
that at all. So we just use a slab
ocean model. We just say, let's assume the ocean
is simple, that it has a certain temperature, and
we assume how the energy transfers across the boundaries,
and it's really quite simplified. But we're just limited, right.
We don't have great data in the ocean, and we

(49:48):
don't have the compute to also model the ocean as
well as the atmosphere. So in places where we can't do
our best approximation, which is the Navier-Stokes equations of
the atmosphere, we have simplified versions, called parameterizations, that
feed into the core. But in the end you've
got to take it to the computers, and this is
why we have massive supercomputers to make weather predictions.
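
For a sense of how simple that slab ocean really is, here is a minimal sketch: treat the ocean as one well-mixed layer of fixed depth whose temperature just integrates the net surface heat flux. The depth and flux values are invented for illustration:

```python
# Minimal slab-ocean sketch: a single well-mixed layer of fixed depth,
# stepped forward in time from the net surface heat flux.
RHO_WATER = 1025.0        # seawater density, kg/m^3
CP_WATER = 3990.0         # specific heat of seawater, J/(kg K)
MIXED_LAYER_DEPTH = 50.0  # assumed slab depth, m (illustrative)

def step_slab_ocean(sst: float, net_flux_wm2: float, dt_seconds: float) -> float:
    """Update sea-surface temperature: dT/dt = Q / (rho * c_p * h)."""
    heat_capacity = RHO_WATER * CP_WATER * MIXED_LAYER_DEPTH  # J/(m^2 K)
    return sst + net_flux_wm2 * dt_seconds / heat_capacity

sst = 15.0  # starting sea-surface temperature, deg C (made up)
for _ in range(24):  # one day, in hourly steps
    sst = step_slab_ocean(sst, net_flux_wm2=100.0, dt_seconds=3600.0)
print(f"SST after one day of 100 W/m^2 warming: {sst:.3f} C")  # ~15.042
```

That enormous heat capacity, where a full day of strong warming moves the slab by only a few hundredths of a degree, is exactly why the ocean matters so much for the atmosphere above it.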

(50:10):
So NOAA in the US has a couple of really
big facilities. They're called Dogwood and Cactus. One of them
is in.

Speaker 2 (50:16):
Virginia. You're welcome, everyone.

Speaker 1 (50:18):
And one of them is in Arizona, and they're huge,
amazing computers. Those two each have like twelve point one petaflops.

Speaker 2 (50:26):
You made that word up.

Speaker 1 (50:28):
It sounds like a made-up word. Peta means quadrillion,
and flops are floating point operations per second. So you know,
it takes the computer time to add, like, three point
nine one to fourteen point four two. Floating point
numbers, those numbers with a dot in them, right, not
integers, are more computationally expensive to add or subtract. And

(50:50):
that's what most of these models do. They're like, add
this number, multiply by this, and so this is like
the way you measure the speed of a computer. And
so these computers can each do twelve point one quadrillion
floating point operations per second.
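
For a sense of scale, here is a back-of-envelope comparison; the operation count for a forecast is a made-up round number, purely for illustration:

```python
# Back-of-envelope timing for an imagined forecast costing 10^18
# floating point operations (the count is invented for illustration).
HUMAN_FLOPS = 1.0 / 20.0   # one addition every twenty seconds
MACHINE_FLOPS = 12.1e15    # 12.1 petaflops, like Dogwood or Cactus
SECONDS_PER_YEAR = 3.15e7

total_ops = 1e18  # assumed cost of one model run

print(f"supercomputer: {total_ops / MACHINE_FLOPS:.0f} seconds")
print(f"human with pencil: {total_ops / HUMAN_FLOPS / SECONDS_PER_YEAR:.1e} years")
```

At machine speed that run takes under two minutes; at human speed it would take hundreds of billions of years.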

Speaker 2 (51:04):
Wow.

Speaker 1 (51:04):
Right, imagine the guys back in the nineteen twenties. They're
adding two numbers; it probably takes them a minute, right,
or if they're super good, it takes them twenty seconds. These
computers do twelve quadrillion apiece per second. Right. So together
with all of their computers, NOAA has about fifty petaflops,
and that's what it uses to run its model. And

(51:25):
so that's the state of the art in the United States.
The Europeans have a couple of computers. There's one really
big one in Bologna called BullSequana, and it has about
thirty petaflops. But the biggest, most powerful weather computer in
the world is in the UK. It's at the Met
Office, and it's built by Microsoft and has sixty petaflops.

(51:47):
And this is why the UK has some of the
best weather prediction in the world because they have the
biggest computer. They beat us. Yeah exactly, they just spent
more money. They bought more computer. This is literally like
money equals computing.

Speaker 2 (52:00):
Tea drinking bastards.

Speaker 1 (52:02):
Good day. Yeah, well, you know, they got tricky weather
over there, and so they need it. It's an island,
after all, you guys. And the Japanese, the Japanese have a
big investment in weather prediction computers also, this one called
PRIMEHPC. It has thirty one petaflops. So these are
really powerful devices and they run these huge models. And

(52:23):
you know, think about what the model does. It predicts
the state of the atmosphere on these pretty chunky grids.
But still, it's a huge amount of data, like
every few minutes, every ten kilometers. My friend Jane was
telling me that sometimes the data is so big that
you just throw it away. You run it, you get
a summary number, but you can't keep all of

(52:43):
the data, because it would just fill up all the
hard drives everywhere. And this is familiar for
me, because at the LHC we also run
these experiments every twenty five nanoseconds. We throw away most
of the data from that, because it would just fill
up all of our storage. They're in a similar situation:
they produce more data than they can store.
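
A rough estimate shows why. Every count below is an assumption chosen for illustration, not any agency's actual configuration:

```python
# Rough, illustrative estimate of global model output volume.
EARTH_SURFACE_KM2 = 5.1e8   # area of Earth's surface
CELL_AREA_KM2 = 10 * 10     # ~10 km horizontal grid spacing (assumed)
VERTICAL_LEVELS = 100       # assumed number of atmospheric levels
VARIABLES = 10              # wind, temperature, humidity, pressure, ...
BYTES_PER_VALUE = 4         # single-precision float
SNAPSHOTS_PER_DAY = 24 * 6  # one snapshot every ten minutes

cells = EARTH_SURFACE_KM2 / CELL_AREA_KM2 * VERTICAL_LEVELS
bytes_per_day = cells * VARIABLES * BYTES_PER_VALUE * SNAPSHOTS_PER_DAY
print(f"{bytes_per_day / 1e12:.1f} TB per simulated day")  # ~2.9 TB
```

And that is a single run; multiply by a many-member ensemble run several times a day, and the storage fills up fast.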

Speaker 2 (53:01):
So are these facilities where the Navier-Stokes equations are
being run, or are these facilities where you have the
output from each grid cell and now you are translating that
into information about where the rain is falling?

Speaker 1 (53:17):
Both? Yeah, okay. So these programs do the data assimilation,
come up with the current initial conditions, and then also
run the model forward to make those predictions, and from
that glean things like weather details, snowfall, et cetera. And
so what you're getting on your phone, what you're hearing
on TV, is not just what Jane, your local forecaster,

(53:37):
came up with. She's relying heavily on these central predictions
from major resources. Right? So, for example, if worldwide governments
decide we don't need these computers anymore, we don't need
these satellites, it's not like you could say, that's cool,
I've got my local weather forecaster, I don't need you.
No, your local weather forecaster is getting that information from

(53:58):
these big models that are being run by the government.
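
The heart of data assimilation can be sketched in a single variable: blend the model's forecast with an observation, weighting each by its uncertainty. Real operational systems (variational methods, ensemble Kalman filters) do this jointly over millions of coupled variables; the numbers here are invented:

```python
# One-variable, Kalman-style sketch of data assimilation: combine a model
# forecast with an observation, each weighted by its uncertainty.

def assimilate(forecast: float, forecast_var: float,
               obs: float, obs_var: float) -> tuple[float, float]:
    """Return the blended analysis and its reduced variance."""
    gain = forecast_var / (forecast_var + obs_var)  # trust the obs more when the model is uncertain
    analysis = forecast + gain * (obs - forecast)
    analysis_var = (1.0 - gain) * forecast_var
    return analysis, analysis_var

# Model forecasts 22.0 C (variance 2.0); a station reports 20.5 C (variance 0.5).
analysis, var = assimilate(22.0, 2.0, 20.5, 0.5)
print(f"analysis = {analysis:.2f} C, variance = {var:.2f}")  # 20.80 C, 0.40
```

The analysis lands closer to the more trustworthy source, and its uncertainty is smaller than either input alone.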

Speaker 2 (54:02):
Oh wow, yeah, and did NOAA get cuts recently? I'm
going to bet they did.

Speaker 1 (54:07):
There was some talk about cuts. I don't know how
much of that is going through. It's all kind of scary.
It's hard to know.

Speaker 2 (54:13):
Yeah, okay, all right, we won't get into that. Moving on.

Speaker 1 (54:16):
But amazingly, currently we can pretty accurately predict the weather
five to six days in the future. You know,
you mostly remember when the weather prediction is wrong; you
mostly don't realize that most of the time it's right. Yeah,
you know, it tells you it's going to rain, it
tells you it's going to be sunny. It's mostly correct.
It's amazing. But you know, there are still challenges; things are
not perfect. One of the biggest challenges is just incomplete information.

(54:41):
You know, we don't have sensors in enough places, and
we don't have enough sensors, and sometimes the data availability changes,
you know, things go offline or come online. Now your
model has to compensate for that. I don't have the data:
do I assume it's similar to the past? Do I
try to ignore that kind of data? It's not easy
to be running a model if the inputs are constantly changing.

Speaker 2 (55:01):
The bucket had a hole in it, so now you
don't have good bucket data. Exactly, exactly.

Speaker 1 (55:07):
They got a new kind of bucket; you don't know
how to calibrate it. I spoke to John Martin, he's
a professor of meteorology, and he said that this might
be the biggest challenge: how to combine the data
to make a high-quality initial state. That's one challenge.
The others are these subgrid parameterizations: can we develop better
models for turbulent flow at the boundaries, or for latent

(55:28):
heat released back into the environment? And another limiting factor
is just the computing cost. More compute, more GPUs from
Nvidia, means smaller grids, which means the effect of these approximations,
these parameterizations, is smaller. Another continuing challenge is rare and
extreme events. Like, we're pretty good at predicting the bigger picture,

(55:50):
like, is it going to be sunny here, is it
going to be rainy here. But small, rare, extreme events,
like there's a tornado right here, that's more challenging, because
they depend in detail on things that happen within
the grid cell that we're averaging over. And so there's a
lot of work being done right now. One thing we're
hoping to do is, let's reduce the grid size,

(56:10):
get more computing, get more accurate. Right. But another really promising
area of research is using machine learning. There's this
movement in many fields of science to use machine learning
to make predictions by essentially skipping the physics. Like, the
physics is hard; it takes a lot of time to
push the initial conditions through these equations. In the end

(56:31):
you have an input and an output. And the idea is, well,
can we train machine learning, not a chatbot, not LLMs,
it's AI but it's not LLMs, to map the initial
conditions to the output? Because in the end it's just
a mapping, and one could learn it. And so we
have these machine learning models that are simple functions that

(56:52):
take the input and give you the output, and they
don't have the physics encoded in them, but they learn
from the simulations; they learn the patterns, they learn what
the rules are implicitly, and so you don't have to
go through all the detailed calculations. So this can dramatically
speed up your predictions. We use these at the Large Hadron
Collider all the time so that we don't have to,
for example, model every single particle that might hit the

(57:15):
detector and create another particle and another particle. We can
learn to predict the final thing we're interested in and
to sort of leapfrog over all the tiny details.
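
Here is a toy version of that learn-the-mapping idea: generate input/output pairs from a stand-in for an expensive simulation, fit a cheap surrogate, and predict with the surrogate instead. Operational weather emulators are large neural networks trained on decades of data; this only shows the shape of the approach:

```python
# Toy "skip the physics" surrogate: fit a cheap function to input/output
# pairs produced by a stand-in for an expensive simulation.
import numpy as np

def expensive_simulation(x: np.ndarray) -> np.ndarray:
    """Stand-in for a slow physics model (just a nonlinear function here)."""
    return np.sin(x) + 0.1 * x**2

# "Run the physics" once to build a training set.
x_train = np.linspace(0.0, 3.0, 50)
y_train = expensive_simulation(x_train)

# Fit the surrogate (a degree-5 polynomial in this toy case).
surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=5))

# Predictions now cost a handful of multiplications, not a full model run.
x_new = np.array([0.5, 1.5, 2.5])
print("physics:  ", expensive_simulation(x_new).round(3))
print("surrogate:", surrogate(x_new).round(3))
```

The surrogate never sees the equations; it only learns, from examples, what the equations tend to do.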

Speaker 2 (57:24):
And is machine learning being used right now for weather predictions,
or are they just starting to work on how you would
do that?

Speaker 1 (57:29):
They're using that now. It's sort of experimental. But there's
a guy here at UC Irvine, Mike Pritchard, who
is an expert in this kind of stuff, and it's
very powerful. Absolutely cool. Yeah. So I asked John Martin,
if I give you a billion dollars to improve weather predictions,
what would you do, And he said he would spend
a billion dollars on ocean probes, like he wanted a
more substantial understanding of how water is circulating in the

(57:52):
ocean, and temperature in the ocean, and how that's all working.
Because his suspicion was, we're right next to this
other big fluid that's affecting our temperature, and we don't
have nearly enough data about it; if we just knew
more about the ocean. And this just highlights how
little information we have. It's not just a question of
puzzling out the rules of the universe, but just
knowing what's happening. If we had more data everywhere

(58:15):
about temperature, pressure, about cosmic rays, we would just learn
so much about the universe. And we have so few
ways to probe. But we're really just like taking the
tiniest teaspoon out of this massive river of data and
trying to use that to understand the whole river. It's crazy.

Speaker 2 (58:32):
How good do you think weather prediction would have to
be before people stopped complaining about weather prediction?

Speaker 1 (58:37):
I asked John that question, and his prediction was, quote,
"the complaining will never stop." Amazing. I think that, you know,
weather prediction has improved a lot over the last few decades.
It used to be you couldn't get any reliable prediction
more than a day in advance. Now, five or six days out,
it's pretty reliable. But people expect that, and they get
used to it, and they're like, what, you didn't predict
the weather for my ski trip in two weeks?

(59:01):
And so yeah, the complaining will never stop, because we
always just get used to the level of technological prowess
that we've had, and so people want more. Because it's
so important and it's a hard problem. There's so much
physics here, there's instrumental science, there's so many different kinds
of science that interface with each other. It's really an
exciting field. And let me throw a special thanks to

(59:22):
Professor Jane Baldwin here at UC Irvine, who told me
a lot about weather predictions, and Professor John Martin at
Wisconsin, who answered a lot of naive questions of mine.
Thanks to both of you.

Speaker 2 (59:31):
Thank you community. All right, see you all next time.
I hope the weather is nice where you are.

Speaker 5 (59:35):
It always will be nice where I am.

Speaker 2 (59:46):
Daniel and Kelly's Extraordinary Universe is produced by iHeartRadio. We
would love to hear from you.

Speaker 1 (59:51):
We really would. We want to know what questions you
have about this Extraordinary Universe.

Speaker 2 (59:57):
We want to know your thoughts on recent shows, suggestions
for future shows. If you contact us, we will get
back to you.

Speaker 1 (01:00:04):
We really mean it. We answer every message. Email us
at questions at Danielandkelly.

Speaker 2 (01:00:10):
Dot org, or you can find us on social media.
We have accounts on X, Instagram, and Blue Sky, and on
all of those platforms you can find us at D
and K Universe.

Speaker 1 (01:00:20):
Don't be shy, write to us.