
July 22, 2025 27 mins

MIDAS: Multi-Agent Trajectory Imputation in Sports

The episode collectively addresses the critical problem of missing data in multi-agent sports trajectory analysis, a common challenge due to factors like camera occlusions and data confidentiality. "Trajectory Imputation in Multi-Agent Sports with Derivative-Accumulating Self-Ensemble" introduces MIDAS, a novel framework that accurately infers missing player movements by predicting positions, velocities, and accelerations, even with limited training data. This approach is contrasted with traditional interpolation techniques and machine learning regression algorithms explored in "FOOTBALL ANALYTICS BASED ON PLAYER TRACKING DATA USING INTERPOLATION TECHNIQUES FOR THE PREDICTION OF MISSING COORDINATES," which also aims to reconstruct incomplete sports data. "Missing Data in Time Series" provides a broader overview of time series imputation methods, categorizing them and discussing their suitability based on data characteristics. Finally, "TranSPORTmer: A Holistic Approach to Trajectory Understanding in Multi-Agent Sports" presents a unified transformer-based model that not only handles trajectory imputation and forecasting but also classifies global states within multi-agent sports scenarios, highlighting the use of attention mechanisms for capturing complex interactions.

🎧 Listen now on Spotify & Apple Podcasts! Don’t forget to subscribe, share, and leave a ⭐⭐⭐⭐⭐ review to help more players and coaches discover the power of Techne Futbol and Data Technology in the beautiful game.

#TechneAfricaFutbol #TechneFutbol #TrainWithTechne #Playermaker #TraceFutbol #Hudl #FootballTech #FootballInnovation #PlayerDevelopment #CoachTools #SmartFootballTraining #AfricanFootball #CAFOnline #FIFA #FIFAYouth #CAFDevelopment #YouthFootballAfrica #NextGenFootball #FootballScouting #TalentIdentification #DigitalScouting #EliteYouthDevelopment #FootballJourney #FootballAnalytics #FIFAForward #FutebolAfricano #TrainTrackCompete #FootballExcellence #FootballAfrica #TheFutureOfFootball



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Welcome fellow explorers of knowledge to the Deep Dive.
Today we're plunging into a world that fascinates millions.
The magic of sports. Whether you're a die-hard
fanatic or, you know, someone who just enjoys a good match,
there's this undeniable allure to watching players move,
strategies unfold, those moments of genius lighting up the field.

(00:20):
We're all trying to understand the why behind it, aren't we?
Why did that pass work? How is that player just
everywhere? What was the thinking behind
that winning play? And here's where it gets, well,
really interesting. Behind every single dazzling play,
every strategic move, every sprint, every goal,
there's this huge, often invisible amount of data.
Imagine capturing every tiny movement multiple times a

(00:41):
second. It's like having a superpowered,
microscopic view. But here's the big question for
today. What happens when that data
isn't all there? What if we only get glimpses,
fragments, missing pieces? How do we really get the full
story? That's exactly the challenge
we're diving into. This deep dive is really an
exploration into how cutting edge tech is solving this

(01:01):
problem of missing pieces in sports data.
Specifically, we're talking about those intricate, fluid
movements of players and the ball.
And these movements are incredibly dynamic, they're
constantly interacting, and honestly, they're notoriously
difficult to track perfectly in the real world.
It's not just one player's path, it's, you know, this complex
dance of 22 players in soccer, or 10 plus a ball in basketball,

(01:23):
all moving, all influencing each other constantly.
Think of it like trying to watch a really complex, fast ballet,
but the stage lights keep flickering, or a dancer just
steps off stage for a second. You'd miss crucial bits,
wouldn't you? So our mission today is to show
you how brilliant scientists and engineers are building these
really sophisticated tools to fill in those gaps.
The goal is to give us a clearer, more insightful and,

(01:45):
frankly, a much more truthful picture of the game.
And it's not just about, you know, plugging numbers.
It's about revealing a deeper understanding of the sport
itself. That sets the stage perfectly.
So let's unpack this challenge, this imperfect sports data.
The value of tracking data is huge, absolutely undeniable.
We're not talking basic stats here like goals or assists.
No, no, no. We're talking granular details,

(02:08):
positions, velocities, accelerations captured like
multiple times every single second.
This level of detail is just it's a goldmine.
It reveals hidden tactics. You can measure player
efficiency right down to their decisions in specific spots,
visualize team formations changing in real time.
It's invaluable stuff for coaches, analysts, anyone
wanting to look deeper. But, and this is the big but

(02:29):
right, this invaluable data is almost always incomplete.
And that's where the real problems kick in.
You've hit on the core issue. There are actually several
pretty common reasons why we end up with missing player and ball
data, especially if you're trying to get it from, say,
broadcast video, which is a huge source for analysis.
One of the biggest culprits is pretty straightforward: camera

(02:52):
limitations. Broadcast cameras, well, they're
designed to follow the action, right?
But that means they only capture a limited slice of the field,
the observed area, at any one time.
OK, so like a spotlight moving around?
Exactly like a spotlight. Imagine a huge football pitch or a
basketball court. The camera's panning, zooming,
trying to keep the ball or the key player in focus.

(03:13):
Anyone outside that frame, outside that moving spotlight,
their exact location is just unknown, right?
For instance, in a typical soccer broadcast, it's not
unusual for only, say, 14 out of the 22 players to be visible on
average in any given moment.
Wow, that's nearly half. Yeah, nearly half the players.
Think about all those off ball runs, the defensive shifts
happening way over on the other side.

(03:35):
All that can just vanish from the data.
And it's not just players outside the main frame. Those
frequent close-ups or instant replays,
Great for TV but they create huge data gaps because you lose
the wider view temporarily. OK, so you see the tackle up
close, but everyone else's movement at that exact second
gone. Precisely.

(03:55):
It's like a puzzle where pieces are constantly disappearing and
reappearing. It makes building a complete
picture really hard. Then, beyond the cameras,
another big hurdle is confidentiality.
Professional sports leagues, and you can understand why, often
treat detailed player tracking data as highly confidential.
Competitive advantage. Exactly.
It's a strategic asset. So those large, high-quality

(04:17):
data sets you really need to train the most sophisticated
analytical models? They're rarely, if ever, made
public. This scarcity forces researchers
and analysts to be super innovative with the limited data
they can access, which, you know, just makes the missing
data problem even tougher. And of course, you always have
technical glitches in the real world, you know, sensor failures
on wearables, transmission issues, maybe even just human

(04:40):
error during collection. It's never perfectly clean.
Sure, practical stuff. Right.
And one more subtle thing is player identification issues.
Sometimes players are so bunched up or moving so fast
together, the system might get confused.
It might see two players as one, or just fail to ID someone for a
moment. That adds another layer of missing or inaccurate info.

(05:03):
OK, so the data is fragmented for lots of reasons.
What's the real tangible impact then?
It sounds like more than just an annoyance.
Could it actually like warp our understanding?
Oh, it's absolutely far more than an annoyance.
These gaps are profoundly problematic.
They lead to discontinuous player tracks, unreliable
identification. I mean, imagine trying to
analyze a player's endurance if their path keeps breaking up.

(05:25):
Right, you can't measure distance covered properly.
Exactly. If a player vanishes off camera
and reappears, their track is fragmented.
This makes analyzing those crucial off ball actions, the
runs creating space, the tactical shifts that happened
over several seconds when a player might be out of frame
incredibly difficult, and those are often the hidden keys to

(05:45):
success. Ultimately, these breaks make it
almost impossible to get reliable granular insights for
tactical analysis or proper player performance evaluation,
or even developing subtle game strategies.
You end up with what analysts call undesirable results due to
data complexity. Undesirable results, meaning
just wrong conclusions? Pretty much. Take something like

(06:06):
creating a pitch control map. That visual showing which team
controls which parts of the field.
Vital for understanding tactics, but if you're missing half the
players' positions? Your map is basically fiction.
Fundamentally flawed, yeah. It might show players offside
when they weren't, or wildly misjudge pass probabilities in
key areas. If you can't track the full
path, how do you truly assess total distance sprinted, or

(06:29):
contribution to a press, or their role in setting up a goal?
It's like trying to draw a detailed picture while someone
keeps erasing crucial lines. The insights just aren't
reliable. That makes incredible sense.
It's not just filling a blank. The filled in bit has to make
physical sense, tactical sense within the whole game.
So OK, the obvious next question, how do we start fixing
this? Researchers didn't just jump to

(06:50):
the fanciest AI, right? They started with more
traditional methods first, building a foundation.
It's exactly right. Before deep learning became so
dominant, and actually still alongside it, researchers relied
on a fundamental toolkit of interpolation techniques.
These are basically mathematicalways to estimate unknown values
that fall between known data points.

(07:11):
The first line of defense, you could say the simplest one, the
one most people kind of get intuitively, is linear
interpolation. You could literally picture it
draw a perfectly straight line between two known points to
guess the values in between. Point A to point B straight line
constant speed. Precisely. If you know where a
player was at time T1 and then at time T2.

(07:33):
Linear interpolation assumes they moved perfectly straight at
constant speed. It's fundamental.
Super easy computationally, but maybe not the most realistic for
an athlete's fluid movement. Right, yeah. Players change
directions, speed up, slow down. Exactly.
Linear interpolation just can't capture those sudden shifts very
well, so moving up in sophistication you get spline
interpolation. Things like cubic splines,

(07:54):
natural splines. Instead of one stiff straight line, splines use
what are called piecewise polynomials.
Piecewise polynomial. OK, break that down.
Basically, imagine fitting lots of smaller, low-degree curves
smoothly together. Instead of one big rigid curve,
you get a series of connected curves that flow more naturally.
It gives you a much smoother, less jagged estimate of the

(08:15):
player's path. More like real movement.
Hopefully, yeah. And a natural spline
specifically is designed to behave nicely at the ends of the
data, so it doesn't suddenly predict the player flying off in
some weird direction. It aims for smooth and accurate
through the known points. Then there's one called Stineman
interpolation, which is pretty interesting and tries to combine
the best bits of other methods. It uses something called piecewise

(08:37):
rational interpolation. Sounds complex.
It does, but the key thing for us is it's designed to preserve
smoothness and what's called monotonicity.
In simple terms, if the original data shows smooth changes, like
a player steadily speeding up, the interpolated data will also
show smooth changes. It avoids creating weird
unnatural wiggles or sudden impossible jumps in direction,

(09:00):
even if the input data changes just slightly.
That focus on physical realism is a big plus for sports data.
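The ladder of methods described here, straight lines, cubic splines, and a shape-preserving interpolant, can be sketched in a few lines of Python. This is an illustrative sketch, not code from the episode's sources, and since SciPy has no Stineman routine, its PchipInterpolator stands in as another monotonicity-preserving interpolant; the sample trajectory is invented:

```python
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator

# Observed 1-D positions of one player; the frames between t=2 and t=7 are missing.
t_obs = np.array([0.0, 1.0, 2.0, 7.0, 8.0, 9.0])
x_obs = np.array([0.0, 1.2, 2.1, 6.0, 6.4, 6.5])

t_gap = np.linspace(2.0, 7.0, 11)  # timestamps to fill

# 1) Linear: a straight line at constant speed between known points.
x_lin = np.interp(t_gap, t_obs, x_obs)

# 2) Cubic spline: smooth piecewise polynomials through the points.
x_spl = CubicSpline(t_obs, x_obs)(t_gap)

# 3) Shape-preserving (PCHIP): like Stineman, avoids unnatural wiggles.
x_mono = PchipInterpolator(t_obs, x_obs)(t_gap)

print(x_lin[5], x_spl[5], x_mono[5])  # three estimates at t = 4.5
```

The linear estimate at t = 4.5 is exactly the midpoint-style blend of the surrounding points, while the spline and PCHIP versions bend the path to flow through the neighbors more naturally.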
OK, so we start by drawing these smarter, smoother lines.
But movement isn't just individual paths, is it?
It's about patterns, how players relate, how past actions affect
future ones. That sounds like where

(09:20):
regression and time series models come in, predicting based
on other stuff. Exactly.
That's the next step up. We move to methods that try to
capture more complex relationships.
Regression models are powerful here.
Instead of just connecting known dots, they try to predict
missing values based on relationships with other known
features, or covariates, in the data.

(09:43):
Like what kind of features? Well, for predicting player
coordinates, common things used are maybe the player's position
just before or after the gap, the exact time stamp, how far
they are from the ball, maybe even how far they traveled in
the last few frames. That gives you a clue about
their speed. And within regression, people
have tried some pretty sophisticated algorithms.
Random forest regression for example.
That's an ensemble method. Don't think of it as one

(10:05):
model, but like a big committee of decision trees.
Lots of trees voting. Kind of, yeah.
Each tree makes its own prediction based on the
features, and then they all vote or average their predictions.
The idea is the combined result is more accurate and stable than
any single tree, like getting a consensus from diverse experts.
Another really powerful one is Extreme gradient boosting,

(10:25):
usually called XGBoost regression.
It's also a boosting algorithm, but works differently.
Instead of just voting, XGBoost adds new trees iteratively,
specifically designed to correct the mistakes the previous trees
made. So it learns from its errors.
Exactly. It's a refinement process,
constantly adding components to get better and better.
It builds a strong predictor by combining lots of weaker ones in

(10:48):
a clever sequence. And then there's K nearest
neighbors, KNN, regression. This one's simpler conceptually,
but often surprisingly effective. To guess a missing value,
it looks at the whole data set and finds the most similar
observed data points. Similarity could be based on
time, location, proximity to others.
Finds the closest examples we do have data for.
Right, and then it just averages their values to make the

(11:10):
prediction. What's neat about KNN for
trajectories is it inherently considers time because neighbors
close in time will likely be similar, so it's naturally
suited for movement sequences. And given we're talking about
movement over time, time series models are also a very logical
fit. A classic example is ARIMA that
stands for Autoregressive Integrated Moving Average.

(11:31):
Right, I've heard of ARIMA. Yeah, these models specifically
look at a player's past coordinates to predict future or
missing ones. They often use differencing to
handle underlying trends in the movement.
They're built to capture those temporal dependencies.
How what happened a second ago influences now. OK.
That's a whole toolbox. Lines, curves, voting trees,
error correcting trees, nearest neighbors, time series.
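The KNN piece of that toolbox is simple enough to sketch without any machine learning library. A toy, numpy-only illustration of the concept; the helper name and sample data are invented for the example, with time proximity as the similarity measure:

```python
import numpy as np

def knn_impute(t_missing, t_obs, xy_obs, k=3):
    """Estimate a position at t_missing by averaging the k observed
    frames closest in time -- the KNN-regression idea in miniature."""
    d = np.abs(t_obs - t_missing)       # similarity = time proximity
    idx = np.argsort(d)[:k]             # indices of the k nearest frames
    return xy_obs[idx].mean(axis=0)     # average their positions

# Observed timestamps and (x, y) positions; frames around t=4.5 are missing.
t_obs = np.array([0.0, 1.0, 2.0, 7.0, 8.0])
xy_obs = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 1.0],
                   [7.0, 3.5], [8.0, 4.0]])

print(knn_impute(4.5, t_obs, xy_obs, k=2))  # blends the frames at t=2 and t=7
```

Because the neighbors are chosen by time proximity, the estimate naturally respects the sequence, which is exactly why KNN suits movement data.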

(11:53):
How do you actually know which one is best?
You need solid ways to measure performance, right?
What metrics do they use and what did they find out?
Great question. Yeah, you absolutely need
evaluation metrics to compare these properly.
A key one is position error, PE, often reported as root mean
square error, RMSE. That's straightforward.
It measures in distance units like meters or feet how far off

(12:17):
the predicted positions are from the actual true positions, which
you know in a test scenario where you artificially remove
data. Lower RMSE is better, closer
prediction. Exactly.
Smaller is better. Then there's mean absolute
percentage error MAPE. This gives you the error as a
percentage of the actual value, which helps compare performance
across different scales of movement.

(12:37):
Right. And really important for sports is assessing physical
plausibility. Does the filled in movement look
realistic? For that, step change error, SCE,
is used. It looks at things like velocity
variance to see if the path involves impossible
accelerations or direction changes.
Making sure the player doesn't suddenly teleport or turn on a
dime unrealistically. Precisely.
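The accuracy metrics mentioned here are easy to make concrete. A small sketch with made-up numbers; the sources describe SCE via velocity variance, so the peak implied speed below is just a simpler stand-in for the same plausibility idea:

```python
import numpy as np

def rmse(pred, true):
    # Position error in the data's distance units (e.g. meters).
    return float(np.sqrt(np.mean((pred - true) ** 2)))

def mape(pred, true):
    # Error as a percentage of the actual values (true must be nonzero).
    return float(np.mean(np.abs((pred - true) / true)) * 100)

def max_speed(path, fps=10):
    # Physical-plausibility check: the peak speed implied by the path.
    v = np.linalg.norm(np.diff(path, axis=0), axis=1) * fps
    return float(v.max())

true = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
pred = np.array([[0.0, 0.0], [1.1, 0.9], [2.0, 2.1]])
print(rmse(pred, true), max_speed(pred))
```

If `max_speed` comes back far above what any human can run, the imputed path has "teleported" somewhere, even when its RMSE looks respectable.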

(12:57):
We also look at things like Pearson correlation, how well do
the imputed values track the actual values statistically, and
sometimes Cohen's d, which measures similarity between
distributions. And here's where things got,
well, genuinely surprising. You might assume the complex
fancy deep learning regression models would blow everything
else away, right? But for predicting player

(13:19):
coordinates, some of the simpler interpolation methods,
specifically Stineman interpolation and also KNN
regression, often actually outperform things like Random
Forest and XGBoost in pure accuracy.
Lowest RMSE and MAPE. Really? The simpler ones won.
In many tests, yes. It suggests that for this
specific problem, filling gaps in sequential movement,

(13:40):
approaches that inherently focus on temporal similarity and
smooth natural flow are incredibly effective.
It's a fantastic reminder that sometimes simpler tools, chosen
well for the data's nature, can beat brute force complexity.
More complex isn't always better, especially when time
sequence is key. That is fascinating.
So sometimes understanding the nature of the problem and
picking a tool that respects it is more critical than just raw

(14:02):
power. Great lesson. But OK,
a game isn't just smooth individual paths, it's this
dynamic multi-agent chaos, right?
Everyone interacting. Surely that complexity needs
something more, A new breed of AI.
This must be where modern deep learning really comes into its
own. Absolutely.
While those traditional methods are surprisingly good, even

(14:23):
great for individual paths, they do struggle with the true messy
reality of multi-agent sports. You've got, you know, 22 players
and a ball, or 10 players and a ball, all interacting, affecting each
other, constrained by biomechanics, speed limits,
acceleration limits. You have the system
dynamics. Exactly.
And that's precisely where advanced AI, especially modern
deep learning architectures, offers a real paradigm shift.

(14:44):
One really groundbreaking development here is a framework
called MIDAS. That stands for Multi-agent Imputer with
Derivative-Accumulating Self-Ensemble.
Big name, but the goal is clear: high accuracy and that crucial
physical plausibility. OK, MIDAS.
So it aims for realistic movement, not just filled gaps.
Right, the imputed trajectory shouldn't just mathematically

(15:05):
connect points. They need to look and behave
like real human movement, respecting physical limits, and
its core innovation is this derivative accumulating self
ensemble. Let's unpack that because it's
clever. Most models just predict
position. MIDAS predicts position and
velocity and acceleration all together.
Position, speed and change in speed.

(15:27):
Why all three? Because player motion is
governed by those things, knowing not just where they are,
but how fast they're going and how quickly they're changing
speed or direction lets the model generate vastly more
realistic paths. Then it refines these
predictions by recursively accumulating the predicted
velocity and acceleration from the nearest observed points,
both forward and backward in time from the gap.

(15:50):
This helps explicitly enforce physical consistency.
So it looks ahead and behind at the real data to make sure the
filled in part connects smoothly and realistically in terms of
physics. Exactly.
And finally, it combines these different predictions, the
initial guess, the forward refinement, the backward
refinement, using a learnable weighted ensemble.
Basically, the model learns the best way to blend these

(16:10):
different estimates to get the most accurate and physically
plausible final result. OK wow.
So it's not just guessing a point, it's guessing the point,
the speed, the acceleration, and then double checking against the
real data before and after. That sounds incredibly thorough
for ensuring physical sense. It really is.
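That forward-and-backward accumulation can be caricatured in a few lines. To be clear, this is a hand-rolled toy, not the MIDAS architecture: it dead-reckons forward from the last observed frame, backward from the next one, and blends the two with fixed sliding weights, where MIDAS learns its ensemble weights:

```python
import numpy as np

def impute_gap(p_before, v_before, p_after, v_after, dt, n):
    """Toy derivative-accumulating imputation: extrapolate forward
    from the last observed frame and backward from the next one,
    then blend with weights that slide across the gap."""
    steps = np.arange(1, n + 1)
    fwd = p_before + v_before * steps[:, None] * dt            # forward dead-reckoning
    bwd = p_after - v_after * (n + 1 - steps)[:, None] * dt    # backward dead-reckoning
    w = steps[:, None] / (n + 1)                               # 0 -> 1 across the gap
    return (1 - w) * fwd + w * bwd

# Invented example: last seen at (0, 0) moving (1, 0.5) m/s; next seen
# at (5, 2) moving (1, 0) m/s; fill 4 missing frames at dt = 1 s.
filled = impute_gap(np.array([0.0, 0.0]), np.array([1.0, 0.5]),
                    np.array([5.0, 2.0]), np.array([1.0, 0.0]),
                    dt=1.0, n=4)
print(filled)
```

Near the start of the gap the forward extrapolation dominates, near the end the backward one does, so the filled path connects to the real data consistently at both edges.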
This focus on derivatives and physical constraints is a game

(16:31):
changer. It ensures the imputed paths are
plausible, capturing those finer dynamics of motion.
And crucially, it helps stop errors compounding over time,
which can happen in simpler models, small errors building up
into big unrealistic deviations. Another huge plus for MIDAS is
its data efficiency. Remember we said pro sports data
is often scarce. MIDAS performs exceptionally

(16:53):
well even in limited data settings.
That's vital if you don't have tons of training games.
Absolutely. It means you can get better
insights even without mountains of data, which is often the
reality. In tests, MIDAS outperformed
other top methods significantly when trained on less data. And
MIDAS tackles the multi-agent interaction problem using
advanced components like Set Transformers and Bi-LSTMs.

(17:16):
Set Transformers are great at handling permutation
equivariance. That term again remind me.
Yeah, it sounds complex. It just means the model
understands that the order players are listed in the data
doesn't matter, but their relationships and interactions
do. It gets that player A near
player B is about their spatial relationship, not their ID
numbers. Got it.
Interactions matter, not labels. Right, and Bi-LSTMs help

(17:39):
capture the sequence, how past movements influence future ones
in a continuous flow. This sounds incredibly powerful,
especially the data efficiency part.
So what's the real world pay off?
How does having this super accurate, physically plausible,
imputed data actually change things for analysts or coaches?
The impact is huge. It enables much more reliable

(17:59):
downstream tasks, the analysis people actually care about.
For instance, coaches can now accurately calculate total
distance covered, differentiate high-intensity sprints from
jogging, assess fatigue much better. It helps manage players,
optimize training. Better workload management.
Definitely. It also revolutionizes
understanding game context. You can get way more precise

(18:21):
estimates of pass success probability.
Imagine an attack where players briefly go out of view.
Now you can understand the true risk reward of a pass in that
moment. Seeing the full picture of the
options. Exactly, And for tactical
analysis it's a massive leap. You generate highly accurate
pitch control maps showing who controls what space dynamically.

(18:41):
You get reliable player heat maps showing preferred zones.
You can reconstruct complete smooth player paths revealing
those crucial off ball runs. Without this accuracy, those
maps could be wildly misleading, showing players offside,
misjudging passing lanes. MIDAS provides maps that more
accurately reflect the true game situation.
Real tactical insights become possible.
And beyond individual players, team strategy.

(19:04):
Profoundly. It enhances that too. You can create precise Voronoi
diagrams, visualizing the space each player controls, revealing
gaps or overloads. Calculate the average formation
line, AFL, how high or low the team is playing collectively.
Measure team compactness, distances between players,
understanding defensive shape, attacking spread.
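Two of those team-level measures are straightforward once trajectories are complete. A minimal sketch with invented coordinates; treating the average formation line as the mean position along the pitch's length is a simplification of how analysts define it:

```python
import numpy as np

def compactness(xy):
    # Team compactness: average pairwise distance between players.
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    n = len(xy)
    return float(d.sum() / (n * (n - 1)))  # sum counts each pair twice

def formation_line(xy):
    # Simplified AFL: mean position along the pitch's length (x-axis).
    return float(xy[:, 0].mean())

# Invented positions for a 4-player unit, in meters.
team = np.array([[10.0, 20.0], [10.0, 40.0], [20.0, 30.0], [30.0, 30.0]])
print(compactness(team), formation_line(team))
```

With imputed, gap-free positions, these numbers can be tracked frame by frame; with patchy data, a single missing defender would silently shift both of them.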
So you can see the team's structure much more clearly,

(19:26):
even with missing data points. Exactly.
Interpolated data gives a truer, more consistent picture of team
strategy compared to patchy data.
You can see precisely how a team's defensive shape held up
under pressure, second by second.
That's actionable intelligence for coaches.
Okay, MIDAS is a huge step for filling gaps with physical
realism, unlocking tactical clarity.

(19:46):
But you also mentioned TranSPORTmer.
What's its angle? Is it doing something different
or building on these ideas? Excellent question.
TranSPORTmer offers a unique, maybe even more holistic
approach. While MIDAS is laser-focused on
top-tier imputation, TranSPORTmer is framed as a unified
transformer-based system that handles multiple analytical
tasks at once. Multiple tasks, like what?
So a single TranSPORTmer model can not only impute missing data

(20:10):
but also predict future movements, forecasting for
players and the ball. It can infer the status of
unseen agents, like guessing the ball's position even if it's
hidden, or knowing if a player is out of play.
And it can classify global game states like telling if it's a
pass, possession, uncontrolled or out of play.
Whoa, so one model does imputation, forecasting,

(20:31):
inference, and game state classification.
Exactly. It eliminates the need for
separate models for each task, streamlines the whole process,
gives a more cohesive view. How? Well, its architecture uses
set attention blocks, SABs. These are key for capturing both
individual and global player dynamics over time and those
complex social interactions between players, all while
respecting that permutation equivariance, understanding

(20:53):
relationships matter, not IDs. OK, so it handles time and
interaction smartly. Yes, and it uses clever masking.
An input mask tells it what's missing or what tasks to do, but
crucially it also has a learnable uncertainty mask.
This is really interesting. It helps the model understand
and account for how uncertain itis about its own predictions or

(21:15):
the hidden values around a gap. So it knows when it's less sure.
Kind of, yes. It factors that uncertainty in
leading to more robust overall predictions.
And it refines its understanding using coarse-to-fine encoders,
processing information in layers, moving from a general
overview to fine details. So it's not just about what's
missing, but the context, the purpose, even its own confidence

(21:35):
level that sounds deeply integrated.
It is. This holistic approach gives TranSPORTmer a real edge.
Reports show it outperforms state-of-the-art task-specific
models in many areas, forecasting, imputation,
especially ball imputation, and inference, finding the ball
when unseen. And by improving game state classification, it

(21:56):
boosts the accuracy of trajectory modeling.
Overall, it's about getting a really comprehensive,
semantically rich view of the game.
Imagine not just filling a players run, but understanding
why that run happened in the context of a potential pass.
That's the level. This is all incredibly
impressive stuff. These AI models might us
transport Mer with their set Transformers by LSTM's.

(22:18):
They're clearly doing some seriously complex work to give
us these insights. But you know, you hear AI, deep
learning, neural Nets. It can still sound a bit like
magic, maybe a black box. How do they actually learn?
How do they adjust and get better?
What's the engine underneath making the learning happen?
That's a fantastic fundamental question and it brings us right
to the core technology enabling pretty much all modern AI and

(22:38):
deep learning: automatic differentiation, or autodiff.
Autodiff, OK. At the most basic level, how
these models learn is through a process called gradient descent.
Think of it as an optimization algorithm.
The model makes a prediction, compares it to the truth,
calculates an error, how wrong it was.
OK, minimizing the error.
Exactly. Gradient descent is the method

(22:59):
the model uses to systematically change its internal weights or
parameters, taking tiny steps to reduce that error.
The classic analogy is being on a foggy mountain, wanting to get
to the lowest point. You can only see your feet, so
you feel the slope where you areand take a small step in the
steepest downhill direction. Repeat, repeat, repeat.

(23:19):
That's gradient descent in a nutshell, but in a very high
dimensional mathematical space.
Finding the bottom of the error valley, step by step. Precisely.
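That foggy-mountain loop is only a few lines of code. A minimal sketch on a one-dimensional error surface; the function and learning rate are invented for illustration:

```python
# Gradient descent on a simple bowl-shaped error surface,
# E(w) = (w - 3)^2: feel the slope, step downhill, repeat.
def gradient(w):
    return 2 * (w - 3)        # derivative of (w - 3)^2

w = 0.0                       # start somewhere on the foggy mountain
lr = 0.1                      # step size (learning rate)
for _ in range(100):
    w -= lr * gradient(w)     # small step in the steepest downhill direction

print(w)                      # converges toward the minimum at w = 3
```

Real networks do exactly this, just with millions of parameters at once, which is why they need the gradients computed efficiently.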
Now, for gradient descent to work in these huge networks with
maybe millions of parameters, the model needs to know the
steepness or slope of the error with respect to each of those
parameters. Those slopes are the derivatives
or gradients, and that's where the brilliant algorithm

(23:41):
backpropagation comes in. Backpropagation finds the
slopes. Yes, it's the incredibly
efficient method used to find all those gradients for every
single parameter in the network. It's often called backward
propagation of errors because it calculates the error at the very
end, the output, and then cleverly distributes that error
backward through all the network's layers.

(24:03):
This tells each parameter how much it contributed to the error
and thus how it should adjust. OK, so backprop figures out the
slopes. Gradient descent takes the
steps. But how does backprop find those
slopes for these monstrously complex functions?
That still feels like the tricky part.
And that is exactly where the real powerhouse automatic
differentiation comes in. Autodiff isn't like symbolic

(24:25):
differentiation where you get a neat formula, or numerical
differentiation, which uses approximations and
can have errors. Autodiff is a precise technique
to numerically evaluate the derivative of a function
specified by a computer program. It gets the exact derivative
numerically. How does it do that
automatically and exactly? The clever trick is realizing
that any computer program, no matter how complex, boils down

(24:46):
to a sequence of elementary operations.
Addition, multiplication, basic functions like log, sine,
exponential. Autodiff exploits this.
It applies the chain rule from calculus, which tells you how to
differentiate composite functions repeatedly to these
tiny elementary operations as the program runs.
Breaks the big problem down into tiny calculus steps.

(25:07):
Exactly. It decomposes the complex
calculation and applies the chain rule piece by piece.
The specific version used heavily in frameworks like
TensorFlow and PyTorch is reverse mode automatic
differentiation. This is super efficient for deep
learning because it calculates all the gradients with respect
to one output, the error, in a single backward pass.
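The cache-then-chain-rule idea can be shown with a toy value class. This is not how TensorFlow or PyTorch implement it internally, but it captures the mechanics: each elementary operation records its inputs and local derivatives on the forward pass, and the backward pass applies the chain rule through them:

```python
class Value:
    """Minimal reverse-mode autodiff: record elementary operations on
    the forward pass, then apply the chain rule backward."""
    def __init__(self, data, parents=(), grads=()):
        self.data, self.grad = data, 0.0
        self._parents, self._local = parents, grads  # cached for backward

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        # d(a+b)/da = 1, d(a+b)/db = 1
        return Value(self.data + other.data, (self, other), (1.0, 1.0))

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        # d(a*b)/da = b, d(a*b)/db = a
        return Value(self.data * other.data, (self, other),
                     (other.data, self.data))

    def backward(self, seed=1.0):
        self.grad += seed
        for p, local in zip(self._parents, self._local):
            p.backward(seed * local)  # chain rule, piece by piece

x = Value(2.0)
y = Value(5.0)
z = x * y + x          # z = x*y + x, so dz/dx = y + 1 = 6, dz/dy = x = 2
z.backward()
print(x.grad, y.grad)  # 6.0 2.0
```

Production frameworks build an explicit graph and schedule one backward sweep rather than recursing like this, but the exact-derivative result is the same.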

(25:27):
It does this by caching operations on the forward pass,
remembering intermediate values as data flows through, and then
using the known simple derivatives of the basic
operations to compute the overall gradient by working
backward. Wow, so it's not estimating,
it's calculating the exact derivative, just in a really
smart, automated, efficient way. That truly sounds like the
foundation. It absolutely is.

(25:49):
It's an analytical derivative, just computed numerically and
automatically. The significance of autodiff?
It's massive. It's the core enabling
technology for deep learning. Frameworks might change,
architectures evolve, but autodiff is here to stay.
Many people basically call frameworks like TensorFlow and
PyTorch automatic differentiation software plus GPU acceleration.

(26:10):
That's the heart of it. It's the invisible engine
driving all the learning and precision we've discussed, from
filling player paths to classifying game states.
Wow, OK, what an incredible journey.
Today we went from, you know, fragmented, almost invisible
sports data to these incredibly precise, physically plausible,
deeply insightful views of the game.

(26:30):
We saw how traditional methods lay groundwork, how AI like
MIDAS and TranSPORTmer push boundaries on understanding
interactions. And then right at the core, this
fundamental brilliance of automatic differentiation,
making the learning actually happen.
It really feels like bringing the unseen tactical threads of
the game right out into the open.
It truly is. The implications are profound.
Having accurate continuous trajectory data, whether it's

(26:53):
completed meticulously by MIDAS or analyzed holistically by
TranSPORTmer, just transforms our understanding.
We're moving so far beyond basic stats to this granular, real-time
appreciation of tactics, player contribution, game flow.
And like you said, it's not just filling dots, it's revealing
those invisible, intricate threads weaving through every
single game. It lets coaches make better

(27:15):
decisions, analysts find deeper truths, and even fans like us
see the sport with honestly unprecedented clarity and depth.
It elevates how we appreciate athletic performance and
strategic thinking. And it really leaves you with a
powerful thought, doesn't it? If we can now accurately
reconstruct every hidden step ona football pitch, understanding
the unseen dynamics of that complex system, what other

(27:36):
complex, dynamic systems could this potent mix of data science,
AI, and fundamental math unlock? Imagine applying this to city
traffic, autonomous vehicles, robotic swarms, maybe even
complex biological interactions. The potential for understanding
and optimizing these hidden patterns seems almost, well,
limitless.