
December 29, 2025 • 23 mins
A comprehensive educational resource for understanding foundational machine learning concepts. The text introduces readers to the principles and applications of machine learning, categorizing different learning approaches such as supervised, unsupervised, and reinforcement learning. It then explores various algorithms, including linear and logistic regression, Support Vector Machines, neural networks, and decision trees, providing detailed explanations and practical Python code examples. Furthermore, the material addresses crucial topics like overfitting, regularization, and the feasibility of learning, emphasizing the challenges and ethical considerations within the field. Overall, it functions as a structured guide for building and analyzing predictive models, complete with information on the author, publication details, and distribution.

You can listen and download our episodes for free on more than 10 different platforms:
https://linktr.ee/cyber_security_summary

Get the Book now from Amazon:
https://www.amazon.com/Building-Machine-Learning-Systems-Python-ebook/dp/B094D57T82?&linkCode=ll1&tag=cvthunderx-20&linkId=59afdcea3169cd990bef00b3d436c41e&language=en_US&ref_=as_li_ss_tl

Discover our free courses in tech and cybersecurity, Start learning today:
https://linktr.ee/cybercode_academy

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Have you ever wondered how your phone seems to, I
don't know, finish your sentences, or how that streaming service
uncannily suggests your next binge-worthy show?

Speaker 2 (00:11):
Right, it often feels like some kind of personalized magic.

Speaker 1 (00:14):
Exactly, but it's actually this unseen intelligence that powers so
much of our digital world.

Speaker 2 (00:20):
And that's just it. This magic, well, it isn't some
wizard pulling levers behind a curtain. It's sophisticated algorithms, constantly
learning and adapting.

Speaker 1 (00:28):
That's where we're headed today. We're taking a deep dive
into that very world: machine learning, or ML. It's the
engine behind all that digital intelligence, and honestly, it's far
more pervasive than you might realize. Absolutely. So we've stacked
up some fascinating insights from Building Machine Learning Systems Using
Python and some other related.

Speaker 2 (00:48):
Sources you've shared, Yeah, some really good stuff in there.

Speaker 1 (00:51):
Our mission today is really to unpack what machine learning
truly is, maybe explore its surprising origins.

Speaker 2 (00:57):
Which are quite surprising.

Speaker 1 (00:59):
Yeah, its huge impact on our daily lives, and maybe
most importantly, shine a light on the crucial challenges it faces,
especially that often overlooked issue of bias and fairness.

Speaker 2 (01:10):
That's a big one, definitely.

Speaker 1 (01:11):
So get ready for some hopefully genuine aha moments.

Speaker 2 (01:16):
You know, understanding ML isn't just for tech enthusiasts anymore,
is it? It's becoming like an essential literacy for anyone just
navigating our digital landscape.

Speaker 1 (01:24):
Totally agree. Okay, let's unpack this then. So what exactly
is machine learning at its core?

Speaker 2 (01:29):
Well, our sources define ML pretty clearly as the ability
of a system to learn automatically through experience without being
explicitly programmed for every single step.

Speaker 1 (01:40):
Right, So, instead of a programmer writing rules for everything,
the system learns the rules itself. Exactly.

Speaker 2 (01:45):
Imagine the sheer scale of problems we can tackle when
software isn't limited by human programmers defining every possibility. It
just teaches itself, adapts.

Speaker 1 (01:53):
Tackling everything from, what, medical diagnostics to climate modeling.

Speaker 2 (01:57):
You got it. That's the real power behind the definition.
It's essentially building its own rule book just by looking
at the data, teaching itself how things work.

Speaker 1 (02:05):
That self teaching idea is key.

Speaker 2 (02:07):
Yeah, and the concept isn't entirely new either. The term
machine learning itself, that was actually coined way back in
nineteen fifty nine.

Speaker 1 (02:14):
Nineteen fifty nine, Wow.

Speaker 2 (02:16):
Yeah, by Arthur Samuel. He was an American scientist, an
expert in computer gaming and AI. He really laid the
groundwork for this idea of computers learning without explicit step
by step instructions, and then it got.

Speaker 1 (02:28):
A more, let's say, formal definition later on.

Speaker 2 (02:31):
It did. In nineteen ninety seven, Tom Mitchell put it
really well. He said, a computer program is said to
learn from experience E with respect to some task T
and some performance measure P if its performance on T,
as measured by P, improves with experience E.

Speaker 1 (02:46):
Okay, that's a bit dense, but let's break it down.
Experience E is like more data, more practice. Exactly.

Speaker 2 (02:53):
Task T is what it's trying to do, like recognize
faces or predict traffic.

Speaker 1 (02:57):
And performance measure P is how well it's doing that task.

Speaker 2 (03:00):
Yeah, precisely. So if it gets better at the task
(P improves) the more data or practice it gets (E increases),
then it's.

Speaker 1 (03:09):
Learning. Kind of like a child learning to identify animals
from pictures, right? They get better with each new example.

Speaker 2 (03:15):
That's a perfect analogy, simple, but it captures the essence.

Speaker 1 (03:19):
Okay, so this brings up the history. How did we
get from these early ideas to where we are now?
You said nineteen fifty nine.

Speaker 2 (03:26):
Well, the history has surprisingly deep roots if you go
back even further, to the nineteen forties, with the invention
of the first big electronic computers like the ENIAC. Right,
the initial idea was already kind of there, this dream
of building machines that could mimic human learning and thinking.
It was very early days, of course.

Speaker 1 (03:44):
Incredible to think about that long ago. What were the
first real, maybe, sparks of this? Where did it start
to click?

Speaker 2 (03:52):
Well, a significant step was in the nineteen fifties, when we
saw Frank Rosenblatt's invention of the perceptron.

Speaker 1 (03:57):
The perceptron? What was that?

Speaker 2 (03:58):
It was a very simple type of classifier. Think of
it as an early, very basic precursor to the neural
networks we talk about today. A crucial first step.

Speaker 1 (04:05):
Okay, and then things really took.

Speaker 2 (04:07):
Off? Later, definitely. The nineteen nineties were when machine learning
truly started hitting the mainstream.

Speaker 1 (04:13):
Why then, specifically?

Speaker 2 (04:14):
Couple of things came together. These probabilistic approaches in AI,
basically using statistics to handle uncertainty and make predictions, started
merging really effectively. With computer science. And crucially, this happened
just as we started getting access to much larger amounts
of data. Suddenly you had the methods and the fuel

(04:36):
the data to build systems that could actually learn from
vast amounts of information.

Speaker 1 (04:40):
And computers were getting more powerful too.

Speaker 2 (04:42):
Mm, absolutely, that was essential. And then there was a
big public moment. Ah, Deep Blue. Exactly. IBM's Deep Blue
chess computer beating world chess champion Garry Kasparov. That was huge.
It really captured the public imagination and showed what was
becoming possible.

Speaker 1 (04:58):
Yeah, I remember that. It shifted from just academic papers
into something real, something that could beat the best human
minds at a complex task.

Speaker 2 (05:06):
Precisely, it was a landmark.

Speaker 1 (05:08):
So okay, we know what it is roughly and a
bit about its history. But you mentioned it's not a
one size fits all thing. There are different flavors.

Speaker 2 (05:16):
Of learning, that's right, and understanding these different types is
key to seeing how it's applied everywhere.

Speaker 1 (05:22):
Right. Let's do a quick tour, then. First up is
supervised learning. What's the deal there?

Speaker 2 (05:28):
Think of supervised learning as, well, learning with a teacher,
or like having the answer key. The system gets fed example
data that's

Speaker 1 (05:36):
Already labeled labeled house.

Speaker 2 (05:37):
So like historical traffic data paired with the actual congestion
outcomes that happened, or pictures of cats labeled cat and
dogs labeled dog. The system learns the relationship between the
input and the known correct.

Speaker 1 (05:50):
Output. Ah, okay. So it uses those examples to learn
how to predict the outcome for new, unseen data, like
predicting tomorrow's traffic based on past patterns. Exactly.

Speaker 2 (06:00):
It learns a mapping from input to output. The supervision
comes from those correct labels in the training data.

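As a quick illustrative sketch of that mapping in Python, not something from the episode itself: this assumes scikit-learn is installed, and the traffic numbers are invented purely for illustration.

from sklearn.linear_model import LogisticRegression

# Labeled training data (invented): [hour_of_day, is_weekend] -> congested (1) or not (0).
X_train = [[8, 0], [9, 0], [13, 1], [18, 0], [23, 1], [7, 0]]
y_train = [1, 1, 0, 1, 0, 1]

# The "supervision" is the known correct labels in y_train.
model = LogisticRegression().fit(X_train, y_train)

# Predict the outcome for new, unseen input: 5 p.m. on a weekday.
print(model.predict([[17, 0]]))
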
Speaker 1 (06:06):
Got it? So what's next?

Speaker 2 (06:08):
Then you have unsupervised learning, and this is more like
learning without a teacher. There's no answer key provided.

Speaker 1 (06:13):
So what does it do? Then?

Speaker 2 (06:16):
Here the system analyzes data without any associated target responses
or labels. Its goal isn't really to predict a specific output,
but more to find hidden patterns, structures, or to segment
the data into similar groups.

Speaker 1 (06:29):
Can you give an example?

Speaker 2 (06:30):
Sure, think about grouping customers based on their purchasing habits.
With unsupervised learning, you wouldn't tell the system beforehand find
groups A, B and C. You just give it the
purchase data and it figures out that maybe there are
distinct clusters of customers who buy similar things. It discovers
the structure itself.

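The customer-grouping idea can be sketched in a few lines of Python with scikit-learn's KMeans; our choice of algorithm, and the purchase figures are invented for illustration.

from sklearn.cluster import KMeans

# Unlabeled data (invented): [annual_spend, visits_per_month] for each customer.
purchases = [[120, 2], [135, 3], [900, 10], [880, 12], [60, 1], [950, 11]]

# No labels are provided; KMeans discovers the two groups on its own.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(purchases)
print(kmeans.labels_)  # cluster assignment for each customer
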
Speaker 1 (06:47):
Okay, so it's finding patterns we might not have even
known were there. Interesting.

Speaker 2 (06:52):
And the last one, the third main type, is reinforcement learning.
This one is a bit different. Again, It's somewhat similar
to unsupervised in that it often doesn't have explicit labels
for every piece of data, but it learns by interacting
with an environment and receiving feedback in the form of
rewards or penalties for its actions.

Speaker 1 (07:09):
Ah, like training a dog with treats.

Speaker 2 (07:12):
Kind of. Think about training a robot to navigate a maze.
If it takes a step that gets it closer to
the exit, it gets a positive reward. If it hits
a wall, it gets a negative penalty. Over time, it
learns the sequence of actions the policy that maximizes its
total reward.

Speaker 1 (07:28):
So it learns through trial and error guided by feedback.

Speaker 2 (07:30):
Precisely. It's really powerful for things like game-playing AI, robotics,
and control systems.

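Here's a minimal trial-and-error sketch of the maze idea in Python, tabular Q-learning on a toy one-dimensional corridor; every constant is an illustrative choice, not something from the episode.

import random

# States 0..4 along a corridor, exit at state 4. Actions: 0 = left, 1 = right.
Q = [[0.0, 0.0] for _ in range(5)]     # Q[state][action], the learned value table
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate

for episode in range(500):
    s = 0
    while s != 4:
        # Explore occasionally; otherwise take the best-known action.
        a = random.randint(0, 1) if random.random() < epsilon else int(Q[s][1] >= Q[s][0])
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == 4 else -0.01  # reward at the exit, small penalty per step
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The learned policy should be "go right" (action 1) in every state.
print([int(Q[s][1] >= Q[s][0]) for s in range(4)])
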
Speaker 1 (07:36):
Okay: supervised, unsupervised, reinforcement.

Speaker 2 (07:39):
Different ways machines learn, and these different approaches, often working together,
are what create that everyday magic we talked about at
the start. ML really does shine in so many applications
we use constantly.

Speaker 1 (07:49):
It really does. Like, let's talk specifics: virtual personal assistants.
Alexa, Siri, Google Now. Prime examples.

Speaker 2 (07:57):
They're constantly collecting and refining information based on your past requests,
your preferences, even your location, to understand your queries better
and give you relevant answers. They learn your voice, your habits.

Speaker 1 (08:09):
It's almost spooky sometimes how much it learns.

Speaker 2 (08:10):
And then there's social media services. Oh boy, ML is
everywhere there.

Speaker 1 (08:14):
How so? Beyond just the ads?

Speaker 2 (08:17):
Oh yeah, think about the people you may know feature
on platforms like Facebook or LinkedIn.

Speaker 1 (08:23):
Right, how does that work?

Speaker 2 (08:24):
It's analyzing tons of data, your existing connections, profiles you've visited,
your workplace, groups you're in, common interests, to figure out
who else you might realistically know.

Speaker 1 (08:34):
It's connecting the dots in a way a human couldn't,
just because of the scale. Exactly.

Speaker 2 (08:39):
Or face recognition. Facebook's DeepFace project, for instance, it
learns to identify unique features in photos to automatically suggest tags.

Speaker 1 (08:48):
For your friends, even if the angles are weird or
the lighting isn't great.

Speaker 2 (08:51):
Yeah, it learns to account for variations like poses and projections.
It's incredibly complex stuff happening behind the scenes, but it
feels seamless to us.

Speaker 1 (08:59):
Okay, moving beyond social media: self-driving cars. That's a
huge one.

Speaker 2 (09:03):
Absolutely. Companies like Tesla heavily rely on machine learning, particularly
forms of unsupervised and reinforcement learning, for perception: detecting objects, pedestrians,
other cars, lane lines, all in real time. That's ML
interpreting sensor data.

Speaker 1 (09:19):
Mind-boggling complexity there. And something maybe a bit
more mundane but still powerful: product recommendations.

Speaker 2 (09:26):
Ah, yes, the "customers who bought this also bought" magic, right?

Speaker 1 (09:32):
How does that work? Is it just based on my past purchases? That's

Speaker 2 (09:35):
Part of it, But it also looks at items you've groused,
things you've put in your cart but didn't buy, what
similar users bought, maybe even brand preferences inferred from your behavior.
It's constantly building a profile to anticipate.

Speaker 1 (09:47):
What you might want next, to try and tempt me.

Speaker 2 (09:49):
Basically, yes. And critically, ML plays a vital role in
security too, like online fraud detection.

Speaker 1 (09:55):
How does that work? It must be like finding a
needle in a haystack. It is.

Speaker 2 (09:59):
Companies like PayPal and banks. They use machine learning to
analyze millions, even billions of transactions. The algorithms learn patterns
associated with normal, legitimate activity versus suspicious, potentially fraudulent activity.

Speaker 1 (10:12):
So it can flag things that look out of the
ordinary based on learned patterns.

Speaker 2 (10:16):
Exactly, it can spot anomalies much faster and more accurately
than humans wading through that much data. It helps prevent
things like money laundering or identity theft.

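As a rough sketch of that idea in Python, using an off-the-shelf anomaly detector from scikit-learn (our choice, not necessarily what PayPal or banks actually use; the amounts are invented):

from sklearn.ensemble import IsolationForest

# Invented transaction amounts: mostly routine, one extreme outlier.
amounts = [[12.5], [9.9], [15.0], [11.2], [10.4], [5000.0]]

# Learn what "normal" looks like, then flag points that deviate from it.
clf = IsolationForest(contamination=0.2, random_state=0).fit(amounts)
print(clf.predict(amounts))  # -1 marks a transaction flagged as anomalous
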
Speaker 1 (10:26):
Okay, wow. So from predicting my next purchase or binge-watch
to preventing serious financial crime, ML is truly woven into
the fabric of our daily lives.

Speaker 2 (10:35):
It really is.

Speaker 1 (10:36):
But, and there's always a but, right? With such incredible
power comes, naturally, some pretty significant challenges and maybe dangers
we need to unpack.

Speaker 2 (10:46):
Absolutely, it's not all smooth sailing, and it's crucial we
talk about the downsides and the risks.

Speaker 1 (10:51):
Where do we even start?

Speaker 2 (10:52):
Well, a critical point is what happens when these powerful
systems bump up against difficult ethical terrain or lead to
unexpected, maybe harmful outcomes. Like what? One really compelling
example our sources highlighted involves ethical dilemmas with autonomous weapons.
Remember Google's Project

Speaker 1 (11:09):
Maven? Vaguely, the one using ML for drones?

Speaker 2 (11:12):
Right, exactly: using ML to analyze drone footage, potentially for
targeting and military applications. It sparked massive protests from within
Google, from employees and scientists, and externally too.

Speaker 1 (11:23):
Why the protests? The ethical

Speaker 2 (11:24):
Concerns were huge. Thousands signed petitions asking Google to abandon
the project, worried about ML being used to create truly
autonomous weapons that could make life or death decisions without
human intervention. It highlighted this very real, very difficult ethical tightrope.

Speaker 1 (11:43):
Wow, that's a heavy example right off the bat. Technology
definitely isn't neutral there. Not at all.

Speaker 2 (11:49):
And then there are other, maybe less dramatic, but still
problematic challenges, like the phenomenon of false correlations sometimes called
spurious correlations.

Speaker 1 (11:58):
Okay, what's that? Sounds intriguing.

Speaker 2 (12:00):
This is when you have two things that seem statistically related.
Their trends move together on a graph, but there's absolutely
no real world connection between them. They're independent, but the
numbers look linked. Can you give an example? My favorite one
from our sources, and it's almost comical, is a documented false
correlation between the increase in people using car seat belts
and a decrease in astronaut deaths from spacecraft accidents.

Speaker 1 (12:21):
Wait, what? Seat belts and astronaut deaths?

Speaker 2 (12:23):
Exactly. They have absolutely nothing to do with each other,
but maybe purely by coincidence, the graph showing seat belt
use went up around the same time the graph for
astronaut deaths went down. The numbers correlate, but it's meaningless.

Speaker 1 (12:36):
Huh. Okay, that's a great illustration. It's a stark reminder
not to just assume causation from correlation.

Speaker 2 (12:44):
Right. Absolutely. It's a classic statistical trap, and algorithms, if
they're not designed carefully, can fall right into it. They
might identify these spurious correlations in data and base decisions on.

Speaker 1 (12:55):
Them, leading to potentially nonsensical or even harmful outcomes.

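The trap is easy to reproduce in a few lines of Python with numpy: two invented, unrelated series that merely trend over the same years still show a strong correlation coefficient.

import numpy as np

# Invented numbers: one series trends up, the other trends down, same years.
years = np.arange(1997, 2010)
seatbelt_use = 60 + 2.0 * np.arange(len(years))      # percent, trending up
astronaut_deaths = 10 - 0.7 * np.arange(len(years))  # trending down

# Pearson correlation is strongly negative even though the two are unrelated.
print(np.corrcoef(seatbelt_use, astronaut_deaths)[0, 1])  # -1.0 here
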
Speaker 2 (12:58):
Precisely, and maybe even worse than false correlations are feedback loops.

Speaker 1 (13:03):
Feedback loops? How are they different?

Speaker 2 (13:05):
This is more insidious. It's when an algorithm's decision actually
affects the real world, changes the situation on the ground, okay,
and then the algorithm uses that new altered reality, which
its own past decisions helped create, as evidence to confirm
its original conclusion, even if that conclusion was initially flawed
or biased.

Speaker 1 (13:24):
That sounds circular and potentially dangerous. Could you give an
example of that?

Speaker 2 (13:28):
Yeah. Think about a crime prediction algorithm. Let's say it
analyzes historical crime data and suggests sending more police patrols
to a specific neighborhood because reported crime is higher there.

Speaker 1 (13:39):
Okay, seems logical so far.

Speaker 2 (13:40):
But if you put more police in that neighborhood, what happens?
People might report more minor incidents simply because there are
officers readily available to take a report. Police might make
more arrests for low level offenses because they're patrolling more intensely.

Speaker 1 (13:54):
Ah. So the reported crime rate goes up partly just
because of the increased police presence. Exactly.

Speaker 2 (14:00):
And then the algorithm sees this higher reported crime rate
in the next batch of data and says, see, I
was right, this neighborhood has high crime. We need even
more police here.

Speaker 1 (14:10):
Wow. So the algorithm's initial prediction, potentially based on biased
historical data, creates the conditions that seem to validate it,
leading to a cycle.

Speaker 2 (14:19):
That's the feedback loop. The algorithm effectively creates the data
that justifies its own potentially biased decisions, reinforcing existing inequalities
or errors.

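You can see the self-reinforcing arithmetic in a tiny Python sketch; the coefficients are invented purely to show the loop, not drawn from any real policing data.

# Feedback-loop sketch (invented coefficients): the algorithm assigns patrols
# from reported crime, and more patrols cause more incidents to get reported.
reported = 100.0
for step in range(6):
    patrols = 0.1 * reported          # the algorithm's decision
    reported = 100.0 + 5.0 * patrols  # the decision alters the data it sees next
    print(step, round(patrols, 1), round(reported, 1))
# Reports climb from 100 toward 200 with no change in underlying crime.
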
Speaker 1 (14:28):
Yeah, that's a really clear and concerning example. So beyond
these conceptual or ethical challenges, are there more practical hurdles?

Speaker 2 (14:37):
Oh, definitely. A big one is just the sheer computational needs.
Our sources really emphasize this. Machine learning, especially deep learning
with huge data sets, requires immense computational power. You mean
like supercomputers? Often, yeah, or at least very powerful servers
packed with specialized hardware like GPUs, graphics processing units.

Speaker 1 (14:59):
Those chips originally for video games? The very same.

Speaker 2 (15:02):
They turned out to be incredibly good at the kind
of parallel calculations needed for ML. But accessing this kind
of power is expensive, and even with it, training complex
models on large data sets can still take days, sometimes weeks.
It's not like running your typical software.

Speaker 1 (15:17):
So resources are a bottleneck. And what about the models themselves?
Can they go wrong?

Speaker 2 (15:22):
Absolutely. A very common problem is called overfitting.

Speaker 1 (15:24):
Overfitting? Like a suit that's too tight?

Speaker 2 (15:27):
Kind of. It happens when a model learns the training
data too well. It becomes excessively complex. It doesn't just
learn the underlying patterns you want it to learn. It
also learns the specific noise, the quirks, and the random
outliers present in that particular training data set, so.

Speaker 1 (15:42):
It memorizes the training examples instead of generalizing. Exactly.

Speaker 2 (15:45):
It's like that student who memorizes every single word in
the textbook, including the typos, but can't apply the concepts
to a new problem they haven't seen before. An overfitted
model performs great on the data it was trained on,
but fails miserably when you show it new, unseen data.

Speaker 1 (16:01):
Because the real world doesn't have those exact same quirks
and noise. Right.

Speaker 2 (16:05):
The goal is what's called appropriate fitting, a model that
captures the genuine patterns but ignores the noise. The opposite
problem is underfitting, where the model is too simple and
fails to capture even the basic patterns. Finding that sweet
spot is key.

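A minimal Python demonstration of that sweet spot, using numpy and invented noisy data: a simple degree-1 fit against a wildly flexible degree-9 fit.

import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = x_train + rng.normal(0, 0.1, 10)  # simple pattern (y = x) plus noise
x_test = np.linspace(0.03, 0.97, 10)        # new, unseen points
y_test = x_test + rng.normal(0, 0.1, 10)

for degree in (1, 9):                       # appropriate fit vs. overfit
    c = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(c, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(c, x_test) - y_test) ** 2)
    print(degree, round(train_err, 5), round(test_err, 5))
# Degree 9 nearly memorizes the training noise (train error ~ 0)
# but typically does worse than degree 1 on the unseen points.
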
Speaker 1 (16:20):
Okay: overfitting, computational costs, feedback loops, ethical minefields. Quite a list,
but there's one more huge one we flagged earlier: bias
and fairness.

Speaker 2 (16:31):
Yes, and this is arguably one of the most critical
challenges because it directly impacts people's lives in very real ways.

Speaker 1 (16:37):
Let's define it first. What is bias in the context
of mL.

Speaker 2 (16:41):
Bias in ML usually refers to results that are systematically prejudiced.
It's often a disproportionate weight in favor of or against
an idea or thing, often stemming from underlying human biases
that get encoded intentionally or unintentionally into the algorithm or
the data it learns from.

Speaker 1 (16:59):
So the algorithms can essentially inherit our own societal biases.

Speaker 2 (17:03):
Precisely, if the data used to train an algorithm reflects
existing societal inequalities or prejudices, the algorithm will likely learn
and potentially even amplify those biases.

Speaker 1 (17:12):
That feels like a massive problem if it's baked into
the data. How do you even spot it? Our sources
mentioned different types. They did.

Speaker 2 (17:19):
It can creep in at various stages. For example, during
data collection, well, there's selection bias. Imagine you're developing a
health app, but you only collect data from young, tech-savvy
users because they're easiest to reach. The resulting algorithm
might not work well for older adults or less tech-literate
populations. The sample isn't representative.

Speaker 1 (17:38):
Okay, that makes sense. What else?

Speaker 2 (17:40):
There's the framing effect. How you ask questions in a
survey used to gather data can influence the answers you get,
introducing bias. Or even systematic bias from faulty equipment. Imagine
a sensor that consistently reads slightly too high. That error
gets baked into the data.

Speaker 1 (17:56):
So bias can enter right from the start.

Speaker 2 (17:59):
Just in how the data is gathered. Absolutely. And then there's
bias that can arise during data modeling itself.

Speaker 1 (18:03):
How does that happen?

Speaker 2 (18:05):
A really prominent real world example our sources discussed was
Amazon's experimental hiring algorithm from a few years back.

Speaker 1 (18:12):
Oh, I think I remember hearing about this. What happened?

Speaker 2 (18:14):
They tried to build a tool to help screen job
applicants' resumes, but it turned out the system effectively penalized
resumes that included the word women's, as in women's chess club captain,
and it favored candidates who sounded more like the company's
predominantly male workforce at the time.

Speaker 1 (18:31):
Wow. So it basically learned the existing gender imbalance from
past hiring data. Exactly.

Speaker 2 (18:38):
It wasn't explicitly programmed to be sexist, but it learned
that male candidates had historically been hired more often, especially
in technical roles, and it started associating male-typical language
patterns with success. Amazon ultimately scrapped the system.

Speaker 1 (18:52):
That's a powerful and sobering illustration of how historical bias
gets perpetuated, even amplified, by an algorithm.

Speaker 2 (19:00):
Really is. It shows how systems trained on biased data
can easily replicate and even scale those biases.

Speaker 1 (19:07):
So if we know these biases exist and we can
sometimes detect them, what on earth can we do to
fix them? How do we strive for fairness?

Speaker 2 (19:16):
That's the million dollar question, really. There's no single magic bullet,
but there are approaches.

Speaker 1 (19:20):
What's the starting point?

Speaker 2 (19:21):
Well, a basic principle, though not always sufficient, is to
try and avoid explicitly including sensitive attributes, things like race, gender,
religion as features in the model's training data, especially if
they aren't directly relevant to the task.

Speaker 1 (19:37):
But that Amazon example shows bias can creep in even
without explicitly using gender as a feature, right? Through correlated
language patterns. Exactly.

Speaker 2 (19:47):
So simply removing sensitive attributes isn't enough. We need more
sophisticated mitigation strategies. Are these approaches like just patching holes?
Or can we build fair systems from the start?

Speaker 1 (19:57):
That's the crucial question. What do the sources say?

Speaker 2 (20:00):
They outline several approaches, often categorized by when you intervene
in the ML pipeline. Okay, like what? First, there's pre-processing.
This involves trying to modify the training data before you
even start building the model, to remove or reduce the
biases present in the data itself, like resampling the data
to ensure better representation or transforming features to remove correlations

(20:22):
with sensitive attributes.

Speaker 1 (20:23):
So, cleaning the data before you use it. Makes sense.
What else?

Speaker 2 (20:26):
Then there's in-processing. This means building fairness constraints or
objectives directly into the model's learning process. So as the
algorithm is training, it's not just trying to be accurate,
it's also explicitly trying to avoid certain types of biased outcomes.

Speaker 1 (20:42):
Okay, trying to teach it to be fair while it learns?
Sort of, yes.

Speaker 2 (20:46):
And finally, there's post-processing. This involves taking the predictions
made by an already trained model and adjusting them after
the fact to improve fairness, maybe recalibrating prediction thresholds for
different groups to ensure more equitable outcomes.

Speaker 1 (21:00):
So intervening before, during, or after training. Right.

Speaker 2 (21:03):
Each approach has its own pros and cons, and often
a combination might be needed, but the key takeaway is
that achieving fairness is an active process. It requires conscious
effort and specific techniques.

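Here's what the pre-processing idea can look like in a few lines of Python, a toy oversampling sketch of our own; the groups, scores, and counts are all invented for illustration.

import random

# Invented toy rows: (features, group). Group "B" is under-represented.
data = [({"score": 0.9}, "A")] * 8 + [({"score": 0.4}, "B")] * 2

# Pre-processing: oversample minority groups so each is equally
# represented before any model is trained on the data.
by_group = {}
for row in data:
    by_group.setdefault(row[1], []).append(row)

target = max(len(rows) for rows in by_group.values())
balanced = []
for rows in by_group.values():
    balanced += rows + random.choices(rows, k=target - len(rows))

print({g: sum(1 for r in balanced if r[1] == g) for g in by_group})  # {'A': 8, 'B': 8}
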
Speaker 1 (21:15):
Okay, so let's try and wrap this up. We've covered
a lot of ground. We explored the incredible power and
the sheer pervasiveness of machine learning.

Speaker 2 (21:23):
Yeah, from its surprisingly early origins with people like Arthur
Samuel and the Perceptron.

Speaker 1 (21:28):
Right, through its explosion in the nineties with Deep Blue,
and into its everyday applications now: your virtual assistant,
social media feeds, product recommendations, even fraud.

Speaker 2 (21:39):
Detection. Definitely woven into daily life.

Speaker 1 (21:42):
But we've also taken, I think, a necessary hard look
at the significant.

Speaker 2 (21:46):
Challenges, absolutely. The ethical dilemmas like with autonomous weapons, those surprising,
sometimes funny, sometimes dangerous false correlations.

Speaker 1 (21:56):
And those insidious feedback loops where algorithms can reinforce
their own errors or biases. Plus the practical issues like
computational cost and the trap of overfitting.

Speaker 2 (22:05):
And critically, that whole complex issue of bias creeping in
from data or modeling, and the ongoing work needed to
achieve fairness through things like pre-, in-, and post-processing.

Speaker 1 (22:16):
The implications of all this are huge, aren't they? Really are.

Speaker 2 (22:19):
I mean, if machine learning systems can be tricked by
adding just a tiny bit of noise to an input,
like some examples show with image recognition, right?

Speaker 3 (22:26):
Making it misclassify something completely, or if they can subtly
manipulate our choices over time, like maybe those movie recommendations
that gradually narrow our viewing habits without us realizing it,
that fundamentally changes.

Speaker 2 (22:39):
How we interact with information and the world, which brings.

Speaker 1 (22:42):
Us to a final thought. For you, our listener, if
these systems can be vulnerable or biased or subtly shape
our reality, what active steps can you take? How can
you critically evaluate the algorithmic influences in your own digital
life and maybe even advocate for systems, whether at work
or in society, that are designed to be truly fair, transparent,

(23:04):
and robust.

Speaker 2 (23:05):
That's a really important question to ponder, because the more
we all understand these systems, their power and their pitfalls, the.

Speaker 1 (23:10):
Better equipped we are to actually shape their future development
and their deployment responsibly.

Speaker 2 (23:15):
Exactly. Food for thought.