Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Okay, imagine this for a second. You're in a lab,
maybe a testing facility. There's this cutting edge humanoid robot there,
you know, the kind designed for amazing precision lifting heavy stuff,
maybe even doing backflips. Super advanced, state-of-the-art stuff. Yeah, exactly.
And it's undergoing some kind of test. Maybe it's suspended
by a crane, you know, for calibration or something. Engineers
(00:22):
are all around watching monitors, super focused.
Speaker 2 (00:24):
Right, typical tests, setup for initial movement checks.
Speaker 1 (00:27):
Then just bam, out of nowhere, something goes wrong. Its arms, its legs, they just start flailing wildly. Not just a
little twitch, no, like smashing into equipment nearby, crashing against
the crane holding it up. You can almost hear the impacts, right.
People are scrambling, yelling, pure chaos for a moment. Yeah,
complete shock. Someone apparently yelled, what on earth did you
(00:49):
guys run? I mean this machine, it's got hundreds of
pound feet of torque at each point. Suddenly that power
is just unleashed, uncontrolled.
Speaker 2 (00:58):
It sounds like something straight out of sci-fi, doesn't it?
Speaker 1 (01:01):
It really does. Yeah, And it immediately makes you think
right about control, about predictability. Where's that line between incredible
capability and well potential danger. So that's what we're diving
into today. We want to unpack these kinds of startling
incidents go beyond the headlines, you know, the one shouting
about robots snapping or looking like the terminator.
Speaker 2 (01:21):
Yeah, to get past the sensationalism.
Speaker 1 (01:23):
Yeah, exactly. Our mission here is to really dig into
the technical side of things, understand the huge challenges engineering
societal of actually integrating these powerful human like machines safely
into our world.
Speaker 2 (01:39):
And critically what's being done about it, Because there are
massive efforts underway globally to make sure development is safe, reliable.
Speaker 1 (01:45):
Right, So we'll look at what happens when these sophisticated
robots hit unexpected bumps in the road. How do we
even decide what's safe enough when the tech is evolving
so fast, and what are the big steps being taken
right now to shape their future alongside us.
Speaker 2 (01:58):
It's not just about the tech, it's about how
we live with it.
Speaker 1 (02:01):
Absolutely, this is more than just robotics. It's about a
future that's honestly already starting to arrive.
Speaker 2 (02:07):
And what's really telling about these incidents, these moments of chaos,
is that they aren't just random glitches. They actually shine
a light on a much bigger systemic challenge for the
whole field of advanced robotics. So well, it's about the
incredibly complex dance between the physical hardware, the intricate software
controlling it, and all the unpredictable stuff the real world
(02:30):
throws at them. Okay, when we talk about humanoids, we
mean machines built to act like us, right to walk, move,
interact with things in ways traditional robots, the ones bolted
to factory floors never could.
Speaker 1 (02:42):
That flexibility is the whole point, isn't it?
Speaker 2 (02:43):
It is. But that flexibility, that immense power, it adds layers
and layers of complexity, especially when things deviate even slightly
from the plan. We're talking embodied AI intelligence that actually
lives in the physical world and that physical interaction. That's
where safety becomes paramount.
Speaker 1 (03:00):
You mentioned that shock, that visceral reaction at the start,
and it comes from real events like this specific incident
with the Unitree H1 robot. Some people apparently call
Speaker 2 (03:10):
it Dre, affectionately or ironically.
Speaker 1 (03:12):
Maybe. But this wasn't just a minor blip. It
was by all accounts, a pretty terrifying few moments during
a test. It was suspended by a crane common setup,
as you said.
Speaker 2 (03:22):
Standard procedure for isolating movements, checking balance systems without gravity
fully in play.
Speaker 1 (03:27):
Yet, right. And then suddenly it just starts thrashing, uncontrollably,
not just small movements. We're talking violent, forceful.
Speaker 2 (03:35):
Swinging enough to cause damage.
Speaker 1 (03:37):
Oh yeah, it was hitting nearby gear, crashing hard against
the crane structure itself. The engineers, the staff watching, they
had to back up fast, scramble out of the way.
Speaker 2 (03:45):
You can imagine the adrenaline dump in that room.
Speaker 1 (03:48):
Totally. And that reported shout, what on earth did you guys run?
And apparently the CTO of Rex, the company involved, was
also shouting in disbelief. It just paints this picture of
raw power completely off the leash, not sci fi, but
right there in the test lab.
Speaker 2 (04:02):
Very stark reminder of the forces involved. And it's crucial,
I think, to understand the why behind that specific event
because it helps us get past the robot gone wild
idea and see the real engineering challenge.
Speaker 1 (04:16):
Okay, so what was the technical reason for Dre's freakout?
Speaker 2 (04:20):
In that case, the instability kicked in when they activated
a really complex full body control policy while the robot's
feet weren't touching anything.
Speaker 1 (04:28):
A full body control policy like its main operating system.
Speaker 2 (04:31):
For movement? Sort of, yeah. Think of it as
the central coordination system. It manages all the motors, sensors,
balance everything together in real time, trying to keep the
whole body stable and moving as intended. Okay, Now, the
analogy that's often used, and I think it's a good one,
is imagine hitting the gas pedal hard on a powerful
car while it's lifted up.
Speaker 1 (04:50):
On jacks wheels spinning in the air.
Speaker 2 (04:52):
Exactly the car systems expect road resistance traction feedback. When
it gets none, things can get weird. Similarly, Dre's control
system was expecting signals from its feet, pressure, contact, friction
data it needs to balance and follow its programming, but.
Speaker 1 (05:07):
It wasn't getting any because it was hanging.
Speaker 2 (05:09):
In the air precisely, no ground contact, no expected feedback.
That created a massive mismatch in the system's state. It
couldn't compute, couldn't compensate for this unexpected situation, and that
triggered the violent instability, the erratic, almost violent movements.
Speaker 1 (05:25):
So it wasn't angry or sentient.
Speaker 2 (05:27):
Oh, definitely not. It was a fundamental failure in the
software hardware interaction under very specific, unexpected conditions. The system
basically lost its bearings because the input didn't match its
model of the world at that moment.
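To picture that guard condition in code: below is a minimal hypothetical sketch in Python, not Unitree's actual control stack. The sensor fields, the force threshold, and the fallback behaviour are all illustrative assumptions about how a contact check could gate a full-body balance policy.

```python
# Hypothetical sketch (not the robot's real code): a full-body control
# policy should only run when the feedback it assumes is actually present.
from dataclasses import dataclass

@dataclass
class FootSensors:
    left_contact: bool
    right_contact: bool
    left_force_n: float   # vertical ground-reaction force, newtons
    right_force_n: float

MIN_CONTACT_FORCE_N = 50.0  # assumed threshold; real values are tuned per robot

def has_ground_contact(feet: FootSensors) -> bool:
    """True only if at least one foot reports plausible ground contact."""
    return (feet.left_contact and feet.left_force_n > MIN_CONTACT_FORCE_N) or \
           (feet.right_contact and feet.right_force_n > MIN_CONTACT_FORCE_N)

def control_step(feet: FootSensors, run_full_body_policy, hold_pose):
    # Guard clause: if the robot is suspended (no contact), the balance
    # policy's assumptions are violated, so fall back to a damped hold
    # instead of commanding large corrective torques.
    if not has_ground_contact(feet):
        return hold_pose()          # passive, low-torque safe behaviour
    return run_full_body_policy()   # normal whole-body balance control
```

The point of the sketch is simply that the "hanging in the air" case becomes an explicit branch the software handles, rather than an input the balance policy was never designed to see.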
Speaker 1 (05:41):
And the really compelling part, maybe the unsettling part, is
that this kind of thing isn't just a one off.
It wasn't just Dre having a bad day. We're seeing
a pattern. Right as these advanced robots start moving out
of those super controlled lab environments.
Speaker 2 (05:54):
That's right, into more, let's call them, real-world-adjacent test scenarios. And other similar incidents
Speaker 1 (06:00):
have popped up, like the one in China back in May.
Speaker 2 (06:02):
Exactly, another testing facility looked like an identical robot model,
also suspended by a crane.
Speaker 1 (06:08):
Same thing happened, pretty much.
Speaker 2 (06:09):
Started flailing suddenly, apparently with enough force to knock over
a computer nearby, and reports said two workers had to
sprint to get clear.
Speaker 1 (06:18):
Wow, do we know what caused that one?
Speaker 2 (06:20):
That one was apparently traced to a coding fault, a
bug in the logic, But interestingly, an engineer was able
to stop it by physically repositioning the base it was
mounted on, which tells you, well, it highlights a couple of things. One, the need for robust emergency stops and interventions. Two, how the physical setup interacts with software glitches. Stopping
(06:43):
it physically suggests the instability might have been contained or
influenced by the mounting, not just the code itself. It's
about the whole system.
Speaker 1 (06:51):
And there was another one. Wasn't there at a public festival?
Speaker 2 (06:54):
Yes, back in February, also in China, a humanoid robot,
this time behind a safety barrier but still near the public.
Speaker 1 (07:01):
What happened there?
Speaker 2 (07:02):
It suddenly lurched forward pretty forcefully, startled the onlookers quite
a bit, as you can imagine. Security stepped in quickly,
got it under control.
Speaker 1 (07:10):
No one hurt, but still not a great look when
you're trying to build public.
Speaker 2 (07:13):
Trust, absolutely not. It feeds those anxieties about predictability, especially
in public spaces. So yeah, you put these incidents together,
Dre, the May incident, the festival lurch, they do point to a pattern: unpredictability can emerge, especially under stress or when the robot encounters unexpected inputs or conditions.
Speaker 1 (07:32):
It really shows the challenge of moving from the lab
to the real world, even just a slightly more complex
test environment.
Speaker 2 (07:39):
It's a huge leap, and safety becomes exponentially more complex.
Speaker 1 (07:42):
Okay, so let's drill down on the machines themselves for
a minute, because knowing what they're capable of really puts
these incidents in context. Dre, the robot from the first incident, is based on the Unitree H1, right,
and you mentioned something key earlier. This isn't some top
secret prototype. The Unitree H1 is actually for sale, commercially available.
Speaker 2 (08:02):
Theoretically, Yes, if you have the funds and the know how,
you could acquire one.
Speaker 1 (08:05):
It's out there, which is wild now the specs. It's
about five point nine feet tall, weighs around one hundred
and four pounds, pretty human like dimensions.
Speaker 2 (08:14):
Fairly standard for this class of humanoid trying to operate
in human environments.
Speaker 1 (08:18):
But here's the kicker, the number that really stands out.
The torque each joint can deliver up to three hundred
and sixty five pound feet.
Speaker 2 (08:26):
Yeah, that's a lot of rotational force, a lot.
Speaker 1 (08:28):
That's like more than many car engines deliver at their peak. Right,
And this thing has that per joint: hips, knees, ankles.
Speaker 2 (08:37):
It's serious muscle, that's the term often used, and it's accurate.
That level of power is needed for the dynamic movements
they're designed for. Exactly. That torque isn't just for show, or for lifting super heavy weights, necessarily. It's fundamental to its designed abilities: running, balancing dynamically, handling uneven ground, even those spectacular stunts like backflips.
Speaker 1 (08:59):
That require rapid, forceful.
Speaker 2 (09:00):
Adjustments, precisely. Agility and power combined. In the right hands,
with sophisticated control software, these machines can do incredible things.
They're pushing the boundaries of embodied AI. But the flip side,
the flip side, as these incidents show, is that when
something does go wrong, a software bug slips through, hardware gets stressed unexpectedly, or even a simple setup mistake is made
during a test like.
Speaker 1 (09:22):
Like the no-feet-on-the-ground scenario.
Speaker 2 (09:24):
Right, that immense power becomes a serious liability. It has
more than enough force to cause significant damage to equipment, property,
and potentially to people.
Speaker 1 (09:34):
The incidents we talked about were near misses basically.
Speaker 2 (09:38):
Essentially, yes, they demonstrated the potential for harm without thankfully
causing major injury in those specific cases, but they were
stark warnings, and.
Speaker 1 (09:48):
This wead straight into that perception gap we touched on.
When videos or stories about these things hit the Internet,
the reaction is often immediate and visceral.
Speaker 2 (09:58):
People see a powerful robot seemingly going berserk, Yeah.
Speaker 1 (10:01):
They jump straight to it snapped, It's like Terminator, it's RoboCop.
Those sci fi comparisons come thick and fast. It taps
into those deep seated fears about AI and robots losing control.
Speaker 2 (10:12):
It's an understandable gut reaction when you see that kind
of power moving unpredictably.
Speaker 1 (10:16):
But what gets lost almost every time is the technical
reason behind it. People aren't discussing the failed control policy
or the specific coding error or the lack of ground
contact feedback.
Speaker 2 (10:26):
No, the nuance is usually the first casualty.
Speaker 1 (10:29):
They're seeing a machine that looks frighteningly out of control,
and it confirms their existing anxieties and that gap between
the public's often sensationalized perception and the complex engineering reality.
That's a massive challenge for the whole industry, isn't it.
Speaker 2 (10:44):
It really is. How do you build and maintain public trust?
How do you communicate effectively that a malfunction isn't malice
or rebellion, but a system encountering an unexpected state it
wasn't prepared for. It's a huge communication and education hurdle
which brings us squarely to this really fundamental question, maybe
even a philosophical one for the field and for us
(11:06):
as a society. How do we actually define what safe
enough means for these machines? Machines that look and move
like us, operate in our spaces, but have this inherent
power to cause real harm if things go wrong.
Speaker 1 (11:19):
It's not like a traditional machine bolted down in a factory.
Speaker 2 (11:21):
Cage? Not at all. This is different. We're asking what
level of risk we're willing to accept for the potential benefits,
and it's a question we're grappling with right now, just
like previous generations did with other transformative technologies.
Speaker 1 (11:32):
That's a great point, and maybe looking back helps. Think about cars, right? Go back one hundred years in the US,
driving was incredibly.
Speaker 2 (11:40):
Dangerous. Roads were terrible, and cars' safety features were almost nonexistent.
Speaker 1 (11:46):
The statistic is something like twenty deaths per one hundred
million miles driven, which is just mind boggling.
Speaker 2 (11:53):
An astronomical risk compared to now.
Speaker 1 (11:56):
Exactly. Now, fast forward to today: it's below one point five deaths per one hundred million miles. Still not zero obviously, and
every death is a tragedy, but the improvement is dramatic,
immense progress.
Speaker 2 (12:06):
Decades of engineering, regulation, infrastructural changes.
Speaker 1 (12:09):
Right. And you could argue that humanoid robots are in their early days right now, their unpredictable, sometimes chaotic phase,
like cars in the nineteen twenties.
Speaker 2 (12:17):
It's a reasonable parallel. We should expect malfunctions, mistakes.
Speaker 1 (12:21):
Will happen, and sadly, misuse is also possible. Like with
any powerful tool, perfection right out of the gate just
isn't realistic.
Speaker 2 (12:29):
And critically, that realization that imperfection is inevitable at this
stage shouldn't paralyze us. It shouldn't mean we just stop
development altogether.
Speaker 1 (12:36):
Because the potential upside is huge, exactly.
Speaker 2 (12:39):
Think about the alternative. If we overregulate purely out of fear,
if we slam the brakes on innovation too hard, we
might actually prevent technologies that could save lives. Robots doing
hazardous jobs humans.
Speaker 3 (12:51):
Shouldn't. Disaster response, nuclear cleanup, things like that. Or revolutionizing eldercare, providing physical assistance and companionship for an aging population. Taking over strenuous, repetitive, dangerous jobs in factories or warehouses, freeing
people for other tasks, improving efficiency across the board.
Speaker 1 (13:09):
The potential benefits are massive, they really are.
Speaker 2 (13:12):
So the challenge isn't just about the initial failures, as
scary as they look. It's about managing the path forward,
understanding that our definition of safe enough will evolve. It's
not just engineering. It's about societal acceptance. It's trust built through transparency and rigorous testing.
Speaker 1 (13:30):
But let's be crystal clear about one thing. We're never
going to get to one hundred percent safety with these
machines or any complex technology.
Speaker 2 (13:37):
Really, that's a crucial point. The idea of perfect bug
free software is a myth. The idea of hardware that
never fails under stress, never wears out, that's also a myth.
Speaker 1 (13:47):
There are just too many things that can go wrong, right? Yeah,
when you put these incredibly complex systems into dynamic, messy,
real world situations.
Speaker 2 (13:56):
Think about it. Hidden software bugs that only show up
under specific conditions, Hardware components failing after thousands of hours
of stress, freak accidents, something falling, an unexpected.
Speaker 1 (14:08):
Obstacle, or just simple human error. Someone makes a mistake
in the programming, or the setup, or the maintenance exactly.
Speaker 2 (14:15):
Any one of those or a combination can lead to
an unexpected outcome, an unsafe outcome. But again, acknowledging that
imperfection doesn't mean we give up. It redirects the effort.
It means the focus has to be on figuring out
where we draw that line, what level of risk is acceptable,
how safe is safe enough?
Speaker 1 (14:30):
So how do we figure that out?
Speaker 2 (14:31):
The consensus seems to be data, lots and lots of
real world data, just like we did with cars, with airplanes,
with industrial machinery over.
Speaker 1 (14:39):
Decades, learning from failures.
Speaker 2 (14:40):
Basically, yes, sometimes painfully, we gathered enormous amounts of information
on how they performed, what went wrong, why it went wrong,
and then use that data to iterate improve designs, add
safety features, create better regulations, develop better training, make them
progressively safer over time.
Speaker 1 (14:59):
So we need to do the same for humanoids.
Speaker 2 (15:01):
That's the path forward. We need environments where we can
collect that crucial data safely, learn from every glitch, every
near miss, every malfunction, and use that knowledge to constantly
refine the robots, their control systems, how we operate them.
Speaker 1 (15:15):
It's about quantifying risk and designing systems to mitigate it
down to a level society finds acceptable, often.
Speaker 2 (15:21):
Called lr as low as reasonably practicable. It acknowledges perfection
as impossible, but strives for the best achievable safety within
practical limits.
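To make "quantify risk, then compare it to an acceptance threshold" concrete, here's a toy Python calculation using the road-safety figures quoted earlier in the conversation. The acceptance threshold is purely illustrative, not a real ALARP standard.

```python
# Toy illustration: express incidents as a rate per unit of exposure,
# then compare against an (illustrative) acceptance threshold.
def incidents_per_exposure(incidents: float, exposure: float, unit: float = 1e8) -> float:
    """Rate normalised per `unit` of exposure (e.g. per 100 million miles)."""
    return incidents / exposure * unit

# 1920s-era driving: roughly 20 deaths per 100 million miles driven
rate_1920s = incidents_per_exposure(incidents=20, exposure=1e8)
# Today: roughly 1.5 deaths per 100 million miles driven
rate_today = incidents_per_exposure(incidents=15, exposure=1e9)

ACCEPTABLE_RATE = 2.0  # illustrative target only, same units

print(f"1920s: {rate_1920s:.1f}, today: {rate_today:.2f} per 100M miles")
print("within target" if rate_today <= ACCEPTABLE_RATE else "needs mitigation")
```

The same pattern, a measured rate against an agreed threshold, is what any humanoid-robot safety case would eventually need, just with operating hours or task completions as the exposure unit.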
Speaker 1 (15:29):
Okay, So, given these massive challenges, the need for data,
the safety concerns, the sheer complexity, what are people actually doing?
And this is where we see some really interesting large
scale strategic moves.
Speaker 2 (15:41):
Like the development in China.
Speaker 1 (15:42):
Recently, exactly. On July eighteenth this year, they officially opened
this huge new AI robot training ground in Mianyang, in
Sichuan Province.
Speaker 2 (15:51):
And this isn't just another lab.
Speaker 1 (15:53):
No, that's a key point. It's described as a full
scale platform purpose-built to train, test, and refine
these embodied AI robots, specifically in complex, high intensity scenarios.
Speaker 2 (16:06):
Someone called it a boot camp for humanoid robots, which
seems pretty apt.
Speaker 1 (16:10):
It does. A dedicated space for these machines to learn
how to handle the messy, unpredictable real world, not just
the clean room.
Speaker 2 (16:18):
And what's really striking is the comprehensive nature of it.
It's not just testing zones. It includes research facilities, yes,
but also commercialization support, trying to bridge that gap from
lab to market.
Speaker 1 (16:28):
Which is often a big hurdle.
Speaker 2 (16:29):
A huge one. And crucially, it includes extreme environment simulations.
This goes way beyond what most standard labs can do.
Speaker 1 (16:36):
So simulating fires or earthquakes or chaotic.
Speaker 2 (16:39):
Crowds, things like that, presumably. It shows a very proactive, structured,
long term strategy from China. They're not just focused on
building the robots. They seem equally focused on ensuring they
can be integrated safely and reliably into society. It looks
like a major national commitment.
Speaker 1 (17:01):
Seems like this whole initiative, this boot camp is directly
aimed at tackling those big roadblocks we talked about earlier,
the things that have held back robotics development.
Speaker 2 (17:10):
Yeah, the major bottlenecks, like first the lack.
Speaker 1 (17:13):
Of good quality real world data. Robots need that data
to learn, but getting it safely is hard.
Speaker 2 (17:18):
Especially data from messy, dynamic situations. Lab data is clean
but limited.
Speaker 1 (17:24):
Second, that feedback loop how does what happens in the
real world actually get back to improve the design quickly?
Speaker 2 (17:30):
Often that loop is slow or incomplete. You need rapid
iteration based on actual performance.
Speaker 1 (17:35):
And Third, just the sheer difficulty of safely testing these
powerful experimental machines in unpredictable conditions. You can't just let
them loose.
Speaker 2 (17:43):
Absolutely not. The risks are too high in the early stages.
Speaker 1 (17:46):
So this training ground seems designed to provide a controlled,
yet realistic way to address all three of those problems.
Speaker 2 (17:53):
Exactly. It gets right to the heart of that difference between
lab performance and real world robustness because real environments they
are incredibly messy.
Speaker 1 (18:03):
Yeah, not sterile, far from it.
Speaker 2 (18:05):
Think about dust and noise interfering with sensors, electromagnetic interference messing with electronics, people moving around unpredictably, weird lighting, uneven surfaces,
chaotic schedules.
Speaker 1 (18:15):
Things you just don't get in a controlled lab setting, right.
Speaker 2 (18:18):
A robot that works flawlessly in the lab might just
freeze up or worse, malfunction dangerously on a noisy construction
site or in a crowded subway station, or even in
a busy hospital ward.
Speaker 1 (18:28):
So the boot camp forces them to confront that messiness.
Speaker 2 (18:32):
Precisely. It exposes them to those variables in a safe, controlled way,
forces them to adapt, learn, maybe fail safely, and generate
the critical data needed to make them truly robust. It
finds the weaknesses before they get deployed widely. It's about stress testing everything, perception, movement, decision making, in conditions that mimic reality's challenges.
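One way to picture that kind of stress testing is a randomized scenario sweep. The sketch below is a hypothetical Python outline of the idea, not the facility's actual tooling; the scenario parameters and ranges are invented for illustration.

```python
# Hedged sketch of the "boot camp" idea: sweep a controller through
# randomised environment conditions and record failures for triage,
# rather than testing only in one clean lab setting.
import random

def random_scenario() -> dict:
    return {
        "floor_friction": random.uniform(0.2, 1.0),   # slick to grippy
        "sensor_noise":   random.uniform(0.0, 0.05),  # fraction of reading
        "payload_kg":     random.uniform(0.0, 15.0),
        "lighting":       random.choice(["bright", "dim", "flickering"]),
        "bystanders":     random.randint(0, 20),
    }

def stress_test(run_trial, trials: int = 1000) -> list:
    """run_trial(scenario) returns True on success; failures are logged."""
    failures = []
    for _ in range(trials):
        scenario = random_scenario()
        if not run_trial(scenario):
            failures.append(scenario)  # this is the data you learn from
    return failures
```

The value isn't any single trial; it's the failure log, which is exactly the kind of real-world-adjacent data the conversation says the field is missing.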
Speaker 1 (18:58):
And the scale of this thing, the ambition, it's pretty staggering. It seems to reflect a real long-term
strategic vision. How is it structured?
Speaker 2 (19:05):
Well, the reports say there's one main innovation center that's
focused on the core tech system integration, getting all the
parts working together smoothly, developing the key algorithms for intelligence
and movement, and even pilot manufacturing helping bridge that gap
to production, and.
Speaker 1 (19:20):
Then the actual training happens elsewhere.
Speaker 2 (19:22):
Right, there are two large scale scenario training bases. That's
where the robots really get put through their paces, tested
across a whole range of different sectors and simulated real
world situations.
Speaker 1 (19:33):
What kind of sectors are we talking about?
Speaker 2 (19:35):
It's quite diverse. Things like emergency response, navigating difficult terrain,
maybe handling hazardous materials, high precision manufacturing tasks, healthcare applications,
assisting staff delivering things, maybe even helping with patient mobility.
Speaker 1 (19:50):
Okay, so really practical stuff.
Speaker 2 (19:52):
Very practical. Also urban services, think security patrols, maybe infrastructure inspection,
waste management assistance, and even things like tourism interactive guides perhaps.
Speaker 1 (20:03):
Wow. So it's about pushing them way beyond just basic
walking and balancing.
Speaker 2 (20:07):
Way beyond It's about stress testing them for reliability and
safety in complex, dynamic tasks across many different potential job roles,
facing all the unexpected things those environments might throw at them, and.
Speaker 1 (20:19):
The targets they've set are well, they're not messing around.
By the end of this year, they want at least
seven robotics companies working inside the training ground.
Speaker 2 (20:27):
Building an ecosystem right from the start.
Speaker 1 (20:29):
Yeah, and they plan to ramp that up to thirty
companies by twenty twenty seven. That's a lot of activity
concentrated in one place.
Speaker 2 (20:36):
Creates a powerful hub for innovation and collaboration.
Speaker 1 (20:39):
Plus, they aim to launch over ten new robot products
and apply more than thirty advanced technologies across different sectors,
so really pushing for tangible outcomes moving.
Speaker 2 (20:50):
From research to real world application is clearly a priority.
Speaker 1 (20:53):
And underpinning all of this is significant government backing a
special fund from the Sichuan provincial government. The goal is
explicitly stated make this facility a national leader in embodied
AI development.
Speaker 2 (21:06):
It's a clear signal of strategic intent. This isn't just
a project. It looks like a long term investment to
secure a leading position in what they see as a
critical future industry. And if you zoom out and connect
these dots, the initial glitches, the safety challenges, this massive
new training facility, it all points towards a huge underlying driver,
the commercial potential. Money, basically. Well, yeah, the market projections
(21:29):
are just enormous for China's humanoid robotics market alone. They're
talking about potentially hitting eight hundred and seventy billion yuan
by twenty.
Speaker 1 (21:36):
Thirty, which is how much in US dollars.
Speaker 2 (21:38):
Over one hundred and twenty one billion dollars. That's a
colossal number, and it explains the urgency, the scale, the
strategic focus we're seeing.
Speaker 1 (21:45):
It's not just scientific curiosity driving this.
Speaker 2 (21:48):
Not primarily, no. It's about leading a future global economy,
an industry that many believe is poised for absolutely explosive growth.
It's about commercializing a whole new form of intelligent automation,
maybe even labor exactly.
Speaker 1 (22:02):
And that commercial drive shifts the whole focus, doesn't it,
away from just lab experiments or flashy demos towards the
actual jobs these robots are being designed for.
Speaker 2 (22:12):
Right, They're not just toys or research projects anymore.
Speaker 1 (22:14):
They're being engineered, tested, refined to do real work. Lifting
boxes in warehouses, making logistics way more efficient.
Speaker 2 (22:22):
Or assisting in delicate tasks in hospitals, potentially improving surgical
outcomes or patient care.
Speaker 1 (22:28):
Responding to emergencies in places too dangerous for humans, chemical spills, disaster
Speaker 2 (22:33):
Zones, search and rescue in unstable buildings. The possibilities are vast.
Speaker 1 (22:37):
And you even mentioned some teams exploring robot versus robot combat,
which sounds kind of crazy.
Speaker 2 (22:43):
It does sound a bit like BattleBots on steroids, but
from an engineering perspective, it's another form of extreme stress testing,
pushing the limits of dynamic movement, resilience, autonomous decision making
under pressure.
Speaker 1 (22:54):
Okay, makes sense in that context, so those mishaps we
talked about earlier, the ones that looked alarming, they're
kind of growing pains, you.
Speaker 2 (23:03):
Could see them that way. They're part of the messy,
complex process of not just fixing bugs, but of scaling
up an entire industry, preparing these machines for real, tangible,
vital work across almost every sector you can think of.
But as these powerful, increasingly autonomous machines
(23:25):
get integrated more deeply into our world, it opens up
amazing possibilities, yes, transformative potential, but it also undeniably introduces
entirely new kinds of risks, complex risks we haven't really
faced before.
Speaker 1 (23:37):
So the old safety rules don't necessarily apply.
Speaker 2 (23:39):
Exactly the safety frameworks we built for, say, industrial robots
locked in cages, or for simpler automated systems. They just
aren't sufficient for dynamic human like robots moving around freely
in shared spaces. We need a fundamental rethink, which.
Speaker 1 (23:52):
Is why people are saying there's this massive urgent need
for updated safety frameworks. And it's not just about writing
better code.
Speaker 2 (24:00):
Because as we said, perfect code is impossible. Hardware can
fail. It has to be broader than just the software.
Speaker 1 (24:06):
So the big question is if these humanoids are going
to be walking down our streets, working alongside us in
factories or hospitals, maybe even helping in our homes, how
do we make sure we know how to handle it
when things inevitably go wrong?
Speaker 2 (24:19):
What happens when there's a glitch in a public place
or a sensor fails during a critical task? Who's responsible?
What are the immediate mitigation steps? And that forces us
to think about the key pillars for responsible integration. What
do we actually need? Well, clearly, better training systems for
the robots themselves, like that boot camp idea, exposing them
to far more diverse and unpredictable situations safely makes sense.
Speaker 1 (24:43):
What else?
Speaker 2 (24:43):
Smarter fail safes, not just a big red emergency stop button,
which can sometimes cause its own problems if a robot
just freezes mid-action. We need more intelligent systems, systems that
can gracefully degrade performance if something's wrong, guide the robot
to a safe, stable position, maybe enter a low energy state,
even if it means abandoning the task, prioritizing safety above
(25:05):
task completion when necessary.
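As a rough illustration of graceful degradation rather than a hard stop, here's a hypothetical Python sketch; the mode thresholds and the controller methods are assumptions made for the example, not any vendor's real safety API.

```python
# Hypothetical sketch of "smarter than a red button" fail-safes:
# instead of freezing mid-action, degrade gracefully toward a safe posture.
from enum import Enum, auto

class SafetyMode(Enum):
    NOMINAL = auto()    # full task execution
    DEGRADED = auto()   # reduced speed/torque, task continues cautiously
    SAFE_STOP = auto()  # abandon task, move to a stable low-energy pose

def select_mode(fault_severity: float, near_humans: bool) -> SafetyMode:
    """fault_severity in [0, 1]; thresholds here are illustrative only."""
    if fault_severity > 0.7 or (near_humans and fault_severity > 0.3):
        return SafetyMode.SAFE_STOP
    if fault_severity > 0.3:
        return SafetyMode.DEGRADED
    return SafetyMode.NOMINAL

def apply_mode(mode: SafetyMode, controller) -> None:
    # `controller` and its methods are assumed stand-ins for a real interface.
    if mode is SafetyMode.SAFE_STOP:
        controller.limit_torque(0.2)   # clamp joint torques hard
        controller.goto_stable_pose()  # crouch or brace rather than freeze
    elif mode is SafetyMode.DEGRADED:
        controller.limit_speed(0.5)    # halve motion speed, keep working
```

The design point is the middle state: prioritizing safety doesn't have to mean an instant, rigid freeze, which can itself be dangerous for a balancing machine.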
Speaker 1 (25:07):
Okay, tech solutions, but what about.
Speaker 2 (25:09):
Rules? Absolutely crucial. Clearer, more comprehensive regulations, addressing liability, defining
where and how different types of robots can operate, tackling
the ethical dimensions of autonomy.
Speaker 1 (25:22):
And transparency seems key too.
Speaker 2 (25:23):
Hugely important. More transparency about what these machines are doing,
how they're learning, how they make decisions, especially when operating
autonomously near people. It's this combination technology policy, public understanding
that's essential for building trust and making sure we can
actually coexist and collaborate safely with these advanced machines.
Speaker 1 (25:43):
So we've covered a lot of ground, from that
pretty startling image of a robot, you know, losing control
during a.
Speaker 2 (25:48):
Test, engineers scrambling.
Speaker 1 (25:50):
Right all the way to these huge, ambitious national projects,
these robot boot camps designed to tame that unpredictability. What
seems maybe terrifying at first glance, it's also I think
a really powerful sign of this huge transition we're.
Speaker 2 (26:05):
In a major shift happening right now.
Speaker 1 (26:07):
Yeah, we're moving fast from a world where these kinds
of advanced robots were mostly sci fi or stuck in
very specific industrial settings, to a world where they're becoming
real powerful physical, rapidly learning realities that are about to
step into our shared spaces.
Speaker 2 (26:23):
Absolutely that future we used to read about or see
in movies, with intelligent machines walking around interacting with the world,
it's not really the future anymore. It's rapidly becoming the present.
These aren't just ideas, they're tangible machines learning and adapting
incredibly quickly, challenging everything we thought we knew about automation
and intelligence, which.
Speaker 1 (26:42):
Brings us to maybe the final thought for you listening
to this deep dive. As these amazing, powerful machines become
more integrated into our lives, changing healthcare, factories, or emergency response,
maybe even our cities, what's our role in all this?
What responsibilities do we have, not just the engineers and
the regulators, but each of us. How do we understand, adapt,
(27:02):
and help shape this future where humans and advanced robots
need to coexist, hopefully collaborate.
Speaker 2 (27:08):
It's a huge question facing all of us.
Speaker 1 (27:10):
It really is. It feels like the dawn of a
completely new era, and figuring out how we navigate it
together safely, well, that's going to define a lot about
the decades ahead.