
July 19, 2024 40 mins

Panic has erupted in the cockpit of Air France Flight 447. The pilots are convinced they’ve lost control of the plane. It’s lurching violently. Then, it begins plummeting from the sky at breakneck speed, careening towards catastrophe. The pilots are sure they’re done for.

Only, they haven’t lost control of the aircraft at all: one simple manoeuvre could avoid disaster…

In the age of artificial intelligence, we often compare humans and computers, asking ourselves which is “better”. But is this even the right question? The case of Air France Flight 447 suggests it isn't - and that the consequences of asking the wrong question are disastrous.

For a full list of sources, see the show notes at timharford.com.

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:15):
Pushkin. When the trouble started in the middle of the Atlantic,
Captain Mark Dubois was in the flight rest compartment, right
next to the flight deck. He was in charge of
Air France flight four four seven, en route overnight from

(00:37):
Rio de Janeiro to Paris, but he was tired. He
had been seeing the sights of Rio with his girlfriend:
Copacabana Beach, a helicopter tour, and he hadn't had a
lot of sleep. The airliner was in the hands of
flight officers David Robert and Pierre Cedric Bonan, and when

(00:57):
the trouble started, First Officer David Robert pressed the call
button to summon Captain Dubois. When you're asleep and the
alarm goes off, how quickly do you wake up? Captain
Dubois took ninety eight seconds to get out of bed and
onto the flight deck, not exactly slow, but not quick enough.

(01:23):
By the time Dubois arrived on the flight deck of
his own airplane, he was confronted with a scene of confusion.
The plane was shaking so violently that it was hard
to read the instruments. An alarm was alternating between a
chirruping trill and an automated voice: Stall, Stall, Stall. His

(01:45):
junior co pilots were at the controls. In a calm tone,
Captain Dubois asked, what's happening? Co pilot David Robert's answer
was less calm. We completely lost control of the airplane
and we don't understand anything. We tried everything. Two of
those statements were wrong. The crew were in control of

(02:08):
the airplane. It was doing exactly what they told
it to do, and they hadn't tried everything. In fact,
one very simple course of action would soon have solved
their problem. But David Robert was certainly right on one count.
They didn't understand what was happening. I'm Tim Harford, and

(02:30):
you're listening to Cautionary Tales. The disappearance of Air France

(02:57):
flight four four seven in the early hours of the
first of June two thousand and nine was at first
an utter mystery. The plane was an Airbus A three thirty,
a modern airplane with an excellent safety record. In the
fifteen years since being introduced in the early nineteen nineties,

(03:18):
not a single passenger A three thirty had crashed anywhere
in the world. This one was just four years old
and fully serviced. The crew were highly trained, the captain experienced,
and there seemed to be nothing too challenging about the conditions.
And yet somehow Flight four four seven had simply fallen

(03:42):
out of the sky. Search teams found traces of wreckage
on the surface of the waves a few hours later,
confirming that the plane had been destroyed and all two
hundred and twenty eight people on board were dead. But
the black box flight recorder, containing possibly vital clues to

(04:03):
the cause of the disaster, was somewhere on the
bottom of the Atlantic Ocean. It wasn't until nearly two
years later that the black box was discovered and the
mystery could start to be solved. This, dear listener, is
not just a story about a plane crash. It's a

(04:25):
warning to all of us about what's coming. Air France
flight four four seven had begun with an on time
takeoff from Rio de Janeiro at seven twenty nine pm
on May the thirty first, two thousand and nine, bound

(04:45):
for Paris. With hindsight, the three pilots had their weaknesses.
Captain Mark Dubois, fifty eight, had plenty of experience flying
both light airplanes and large passenger aircraft, but he'd had
very little sleep. Pierre Cedric Bonan, thirty two, was young
and didn't have many flying hours under his belt. David

(05:08):
Robert, thirty seven, had recently become an Air France manager
and no longer flew full time. He was flying this
route to keep his credentials as a pilot active. Fortunately,
given these potential fragilities, the crew were in charge of
one of the most advanced planes in the world, an

(05:30):
Airbus A three thirty, legendarily smooth and easy to fly. Like
any other modern aircraft, the A three thirty has an
autopilot to keep the plane flying on a programmed route,
but it also has a much more sophisticated automation system
called assistive fly by wire. A traditional airplane gives the

(05:52):
pilot direct control of the plane's flaps, its rudder, elevators,
and ailerons. This means the pilot has plenty of latitude
to make mistakes. Fly by wire is much smoother and
potentially safer, too. It inserts itself between the pilot, with
all his or her faults, and the plane's physical mechanisms.

(06:16):
A tactful translator between human and machine, it observes the
pilot tugging on the controls, figures out how the pilot
wanted the plane to move, and executes that maneuver perfectly.
It will turn a clumsy movement into a graceful one.
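To make the idea concrete, here is a toy Python sketch, emphatically not Airbus's real control laws, of what such a protective layer does: it takes whatever the pilot commands and clamps it inside a safe envelope before passing it on, so a clumsy or panicked input cannot push the aircraft past its limits. The limit figures and function names are illustrative assumptions only.

# Toy illustration of flight envelope protection, not the real Airbus logic.
# The "translator" sits between the pilot's stick and the control surfaces,
# clamping the commanded pitch and bank to limits the wing can tolerate.

MAX_PITCH_DEG = 25.0    # illustrative limits, not real A330 figures
MIN_PITCH_DEG = -15.0
MAX_BANK_DEG = 33.0

def protected_command(pilot_pitch_deg: float, pilot_bank_deg: float) -> tuple[float, float]:
    # Return the pitch and bank actually passed on to the control surfaces.
    pitch = max(MIN_PITCH_DEG, min(MAX_PITCH_DEG, pilot_pitch_deg))
    bank = max(-MAX_BANK_DEG, min(MAX_BANK_DEG, pilot_bank_deg))
    return pitch, bank

# A panicked demand for forty degrees nose up is trimmed back to the limit:
print(protected_command(40.0, 0.0))    # (25.0, 0.0)
# With the protection gone, as on Flight 447 once the sensors iced up,
# the raw demand would go straight through, however unwise.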
This makes it very hard to crash an A three thirty,

(06:40):
very hard, but it turns out not impossible. As the
plane approached the equator, the junior pilot, Pierre Cedric Bonan,
was flying, or more precisely, was letting the autopilot fly.
Captain Dubois was with him. Ahead on the weather radar,

(07:03):
they could see tropical thunderstorms gathering, which at that time
of year and in that location was common enough. We're
not bothered by storm clouds, eh, said the old hand Dubois.
Young Bonan didn't respond. He was, it would turn out,
very much bothered by the thunderstorms, and many captains would

(07:26):
have chosen to divert around them for the comfort of
the passengers as much as anything. That wasn't a possibility
that was discussed. Instead, Dubois noted, we'll wait a little
and see if that goes away. And if not, then
what? Not Captain Dubois's problem. A few minutes later, at

(07:50):
eleven PM, Rio time, he pressed the buzzer to summon
David Robert so that Dubois could take a nap. This
wasn't particularly unusual. Everyone needs a rest after all, and
the junior pilots need to get some experience making decisions
about the plane. With the plane on course to fly

(08:12):
straight into thunderstorms, Dubois's decision to leave the flight deck
raises questions. The chief investigator of the crash, Alain Bouillard,
spoke to the writer and pilot William Langewiesche
about that. His leaving was not against the rules. Still,

(08:32):
it is surprising. If you're responsible for the outcome, you
do not go on vacation during the main event. With
Dubois gone, Pierre Cedric Bonan's nerves about the storms became
more apparent. Putain la vache, putain, he yelled at one point.
The outburst, the French equivalent of fucking hell, fuck, seemed

(08:55):
to be provoked by nothing in particular. He talked with
David Robert about how it was a shame that they
couldn't fly high enough to clear the storms. But they couldn't.
There's a limit to how high a plane can go.
The higher you fly, the further you are from dangers
on the ground, but the thinner the atmosphere becomes, and

(09:18):
the atmosphere, of course, is what the wings are using
to support the aircraft. Too high, and the margins for
error become tight. That's okay, though, because on an A
three thirty, the assistive fly by wire system always keeps
the pilots within those margins. As the plane approached the storm,

(09:41):
ice crystals rattled unnervingly against the windscreen and ice began
to form on the wings. Bonan and Robert switched on
the anti icing system to prevent too much ice building
up and slowing the plane down. Robert nudged Bonan a
couple of times to pull left, avoiding the worst of

(10:02):
the weather. Bonan seemed slightly distracted, perhaps put on edge
by the fact that they hadn't plotted a route around
the storms much earlier. A faint odor of electrical burning
filled the cockpit and the temperature rose. Robert assured Bonan

(10:24):
that all this was the result of the electrical storm,
not an equipment failure. But the ice wasn't just forming
on the wings. It had also blocked the plane's air
speed sensors, meaning that the autopilot could no longer fly
the plane by itself. A defrosting system activated to melt

(10:45):
the ice and unblock the sensors, but in the meantime
the pilots needed to take control. An alarm sounded in
the cockpit, notifying Bonan and Robert that the autopilot had disconnected,
and a message popped up, adding that at the same
time and for the same reason, the assistive fly by

(11:06):
wire system had stopped assisting. No longer would it be
the smooth tongued interpreter between pilot and plane. Instead, the
system was a literal minded translator that would relay any instruction,
no matter how foolish. Pierre Cedric Bonan was in direct,

(11:29):
unmediated control of the airplane, a situation with which he
had almost no experience. Still, all he needed to do
was to keep the plane flying straight and level for
a couple of minutes until the air speed indicators defrosted.

(11:50):
How hard could that be? Cautionary tales will return after
the break. Not long ago, Fabrizio Dell'Acqua, a researcher
at Harvard Business School, ran an experiment to see how

(12:12):
people performed when they were assisted by an algorithm. The
experiment was designed to be practical and realistic. It involved
professional recruiters being paid to evaluate real resumes, equipped with
commercially available software that used the sophisticated pattern recognition we
call machine learning to assess and grade those resumes. Some

(12:37):
of the recruiters were given software that was designed to
operate at a very high standard. For simplicity, Dell'Acqua calls
that good AI. Other recruiters, chosen at random, were given
an algorithm which didn't work quite as well, or bad AI.
They were told that the algorithm was patchy, it would

(12:58):
give good advice, but it would also make mistakes. Then
there was a third group, also chosen at random, who
got no AI support at all. The computer assistance
was very helpful. Whether recruiters were given good AI or
bad AI, they made more accurate recruitment choices than the

(13:19):
recruiters with no AI at all. But here's the surprise.
The recruiters with good AI did worse than those with
bad AI. Why? Because they switched off. The group who
had the good AI spent less time analyzing each application.

(13:39):
They more or less left the decision to the computer.
The group who knew they had a less reliable AI tool
spent more effort and paid closer attention to the applications.
They used the AI, but they also used their own judgment,
and despite having a worse tool, they made more accurate decisions.

(14:00):
With the rise of powerful new AI systems, we tend
to ask who's better, humans or computers. The Dell'Acqua
experiment reminds us that that might be the wrong question.
Often decisions are made by humans and computers working together,
and just using the best computer doesn't necessarily get the

(14:23):
best results out of the humans. Pierre Cedric Bonan was
flying at high altitude in thin, unforgiving air into a thunderstorm.
It was dark, with an unnerving burning smell in the
cabin because of the electrical charge in the air and

(14:45):
the clatter of hailstones on the windshield. Then there was
the sound of the alarm disconnecting the autopilot. Bonan
needed all the help he could get, and just at
that moment the assistive fly by wire system disconnected. Bonan
had no real experience flying without it. When the

(15:08):
autopilot disengaged, Bonan grabbed the control stick and immediately the
trouble began. The plane rocked right and left and right
and left, and each time Bonan overcorrected. He was
used to flying in the thick air of takeoff and landing,
whereas at high altitude the plane behaved differently. And more importantly,

(15:33):
Bonan was used to flying with the assistive fly by wire,
gracefully interpreting his every move, and suddenly he was having
to fly the plane without it. Right and left and
right and left. It rocked ten times in thirty seconds.
The side to side rocking of the plane must have
been unsettling, but it wasn't particularly dangerous. What was dangerous

(15:59):
was that Bonan also pulled back on the control stick,
sending the plane into a climb. In the thin air,
a climbing plane could easily stall. Stalling is what happens
when the wings don't generate enough lift. A stalling plane
is pointed upwards trying to climb, but it's losing forward

(16:20):
speed and losing height, scrabbling for altitude as it slides
down through the air. So why did Bonan point the
plane up and risk a stall? It was an instinctive
reaction from a pilot used to taking control of the
plane at takeoff and landing when a stall is unlikely

(16:42):
and the main danger comes from not having enough height
and slamming into the ground. If there's a problem as
you're landing, you gun the engines and point the nose
of the plane upwards. That's what Bonan was doing. In
an article in Popular Mechanics, the aviation journalist Jeff Wise explained,

(17:05):
intense psychological stress tends to shut down the part of
the brain responsible for innovative, creative thought. Instead, we tend
to revert to the familiar and the well rehearsed. At
more than thirty seven thousand feet, the familiar and well
rehearsed action of pointing the nose of the plane up

(17:27):
wasn't going to make Bonan safer. It was bringing the
entire plane closer to catastrophe. In nineteen forty two, two
psychologists, Abraham and Edith Luchins (they were married), published the
results of a famous experiment. In this experiment, subjects were

(17:50):
given three different sized water jugs and asked to figure
out how to measure out a certain amount. For example,
one jug might have a capacity of twenty ounces, the
second one hundred ounces, and the third four ounces. The
question is, how would you measure seventy two ounces using

(18:14):
these jugs? The answer? Fill the one hundred ounce jug, then
pour off twenty ounces into the medium sized jug. Then
you fill the small four ounce jug twice from the
big jug. With pencil and paper, it's not too
tricky to figure this out. One hundred minus twenty minus

(18:34):
four minus four gives you seventy two ounces. The Luchins
gave their experimental subjects several of these problems, each with
different sized jars and a different target volume of water,
but each time the solution followed the same pattern. Fill
the big jar, then use it to fill the medium

(18:56):
jar once and the small jar twice. Now comes the trick.
The Luchins would give people a problem like this. The
big jar holds thirty nine ounces, the medium jar holds fifteen,
the small jar holds three. How do you get eighteen ounces? Well,

(19:21):
you can repeat the same process as before, fill the
big jar and use it to fill the medium jar
once and the small jar twice. It works, but if
you do it that way, you're over complicating things, because
you could simply fill the medium and the small jar:
fifteen plus three is eighteen. That's much easier. But a

(19:42):
lot of people missed that obvious solution because they'd already
solved a bunch of previous problems that required the more
elaborate method. Abraham and Edith Luchins also had a control group.
They hadn't been given any practice problems. Instead, they started
with the eighteen ounce problem, and of course most of

(20:02):
them found the simple solution. Not having practiced was actually
an advantage. They saw the problem with fresh eyes and
solved it quickly and simply. The people who had practiced
tended to get stuck with a clumsy solution. The Luchins
called this the Einstellung effect. Einstellung is perhaps best translated

(20:25):
here as state of mind. The practiced participants found a
simple rule of thumb that seemed to work, and so
they began applying it unthinkingly. As the Luchins put it,
the problem solving act had been mechanized.
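For the curious, here is a minimal Python sketch, not from the episode, of the two jug problems described above: mechanised applies the practised recipe (fill the big jar, pour off the medium once and the small twice), while simplest brute-forces whether a shorter combination of the jugs reaches the target. The function names and the search approach are illustrative assumptions, nothing more.

from itertools import product

def mechanised(big, medium, small):
    # The drilled recipe: fill the big jar, pour off the medium once
    # and the small twice.
    return big - medium - small - small

def simplest(big, medium, small, target, max_fills=3):
    # Brute-force search: each coefficient counts how many times a jug's
    # volume is added (+) or poured off (-); fewer total operations wins.
    best = None
    for a, b, c in product(range(-max_fills, max_fills + 1), repeat=3):
        if a * big + b * medium + c * small == target:
            steps = abs(a) + abs(b) + abs(c)
            if best is None or steps < best[0]:
                best = (steps, (a, b, c))
    return best

# First problem: 100, 20 and 4 ounce jugs, target 72.
print(mechanised(100, 20, 4))    # 72 -- the recipe works
print(simplest(100, 20, 4, 72))  # (4, (1, -1, -2)) -- no shortcut exists

# Second problem: 39, 15 and 3 ounce jugs, target 18.
print(mechanised(39, 15, 3))     # 18 -- the recipe still works...
print(simplest(39, 15, 3, 18))   # (2, (0, 1, 1)) -- ...but 15 + 3 is simpler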

(20:48):
Bonan's instinctive attempt to climb by pulling back on the stick demonstrated an
Einstellung effect in two ways. First, as Jeff Wise explained,
he was reverting to his instinct that when you're in trouble,
safety is to be found by pulling the plane up
and seeking height. Second, Bonan had almost always flown the

(21:11):
A three thirty with the assistive fly by wire,
and with the assistive fly by wire operating, you literally
cannot stall the plane. The computer won't let you. Bonan
had been trained by his own airplane never to worry

(21:31):
about stalls, never to even think about stalls, because stalls
simply can't happen. As Flight four four seven began to
lose air speed and altitude, an automated voice announced: Stall. Qu'est-ce que c'est,

(21:52):
said David Robert, what was that? Stall. Stall. Over the
next four minutes, the word stall would be repeated more
than seventy times. But Bonan and Robert, it seems, couldn't

(22:13):
grasp that a stall was possible. Their Einstellung, their
state of mind, made that risk inconceivable. I first heard
the story of Flight four four seven told on the
ninety nine percent Invisible podcast back in twenty fifteen. By

(22:35):
the way, ninety nine percent Invisible is amazing, and if
by some miracle you're not already a listener, go and subscribe.
You can thank me later. Now, in twenty fifteen, this
seemed like a warning about self driving cars. Here's a
pilot who grew so reliant on his assistive technology that

(22:56):
he forgot how to fly a plane at high altitude.
So what happens when the self driving cars take over
and we all become Pierre Cedric Bonan, unable to remember
what to do when the computer needs us to take over.
I see the story differently now. It's not just the

(23:18):
self driving cars. It's the appearance of artificial intelligence everywhere.
Consider those decision making algorithms that Fabrizio Dell'Acqua gave
to professional recruiters, which made them switch off and let
the algorithm handle the problem. He called that study falling

(23:38):
asleep at the wheel. I think you can see why.
Or generative AI, which we use to paint pictures, create videos,
write essays. Like the assistive fly by wire on an
A three thirty, it's a technological miracle. But like the
assistive fly by wire, the question is not how well

(24:01):
the computer works, it's how well the computers and the
humans work together. Consider the hapless lawyers who turned to
chat GPT for help in formulating a case, only to
find it had simply invented new cases. Not only did

(24:22):
this actually happen, it's happened more than once. In a
New York case, the lawyers were fined five thousand dollars
and ordered to write letters of apology to the judges
whose names had been taken in vain by chat GPT.
In Canada, another lawyer was let off with a warning.
The Supreme Court of British Columbia believed her when she

(24:46):
said she didn't really understand how chat GPT worked, which
I can believe too. By now, surely even the lawyers
have figured out that you can't ask chat GPT to
prepare a legal submission for you without checking. But problems

(25:07):
with generative AI can occur in more surprising places. Cautionary
tales will return in a moment. Jeremy Utley, Kian Gohar,
and Henrik Werdelin are experts in ideation, or, to use

(25:29):
its more everyday label, brainstorming: creative problem solving as a group. Naturally,
when they heard about the launch of chat GPT, they
asked themselves what this new tool might bring to the
ideation process. After all, chat GPT was a sudden sensation, powerful, flexible,

(25:50):
easy to use, and the problem that the lawyers had,
that chat GPT just makes stuff up, isn't a problem
for ideation because the aim isn't accuracy but to generate
a huge range of solutions as quickly as possible. You
work out the details later. So the three researchers
decided to conduct a simple experiment in which they compared

(26:13):
ideation sessions using chat GPT with ideation sessions without it.
Jeremy Utley, who teaches innovation at Stanford University, thought that
chat GPT would help teams produce vastly more ideas, maybe
twice as many, five times as many, one hundred times
as many. He told the podcast You Are Not So

(26:36):
Smart that he thought the question their study would answer
was how many multiples more ideas are AI assisted teams generating?
And then he saw the results. He told You Are
Not So Smart, my first thought was, oh no, oh no.

(27:01):
For many of the teams using chat GPT, the entire
collaborative back and forth of the ideation process stopped. Instead,
the room would be silent except for the pecking at keyboards.
Each person would be staring into their screen, displaying what

(27:21):
the researchers came to describe as resting AI face. And
the ideas they produced? Utterly mediocre. Equipped with the latest, greatest,
most sophisticated tool in the history of brainstorming. These teams
produced totally predictable stuff, nothing brilliant, nothing particularly varied, nothing

(27:48):
that didn't need a lot of development work, and above all,
just not many ideas, which is insane because ideation is
all about creating a huge variety of ideas and sorting
through them later, and chat GPT is absolutely a machine

(28:08):
for producing a huge variety of ideas. It was the
Einstellung problem again. What people really needed to do
was to engage with each other and engage with the AI,
prompting it, discussing the prompts, going back to the machine,
mixing things up, varying their queries, asking for more. But
what chat GPT gave them looked a lot like a

(28:33):
Google search bar. You type in your question, you get
an answer, and then you stop. You feel like you've
seen this situation before, and so you do what you
always do, and if it doesn't work often you just
do it again. You get stuck. Pierre Cedric Bonan was

(28:59):
certainly stuck. His instinct was to pull back on the
control stick, which was stalling the plane, which was sinking, sinking,
sinking towards the Atlantic Ocean. All around him and
David Robert, alarms were sounding, including the alarm Stall,
Stall, Stall, but they just didn't seem to be able

(29:23):
to diagnose their self inflicted problem. By this time, even
the air speed indicators had defrosted. There was literally nothing
wrong with the plane. If they'd gently pointed the nose
of the plane downwards, it would have regained speed and
lift and pulled out of the stall. They had plenty

(29:46):
of altitude to do that, but they didn't. Robert had
pressed the button to summon Captain Dubois from the rest cabin.
Fuck, where is he? In a panic, he mashed it
again and again. Fuck, is he coming or not? Remember,
Captain Dubois took only ninety eight seconds to reach the

(30:08):
flight deck. What's happening? Dubois seemed calm given the circumstances.
David Robert and Pierre Cedric Bonan were not. Bonan had
stalled the plane, which was plummeting out of the sky
nose way up in the air at one hundred and
fifty feet per second. David Robert had noted that the

(30:31):
air speed indicators had failed, and although the other readings
were accurate, including the Stall, Stall, Stall, he didn't believe them.
The Air France pilots were hideously incompetent, says William Langewiesche.

(30:51):
Langewiesche argued that the pilots simply weren't used to flying
their own airplane at altitude without the help of the computer.
Even Captain Dubois had spent only four hours in the
last six months actually flying the plane rather than supervising
the autopilot, and that was with the help of the full

(31:14):
assistive fly by wire system. If the plane flies itself,
when do the pilots get to practice? So far, we
haven't seen that problem with modern AI systems, but it's
obvious that trouble is coming. Think of the recruiters who

(31:36):
fell asleep at the wheel, the lawyers who didn't understand
chat GPT, and the brainstorming groups who stared slack jawed
at their screens rather than talking to each other. In
each case we can see an all too human willingness
to abandon our own judgment and let the computer do

(31:57):
the thinking. And the more we do that, the less
practice we will get. Better AIs are coming, of course,
and that will only make things worse. The psychologist James Reason,
the author of Human Error, explains why skills need to

(32:18):
be practiced continuously in order to maintain them. Yet an
automatic control system that fails only rarely denies operators the
opportunity for practicing these basic control skills. When manual takeover
is necessary, something has usually gone wrong. This means that

(32:39):
operators need to be more rather than less skilled in
order to cope with these atypical conditions. This is
called the paradox of automation. Unreliable automation keeps the operators
sharp and well practiced, but the better the automated system gets,

(32:59):
the less experience the operators will have in doing things
for themselves, and, cruelly, the weirder the situations will be
when the computer gives up. You might say, well, then
we shouldn't use these automated systems. Pilots should practice their

(33:20):
skills rather than using assistive fly by wire. We should
memorize phone numbers instead of relying on our smartphones. Kids
should learn long division rather than using calculators. Heck, books
are a disaster. In the good old days, before books,
people used to just remember fifteen hour epic poems such

(33:40):
as the Iliad. Wow. That's not going to happen. Anyway,
these tools don't just make life easier, they improve our performance.
You can do more sophisticated calculations with a pocket calculator
than without one. A library can contain vastly more information

(34:02):
than any human could memorize, and modern planes with autopilots
and assistive fly by wire are much, much safer than the
old fashioned kind. But there is a price to be paid.
Sometimes we'll find we can't remember a phone number or

(34:23):
how to do long division. Or perhaps we'll find we've
asked an AI system to help us brainstorm, or to
help us decide who to hire, or write new laws,
or help us control weapon systems or plan military strategy.
Maybe we stop paying attention, or become so helplessly out

(34:46):
of practice that when the computer lets us down, we
don't even notice. By the time Captain Mark Dubois returned
to the flight deck, it was still possible to rescue
the plane: point the nose downward, regain forward air speed,

(35:07):
and dive out of the stall. The plane still had
enough altitude to make that possible, but Dubois would have
had to take in a lot of information in a
very short space of time to diagnose the stall, and
neither he nor Robert could directly see that Bonan was
still yanking back on the control stick instinctively trying to climb.

(35:31):
The plane was falling so quickly that some of the
indicators had stopped giving readouts, and the ones that were
working might have seemed unbelievable because of the extreme speed
of the fall. And then there's a fundamental ambiguity in
a stall. You're pointing up, but you're falling down. To

(35:51):
stop descending, you'd first have to dive. That can make
it difficult both to diagnose the problem and to talk
about it. Less than a minute after Captain Dubois entered
the flight deck, there's an exchange. Robert says, you're climbing.
Then he says, you're going down, down, down, down. Is

(36:15):
that an instruction to point the nose down or a
description of the plane which is falling fast? Captain Dubois
echoes going down. Bonan asks, am I going down now?
Robert and Dubois both disagree. Robert answers, go down. Dubois says, no,
you climb. Here, that's a description, not an order. Robert adds,

(36:38):
go down. Bonan says, I'm climbing, okay, So we're going down.
It's a mess. Are they climbing or going down? Both?
The nose is pointed up, Bonan's stick is back, and
they're falling at more than ten thousand feet a minute.
Maybe Captain Dubois has realized they're stalling. Maybe not. He

(37:02):
doesn't say so directly, and he's not at the controls. Bonan is.
All the while, the computer voice is adding Stall, Stall. It
takes another minute before there's some kind of clarity. The
plane has fallen through the ten thousand feet mark. There's

(37:23):
now less than a minute left. Robert says, Climb, climb, climb, climb.
Of course he does. The plane is plummeting. Bonan replies,
but I've been at max nose up for a while.
At last, Captain Dubois seems to understand what Bonan has done. No, no,

(37:45):
don't climb. At this point, David Robert pushes a button
to switch control to his seat and pushes the nose of
the plane down. Bonan, presumably panicking, pushes his button and
silently takes back control of the aircraft and sticks the
nose back up. It doesn't matter, it's too late anyway,

(38:08):
They only have seconds left. Pierre Cedric Bonan's wife is
back in the passenger cabin. Their two young sons are
back in Paris. Does Bonan realize they're about to be orphaned?
Probably. We're going to crash, he says. This can't be true. Fuck,

(38:30):
we're dead, says David Robert. In less than three seconds,
the plane will belly flop into the Atlantic Ocean, instantly
killing all two hundred and twenty eight people on board,
Robert, Bonan, Captain Dubois, Bonan's wife, Dubois's girlfriend, everyone. Pierre

(38:54):
Cedric Bonan's last words: but what's happening? For a full

(39:21):
list of our sources, see the show notes at Timharford
dot com. Cautionary Tales is written by me, Tim Harford,
with Andrew Wright. It's produced by Alice Fiennes with support
from Marilyn Rust. The sound design and original music is

(39:42):
the work of Pascal Wise. Sarah Nix edited the scripts.
It features the voice talents of Ben Crowe, Melanie Guttridge,
Stella Harford, Jemma Saunders, and Rufus Wright. The show also
wouldn't have been possible without the work of Jacob Weisberg,
Ryan Dilley, Greta Cohn, Eric Sandler, Carrie Brody, and Christina Sullivan.

(40:04):
Cautionary Tales is a production of Pushkin Industries. It's recorded
at Wardour Studios in London by Tom Berry. If you
like the show, please remember to share, rate and review,
tell your friends and if you want to hear the
show ad free, sign up for Pushkin Plus on the
show page in Apple Podcasts, or at pushkin dot fm

(40:27):
slash plus.