
September 22, 2025 • 19 mins

Optus is in the firing line once again over an outage that left customers unable to call Triple Zero for 13 hours.

In that time, four people died – including an eight-week-old baby.

Authorities later said they don’t believe the baby’s death is linked to the outage. 

Today, technology editor David Swan on whether the telcos can be trusted to run Triple Zero.

Subscribe to The Age & SMH: https://subscribe.smh.com.au/

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
S1 (00:00):
From the newsrooms of the Sydney Morning Herald and The Age.
This is The Morning Edition. I'm Tammy Mills, filling in
for Samantha Selinger-Morris. It's Tuesday, September 23rd. Optus is
in the firing line once again, this time over an
outage that left customers unable to call 000 for 13 hours.

(00:24):
In that time, four people died, including an eight week
old baby. Today, technology editor David Swan on whether the
telcos can be trusted to run 000. David, welcome to
the Morning Edition.

S2 (00:43):
Yeah, thanks for having me.

S1 (00:44):
Take us back to when the news of this first broke.
So this was late on Friday afternoon. What did the
Optus CEO, Stephen Rue, tell journalists at that time?

S2 (00:53):
Yeah, it was sort of 430 on a Friday, which
is really like take out the trash. Time for news
in general. If you have good news, you're telling people.
At like 9 a.m. on a Friday so that everyone
can get ahead of it and make sure that there's
a spot for it in the newspaper and on TV
and things like that. So when you're getting an email at, say,
430 on a Friday, not good. So I rang their

(01:15):
people and I said, what's going on? And they said,
we can't tell you. But they said, um, you will
want to tune in to this.

S3 (01:24):
The loss of the lives of three people, two in
South Australia and one in Western Australia, is absolutely tragic.

S2 (01:33):
It was a 000 outage that affected about 600 Optus customers in total.

S3 (01:38):
I also want to reiterate that we take full accountability
for the technical failure, and that we were unaware of
this for a period of time, which is an unacceptable
gap in time that I will ensure is fully investigated.

S2 (01:53):
He didn't specify the length of the outage and the
whole press conference lasted about ten minutes in total.

S4 (02:00):
That's life or death if people aren't getting one.

S3 (02:03):
Once we were aware of the 000 calls coming through,
we stopped the process and reinstated 000.

S4 (02:09):
The outreach to the public. It was hours before.

S3 (02:12):
The... the... the... the...

S2 (02:14):
There'll now be multiple investigations. But yeah, clearly just failure
all over the place here.

S1 (02:21):
It sounds like there was a lot of confusion and
a lot of questions still remaining. But before we kind
of get into that, the number that he talked about.
So an outage that affected 600 customers. What does that mean?
Is that the number of people who tried to call
000 during the outage, or the number of customers that
just couldn't use their phones?

S2 (02:42):
Yeah, around 600 people tried to call 000 and effectively couldn't get through. So that meant welfare checks had to be conducted on 600 people. So Optus and then other authorities, like police, had to check on 600 people individually to make sure that, you know, they were okay. It's important to note

(03:04):
as well that normal calls could get through.

S1 (03:06):
So that's so strange.

S2 (03:09):
Yeah, it's why Optus says it took so long for it to, um, become aware of the situation: no alerts or red flags came through to Optus, so they didn't notice it for hours and hours. It just specifically affected the 000 system and not regular calls.

S1 (03:27):
The 000 system, does that sit on another, I wouldn't even know the right term to use, but another system, as opposed to the normal calls?

S2 (03:36):
Yeah, that's right. And, um, this is stuff that will come out in the investigation, which part of that system has failed here. What's meant to happen is that if Optus's part goes down, Telstra's or TPG's network will then pick up the slack. Clearly that hasn't happened.

S1 (03:53):
Okay. And so by the next day so we're talking Saturday,
more information starts to come out, including that there'd been
another death linked to the outage. So can you run
us through what's been made public about the circumstances of
those deaths?

S2 (04:06):
Yeah. So, um, this is stuff that's still coming out now. But what we do know at this stage, on Monday morning: there was an eight-week-old boy and a 68-year-old woman from South Australia, and two men from WA, one aged 49 and one aged 74. The information about the fourth death, from WA, only came late on Saturday.

(04:28):
So I guess I had thought that this might happen
because Stephen Rue, in that first press conference, said that welfare checks were still ongoing. So I thought there
might be more to come. And sadly, there was. There
was a fourth death announced, um, late on Saturday that
emerged through police welfare checks. Police were going door to

(04:49):
door effectively and, um, checking in on the people that
had tried to call 000. And then it emerged that
a fourth death had occurred. Other information came out yesterday that, um, the newborn that had been reported, the eight-week-old, was unlikely to have died as a result of the outage.
The baby's grandmother said that she used another mobile phone
in the house, um, when hers wasn't working, so she

(05:12):
was able to effectively get through to 000 using a different phone that wasn't Optus. So, you know, we
can say that there were at least three deaths linked
to this outage.

S1 (05:24):
Oh. So sad. Um, can you tell us what Optus
has said about what caused the outage?

S2 (05:31):
Yeah, so our sources told us on Saturday, and this was later confirmed by Optus, that it was, um, a routine firewall upgrade. A firewall is the part of the network that deals with security, so things like protecting against cyber attacks and stuff like this. And that was being upgraded in the early hours of Thursday morning, so

(05:52):
about 12:30 a.m., and this outage started in South Australia and then, uh, spread quickly through WA and the Northern
Territory as well. And yeah, as I said, it didn't
affect regular calls. They could still be made. It affected
the 000 system.

S1 (06:10):
Kind of the system that you really don't want to
be affected. Right.

S2 (06:14):
That's right. It should be sacred. It should be the
thing that works at all times. There's no excuses.

S1 (06:20):
Yeah. And when did Optus first become aware there was
a problem?

S2 (06:24):
So Optus says that two customers called it at 9 a.m.
on Thursday. So this is about eight and a half,
nine hours after the issue started that people weren't getting
through to 000. So two customers called Optus and said
we're not able to get through to 000. But crucially,
those calls were not acted upon. So people called up

(06:47):
about this, and then those calls were not escalated to management. So Stephen Rue says that, effectively, the company wasn't aware until 1:30 on Thursday, when, um, people around him were told that there was this issue. So 1:30 p.m. on Thursday, that's, you know, 13 hours after

(07:10):
the outage started. So for 13 hours, effectively Optus wasn't
aware of the issue. But I think one of the
issues here at play was that Optus had offshored its, um, support capabilities. So these were call centres in the Philippines
and in India. And this will come out in the investigation,
I'm sure. But it seems that there was an issue

(07:32):
where people called Optus to try to warn them and
let them know they weren't able to get through to 000,
but then these offshore, um, support people didn't escalate it,
maybe didn't realise the seriousness of the situation. So, um, processes weren't followed.

S1 (07:49):
So we've got the outage that goes for, say, more than 13 hours, until 1:30 p.m. or thereabouts on Thursday. But neither the public nor, perhaps even more crucially at that point, police were told until, what, 5:30 p.m. on Friday or
a bit before. That's something like 40 hours after all
this first started happening. What happened?

S2 (08:10):
Yeah, that's sort of the crucial question of, well, there
are many crucial questions, but I think it's one of
the most important ones is as soon as Optus knew
about this and effectively fixed it right, they said, okay,
we're switching 000 back on. We're halting any upgrades. They
said at 130 on Thursday they effectively fixed whatever the

(08:33):
issue was. But then, you're right, it was sort of 40 hours that it took for people to become aware of this. And people like, you know, the South Australian Premier, Peter Malinauskas, he was angry at his press conference. He was deeply incensed.

S5 (08:50):
... is that once the problem has been identified, making sure that Optus are informing the appropriate state authorities of all the information they have as quickly as they've got it. And the fact that that didn't occur until after a press conference yesterday beggars belief. I've got no doubt ...

S2 (09:08):
And the federal government's up in arms as well, as they should be, too.

S6 (09:12):
Optus has perpetuated an enormous failure upon the Australian people
and they can expect to face significant consequences.

S2 (09:19):
Clearly, there have been multiple failures across this outage. The outage itself happened, and that's a failure; that could have been human error, and that's not common, but it happens. What's rare, what makes this remarkable, is that there were subsequent failures after that in the communications chain, and then obviously tragic consequences in terms of deaths. Those make this one of the worst corporate failures in Australian history. Um, it goes from being a routine outage, and that happens, to just this colossal failure across,

(10:05):
you know, communications, across, um, just all levels of the business. It seems like this has just been a
colossal failure.

S1 (10:16):
We'll be right back. And back to the cause of
the outage itself. This, quote unquote, routine network upgrade. That's
the same cause as the last outage, right? Like there
was an outage not that long ago that caused the

(10:37):
same thing.

S2 (10:38):
Yeah, that's right. So, um, the outage last time, in November 2023, that was a routine software upgrade as well, which sort of cascaded through the network.

S4 (10:49):
It's been described as the biggest national phone outage ever.
More than 10 million Optus customers across Australia are in
turmoil after the collapse.

S2 (10:57):
That one effectively led to the CEO's resignation as well.
The former CEO, Kelly Bayer Rosmarin.

S7 (11:04):
We really exist to give great service to our customers
and we're very sorry that we let people down today.

S2 (11:11):
It was similar to this outage in that it wasn't due to what you might assume, like a cyber attack or anything deliberate. It was kind of an own goal, or human error, in terms of this routine upgrade. That happens all the time; people upgrade their systems all the time. It's sort of just part and parcel of telecommunications and tech, where it's
People upgrade their systems all the time. It's sort of
just part and parcel of telecommunications and tech, where it's

(11:31):
like upgrades are just, you know, the day to day. Um,
but then, you know, when something goes wrong, then not
managing that in the correct way and then, you know,
tragic consequences.

S1 (11:42):
So is it actually not that unusual to have an
outage of 000, or is that in itself unusual?

S2 (11:51):
000 outages are more rare, far more rare. It's very common to have an outage, but that's more just of normal phone calls, for example. But, you know, Telstra as well has suffered a 000 outage and it's been penalised by the communications regulator. I think, though, this shows, in my opinion, that the telcos themselves can't be trusted
in my opinion, that the telcos themselves can't be trusted

(12:15):
to run this system. Um, I think one of the recommendations came out of what was called the Bean Review. Richard Bean did a review into the 000 system after the last Optus outage and said that we need, like, an independent body to run this system and make sure it's working as intended. Because whether it's Telstra, whether

(12:35):
it's Optus, um, for there to be repeated failures of this system is just not acceptable. If 000 isn't working, as we said earlier, it's the most critical communications infrastructure that we have. It's life and death. So, um, yeah, I think we do need something, an independent regulator or something, fixed here, because it's clearly unacceptable.

S1 (12:59):
So that could be a possibility, that something government-run or a completely different service takes over the operation, or the telecommunications, of 000.

S2 (13:09):
Yeah. And the communications minister has said that that was something they had been working on. So the recommendation had been accepted by government, and they said, yeah, we accept this and we'll start implementing it. It just hadn't happened in time for, um, for this outage, unfortunately.

S1 (13:25):
Yeah. Well, because it happened relatively soon after that 2023 one. Is there something specific to Optus, do you think, that's happening as well?

S2 (13:34):
Yeah, I think it is fair to say that outages
are part and parcel of the telco industry, but there's
something unique to Optus in which they've had these massive
failings that are at a different scale to what we've
seen from Telstra or TPG. I think it points to
a few things. I think it points to underinvestment in

(13:54):
infrastructure and staff resourcing, for example, that this can happen
time and time again. I think what Stephen Rue has
inherited as a relatively new CEO of Optus is a
network that's been neglected for a decade plus, and there's
no escaping that fact now. I think outages happen, but
for it to be at this scale, and time

(14:18):
and time again, it's just not an accident. It
just shows that you need to be pouring more money
and more effort into this, to making sure that, um,
something like this doesn't happen. So, um, the other factor as well, and this will come out, I'm sure, over time, is that Optus is owned by a Singaporean company, Singtel.

(14:39):
And to what degree then does that affect, you know, management of the company, when you're in Singapore and you're, um, managing a network in Australia? Maybe just inherently that means they're not as close to the local context here. It's been reported that Singtel has been interested in selling Optus in the past, in finding a new buyer

(15:01):
for the company. I imagine those questions might pop up again. Now, why would they be interested in owning something that's so trouble-prone and has so much controversy around it? So, um, I think there are questions to be asked about, uh, Optus's ownership as well.

S1 (15:18):
Okay. Um, so for that 2023 outage, Optus was fined something in the order of $12 million over its failures. Um,
I'm guessing we can expect probably greater punishment in this case,
given four deaths have been linked to it.

S2 (15:33):
Yeah, when there are deaths involved, it just brings things to a whole other level. And I think, yeah, the communications minister said on Monday morning, in a press conference, that Optus can expect severe consequences from its actions.
And I think everyone can probably agree that the $12
million fine that it received last time obviously wasn't enough. Um,

(15:55):
if this can happen again, then it just means that,
you know, the consequences weren't severe enough last time. So
I think we can expect fines of more than $12 million. I think, as well, I wouldn't be surprised if some of the families of the victims, for example, launched civil claims against Optus in court. I think if they can point to Optus's 000 failings as being, um, at

(16:17):
least partly responsible for some of the tragic outcomes here, then, yeah, I wouldn't be surprised at all if there was court action. The most severe thing that the regulator could do, like, by law, is revoke Optus's licence, um, completely. I wouldn't expect that; I think the telco landscape needs Optus, we need that competition. Even though Optus has failed time and
(16:39):
time again, I still think it will continue to exist, at least.

S1 (16:43):
And David, this is your beat, right? Like, you report on technology day in, day out. What are some of
the questions that you would want answered in relation to
this latest outage?

S2 (16:53):
Yeah, I think a lot of it is still to
play out. I think I want to know how this
company can be sitting on this information for so long and not let the relevant governments or authorities know. Forty hours is such a long period of time. And I
look back to the last outage that Optus had, and

(17:15):
it's clear that the relevant lessons haven't been learned. I
think we need to zoom out on the 000 system
as well. And as I mentioned earlier, we need to
make sure this never happens again. It's, um, a failure
of the highest order. So how do we fix the
system so that it's fit for purpose? As I said,

(17:35):
I think that requires, yeah, what the government called a 000 custodian. I think that's a good idea. We clearly can't trust the likes of Optus to be handling this itself. So I think some sort of independent oversight of 000 would make sense. And we need to look at the competitive telco landscape more broadly. Like, obviously Optus, uh, isn't doing

(18:00):
its part. It's got a reputation as being a scrappy competitor to Telstra, charging lower prices and, you know, being that alternative to Telstra. But clearly it hasn't invested enough in its networks or its infrastructure to be reliable, and customers can no longer trust it. So that begs some more existential questions about our telco industry,

(18:21):
I think, and whether it's sort of fit for purpose.
So there are some big questions to think about over
the next weeks and months as these investigations take place.

S1 (18:31):
Yeah, well, we'll be certainly very interested in the answers
to those questions. And David, thank you so much for
your time.

S2 (18:38):
Yeah, thanks for having me.

S1 (18:44):
Today's episode of The Morning Edition was produced by Julia Katzel.
Tom McKendrick is our head of audio. To listen to
our episodes as soon as they drop, follow the Morning
Edition on Apple, Spotify, or wherever you listen to your podcasts.
Our newsrooms are powered by subscriptions, so to support independent journalism, visit The Age or the Sydney Morning Herald and subscribe. And to stay up to date,

(19:11):
sign up for our Morning Edition newsletter to receive a summary of the day's most important news in your inbox every morning. Links are in the show notes. I'm Tammy Mills, this is The Morning Edition. Thanks for listening.