Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Matt Maw (00:02):
But in 2008 the primary means of communication to the branch network was via dial-up modem.
I literally had banks of NetComm modems and US Robotics 56K modems.
That was the single biggest connection, and those that needed slightly more bandwidth were on frame relay.
So at the time, Tats was the single reason why Telstra was
(00:24):
continuing to run the frame relay network.
They couldn't shut it down until we migrated off.
Michael van Rooyen (00:29):
Today I have the pleasure of having a chat with Matthew Maw, known as Matt Maw.
He is the Chief Delivery Officer for our critical infrastructure business, Orro Critical Infrastructure, and today we are releasing this podcast around the Melbourne Cup event.
Matt has a great history of working in critical environments.
Particularly, we're going to talk today about his experience and time at TATS Group, where he was the CTO between 2008 and 2016
(00:52):
.
And we thought it'd be brilliant to have a chat about the challenges of running such a major event.
It's pretty critical to the country.
It's not critical infrastructure as we think about it today, but I think it's important we talk about it.
Once you gave me the history around the event and the criticality, it was fascinating.
I thought we should let customers and people who listen to this hear that.
So welcome, Matt.
(01:12):
Thanks very much, MVR.
Before we get started, give us a bit about your journey.
I know that you spent time at Cisco, Nutanix, et cetera, but you have quite a deep history.
Do you mind giving us a bit about your career journey and then what led you to become the CTO of Tats, which is the largest gaming and wagering business, particularly between 2008 and 2016?
Matt Maw (01:30):
Yeah.
So look, my journey has been one of finding organizations and industries that are ripe for digital change.
So, fundamentally, my career has been really driven around helping organizations adopt digitalization and driving outcomes associated with that.
So early in my career, I've spent time at pharmaceutical
(01:51):
manufacturing organizations where we literally built factories and delivered physical pharmaceutical products.
I've spent time in the mining industry and I've also spent time in the delivery of health services.
I was the first CIO for one of Australia's largest aged care providers and was actually instrumental in putting in one
(02:11):
of Australia's first digital care planning systems within the aged care environment, and it's certainly very topical these days as we get an older population and more and more of us start to enter into that.
Driving digitalization in that space literally saved lives.
We were losing patients due to poor handwriting, to not getting access to medical records.
So, yeah, driving the digitalization of that space was
(02:34):
pretty critical.
The highlight of that digital transformation really was around Tats Group, and I was both, I guess, lucky and unlucky, depending on how you look at it, to be at Tats right at the heart of the digitalization of the entire industry.
So when you think about today, we've got a plethora of online gambling providers; I'm sure you can rattle off half a dozen
(02:55):
straight off the top of your head.
When I first started at Tats, they didn't exist.
There'd just been a legal change to the environment, and they were now flooding the market.
So I took over an organization that was predominantly a bricks and mortar business, one that still earned billions of dollars through bricks and mortar type organizations.
That's the typical TABs, and I'm sure you've all got the image of the smelly TAB and the people generating, you know,
(03:18):
bets through that environment.
But then how do you digitalize that environment whilst not losing that revenue base?
It's very easy for people to sort of look at a digital journey when you don't have to worry about that sort of incumbency of things.
And then how do you do so in a heavily regulated environment?
So TATS was regulated at a state level, not a federal level.
So although we ran central systems, we were still regulated
(03:42):
by New South Wales, by Victoria, by Queensland.
Everybody had their own individual regulatory environments that we had to meet, and they were incredibly onerous.
So, you know, how do you drive a digital agenda in that space?
So yeah, it was a fascinating journey to get to that point, and I'm glad I did it and glad I'm not doing it anymore.
Michael van Rooyen (04:00):
Well, I
guess we're on the cusp of
Melbourne Cup again and, yeah, it would certainly have been a very interesting journey to go through the digital transformation, but also to have these multiple events running and the race that stops the nation.
And you know, I guess I think about the race you had to run to get prepared for that day.
I mean, what did you call it earlier when we chatted?
The most stressful?
Matt Maw (04:19):
boring day ever is the
outcome that we were chasing.
Michael van Rooyen (04:23):
I really like that, and I guess it's fascinating for me, the amount of preparation and how you had to plan and scale for that.
You mentioned one point I'm sure we'll touch on as we talk: the time you had a major lotto draw on at the same time as the Melbourne Cup, so even a double bubble, which would be fascinating.
So off the back of that, you know, managing infrastructure for large scale organisations, especially during these
(04:43):
high-stakes events such as the Melbourne Cup, is certainly no small task on its own.
Can you give us an overview of your responsibilities when you were the CTO and how your team supported these major gaming events?
Matt Maw (04:54):
Yeah.
So it's important to remember that Tats Group wasn't just a wagering business.
It also owned all the lotteries, bar Western Australia, across the country.
That's your typical Aus 7s, your Powerballs, your typical block game lotteries that ran.
We also had one of the country's largest radio networks in Radio TAB, so we had over 300 transmitters across the country that we looked after.
(05:15):
We also had a very extensive network of gaming within the pubs and clubs environment.
So we did all the monitoring for the poker machines.
We did things like gaming services.
So if you're in Queensland, you know you have to put your license in to get into a pub or club.
That system was part of our environment.
So in all we had a little over 11,000 physical locations on the network, which made it the largest corporate network in the country, and, you know, it literally was a true 24 by 7 operation.
So when you look at my responsibilities, even though we had independent owners of each of those different lines of business, the decision was made to consolidate all of that infrastructure into a central, single platform, single hosted
(05:57):
capability, for economies of scale and for efficiencies, and so my role was literally to own that entire plethora of infrastructure and capability to deliver out those requirements.
And when you start to look at some of the interesting challenges that we had there, it wasn't just scaling up to the likes of Melbourne Cup Days and Aus 7 Super Draws, which, yes, we
(06:20):
had a Super Tuesday where we had both on one day, and literally two-thirds of the adult population interacted with my systems at least once on that particular Tuesday.
We also then had to scale down to a couple of transactions per second on a quiet Monday.
So it's not just good enough to scale up; we also had to scale down and still be profitable at very small
(06:40):
transaction rates, but then be able to scale up to very high transaction rates at the same time.
The other thing that we typically tend to forget when we think about Melbourne Cup is that it's right in the middle of Spring Carnival, and Spring Carnival constitutes nearly 50% of wagering's total revenue for a 12-month period.
Michael van Rooyen (06:58):
Really.
Matt Maw (06:59):
So there's six weeks of Spring Carnival.
So when you think of Cox Plate, Caulfield Cup, all those sorts of big race days, they literally constitute about 50% of wagering's total business.
We also forget that Melbourne Cup is MR Race 7.
That is burned into my brain.
But the second biggest race for the year is MR Race 6 and the third biggest race is MR Race 5.
(07:19):
So that race day, those spikes, you watch it during the day.
You know, you can see it through the transaction load.
You can see, well, there's race one, there's race two, there's race three, and then just about every racetrack across the country runs their own race day on that particular day.
So Brisbane, you know, Ascot will run one, and then they'll run one at Randwick, et cetera, et cetera, across the country.
(07:42):
So you have all of those people that are all betting on those local events at the same time.
So it's a lot more than that single race that stops the nation, which we all know and love.
It is actually a very extensive race day for the year, and you can imagine the sort of revenue that's being generated from that perspective.
One of the things that's critical to understand is that
(08:02):
for a lot of organizations, they talk about things like downtime and the loss of revenue that they generate during downtime.
You know, the bank network is a classic example.
If the ATMs go down, there's significant millions of dollars lost per hour.
The loss per hour is an interesting metric in that it's not really lost money, it's delayed money.
You know, so most people, if they need to get money out of an ATM and the ATM is not available,
(08:23):
they'll come back in an hour's time, two hours' time, three hours' time.
Sure, there'll be brand damage, there might be a loss of reputation, and it's not a zero cost.
But for us, from a Tats perspective, once that horse race jumps, once the balls drop in the lotto, there is no ability to get that money back.
That is gone.
(08:45):
You no longer can get more revenue on that.
So if you are down, then you're not getting back those outcomes.
What we also know (Melbourne Cup's a little bit different, because it's Melbourne Cup and that's always a bit different) is that nearly 80% of bets are placed within the last five minutes before a horse race jumps.
So the spikes that we get, just even on a normal Saturday, are tremendous.
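
Editor's note: a rough worked example of what that last-minutes concentration implies for capacity. The bet count below is hypothetical; only the "nearly 80% in the last five minutes" figure comes from the episode.

```python
# Illustrative only: how an 80%-in-the-last-five-minutes pattern turns a
# modest average into a steep peak. The bet volume here is made up.
bets_on_race = 1_000_000            # hypothetical bets on a single race
final_window_share = 0.80           # "nearly 80% of bets" (from the episode)
final_window_seconds = 5 * 60

average_if_spread_over_an_hour = bets_on_race / 3600
peak_rate_in_final_window = bets_on_race * final_window_share / final_window_seconds

print(f"spread over an hour: {average_if_spread_over_an_hour:,.0f} tps")
print(f"final five minutes:  {peak_rate_in_final_window:,.0f} tps")
# ~278 tps vs ~2,667 tps: capacity has to be sized for the spike, not the mean.
```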
So managing those spikes, managing those workloads, was a
(09:05):
critical factor in what we understood, and our Melbourne Cup journey started literally the day after Melbourne Cup.
So we would spend 364 days preparing for Melbourne Cup and, ultimately, the Spring Carnival that went around it.
So, yeah, it was a hell of a day.
It was, as I said, hopefully the most stressful, boring day that we
(09:26):
get, and if it wasn't boring, then we had problems.
Michael van Rooyen (09:29):
Yeah, I was
going to say that's what you
build for, right.
So in that preparation, the 364 days of preparing, what were some of the unique challenges that faced the team in preparing and managing the infrastructure for such high-demand events?
Matt Maw (09:41):
So critical to the journey was to actually understand the outcome.
I know that sounds really, really simple, but we took a lot of time and effort to break down the Melbourne Cup day that had just gone, to understand those spikes, to understand those loads, to understand those requirements, to really then look at what our data predicted that we were
(10:04):
going to get to, and did we?
Did we get there?
So we used to take a whole lot of, you know, statistics and readings leading up to the event.
You know, what was Caulfield Cup, what was Cox Plate, what was, you know, some of the State of Origin, you know, et cetera, et cetera.
So that we could then try and get a bit of predictive analysis as to what those outcomes would look like.
You know, what was the betting traffic going to look like, what
(10:24):
was that competition doing?
And then we would do a deep dive as to how closely we had predicted it, so that we could keep improving as we moved forward.
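
Editor's note: a minimal sketch of the kind of lead-up forecasting described here, scaling last year's Cup peak by the year-on-year growth seen in the earlier Spring Carnival events. All figures are invented for illustration; this is not Tats' actual model.

```python
# A naive forecast: assume this year's Cup peak scales from last year's by
# the same ratio that the lead-up event peaks grew year on year.
leadup_peaks_tps = {          # hypothetical observed peaks, not Tats data
    "Caulfield Cup": 640,
    "Cox Plate": 710,
    "Derby Day": 860,
}
last_year_cup_peak_tps = 1250
last_year_leadup_avg_tps = 690    # same lead-up events, previous year

growth = (sum(leadup_peaks_tps.values()) / len(leadup_peaks_tps)) / last_year_leadup_avg_tps
forecast_cup_peak = last_year_cup_peak_tps * growth

print(f"year-on-year growth in lead-up peaks: {growth:.2f}x")
print(f"forecast Melbourne Cup peak: {forecast_cup_peak:.0f} tps")
# After the day, the same comparison runs in reverse: how close was the
# forecast to the actual, and what does that say about next year's model?
```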
Once we understood what the journey looked like, what we had to do then was say, right, what is the most efficient way for us to get to those outcomes without having an unlimited budget?
Because ultimately, you know, it's very easy to drive high
(10:46):
transaction volume if you've got an unlimited budget.
But we were still an organization, and an organization that had very slim profit and operating margins.
You know, wagering typically operates at about a 2 to 3% profit margin on its business by the time it pays its regulation, pays its fees, pays its staff.
So it's not a business that had unlimited budgets.
So we had to make sure that we used that budget wisely in order
(11:09):
to achieve those results.
So it very much became about how do we get efficient at what we do, how do we utilize our scale, how do we use the economies of scale?
And then what we started to look at is how can we use our other businesses to give us some of that buffer capability?
So we then drove a standardized approach that set out common
(11:30):
infrastructure that exists across the lottery business, across the gaming business, across the wagering business.
We could then reutilize that infrastructure and move it from place to place.
Today we would call that cloud, you know, as most people have come to know and love it.
But back in 2008, there was no such thing as, you know, the public cloud.
There was no such thing as, you know, how do you do that.
So we essentially had to build our own private cloud capability,
(11:52):
being able to move those workloads, and it was a great plan right up to the point where we got Super Tuesday, when we had an Aus 7 Super Draw on the same day.
Our DR plan for wagering was to use the lottery capacity, and the lottery DR was to use the wagering capacity.
We needed both on the same day.
Michael van Rooyen (12:06):
So that was
an interesting and fun day,
super complex and a super bit of planning.
I just want to go back on a point you made earlier, the massive spike, right; it's a good point.
First of all, most people just think of Melbourne Cup, the, you know, race seven.
It happens, you know, and it's done.
But you made a good point about the lead up and down.
So I think about a statistical view of this, a big bell curve that you guys would go through, and it'd be pretty sharp up and down,
(12:29):
very short, sharp bell curve.
So if I think about spikes in traffic, even though they're transactions, thousands of transactions a second, et cetera, what measures did you take to ensure, you know, reliability and performance?
Matt Maw (12:39):
You know, with your own systems, vendors, good architecture, et cetera?
So resiliency by design became an important part of everything that we did, and so we were maniacal about designing our systems to guarantee resilience from the ground up.
We went to the nth degree in a lot of these respects; when we were deploying our data centers, we ran our own data
(13:00):
centers within our environments.
We had one particular pit where the diverse path was going to run through from out of one facility to the other, and it worked out to be about three meters' worth of common conduit that existed between the two.
That was not good enough for us, so we then went and deployed nearly another kilometer's worth of pits that we had to literally
(13:24):
plow into the ground in order to make sure that the cables ran through diverse paths and did not run through that same three-meter stretch of pit in the road.
So that's the sort of lengths that we went to and, in fact, on Melbourne Cup Day we went even further.
In general, we used to run dual dual everywhere.
So we'd have a primary host system, we would run a dual within the data center, and then on the redundant data center we
(13:47):
would run another set of dual hosts.
So essentially, for every one production host, we would have three DR hosts, and on Melbourne Cup Day we worked with our vendor community and we essentially had a fifth data centre worth of capability stored in the back of a pantech, and I hired two guys to sit in that truck, to sit halfway between the two data centres, so that in the event of any
(14:09):
equipment failure or any challenges that we had at either data centre, we were able to drive that truck to the location and deploy that extra set of equipment.
So those were the levels that we used to go to in order to make sure of those sorts of outcomes on that particular day; but when you're running at 1,200, 1,300 transactions per second through the systems, that's the sort of thing that you can do.
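
Editor's note: back-of-the-envelope arithmetic for the dual-dual layout just described; the number of host roles below is hypothetical, only the one-production-to-three-standby ratio comes from the episode.

```python
# Illustrative arithmetic for the "dual dual" scheme: a production host and
# its in-DC dual, mirrored again in the redundant data centre, giving three
# standby hosts per production host. The role count is made up.
production_roles = 40                      # hypothetical host roles
hosts_per_role = 2 + 2                     # primary DC pair + redundant DC pair

total_hosts = production_roles * hosts_per_role
standby_per_production_host = hosts_per_role - 1

print(f"hosts across both data centres: {total_hosts}")
print(f"standby hosts per production host: {standby_per_production_host}")  # 3
```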
Michael van Rooyen (14:33):
Wow, wow.
But how did you approach the risk management and disaster recovery planning to ensure that the operations were seamless?
Was it just, again, that meticulous planning, thinking of failure scenarios?
What sort of opportunities did you have to simulate failovers, et cetera?
Was that part of the weekly, monthly process?
Matt Maw (14:49):
So, look, it's a
really interesting question.
It's one of those things where you can't just get paranoid and you can't think, oh, what about if an asteroid?
You know, there is a limit to what you need to think about.
What you also need to do is be prepared to be brave throughout the year.
Yep, so things like a cable cut test.
(15:10):
Most organizations are pretty reluctant to do that, because you know that inevitably comes at a cost, because there'll be downtime, because you haven't thought about it.
We were prepared to suffer some of those challenges throughout the year to ensure that, come Melbourne Cup Day, we had those true capabilities in place.
We were also meticulous and maniacal about learning from our
(15:31):
mistakes and learning from our failures, so that every single time we had an issue throughout the year, we would do a root cause analysis.
We'd do a deep dive as to exactly what went wrong, what happened, what would be the potential impact on things like Melbourne Cup Day, Super Draws, and then really made sure that we drove a learning experience out of those exercises that then drove
(15:51):
into continual improvement programs associated with that.
So lots of data, lots of analysis, lots of learnings, and we needed to have a culture of learning from our mistakes.
We didn't try and hide them, we didn't try and cover them up.
Mistakes were going to happen, issues were going to occur, challenges were going to be part of our everyday life.
(16:12):
We needed to make sure that our culture was that we would own it, we would uncover it, we wouldn't take a persecutorial type perspective on it, and that we would have an open and comfortable place for people to put their hand up and say this went wrong.
And so culture was as much a part of the outcomes that we were able to achieve as good design and good
(16:34):
architecture.
Michael van Rooyen (16:35):
Yeah, fair
enough too.
You touched on building your own data centers.
Now, I know we're talking about 2008 to 2016, and there's a lot of discussion about public cloud, et cetera, and I'm curious to understand: you really led that transformation to building your own private cloud, which made a lot of sense at Tats then, and it may still be valid.
So what motivated you to make that change and build your
(16:57):
own, and what benefits did you get at the time?
Matt Maw (16:59):
So there's two elements to it.
There's a technical element and then there's actually a commercial element associated with it.
There used to be a running joke that we used to have undated resignation letters, so that if we ended up on the front page of the Courier-Mail on Melbourne Cup Day, that was an easy answer.
But, all joking aside, what it meant was that there was
(17:20):
accountability and ownership associated with the delivery of those capabilities.
The problem is, from a public cloud perspective, how do you write a commercial contract that says if you're down for 30 seconds, I want an SLA check written back to me for the up to $1 million worth of lost revenue that we're?
Michael van Rooyen (17:37):
going to get.
That's a fair point.
Matt Maw (17:38):
That's a very
difficult commercial structure
for anybody to sign up to.
So by owning our own data centers and having our own capability, we had control of those sorts of elements.
Now, that's not to say that I don't think public cloud plays a very important role for those burst capabilities.
So one of the things that we were looking at towards the end of my tenure there was: can we take our test and our
(17:59):
development environments?
We were a very big development shop.
We had over 400 developers within the organization.
That requires a lot of infrastructure, requires a lot of sandpits.
It requires a lot of development environments.
Can I pick those workloads up out of our private data centers, move them into the public cloud for the six or eight weeks of Spring Carnival, allow our developers to continue to operate based off the public cloud, get through Spring
(18:21):
Carnival with the infrastructure we had on-prem, and then bring it back down off the public cloud onto those private infrastructures to reduce our costs as we move forward?
So, looking at it as that: how do I take our totality of workloads, not just production but all the other workloads that exist within an organization, and then how do we drive that
(18:44):
out?
One of the critical things that we did was start to look at those workloads.
We classified all our workloads as either revenue generating, business critical or business important, so that everybody knew what those particular workloads were and where we would put them.
Revenue generation never went into the cloud.
Business important, the cloud was its place of first position.
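
Editor's note: a minimal sketch of that classification-driven placement, assuming a simple tier-to-location table. The tier names come from the episode; the placement rules, locations and workload names are illustrative, not Tats' actual configuration.

```python
# Classify every workload, then map each class to where it may run.
from enum import Enum

class Tier(Enum):
    REVENUE_GENERATING = 1   # e.g. betting and draw engines
    BUSINESS_CRITICAL = 2    # e.g. settlement, regulatory reporting
    BUSINESS_IMPORTANT = 3   # e.g. test, development, sandpits

PLACEMENT = {
    Tier.REVENUE_GENERATING: ["private-dc-1", "private-dc-2"],   # never public cloud
    Tier.BUSINESS_CRITICAL:  ["private-dc-1", "private-dc-2"],
    Tier.BUSINESS_IMPORTANT: ["public-cloud", "private-dc-1"],   # cloud as first position
}

def place(workload: str, tier: Tier) -> str:
    """Return the first allowed location for a workload of the given tier."""
    target = PLACEMENT[tier][0]
    print(f"{workload}: {tier.name} -> {target}")
    return target

place("wagering-engine", Tier.REVENUE_GENERATING)
place("dev-sandpit-07", Tier.BUSINESS_IMPORTANT)
```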
What did we get from owning our own infrastructure?
(19:04):
We ultimately had control.
We ultimately knew that Melbourne Cup Day is the second Tuesday in November.
Now everybody knows that.
Sorry, first Tuesday in November, we don't talk about that.
Everybody knows that, but big cloud providers don't.
If you're an Amazon or you're an Azure, you don't know that Melbourne Cup Day exists on a Tuesday.
(19:24):
And so how do we know if they're going to do a core infrastructure upgrade?
How do we know if they're going to do a network change?
So we went to extreme lengths to make sure that we could lock our environment down, that we didn't make changes.
We even went as far as getting Telstra, at the time our core network provider, to ensure that Telstra made no changes on Melbourne Cup Day; we went all the way to the CEO.
So we ultimately gained control of our own destiny and made sure
(19:46):
that we knew and could predict and control as many of the variables as possible, which you don't typically get in a public cloud environment.
Michael van Rooyen (19:53):
You touched
on Telstra as your provider of
the network.
You also touched on 11,000 sites to deploy this network so people can buy a ticket anywhere, or have a gamble or place a bet, I should say.
If I think about, again, the timeline when you did that, 2008 to 2016, really pre-NBN, you would have had to do a lot of hard work with Telstra at the time, with carriers.
(20:15):
Obviously, you must have made a significant increase in bandwidth and reduced costs as part of any WAN upgrade.
There are motivators for why you'd do it.
Can you share a little bit about this upgrade?
But also, if you had to reflect on doing that again today with the NBN, would that play a role?
Would you look at it differently from an SD-WAN point of view?
Like, can you just tell me what your thoughts are around that?
Matt Maw (20:40):
Yeah, it might be surprising for listeners to hear, but in 2008 the primary means of communication to the branch network was via dial-up modem.
I literally had banks of NetComm modems and US Robotics 56K modems.
That was the single biggest connection, and those that needed slightly more bandwidth were on frame relay.
So at the time, Tats was the single reason why Telstra was
(21:01):
continuing to run the frame relay network.
They couldn't shut it down until we migrated off.
Again, I mentioned earlier in the podcast the change that was happening from a macro perspective.
We were getting more and more competitors where we didn't have them in the past, with monopolistic retail licenses, and so we had various parts of the organization trying to
(21:23):
drive a richer experience.
So one of the things they wanted to do was drive video, you know, real-time video, into those branch networks.
Now you can imagine my feeling when they spoke to me and said I need to deploy real-time video over dial-up modem.
It was never going to fly.
(21:47):
So we also had a Telstra contract that was an amalgamation of some 30 to nearly 40 different Telstra contracts.
We didn't actually know exactly how many connections we had.
We had well over 30,000 Telstra services at the time.
We didn't have backup for most of the sites, so it was a significantly antiquated environment.
But at the same time, you've got to remember that the primary thing being transmitted was actually very small bits of data
(22:10):
.
A bet is literally sometimes as small as eight characters.
It's very small bits of data, just a lot of it being transmitted at the same time.
So we had to look at that and say, you know, how do we drive an efficient outcome that is going to allow us to meet today's requirements and then scale up?
So, yeah, we looked at things like Ethernet Lite and, you know, the forerunners to NBN and BDSL type services.
(22:32):
We drove a consistency of router.
You know, as I was saying before, how do we standardize our core infrastructure so we can utilize it across multiple environments?
We also did the same thing with our routers in our retail network.
So rather than trying to find the best shoe for the best foot, we went with a common shoe that every foot would fit into.
(22:53):
That meant that some sites got more than what they needed, but what it did mean was that we could roll out the exact same piece of equipment across every one of those 11,000 sites, which meant our sparing, it meant our standard operating procedures, it meant our efficiency.
It just drove down the price of maintaining and operating that environment.
Yes, it cost us a little bit more from a CapEx perspective,
(23:14):
but our OpEx really dropped through the floor.
And then, once we had that base level, what we could start to do was look at those individual sites that needed more capability.
We could then sort of use, shall we say, the golden screwdriver and drive up those network capabilities as needed.
So it was very much about driving for the outcome, driving for the end game, and looking at our operating capabilities as
(23:36):
we moved forward.
Michael van Rooyen (23:37):
Yeah, fair
enough, and I suspect now, with
NBN everywhere, it would probably just be your choice.
You know, multi-carriage probably would have been a consideration.
Obviously you want to hold the carrier to account because of the criticality of your SLAs, but I guess it would have probably given you a bit more freedom to have those discussions, possibly.
Look, SD-WAN is an absolute game changer when it comes to that sort of capability.
Matt Maw (23:59):
The ability to deploy
different carriage services,
different capabilities, wrap it all together.
Redundancy, you know, even something like Starlink or, you know, some of those sorts of capabilities that you can now wrap up with, still maintaining that same level of operational consistency, that same level of operation.
You know, you don't need to worry about whether a site is on carrier A or carrier B.
It operates the same, it looks the same, it just happens in the
(24:20):
background.
So SD-WAN from that perspective, and that's where, again, you've got to think about those operational characteristics.
How do you consistently drive an outcome at the lowest cost of operations you can get your hands on?
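
Editor's note: an illustrative sketch (no particular vendor's product or API) of the idea just described, that an SD-WAN-style overlay keeps per-site operations identical regardless of which carriage service sits underneath. Carrier names and the health flag are placeholders.

```python
from dataclasses import dataclass

@dataclass
class Uplink:
    carrier: str       # e.g. "carrier-A-nbn", "carrier-B-4g", "starlink"
    healthy: bool
    priority: int      # lower = preferred

@dataclass
class Site:
    name: str
    uplinks: list[Uplink]

    def active_uplink(self) -> Uplink:
        """Pick the healthiest, highest-priority path; the overlay policy
        (and therefore the operating procedure) is the same at every site."""
        candidates = [u for u in self.uplinks if u.healthy]
        if not candidates:
            raise RuntimeError(f"{self.name}: no healthy uplinks")
        return min(candidates, key=lambda u: u.priority)

site = Site("retail-0421", [
    Uplink("carrier-A-nbn", healthy=False, priority=1),
    Uplink("carrier-B-4g", healthy=True, priority=2),
    Uplink("starlink", healthy=True, priority=3),
])
print(site.active_uplink().carrier)   # falls over to carrier-B-4g
```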
Michael van Rooyen (24:33):
Yeah, fair
enough.
Moving slightly from the technology stack, I just want to talk a little bit about technology leadership.
I know that you've obviously run large teams, driven innovation, et cetera.
So if I think about your time running across many, many organizations, from integrator to vendor to customer, clearly in many situations, you know, if I think about your time, you
(24:56):
were leading a technology function of, I think, over 160 people.
How did you manage and align such a large, diverse team, because no doubt you had all sorts of people in that?
And then, you know, when I think about how you combined those separate business units into a single cohesive team for the mission of delivering, obviously, experience to customers.
Matt Maw (25:14):
Yeah, look, so TATS was an amalgamation of a number of different entities, and the integration of teams before I took over the group was: we'll have them all reporting to the same manager or the same chief executive, and then that's integration done.
And we actually ended up with a situation where one of the teams didn't trust one of the other teams, so they put a
(25:36):
firewall on the internal network so that they couldn't see them.
And then the other team decided, well, if you're going to do that, so am I, so I ended up having two different firewalls on internal networks.
And you've got to remember that our firewall infrastructure was regulated.
So I couldn't make changes to the firewalls without the regulators pre-approving.
So I couldn't even do internal changes without having the regulators sign off.
So you can imagine how inefficient that particularly
(25:57):
was.
So we actually embarked on a couple of things.
The first was we embarked on what we called the One Tats program.
The really simple way to explain it is when we used to ask people who they worked for, we started off with: I work for Golden Casket, I work for Tattersalls, I work for UNiTAB, I work for all the different entities.
And certainly by the time that I left, when we'd ask somebody
(26:18):
who they worked for, they worked for TATS Group, you know.
And that was the change of behaviours, the change to drive that together.
And there was a whole raft of things we did in order to do that.
You know, build an environment of safety, build an environment of career growth and opportunities.
We used to celebrate turnover, if that makes any sense, in that what we used to do is celebrate when people would grow their
(26:40):
careers to the point where they could no longer achieve what they wanted to within the organization, and then we would help them move into industry, move into other areas.
So I used to lose people to, you know, people like Microsoft, people like Avaya, people like Dell.
You know, one of my guys, my lead PABX engineer, went and became the lead PABX engineer for Avaya for Asia Pacific, based out of
(27:02):
Hong Kong.
So we used to, you know, laud that as a fantastic outcome, and so we would get people who would want to come and join an organization where they would grow their careers, grow their skills and have those capabilities.
One of the key things I used to use a lot is things like guiding principles.
So we spent a lot of time and effort to develop our guiding
(27:23):
principles.
We would then spend a lot of training exercises to make sure people understood what they were, but what that allowed people to do was be very autonomous, very low into the organization.
You know, help desk people, and, you know, they knew what they needed to do and how they needed to lean in the right area without any need of guidance from management or senior leaders, and so it meant that they felt very empowered to do
(27:46):
what they needed to do on a day-to-day basis.
They didn't feel micromanaged, they didn't feel like, you know, like there was the big weight of the hand, but they knew when they needed support they could get that.
Yeah, it was very much about helping people understand that, you know, growth and development was what we wanted, was the outcome that was lauded as successful, and giving them the tools
(28:08):
in order to drive that.
And then, you know, promote from within, all that sort of good fun stuff, is what we did to make that work.
Michael van Rooyen (28:15):
Yeah, great,
and is that the same sort of
principles and approach that you took to driving a culture of innovation, or fostering a culture of innovation and collaborating with the teams?
You know, particularly during these periods of transformation and transition, as well as, you know, having these high-pressure events at the same time, right?
Yeah, look, absolutely.
Matt Maw (28:34):
We used to have a
concept of push workloads.
So the architecture team used to have a requirement that says: how can I find things that we can do better, more efficiently?
And they'll push that into the project teams to deliver.
The project teams then delivered against it and pushed it into the level three operational support, who pushed down to level two, pushed down to level one, and
(28:55):
then, really interesting, level one then pushed back to the architecture team.
They would find the frequent flyers, they would find the ones that, you know, if we made some architectural changes, could be made more efficient.
And so we then created that flywheel of push that then allowed people to, you know, genuinely have a view that says they weren't a dumping ground, they weren't just the ones that
(29:16):
were left holding the can; they had that opportunity to take new workload at the top, push workload off down the bottom, and then create that flywheel of innovation that exists throughout the business.
We also then did things like, with our operational and project teams, we put in a process that every two months we would sit down as a management team, look at the individuals, look at the
(29:37):
projects that we had on, and then divvy up those teams between the operational team and the project team.
So it meant that you never knew, as an engineer, whether you were going to be in the operational team or the project team, which meant you didn't then end up throwing things across the fence that weren't operationally efficient, because then you could potentially be the person who ended up having to support that or drive it.
I like that, but it also meant that you weren't the people that
(29:59):
were always in that.
You know, you got to play with new things, you got to do new things.
So it really created that balance between innovation and operational efficiency, because ultimately, what we were measured on was uptime, resilience, reliability, all those sorts of good things.
Michael van Rooyen (30:14):
Yeah, that's
a great way to do it.
And then, rounding that out, knowing that you've been a CTO in a number of roles, what advice would you give aspiring CTOs who are looking at taking leadership roles, whether it be in large-scale environments and high-demand industries like what you've worked with, or vendors, integrators, et cetera?
Have you got any comments for people wanting to get into those
sorts of roles?
Matt Maw (30:33):
I'd say a couple of
things.
The first is that it's not beer and skittles.
You know, like, there's moments in my career where I've had the phone call at one o'clock in the morning where the person's gone: we unfortunately turned the power off to the data center and then turned it back on, and everything came up in the wrong order and corrupted a whole lot of data, and, you know, all that
(30:54):
sort of good fun stuff.
So the whole duck-on-water thing is a good analogy.
You know, sometimes what it looks like on the surface isn't what it's like underneath.
It is a journey.
What I would say, a few tips or tricks: take the time to bring the team on the journey.
Michael van Rooyen (31:11):
Yes.
Matt Maw (31:12):
You are never going to
do it on your own, and if you
think you can do it on your own, then you're in the wrong job.
It is a collaboration effort, whether it's your own internal team, whether it's your broader ecosystem of partners and capabilities.
Nobody does it all on their own.
So take the time to bring people on that journey.
Give them the tools, give them the way you want them to operate
(31:35):
.
Build an environment where people feel as though they can take a few challenges and take a few risks, but do so in a safe way, with appropriate guidelines and guardrails.
It means that that doesn't have an adverse impact on the overall organization.
Now, that's easier said than done, but if you can create that culture, you can create that right environment.
Then what you will find is that people will thrive, and when
(31:57):
people thrive, they do amazing things.
And that's probably my biggest advice: don't try and be the smartest person in the room.
Hire a whole lot of people that are a whole lot smarter than you, and they will do amazing things.
Michael van Rooyen (32:10):
If I then
just turn to think about your
new role, or the role you're running today, which is Chief Delivery Officer for the critical infrastructure business.
The reason I want to just touch on this for a couple of minutes, and we'll certainly do a further session at some point around this, is that maybe you could wrap up kind of those experiences of running these high-value events, the criticality of them,
(32:30):
understanding risk, all these things you've just talked about for the last period where we've been talking, and how that really relates to the critical infrastructure industry.
You know, how those skills and what we're developing and building for those customers relate.
Can you bring those parallels together?
Matt Maw (32:44):
Yeah, look, absolutely
.
The world of critical infrastructure is actually expanding into various different industry sectors.
There is a need to have highly resilient, highly capable platforms to deploy these application sets that
(33:05):
essentially make the world work.
Now the ability to have something as simple as a meat processing plant or a food distribution centre without technology is basically zero, and if the technology goes down then those operations stop.
And if we think about our time in COVID, even the perception of a lack of toilet paper, for example, drove huge disruption
(33:27):
into our supply chain and into our society, really.
So critical infrastructure is something that is a passion of mine, and we need to really think about our critical infrastructure with the same mindset that I had at TATS, at RSL Care, at a number of different organisations I've been at, which is resiliency by design.
We need to think about how we're going to design our
(33:49):
capabilities.
We need to think about how we do that in a cost-efficient and effective manner.
We need to think about how we operate that for the long term, and we need to think about what happens when things go wrong, because inevitably things will go wrong, whether that's bad actors doing bad things, whether that's internal people making changes for all the right reasons, but unfortunately it goes wrong.
(34:10):
There will be issues and things that happen within those environments, and we need to make sure that they are resilient, that they can self-heal, and that they are easy to resolve and bring back to an operational state.
Michael van Rooyen (34:21):
Just
reflecting on some of this work
you're doing, what are some of the most memorable or challenging moments you experienced in managing the infrastructure for, like, a Melbourne Cup and other large-scale events, and what key lessons did you learn and take
away from it?
Matt Maw (34:32):
Look, I remember
vividly one Melbourne Cup.
We had a memory leak in one of our applications that was consuming the resources on the environment, and, you probably don't know, but the horse race runs for a couple of minutes, and at that time we literally dropped to zero transactions for that period, because everyone's watching the race and seeing
(34:54):
what happens.
So I remember vividly we actually went through an entire web farm reboot during that period, in a rolling-reboot fashion, to try and get it back up, to solve the memory leak, to keep us up and running for the period.
So, yeah, there were certainly some moments of clenching.
And then, certainly, from an external perspective, no one knew that that happened.
But yeah, we timed it down literally, and we
(35:17):
came up with about six seconds before the end of the race, before everyone started logging back on.
So yeah, there was some moments.
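
Editor's note: a minimal sketch of a rolling web-farm restart of the kind described, under the assumption of a simple drain/restart/health-check loop. Host names and the health probe are placeholders, not the actual system.

```python
import time

WEB_FARM = ["web01", "web02", "web03", "web04"]

def drain(node):   print(f"draining {node} from the load balancer")
def restart(node): print(f"restarting {node} to clear the leaked memory")
def healthy(node): time.sleep(0.1); return True   # stand-in health probe
def enable(node):  print(f"returning {node} to the pool")

def rolling_restart(nodes, min_in_service=3):
    """Restart one node at a time so the farm never drops below capacity."""
    for node in nodes:
        # with one node out at a time, in-service count is len(nodes) - 1
        assert len(nodes) - 1 >= min_in_service
        drain(node)
        restart(node)
        while not healthy(node):
            time.sleep(1)
        enable(node)

rolling_restart(WEB_FARM)
```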
Yeah, some of the key takeaways for me: well, the old Scouts motto, be prepared.
You know, do your homework, do your effort, make sure that you do it right.
For us, Melbourne Cup Day used to start at four o'clock in the morning, where we always used to come in, have a breakfast, have
(35:39):
the teams ready to go.
Nothing was ever done on Melbourne Cup Day that we hadn't done before.
Everything was prepared, everything was: if this happened, then we did this; if this happened, then we did that.
We had structures, we had processes; no one had to think on Melbourne Cup Day, and that was a key element.
What we did is that, you know, we made sure that we had role-
(36:00):
played and we had stress-tested everything before that environment.
And so, you know, the key takeaway and learning is, you know, do your thinking during the middle of the day, do your thinking when the pressure is off.
Don't try and think at one o'clock in the morning.
Don't try and think in the middle of a critical incident.
Make sure that you rely on muscle memory at
(36:23):
those periods, and that means you've got to do your homework.
Michael van Rooyen (36:25):
Of course,
of course.
And then one last question for today's session.
Tell me about the most significant technology change or shift you've been involved with or seen in your time in the industry, or in your life even.
Matt Maw (36:39):
Wow, I've been through
it.
I think back, I don't know, nearly three decades in the industry.
You know, I was certainly there when voice and data collapsed into the network, and I went through, you know, we went through when virtualization came in and kind of drove, so we went through centralized back to decentralized, back to centralized, back to decentralized.
(36:59):
You know, we've been through that roundabout many times.
You know, probably the most significant technology shift for me really was the rise of the internet.
You know, I know that's going to sound very, very old school of me, but, you know, fundamentally that has driven an
(37:20):
interconnectivity that, you know, we never saw before, and we really still are seeing the implications of.
You know, I mean, without the internet we wouldn't have had a cloud.
Without the cloud we probably wouldn't have AI.
Connecting organizations together and really taking them out of essentially analog paper and really driving a digitalization
(37:44):
really was the first domino, if you like, of what we're now seeing as a multi-domino fall.
So yeah, I guess probably I'll go a little bit old school and say the rise of the internet and the interconnectivity of
organizations.
Michael van Rooyen (38:00):
That's a
good one.
I mean, again, it comes down to plumbing, right?
I think we're now at the point where the internet, and it has been for a while, you know, is stable.
It's there, it's the hyper-connected network.
Everyone's transitioned to using it as their backbone.
So I agree; I think, as a digital plumber at heart, the internet is absolutely one that I agree with.
It's kind of one of the wonders of the world, effectively, right,
(38:20):
to some point, and I think people that have grown up with it now, particularly the new generation, it's just there.
Connectivity is there; they don't think about it.
Matt Maw (38:26):
It just works, right?
Yeah, the days of not being connected are gone.
So, yeah, what we can now do and what we have done with it and what we are doing with it is quite amazing.
But yeah, technologist at heart, I guess probably, you know, that's probably it, and I agree with you.
Michael van Rooyen (38:43):
If I think about when the guys created ARPANET, et cetera, you know, I think it was the mid-60s.
I mean, if you think about fundamentally how it runs today still, it's a pretty impressive engineering design concept, end to end.
So I completely agree, it's a great one.
Matt, I appreciate your time today.
Thanks for the great insights into the major event that everyone in Australia knows about, and I look forward to
(39:06):
talking to you more around critical infrastructure in the future.
Matt Maw (39:09):
Thanks, MVR.