Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:03):
Welcome to Vocus Inspire, the podcast full of brilliant ideas
for business.
Speaker 2 (00:10):
Hi, I'm Luke Coleman, head of government and corporate affairs
at Vocus, Australia's leading specialist fibre and network solutions provider.
Before we get things underway, we want to acknowledge and
pay our respects to the traditional custodians of the land
from wherever you're listening. In this podcast, we dive headfirst
into what's on the minds of Australian business and government
(00:30):
leaders to help inspire you and your organisation to go forward,
go further, and go faster. So let's go.
Hi and welcome to the Vocus Inspire series podcast. I'm
Luke Coleman, and I'm joined today by two special guests
to talk about the topic on everyone's lips, AI, artificial intelligence.
(00:55):
I'm joined today by Peter Collins, who's the executive director
of Peter Collins and Associates, and Katie Ford, who is
the data and AI specialist at Microsoft. Now we'll start
off by getting a brief introduction of our two special
guests today. Katie, why don't you tell us a little
bit about yourself before we get into the questions?
Hi, Luke. Hi, everyone. Thanks for having me. So,
(01:17):
I am a lawyer by background, a bit of a strange creature in the tech area. I spent about 10 years in government, which is actually where I met you, Luke, doing a lot of public policy work, and then in one fateful twist, I ended up being introduced to Kate Burley,
(01:38):
who was then head of Intel in Australia. I got very inspired by women in tech like Kate Burley and also Genevieve Bell, could really relate to them, and found myself, quite strangely, doing things like AI policy and IoT policy.
I then spent a number of years at Data61, within CSIRO, our national
(02:01):
science agency, working with some of the most brilliant people I've ever met, people who were using machine learning to solve problems like how we're going to feed 9 billion people on the planet, how we create drought-resistant crops, and how we better diagnose mental illness. So I was highly inspired
(02:21):
working with them, and then I ended up coming to Microsoft, where I really wanted to go much deeper into the technical implementation of AI and data systems in organisations. So here I am. Fantastic. Thanks so much, Katie, for joining us today. Peter, over to you. Well, like Katie, I've had a diverse background. I really
(02:42):
started my professional life as a consultant at McKinsey and Company, and I became a specialist in working with leadership teams and boards.
In the course of that, the pieces around ethics caught my attention as areas where businesses were failing. We've had any number of royal commissions. I do work with our anti-corruption commission down in Melbourne, and I've done various things.
(03:03):
Our cricketers, when they sandpapered the ball that time, disgraced the country, unlike the Matildas, who've done nothing but make us proud. I did the reports into that.
That's grown; I'm three weeks into having just attained a DPhil at Oxford. I went off to do a doctorate to really be able to give the kind of advice in ethics that Katie wants to give in technology,
(03:26):
and I think, between the two of us, with ethics in my world and AI in Katie's, we just have to connect them to make this work.
Perfect introduction. Thanks so much, Peter. Katie, I might throw the first question to you. In terms of AI being used as a transformative tool, can you tell us about
(03:47):
some of the ways that ChatGPT can be used in organisations, and some of the lessons that you might have seen organisations learning to date? Yeah, absolutely. So we're nine months in, right, since GPT was publicly launched to the world. So we've had a good gestation period.
And it's been incredibly busy but
(04:08):
incredibly exciting, and I think what's made this different from other AI technologies that I've worked with in the past is that it's so personal, so humanlike in its responses, so accessible. We get not just the tech nerds within organisations playing with it; we get everyone, from the CEO to the secretaries, all the way
(04:31):
down through organisations, and that, I think, has driven very different conversations.
You've got education departments, police departments, telcos, retailers, small businesses, all of them thinking: we have all these problems accessing information, whether it's our customers accessing information
(04:53):
or our own employees accessing information in a timely way. And this has really created the impetus to help people understand. So how do we not just accurately capture information but summarise it, categorise it, and serve it up to people in a way that is really intuitive, because you're just asking a plain-English question? You're using natural language
(05:15):
query to find information and data.
So that's been really, really exciting. And because it's so easy just to plug and play, I've never before seen in my career, and I've been in the tech sector for about 10 years, as I mentioned, organisations go from 0 to 100 so quickly.
It's enabled, I think, a lot of fast learning, right?
(05:37):
So you go from a POC to creating a pilot, where you test it with a few hundred people, right through to production across the enterprise. That, I think, has been one of the most exciting things that I've seen. And most organisations that we work with are pointing these models at their own internal data, whether it's publicly available data or
(06:00):
internal-only data.
And they've been coming up with use cases that I would never have dreamed of. Actually, one of my favourite use cases: one of the universities decided that their IT department wasn't so popular with the rest of the university, and they realised that if they got the GPT models to generate answers to questions and
(06:21):
responses to tickets for them, then people felt like the IT department actually cared about them. And I love this idea of using machines and their capabilities to make humans more likeable, more approachable, and more accessible to their peers. So anyway, all sorts of great use cases. Obviously, I work a lot in education, and we've got a lot of departments
(06:43):
who are looking not just at how we equip students with skills for the future, given we're going to be living in such an AI-infused world, but also at how we save teachers' time and how we do things like personalised education. You know, if you're teaching 30 students in a classroom, for some the lesson may be going too fast; for some it may be going too slow. Even though we're one of
(07:05):
the wealthiest countries in the world, we cannot afford to have one-to-one teaching in our education system. So how do we use technology to enable some of that, to get better educational outcomes for students? Anyway, there's a whole lot to talk about, but I'll pass back to you.
Thanks, Katie. That's a great introduction. I love the idea of using ChatGPT to say, have you
(07:25):
tried turning it off and on again, so that the IT department doesn't have to get involved. Now, Peter, I might throw over to you. Let's have a look at some specific industries where you've had some experience. Can you tell me about some of the use cases for AI in healthcare and emergency services?
I could just run through a long list, so I'll give you some of them. We can now
(07:46):
investigate things in medical research; I used to work in that area. For instance, they are now able to get 55,000 X-rays and look at them in an afternoon. You've got to get the approval, you've got to work out where the X-rays were stored, and consent. I didn't know this until I met these researchers; they're brilliant. There are 1,300 of them at the institute I used to work at. They said the
(08:07):
least understood part of the human body is the bottom of the right rib, and they think they can draw out insights about cancers, genetics, and all the rest. I'll leave that side to them.
It's like leaving some of the technical aspects to Katie and her crew; it's brilliant. Emergency management: it's hard to believe, but down here in Melbourne, they're talking about
(08:27):
another Black Saturday kind of event. We're going to have those once-in-a-century fires every two years. That now means we face the risk, and I'm not predicting anything, that we'd have to evacuate, if the worst came to the worst, a million people. AI can do that; Heathrow's done it. I don't know if you've been to Heathrow lately, but you now get through Heathrow. They pick out all the family groups
(08:48):
and they get them through. We can now work all these very complex issues out, so that's addressing climate change in just that one instance.
There are other applications. There's a syndrome called Tourette's syndrome, where a young person with a nervous condition, in this case one in child protection, gets very agitated or spits at people. You can
(09:08):
make this personalised; that's the thing. I even like the word 'chat' rather than 'GPT' in this, because it implies an intimacy amongst friends. It's there as something personalised, like an AI tutor in the case of education, to help people work through that. And healthcare: you know, we'll always need nurses,
(09:30):
but this could be a complement to what nurses do in hospitals and all the caring that goes on, and can just add to it. I think we're at
the start of this transformative revolution. I know Katie used
the word 'excited' about 10 times. Well, I'm excited too, but I think it's transformative, and one of my concerns is that we can talk it down too easily,
(09:51):
or say, oh, you know... But I think we've got to see the potential and make it work, and my litmus test here, my sort of moral compass,
is about human flourishing: does it help us as a community, as a society, and as individuals? How can it enable us to do the things that make our lives all the richer?
Thanks, Peter. Now Katie, you mentioned earlier some insights into
(10:14):
how organisations are using AI now. Tell me, how can companies ensure that their leadership is aligned across the organisation, on one hand to create a safe environment to use this technology, but also to drive the necessary actions to capitalise on it?
Yeah, look, I don't think it's a trade-off. I
(10:35):
don't think it's, you know, manage risk versus innovate. I think you can actually do both at the same time, very, very well. In terms of getting alignment within an organisation, I think this is probably one of the biggest challenges we have. I've seen a few things happen.
One is where there's a history of AI
(10:59):
being something that was buried deep in the belly of an organisation, where somebody with an incredible background, a PhD in machine learning, was playing around with things that never saw the light of day. So in many organisations, there hasn't been a long history of using AI embedded into their core business processes, so there can sometimes be a bit of scepticism.
(11:22):
I think what organisations need to go back to is their 'why', and sorry, I know that sounds like a thing the business world talks about a lot, but I think the reason why, for example, the South Australia Department of Education moved
(11:46):
so quickly in embracing AI within schools is because they had a chief executive, a minister, and a CIO all highly aligned around the need to equip students with skills for the future so they can flourish, as Peter mentioned. And so that drove them.
And they looked at it from the perspective of: this is the opportunity, and this is the risk of not acting. You know, we
(12:07):
have a moral imperative as an education system to enable our students to succeed in the future, and this is a critical part of that. Then, what are the risks, and how do we manage them? The key ones that often come up are data protection,
accuracy of information, and bias, and all of these can be
(12:28):
managed very, very easily within the technology and within organisations. Most organisations have had to handle similar challenges before around risk, right? They've got quite mature risk management processes. And in terms of getting that alignment across the organisation, Luke, that you mentioned,
(12:50):
AI is a very cross-functional area. So what I advise the agencies that I work with to do is to set up a sort of governance board or steering board. Of course, you've got your IT people and your data science people who are working on the technology, but you've also got people from the business side, right? People who are customer-
(13:11):
facing, who are developing policies. So you usually have legal, cybersecurity, and, in some instances, HR. If there's a big change management process, you need buy-in from across the organisation, and everybody needs to be aligned around that mission, that 'why' you're doing it. And I think that just enables everything else to flow.
(13:33):
Thanks, Katie. Now, Peter, do you think that organisations should have some frameworks or guidelines in place that tackle the social and ethical impacts of AI? Look, absolutely, and to echo Katie's comments about management boards, I'd push it even further. I think we've got to get people outside the organisation to look at this. One of the
(13:54):
brilliant aspects of health ethics has been that every project committee has an Indigenous person as its leader and has two people on it who know nothing about the health project. I think we've really got to ramp up our decision making here so that we can
draw on the very rich traditions about non-discrimination, the right
(14:17):
use of data, data integrity, and about ensuring that this is beneficial to humankind. Even the fact that it drives productivity matters: ethics hates waste, because we've only got so little to work with. But the other side is, we're not that great yet at making decisions, and AI will test us. So I think
(14:39):
there's a whole aspect of modern ethics which says unintended unethical outcomes don't come about because people seek to do the wrong thing; they come about because people mismanage their decision process. So you get blind spots, you get groupthink, you get all these things that we know about, but we don't put them into the room.
We make decisions, or, you know, we think, oh,
(15:00):
we're fine, or whatever. And to pick up one of Katie's phrases: the first person to ask the 'why' question was Aristotle, Katie, 2,400 years ago. He called it the telos. He said, what's the telos of humankind? It's human flourishing. We've got to keep asking that question: is this leading us in that way? So that's the first part. The second is,
(15:20):
AI will challenge us. Do you remember that film about Alan Turing? At Bletchley, what was the first thing they discovered when they cracked the code? That they couldn't use that information to save lives at the time.
(15:41):
So it's not going to be a straight road. This will throw up dilemmas for us about where we use these things and how we use them. Do we give a preference to Indigenous education? Do we allocate resources deliberately that way? How do we think about our social licence to operate? How do we, you know, foster a global view of this,
so that globally it will work, not just for
(16:03):
advantaged countries like Australia, and that's a good thing, but how do we do that? So there's a lot here, but I think ethics has a long tradition and a very modern piece to add to that decision-making process. We've had too many royal commissions to convince me that we're good decision makers in business, and we need to really
(16:24):
take those lessons on board, and we can. There's no reason why we can't.
Now, I'm going to ask this next question of both of you, but Peter, I'm going to go back to you first because you're on a roll here. In your view, what are the biggest risks that AI poses to organisations, and, following from that, do you think that enough is being done with policies, practices, and regulation to mitigate those risks, both
(16:46):
in Australia and globally? The biggest risk to organisations is if they treat this with fear.
That's the thing: the biggest thing we have to fear is fear itself on this. We can thread our way through. We know how to put together the whole pieces around law and risk, etc., and put ethics there as well. And the other risk we have with regulation is that governments, by definition, are slow. I know they've
(17:10):
just passed the law in the EU, and there are various things happening globally, but I think it's incumbent on business, and that's one of the reasons I'm delighted to join Katie, from a global firm taking a lead on acting responsibly. I think businesses have a really strong leadership position to take here and to show the way, because we can't just wait for someone
(17:32):
else to do it. And we also run the risk that if we just make it a matter of compliance, we'll miss the intricacies here; we'll just kind of do it because that's what has to be done, and that would be a terrible shame. There's so much that can be done with it, and it will be. So that's my view of it.
But of course we need regulation, and of course businesses should act thoughtfully, but not just, you know, get afraid
(17:54):
of it.
I'm going to throw that same question to you now, Katie. What are the biggest risks that AI poses to organisations, and do you think enough is being done to mitigate those risks?
Look, I think one of the things we need to learn from the history of technology application in organisations is that we run the risk of throwing technology at a
(18:15):
problem and assuming that will digitally transform a company, or an organisation. It doesn't work that way. I've seen it many times; I'm sure many of us have.
We have to think about how to redesign our processes to do things better, right? And I think one of the risks is thinking, oh, we'll just replace that
(18:36):
task with some AI and some automation, and problem solved: productivity goes up, costs go down. I actually think we have the opportunity with these AI models, with them being embedded across our productivity tools, which is what Microsoft and other organisations are doing, to think about how we can do things better and how we can build trust in things. So, you mentioned regulation:
(19:00):
one of the reasons I joined Microsoft was because I was impressed with Brad Smith and the company's approach to AI, the responsible AI framework, the work they had been doing since 2017.
They brought responsible AI standards not just into the product but into the whole process, end to end, right from the beginning of the
(19:23):
cycle through to the end. But also, Brad Smith, the vice president, had said, you know, AI regulation is actually important, right? You and I get on flights all the time. There are multiple highly sophisticated technical systems that are keeping that flight safe. They're acting as a co-pilot
(19:47):
to the actual physical pilot on board. I don't know exactly how those systems work; I don't need to. I know that it's regulated. I know that these companies have a strong safety history.
And so there is trust, and I think whenever we use AI, putting trust at the centre of what
(20:09):
we do is absolutely critical. The other thing is, and I say this as, as I mentioned, a lawyer by background:
there is actually a lot of law that applies to AI already. You've got anti-discrimination law, you've got consumer laws, you've got tort laws, do no harm, don't be negligent, etc., and you've got industry-specific laws,
(20:32):
for example for medical technologies. There are a lot of laws that already apply. The question is: is our legal system maladapted to the age of AI? What do we need to do to make sure that it can move quickly and innovate as quickly as the technology changes?
So, yeah, absolutely, regulation is important, I think, for
(20:53):
every technology that's introduced, and for AI too.
Beautiful. Let's do a bit of blue-sky thinking here. We're going to have a future where we will need to coexist with AI. So, Katie, I'll throw to you first: what do you see as an ideal scenario in ensuring a safe future where we're working with AI, drawing on
(21:14):
its benefits to be more innovative, but particularly with respect to sustainability and the environment? What does the ideal scenario look like for you in future?
So, I'm very passionate about sustainability. I think any big player in the ecosystem here has to think about their footprint, right? And
(21:36):
I know from the Microsoft side of things, that has driven a lot of our strategy in terms of our cloud computing infrastructure. Back in 2012, we went net zero; in two years' time we will be 100% renewable-energy focused; and by 2030 we'll be water positive and waste neutral as well. So a lot of thinking
(21:58):
goes into ensuring that this is sustainable for the planet. Computing consumes about 10% of the electricity generated worldwide, so it's important that we build sustainability into the heart of this.
The other thing I want to mention, and you touched on this, Peter, is
(22:20):
the people side of things, because I think the technology is relatively straightforward, as I mentioned. I think part of the challenge is thinking about what this means for particular jobs and tasks within organisations. I think one of the risks here is that we have a fixed mindset about technology and
(22:42):
how it can be used, rather than thinking about, you know,
looking at the friction points that workers go through on a day-to-day basis. And you would find this in Vocus, in any government department, in any major company: when you speak to people who are actually hands-on doing the jobs, there are plenty of pain
(23:03):
points that we experience every day. And I think the opportunity here is to think about how AI can be like a co-pilot to remove that friction. I love what Service New South Wales did a number of years ago, and won many awards for, by the way, when they had a look at the citizen journey, the life journey, as they talk about it:
(23:25):
what are the pain points that our citizens have when it comes to registering deaths and births and marriages, and all sorts of government processes that they need to interact with the government for? How do we make that easier for them? I think that exact same methodology can be applied from the perspective of: what are all the pain points for our employees?
(23:47):
Leadership is about attracting and retaining smart people, right, people who want to grow. So I think what I'm excited about is the big opportunity to change the way we work for the better. And I think that needs to be done in collaboration with the workforce, to understand how things can be improved. So that would be a key thing from my perspective.
(24:07):
Beautiful. Peter, same question to you: coexisting with AI in future, what's the ideal scenario in which we have both a safe future and an innovative one? I'm not such a fan of 'coexistence', because it implies there's a thing called artificial intelligence on the one hand and something else on the other; it's too binary. I like 'co-pilot'
(24:29):
more. And if I were really bold, I'd even change the term 'artificial intelligence', because it implies there's this mechanistic thing over there that we've let loose, that doesn't have controls and will do all these terrible things, and you hear people talk about it taking over our lives. I think it just creates the wrong impression. I think what we have to say is: we know how to make technology work, provided we
(24:49):
bound its use and are very clear about what we're doing when we do so.
And I think that's where it leads us: to think about decision making in this context. Look, I have been called in to some disasters in decision making and artificial intelligence, and for the organisations, you know, it's been awful; they've gone for self-interest. In one case, three universities ignored
(25:13):
the Chief of Defence ringing them up and saying, don't do it, it's a war crime to do what you're about to do. And they still did it. And, you know, when they say to me afterwards, what should we do? I say, you should have listened a while ago, and you should get out of this thing: taking defence money from a foreign country that wouldn't have the right rules of engagement in foreign incursions. Now, this is
(25:36):
where we've got to be really realistic: organisations say they want to do the right thing, and they mean it, but they don't do it in practice.
Because the decision path goes wrong: our friend self-interest kicks in, bias kicks in, 'oh, we've got integrity because we've got values', all the things you hear and see in royal commissions. We've got to really be attentive to that, so I think
(25:58):
it's a real challenge to our decision making: how to make it work with us,
with us making the decisions about its use, not some thing that's going to, you know, escape our ability to control it. And I think that's very important, because it has so much potential; it has so many applications that we're just starting with, and the more we can
(26:21):
see that and guide its use in that way and make decisions, including the decision not to use it in some cases.
And I think that's important too. I'll give you one example: I'm in charge of a health ethics committee, and we publish our decision making, like the Reserve Bank does. I think leadership teams should do that. They can,
(26:42):
you know, go so far as to publish what they've decided, take out some confidential bits, but say, these were our reasons, in order to make decisions like a good judge does. Katie's an ex-lawyer, or still a lawyer, or has a practising certificate somewhere, but I mean, this is what we do publicly about transparency of decision making. And I think these are really things that we ought to do to make
(27:03):
sure that it serves us and that we do the right thing in using these technologies. So you'll hear a bit of my passion for this, but I see too many bad decisions made by organisations large and small. And you think, right, we don't start from a neutral standing point.
(27:23):
We've got to learn here and work out how to do this better, and we will; we can. And I think ethics has a role to play in that, as well as, you know, great organisations, global firms like the one Katie works with, and the disciplines she talked about, and others, saying, let's really take this down the right path, let's really get serious about this. So
(27:44):
that's how I see it.
And I think there are all sorts of applications here that we're only really scratching the surface of. Now I'm going to finish by asking both of you: if you had one wish, one thing that you could put out there to all business and government leaders, to either, you know, avoid making a mistake or to grasp a
(28:05):
real opportunity, if you had that one wish, Peter, what would it be?
Make better decisions; take it seriously. Do you know this country does not offer a course in decision making for senior leaders? No business school, no university; it's not offered anywhere, and it's very rarely found overseas, though some are teaching it now.
(28:26):
But our ability to make decisions, I think, could be a real determinant of the success of this. And the idea that we should just do the same thing we've always done with decisions, set up a committee or whatever, and make it work, I think we've really got to stress test that a lot more than we arguably are now.
Because it's a hard thing to make good decisions.
(28:48):
It requires real skill in a group; you've got to put different domains together. It's a deep skill, and you can possibly hear in this answer that this is my DPhil topic: decision making in crises, and you think of COVID.
And how badly many decisions were made in COVID.
(29:11):
And I think we've just got to remind ourselves that we don't have to end up like that. There were some good decisions as well, but we made some very poor decisions in the COVID period, and they had a huge impact. So I just think there's something here around decision making, and ethics has a lot to offer, even more so after the last 20 years. Great answer. Thank you, Peter. Katie,
(29:32):
same question to you. You get one wish for all business and government leaders: what would it be, and why? Yeah, I'd like to go back to that point about having a learning culture. I think, you know, there's so much talk about a growth mindset that we as individuals can have, but I look at the organisations who are doing this really, really well,
(29:53):
and what they've got is a common understanding of their mission. They've got a minimum viable understanding, and I think that's really important: having a common knowledge across the workforce of what AI is, what it means to them, and what the risks are that they should manage, and having a clear strategy for managing that risk, both from a personnel, human perspective,
(30:15):
but also in how you do this within the technology. And I think, you know, being as agile as possible: test things out, pilot things, get feedback, be willing to learn. I think the fascinating thing is that,
Developers, what I'm seeing is developers coming to the fore.
(30:36):
Software developers have always worked agile, right? That's how they build products successfully. And I'm loving watching these developers come forward and go, right, we've got sprints, we're going to push this out, we're going to get feedback. It's not going to be perfect, but we're willing to learn, and we want to talk to our colleagues and get as much feedback as possible. They're having really rich conversations, and it's an absolute pleasure
(30:56):
and a privilege to watch and support, from my job's perspective.
Thank you, Katie. Now we're going to finish up with some rapid-fire questions. I'm going to throw these questions at you really quickly, for off-the-cuff responses. Katie, I'm going to start with you and run through a bunch of these. What's your favourite piece of technology? Oh gosh, probably my smartphone, because I can listen to audiobooks and music whenever
(31:18):
I want.
How do you disconnect? Books, music. What's the most important thing that you do for your well-being? Exercise, I guess, and reading as well. I'm a bit of a book nerd. What's one thing that would surprise people about you?
A long time ago, see, I peaked too early in
(31:39):
my career. Um, I was a ministerial advisor, and at
the time I was, I had, um, the wine industry
and dairy industry, and I was known internally as the
advisor for wine, cheese, and chocolate, and I don't think
I will ever get a better title in my life.
What's the one personality trait that's most important to success? Um,
(32:02):
I think a learning mindset, I mean, I, I do
not pretend to be a successful person,
but I think the only thing I've done consistently through
my career from law, public policy, research, technology, etc. is
just be open to learning. Um, if somebody told me
back in,
You know, in 2020 when I finished, uh sorry, in
(32:23):
the 2000s when I finished school, that I would become
an artificial intelligence specialist, I would have, I would have
been shocked, so there you go, careers are funny things.
What's the one thing that there needs to be more
of in business today? Collaboration, internal collaboration, rather than fighting,
rather than a siloed mentality, collaboration to deliver a mission. Peter,
(32:44):
I'm gonna throw the same questions at you. What's your
favourite piece of technology? My iPhone, how do you disconnect?
I turn it off. I can't stand going out for
dinners in restaurants and 4 people are, you know, playing
on their iPhone. I just think, just turn it off,
put it in your pocket.
Drives me mad. I won't go on any longer, I'll drive
your listeners mad, but uh, unbelievable. We blame the iPhone.
(33:06):
We don't take responsibility for it ourselves, oh, the
phone's there, it could ring anyway. What's the most important
thing you do for your well-being? Every morning, I buy
a cup of coffee at the local cafe for my wife.
Skinny latte, what's the phrase, super hot.
And uh, I think if you can keep the
marriage going, if every day you start ahead with
(33:28):
a coffee, there you go, that's the little secret,
that's another story, but uh.
What's one thing that would surprise people about you? I
think the most profound work I've ever done was in
a neonatal intensive care unit with grief. I wrote papers
and worked to end
dreadful practices a while ago whereby babies were whisked away
(33:50):
from their parents after they died, with no right to funerals.
I helped to change that. I think that's the one
that would surprise people.
Uh, that, what's the one personality trait most important to success? Well,
just in case my sons are listening, hard work.
Uh, I think humility, which is the ethics word for learning,
(34:11):
I think you've got to be grounded. I think you've
got to just get on and do it and, and, um,
and just keep learning, keep learning, keep at it. What's
the one thing that needs to be more of in
business today? I think genuineness. Business telling us how
great they are about this or that, or they've got
this or that, it just doesn't wash with people; there's
such low trust in business.
(34:33):
And the other side of this is, I agree with
the collaboration, but I think collaborating outside of the boundaries
of your organisation.
And I think that's, uh, even with ethicists, whatever, they're
not as good as lawyers, but even ethicists have
their value. Peter Collins and Katie Ford, it has been
a great pleasure talking to you both. Thank you so
much for your time joining us on the Vocus Inspire podcast.
(34:57):
Thanks so much for listening. I hope you've enjoyed this
episode of Vocus Inspire, and we look forward to bringing
you more inspiration in coming episodes. If you've enjoyed this conversation,
we've got so much more to share with you. We've
just released a detailed report called Connectivity 4.0,
the new business imperative, featuring trends and insights from
(35:18):
industry leaders and experts and importantly, practical steps to help
you lead your organisation through change. Head to our website
at vocus.com.au to download the full report.
Speaker 1 (35:30):
And don't forget, if you want more inspiration and more episodes,
head to Vocus.com.au/podcast. You can follow us on LinkedIn and
Twitter to stay up to date with all things Vocus.
Listen out for the next episode of the Vocus Inspire podcast.