
July 29, 2025 43 mins


What happens when you stop treating technology as a cost centre and start viewing it as a strategic enabler for your organisation? Anna Campbell, Chief Support Services Officer at Whakarongarau Aotearoa (New Zealand's telehealth services), provides compelling answers in this thought-provoking conversation about breaking down traditional silos.

Anna's unique portfolio encompasses technology, data, people, marketing, and Māori services development – creating a powerful integration point that puts service users at the heart of operations. Rather than maintaining artificial boundaries between departments, this structure focuses everyone on a shared mission: delivering essential 24-7 health and wellbeing support across New Zealand, helping one in four Kiwis access care when they need it most.

The conversation explores how organisations can shift from viewing technology initiatives as budget items to seeing them as strategic solutions for real pain points. "Keep your mission critical and strategically aligned," Anna advises, emphasising the importance of measuring impact and articulating value in terms that matter to the broader organisation.

Particularly valuable is Anna's insight into responsible AI implementation. Starting with clear ethical principles, Whakarongarau Aotearoa has developed governance frameworks that prioritise privacy, transparency, and data sovereignty. From internal efficiency tools to clinical note summarisation, their AI applications follow the guiding principle of "nothing about us without us" – ensuring that technology enhances rather than replaces human connections.

For leaders navigating technological transformation, Anna offers practical wisdom: understand both the capabilities and limitations of tools like generative AI; focus on reimagining work rather than merely automating existing processes; and remember that change management requires acknowledging what's not changing alongside what is. By meeting people where they're at and maintaining trust throughout the journey, organisations can harness technology to deliver truly human-centred services.

Ready to transform how your organisation approaches technology? Listen now for insights that bridge the gap between technical possibilities and meaningful organisational impact.

Music by arnaud136 from Pixabay


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Today's episode features a valuable discussion on how businesses can strategically align technology, people and processes to drive genuine impact and achieve sustainable growth. We move beyond viewing technology merely as a cost and instead focus on its role as a core enabler for operational excellence and competitive advantage.
I'm speaking with Anna Campbell, the Chief Support Services

(00:21):
Officer at Whakarongarau Aotearoa, the New Zealand telehealth services. Anna's unique portfolio encompasses technology, data, people and marketing, providing a compelling case study on breaking down organisational silos. In our conversation, Anna shares how Whakarongarau Aotearoa leverages integrated solutions to deliver essential 24-7 health and wellbeing support across New Zealand.

(00:43):
We explore the strategic rationale behind their consolidated structure and the clear benefits it brings to both service users and internal teams. We also delve into the critical topic of artificial intelligence. Anna offers a pragmatic perspective on Whakarongarau Aotearoa's responsible and ethical use of AI, from internal efficiency tools to enhancing clinical support, always with a

(01:04):
strong emphasis on privacy and Māori data sovereignty. This discussion offers direct insights for any leader navigating technology transformation, aiming to optimize operations or seeking a clear path to AI adoption. Anna's advice on shifting the perception of technology from a cost to a value add, and the importance of foundational principles, is particularly relevant for driving tangible

(01:25):
business outcomes.
Join us to gain a clearer understanding of how integrated strategic thinking can truly fuel sustainable growth in today's dynamic environment.
Kia ora Anna.
Welcome to the show today.

Speaker 2 (01:35):
Kia ora.
Thanks for having me.
I'm looking forward to it.

Speaker 1 (01:39):
And great to have you as a guest as well. I'm going to ask you to talk a little bit about your background, because it's an interesting topic that I talked about a couple of episodes ago with Moderna's merger of HR and IT functions, and it's something you're living as well with your organisation, and we'll journey down that one in a moment. But talk to me about what your role is in the organisation

(02:00):
you're working for.

Speaker 2 (02:02):
Yeah, sure.
So I am the Chief Support Services Officer for an organisation called Whakarongarau Aotearoa. We're the New Zealand telehealth service for the country, and our job is to put health and wellbeing support in the hands of all New Zealanders. Generally, we help one in four New Zealanders get the right care in the right way, and we're a social enterprise.

(02:25):
We deliver free, government-funded national telehealth services 24-7, across a number of different digital channels. We've got 30-plus services that we offer. Probably the most well-known one is Healthline, which is the line that you call in the middle of the night when your child is sick, and 1737, which is our text mental health

(02:48):
network.

Speaker 1 (02:48):
I've been a big user of Healthline.
Yeah, it's saved us a couple of times.

Speaker 2 (02:52):
Oh, fantastic, yeah, and I mean that's the point of it, right? We are able to triage and get people the care that they need in the moment that they need it, at the right sort of level. And my role is the Chief Support Services Officer there, and so my portfolio covers all the technology, data and program teams, the people and capability team,

(03:16):
Māori services development and our customer and marketing team. So it's really across the end-to-end people, technology and process in the organization that helps our frontline clinicians absolutely do what they do every day well.

Speaker 1 (03:35):
Yeah, nice, and that's something I've been talking about a lot, and we talked about it before we started recording as well, which is that technology, when we often talk about IT, is used as a blanket coverall, but really there is an element of people, process, system and partnership in there. Like I say, you guys have got that structure.

(03:55):
What was it that sort of drove you towards having those two departments particularly, but also marketing? What was the decision to wrap those up under one reporting line?

Speaker 2 (04:07):
I think any executive's role in an organization is to put the customer at the forefront of what they do, and we've realized as an organization that, in order to do that, any change that you drive through an organization these days really often is a technology change

(04:30):
that impacts your people, yeah, and then you need to tell your customers about it, so that triangle is really an end-to-end operating system across the organization. So we just think in that way, and it's interesting when we put that structure in place. To start off with, there were questions from the team about how is this going to work, why are we doing this, what is this going to achieve?

(04:51):
And now in my leadership team meetings the team's regularly talking about the synergy that we get between those departments, and that we're able to deliver much better outcomes for our kaimahi, the people that work in the organisation, and our tangata whaiora, the people that call in looking for support, because

(05:11):
we're breaking down the thinking that stays within the functional channel to be much more about what are the outcomes that we're trying to achieve and what are the big problems that we need to go after.

Speaker 1 (05:20):
To simplify or solve. Yeah, and let's stick with the benefits; we'll come back to the challenges along the way later. But how quickly did that sort of start to happen, and was it an organic change, or is it something that you really had to work at? And by organic, I mean, did it sort of come naturally to the people around the table, or is it something you had to work on

(05:42):
with them to build up?

Speaker 2 (05:46):
I think we were really deliberate about it.
So, you know, very deliberate and very determined that our job was to make things better for tangata whaiora on a daily basis, and we recognize that to do that, the mixed skill set that we had, and the diversity of thinking that the different specializations brought, was going to help us deliver better outcomes.

(06:07):
And really we committed early that, as a leadership group, our job is about experience, and it doesn't matter what the title is. We're either creating amazing experience for the people that work in the organization, the people that the organization serves, or the tooling that people use. And so that focus on the outcome that we wanted and

(06:30):
the impact that we wanted to have meant that that team gelled really quickly.

Speaker 1 (06:43):
Yeah, and I guess the benefit that that builds into, with that speed of gelling, is also that it stops everything else being an afterthought. And what I mean by that is I've worked in corporate
environments before, worked for a bank, where there were a lot of silos because of the structure of the organization, and things tended to be an afterthought. You know, a technology conversation can get quite far down the path of delivery before anyone thought to wrap it back to the people or to how we're communicating.

(07:04):
But equally, a marketing journey could do the same if you go quite far out before technology gets brought on board. You guys are cutting that off, aren't you?

Speaker 2 (07:12):
As much as we can.
So that's our intention.
You know, human nature comes into it, so nothing's perfect, but yeah, we are, and we're being very deliberate about that process. And I think any organisation that starts to

(07:38):
think really clearly about, you know, is this right for the customer, is it right for the people, and can we execute it really well? Then things are delivered in a much more cohesive way, because you've thought about that triangle up front. And really, you know, remove technology, remove AI, remove any of those things. A leader's job in an organization is to get people to do differently than they did and imagine a different future. So to do that, you have to bring those three parts of the

(08:01):
triangle along with you.
So, yeah, it works really well.

Speaker 1 (08:05):
Yeah, and look, I agree. It's not an all or nothing kind of approach, it's a collection.

Speaker 2 (08:12):
I think my challenge, sorry to interrupt you, but my challenge back to organizations is I don't think it needs to be a structural thing, though. So for us that structure works really well. But actually silos in an organization are really the result of a question that hasn't been asked yet or a conversation that hasn't been had. So it doesn't matter what your reporting line is or what your

(08:32):
formal structure is in an organization. If you're taking that mindset of the outcome that you're trying to achieve, and that you need those multi-points of expertise, then it shouldn't matter whether the reporting line is gelled or not. It's about putting the outcome at the front of anything you're doing, and so you're having the right conversations in an

(08:54):
organisation. Yep.

Speaker 1 (08:56):
I've often talked about it as being a
problem-first approach, which is figuring out what problem you're trying to solve and then working your way back from there to actually solving it. And you're right on that one: you don't have to have everyone reporting to a single point to make it work, as long as everyone's at least focused on that problem.

Speaker 2 (09:14):
Yeah, I think you know, then, one of the
advantages of having that single point is that you've got that broader oversight, and you can make the connections for people if they haven't lined those up themselves. So, you know, that is one advantage of it. But anyone that's had a tech role would know that they spend a lot of their time politely telling people: yes, that's a wonderful,

(09:38):
new, shiny tool that you've found. Can you just describe the problem that you're trying to solve and the impact that that's going to have? And then let's start at that point, and then we'll come back to whether this is going to be the right one. If it's still shiny at that point, is it still?

Speaker 1 (09:52):
shiny, is it?
Yeah, yeah, absolutely. And I've seen it too often with the "we've got an idea, we've got an opportunity, we're going to deliver this outcome." Like, cool, what problems does this solve? It doesn't solve any problems. It's just "we were promised a deal if we could turn it on by the end of this quarter." This isn't going to go so well for someone.

Speaker 2 (10:12):
Yeah, yeah.

Speaker 1 (10:14):
So what other challenges have you run into?
Whether internally or externally, have there been many other challenges in the way this structure has sort of evolved and grown?

Speaker 2 (10:26):
Yeah, no, actually it's worked very well for our organization. Obviously the normal teething changes that you have when you're setting up a new team, but actually, as a leadership team, we've worked

(10:46):
really hard to make sure that everyone in the wider team knows each other as people, understands our mission as a team, understands how the work that they do every day fits into their overall picture and why they're part of that broader team, and actually that's been a really fun process. And so, as a result of that, if you take things like, I don't

(11:10):
know, an induction program or something like that, that touches all of those functions, and so they understand each other's input into it, and I think appreciate each other's work a lot better, understand what those different teams are trying to achieve, and then how collectively it rolls up to serve our tangata whaiora in a better way.

Speaker 1 (11:31):
Nice. Have there been any?
And I'm going to pick on otherroles in the organisation here
when I do this.
Have there been any challenges from outside your team? I'm looking at finance, because they often can be a barrier to driving change, but have they bought into this as well?

Speaker 2 (11:48):
Oh look, we're a small, tight team. Our finance team is fantastic. We have a great team model where we have robust debate in our organisation. We love the Lencioni model. So we start with a basis of trust, have great debate and then move to agreement and accountability, so that we're delivering results, and so we focus on trying to unlock

(12:12):
capacity in the organization, capacity and creativity. We have a whole lot of clinical experts, and, you know, our job really is to make sure that we're unlocking their capability on a daily basis. And I think, you know, the finance team and us are really aligned on that, and actually they've been really helpful in helping us define:

(12:34):
What does capacity mean? How do you unlock capacity, and how do you actually measure that? Rather than focusing in on "this function is around just cost saving", because a lot of functions, I think, focus in on how do you use tech as a cost-saving device rather than a value-adding device.

(12:57):
And you know, while, like all organizations, we definitely have goals around streamlining our technology, making sure we don't have any redundant services, making sure that we've got it right-sized for the organization, we are all really aligned on making sure that we are safely delivering good services and we're helping our frontline team

(13:19):
be able to do that to the best of their ability. So, you know, I think finance helps us in that respect a lot.

Speaker 1 (13:26):
Yeah, nice, nice.
And one of the things I did want to ask you about was that cost center versus value-add piece. So let's go there now. For anyone listening to this who's exploring a change or transformation, and it doesn't

(13:46):
have to be as bold as starting to align these services the way we've been talking, but what would be your advice to someone who's about to embark, as to how they approach that technology-as-a-value-add mindset?

Speaker 2 (14:03):
It's tricky, right, because technology functions often have the biggest budget in the organisation. So if there's any financial pressure, that's the area that people come after to try and deliver cost savings and efficiencies in the organization, and I think that's a realistic part of the role that you've got to understand. However, if you are going after the right things in the

(14:29):
organization and you are understanding the drivers of the organization, where the pain points are in the organization, then you can really quickly flip that conversation to be a value-add conversation. So it kind of loops back to what we were talking about before: really understanding the problems that are going to have the biggest impact in the organization and are going to

(14:51):
take away pain points for the people that work in the organization or the people that use the services of the organization. And so to me it's about getting real alignment to what is the strategy of the organization, what are those pain points, and then what are the big problems that we're trying to solve, and then being very rigorous around prioritising around those aspects.

(15:14):
Second, I think, is measuring your impact and being able to talk about the impact that you're having. And third is really knowing your numbers, so that often you need to make an investment up front to be able to deliver either savings or capacity or problem solving. Being able to really articulate that in a way that makes sense

(15:35):
to the business and loops it back to the issues that the business is trying to solve. So, you know, in a more succinct way, it's like keeping your mission critical and strategically aligned.

Speaker 1 (15:48):
Yeah, yeah.
And it's interesting there as well, because for many New Zealand businesses, putting this into a New Zealand context, their technology team, and I'm using quotes here, is largely outsourced. They're using tools that they've picked up off the shelf, and it's more than just the traditional tech functions.

(16:09):
They've got an HRIS, maybe they've got, well, they've definitely got a finance system of some sort. So when you look at it from that sense, traditionally many organizations don't have ownership around those. They don't have someone owning each one. Finance, it's pretty clear that it's owned by the finance department. But a CRM or an HRIS gets a little bit vaguer as to who's

(16:30):
responsible. And what you've kind of got here is actually we've got a function who's responsible for not just the people that deliver tech, but the systems that our people use as well.

Speaker 2 (16:40):
Correct.
Yeah, I'm really keen on co-ownership as well. The tech has to work and it has to deliver to the people. So if you think about our structure, but again structure agnostic, you know, there's a human experience, there's customer experience, there's your tech experience; everyone is an experience owner. So if you put the experience at the forefront, and the impact

(17:03):
and the problems you're trying to solve, then actually we all own those things. We have the functional ownership in the HR team for an HRIS, and we maybe have the technical ownership in the tech team, but ultimately, if the system doesn't work, it doesn't work and the people who are trying to use it suffer. So, rather than arguing about who owns it, actually, if we focus in on who's using it and what's the experience that

(17:23):
they're having, we can have a much better outcome as an organization.

Speaker 1 (17:31):
Yeah, absolutely, and I think that line of sight, and it's a good way to put it back around, is the impact on the user base. But you've also got better line of sight than a lot of organizations I see, where there's just no one in the middle between the user base and the decision makers, so the decision makers may just never know that what the people are using isn't working. And you've at least put that middle, call it an integration point, but that middle layer in there that says: here's what we're hearing

(17:53):
from the people who are using this tool.

Speaker 2 (17:55):
Yeah, and also I think in anything you're doing,
and particularly if you're starting to move into the AI space as well, that frontline feedback loop is really critical. So making sure that their view is integrated into your design thinking, and you're giving them tools and ways to provide that feedback, also means you're going to get a better outcome,

(18:17):
because the person who's on the phone talking to the people who are contacting your service knows so much more about what they need than someone who's sitting in the support centre. Regardless of how focused they are on a good outcome for that service user, it's the person who's dealing with them every day that knows what they need, but also knows the things in their flow of work that are causing them issues in terms of

(18:39):
being able to deliver that amazing service. So you do have to listen to them.

Speaker 1 (18:43):
Yeah, definitely, and I think there's a really good point in there. As someone who's used the Healthline service before, as I mentioned, what I found really valuable with the service was the nurse that I ended up dealing with for our daughter. She had an infection and we were ringing Healthline as a first point, and because of the tool she was using, the way she was approaching it, she was able to focus more on the triage than on

(19:05):
the technology and the questions she needed to ask, if you catch what I mean there.

Speaker 2 (19:10):
You know, first of all, we have the most amazing team. We are so lucky with the clinical experts that we've got, and they help so many people in exactly that situation, right when you're really vulnerable in the middle of the night and you're really worried about your child. The work that they do is just phenomenal. Very, very proud to be associated with those awesome humans. If you've got your tech working right, you shouldn't notice the

(19:33):
tech in the process. It should feel really human still, even if you've got really advanced tech working in there. I think the trick to making tech really successful is remembering that it's a human interaction that you're trying to create, and the tech should support and enhance that and

(19:53):
protect it. So your governance and your stance around privacy, and your stance around ethics, and how you make sure you keep everything safe, is really important. But ultimately it's about enhancing that human connection and human experience, 100%.

Speaker 1 (20:09):
Yeah, the example I remember: when we made the call, obviously a nurse can't see a child through a phone, and so she asked for a photograph. It was stagylitis on her arm. She sent us a link so that we were able to take a photo and upload it, and then within seconds it was a straight case of: yep,

(20:33):
That's what this is. You need to go to after hours. Perfect. Again, like I said, the focus was on the triage, not the system inertia that she needed to overcome to be able to do that.

Speaker 2 (20:41):
Correct. Yeah, and ultimately, when you're calling in, you're in distress and you want to know that you're getting really solid advice. That one-way photo has been really phenomenal for our service users. It means that, as you've just described, the nurses can see what is going on really quickly and provide the assurance of: no, you're fine to stay home, or actually you do need to go in to
(21:02):
see someone and you need to goquite quickly.

Speaker 1 (21:05):
Pretty quickly.

Speaker 2 (21:06):
Yeah, and we were joking about our locations and living. For people like you that live out of the city, that's a big deal, to have to drive into an ED in the middle of the night. So you want to know that you need to go, right?

Speaker 1 (21:19):
Yeah, yeah, exactly, and you've been given that
guidance, so that when you're standing in front of the registrar, I guess, at the hospital or the ED or wherever, someone's told you to be there.

Speaker 2 (21:29):
Yeah, that's right.

Speaker 1 (21:30):
Yeah, yeah.

Speaker 2 (21:31):
That's right yeah.

Speaker 1 (21:31):
That's right.
We were armed with just that little bit more information that wouldn't have otherwise been there, and I think that's really good. As a service user, I think it was fantastic.

Speaker 2 (21:41):
That's so good to hear.

Speaker 1 (21:43):
That led me to a logical question, I think.

Speaker 2 (21:46):
It's going to be AI.

Speaker 1 (21:50):
I'm not going to ask where you see that fitting into
the service model, because Ithink that's the wrong way of
looking at it, but obviouslythere's benefits to it.
And with the structure you'vegot, how are you able to
approach assessing andprioritizing AI-based
initiatives?
And I'm using AI very broadlyand I'm hesitant to do so, but
in this context I'm sort ofcovering all the different bases

(22:12):
generative AI, machine learning, predictive analytics, et cetera, et cetera. How are you approaching that in this structure?

Speaker 2 (22:17):
We've been doing a lot of work with AI. We think that it has huge potential in the health and social service sectors, used well. Our starting points have been, first of all, thinking really

(22:40):
carefully around what are our ethical non-negotiables and what are our principles of application. So for us, it's things like choice; we innovate responsibly; you know, we're focused on the mahi that matters most, so enabling our clinicians to continue to provide excellent clinical advice; and that we're transparent, so people will

(23:05):
always know if they're dealing with a generative AI. So we are very proactively using AI in a very safe and carefully governed way. We make sure that our frontline voices are included in our thinking about AI and,

(23:31):
like I said earlier, our intention is that the AI fades into the background and lets the human shine. So we are using it in a number of different areas. We have an internal innovation lab, and we also have innovation lab partnerships with some of our strategic allied partners, and we've developed some

(23:53):
in-house tools which are just going live now, actually. So one is our CoopSpot, which is ring-fenced around policies, protocols and finding information for people in the organisation. Obviously it's really carefully ring-fenced and RAG'd appropriately, and that is to facilitate people getting

(24:15):
information in the organisation. We have a bot that sources all our HR data and information to help people there, and those fall under our banner of the Mara Kowhānau, which is our virtual team member, really, that's there to support us. And, you know, those are iterative design processes where we get feedback from our frontline,

(24:36):
feedback from people in terms of the digital innovations that they'd like to see, and then we build things to support them. We're also using note summarisation in some of our calls, and that's been really helpful for some of our people that aren't the fastest typists. So it means that they can really focus their attention on the call they're in, not on the notes, but they still maintain complete control over the note

(25:00):
before it's submitted, so they can check and change it and make sure that it says exactly what they wanted to say. But again, just downplaying some of that mental load that they have, so that they can focus in on the person who's on the call that they're serving and not have to multitask as much. So trying to unlock their capacity really, and provide them with support.

(25:20):
And then we are working with some really cool partners at the moment, working in the space of how do we help people when they contact us, in that delay point before they get to one of our specialists, but they still really need help. So how do we hold them with empathy in a really, really safe way, but also be the interaction when they get to a person as

(25:43):
well, because we've captured some of the key stuff that's going on for them?

Speaker 1 (25:47):
Yeah.

Speaker 2 (25:52):
So, you know, it's used on multiple levels across our organization.

Speaker 1 (25:52):
Yep. Now, you've got a very interesting problem, for want of a better word here, with AI and with data in particular. You've got employee data, you've got company data, that's one silo there. Then you've got service user health data, because they're ringing through with medical problems, and there's health data

(26:13):
in there. And then you've got individual data around service users, particularly those coming through the Māori health sector, and we know that Māori view data in a different way to Western norms and treat it as a taonga. So you've got these three different elements of data that

(26:34):
are all useful, but you're going to have to treat them in a different way. Have you had to go through a process where you are either segmenting or siloing those data sets so that you're not just tripping over yourself as you try and implement some of the AI initiatives?

Speaker 2 (26:48):
So we are treading really carefully from a data perspective. Privacy is our number one priority, so we prioritize privacy. We don't train our AI on any of our service user data.

(27:09):
If we did do something like that in the future, it would be fully with permission, and we would be extremely careful about that process. We are completely transparent when we're using any form of AI in the organisation, and we're giving really good thought to Māori sovereignty, particularly in relation to data, and our

(27:30):
approach is really "nothing about us without us". So for us, we're seeking input, we're seeking guidance, and we're being really careful about where our data is stored, having assurances about what any providers do with our data. You know, we recently partnered with Microsoft and we were one

(27:51):
of the anchor tenants that moved into the Microsoft New Zealand North environment, which means that data is in New Zealand and stored here in the cloud, and we know where that is. You know, we know that that is ring-fenced and very carefully

(28:13):
managed. For us, we work really hard to be very respectful and to embed tikanga Māori into our services, and we take the same approach with our data, recognising that actually, you know, for Māori, actually for any indigenous group of people globally, it's really important to keep that ownership of their data. And AI

(28:36):
has the potential to be really significant in terms of revitalizing some of the language and creativity and art for Indigenous groups. It also comes with a great risk and a great responsibility around making sure that there is huge work to protect that data as

(28:59):
well, and make sure that it's used in a way that is absolutely aligned to the way that the people who own that data want it to be used.

Speaker 1 (29:11):
Yeah, definitely, because, at the end of the day, every organisation goes down the path of who owns the data. But I kind of hold a view that the individual who gave you the data always owns that data, as long as it's about them. And I like the point that you made about, how did you say it? It was, if it's with us, it includes us. What was it?

Speaker 2 (29:29):
Yeah, nothing about us without us, that's it. Yeah, and so, you know, that's a different way of phrasing one of the principles that we applied right at the start of our AI journey, which was that, you know, no one owns the best ideas. Everyone is learning on this journey together. We have to go into it with really clear boundaries of what

(29:52):
we will and won't do ethically, and then govern it really well to make sure that we continue to stick to those principles. Yeah, and, you know, protecting our service users is such a priority for Whakarongarau, and it should be for any organisation.

Speaker 1 (30:07):
Absolutely.
You need to take those things very seriously. And that principles-first approach, I really, really like organisations when they're doing that as well, because they've at least got something they can draw back to as a first principle. And if privacy is the one: is this decision going to breach that principle of privacy? Yes? Then we're not doing it. It's very simple.

Speaker 2 (30:29):
Yeah, and that's how we make our decisions. So we do, we check ourselves against our principles all the way through. And back to your question around how can you, I can't even remember the question now, but I think it links back to that, where you have those principles as one of the very first steps that you need to take when it comes to AI.

(30:49):
And once you're clear on those, it actually makes the rest of it easier, because the technology is evolving so fast and we're all trying to learn and keep up with that technology. But the principles really ground you. What are you here to do? What are the things that are important to your organization? Who do you need to protect in that process? And so then, when you're doing the work, it just removes that cognitive

(31:09):
load when you're trying to make decisions with ambiguous information.

Speaker 1 (31:14):
Absolutely, absolutely. And the principles also tie into the broader conversation that I often have with clients around, sort of, what's the problem we're trying to solve? Can we do it with this tool? Should we do it with this tool?

Speaker 2 (31:30):
Correct.

Speaker 1 (31:31):
And if we do it with this tool, what principles are we struggling to achieve by doing so? Or, I guess, are we going to break any of our principles? And if you can answer all four of those questions (what problems are we trying to solve, can we do it this way, should we do it this way, and what principles are at risk because of that?), you actually get a very clear decision.

Speaker 2 (31:46):
Yeah, that's right, that's right.

Speaker 1 (31:51):
Yeah, what's your advice to anyone who hasn't got
those principles in place yet?
Where would you suggest they start?
How should they go about that?

Speaker 2 (32:01):
Oh, look, use AI to help you find them.
We're happy to share our thinking with other organizations as well, because we've put a lot of work into our governance frameworks. But really, it's about centering in on what are the ethics that you are going to apply to this, what are your strategic principles, and then being really, really staunch

(32:21):
about them. How are you going to measure and hold yourself to account to them? And then the rest of it becomes quite clear. We have governance meetings, we have measurement. We hold ourselves accountable for delivering the results that we said we were going to deliver. We also stop things really quickly when we know that they're off track or they're not performing, and that's totally safe in our environment. And so I would encourage you to think

(32:46):
of somewhere between three to 10 principles that you are going to follow, and then just follow them. Microsoft has great guides about using AI. Lots of the big players out there have got really good information that you can use, and they share that to help you on your journey. And, like I said, happy to share some of our thinking about principles as well.

Speaker 1 (33:07):
Definitely. And this is the enterprise architect in me coming out, I've done too much work, yeah, yeah. I've been down that pathway plenty of times with clients and organizations as well. It's an interesting journey, particularly if you can get into, not so much a disagreement, I don't think it's disagreement, but a different viewpoint on what one principle may or may

(33:28):
not mean, as you're creating it as well.

Speaker 2 (33:30):
Yeah, and I think if you really think about not just AI but technology in this realm, people often forget that the impact is an emotional impact that it's having on people. So they think about the tools, or they think about change management, but they forget that all of it is an emotional impact, and so you're dealing with that raw part of being human. And so,

(33:54):
ultimately, everything you do will build or detract from trust. So your principles need to be around: how do you ensure that you're maintaining trust and building it in an environment where people are taking on new things? For some people, they're really excited about AI; other people feel really nervous about that, and you've

(34:16):
got to be able to maintain trust right through that process. So that's why I hold true to the principles being really important, because those principles catch you and stop you from doing things that are going to erode trust along the way. And your job as a leader really is to be able to frame up the future. So if you do it in a way that builds trust rather than erodes

(34:40):
it, then your team feels really heard through that process. Your customer ends up feeling really that their needs are anticipated and they feel appreciated, and you're removing noise so people can show up at their best, instead of worrying about the technology. And that's a really important part of a leader's job: you know, creating that vision and then

(35:00):
helping people regardless of where they're at. I had great advice from someone once, that I hated at the time, but it often is: meet people where they're at.

Speaker 1 (35:09):
Yeah, you should meet people where they're at. I was just talking about that in another group this morning, exactly that, when it comes to AI: meet people where they're at, work with them where they're at. And I liken it back to some advice I once had years ago from Carly Orr, who's a change and people leader as well, a very, very experienced change manager, and she was

(35:33):
walking us through the seven steps of change, and she said one of the first things you need to be doing for people is reminding them, or not even reminding them, but telling them
very clearly what's not changing as much as what is.

Speaker 2 (35:54):
Sorry, I fully interrupted you, but yeah, I couldn't agree more, right. So much doesn't change, and we forget to talk about that.
And I get frustrated with change models that, you know, it's like create a burning bridge. It's like, yes, there is a need to change and this is the problem we're going after solving, but actually all this is the stuff that you just don't need to worry about, because that is still the same.

Speaker 1 (36:13):
No, exactly. She even used a building move as an example there.
She said, you know, office moves happen all the time, and you can take for granted that people are going to be fine with an office move. But part of your change messaging needs to be that on Friday you'll leave, and on Monday you'll come into the new office. You'll come in through the lift, you'll walk out of that lift, you'll find your desk. You'll sit down at your desk. Your computer will be there, you'll turn your computer on,

(36:34):
you'll log into your computer, and your work will arrive and start being done. None of that's changing. What's changing is: this is the new key card you'll use to access that. Toilets are over there instead of over there, and I'm pointing left and right here, and the kitchen is around the corner and the coffee machine is there. That's the stuff that's changing that you need to know about, but the rest of it, it's not changing. You still need to know that.

Speaker 2 (36:53):
Yeah, I think what's
interesting at the moment for people is generative AI does feel like it's at a pace of change, yeah, and I would just really encourage people to do the work to understand what generative AI is, what its capabilities are, because there's amazing opportunities, but also what it can't do and its

(37:14):
limitations.
And so the more you know about it and the more you understand how it works, the less afraid people will feel about it. Because, while I think, you know, the work isn't going to go away, how we do the work, and, you know, the work as we know it, might change. But the people who are going to emerge well

(37:38):
through this, and the companies who will come out well through this change, aren't the ones that just think about re-skilling. They're the ones who think about re-imagining how things can work. And, you know, that takes leadership, courage, and it takes curiosity, and it takes all those soft skills that humans have.

Speaker 1 (37:56):
Yeah, and it's a really good point, is that generative AI seems like it can do so much, particularly when you're watching videos of Bigfoot finding an energy drink and walking through and catching his fish. They're great videos, they're good for humor, but they don't have that human factor in them, and all they are doing is just

(38:18):
showing you this is what you could use AI for. They're not showing you the human side of what's gone into that process to get there in the first place.

Speaker 2 (38:27):
And I think if people really think about AI as an opportunity to, I guess, digitize and streamline a whole lot of the stuff that's annoying at work, and really emphasize the stuff that is intellectually stimulating, and that human-to-human contact, you know, that's quite a nice way

(38:49):
to frame it. So by all means automate all that stuff that adds cognitive load and takes time, and then think about what is the space it gives me then to deliver in the organization, and how can I go about my work differently, and what can I create as a result of that.

Speaker 1 (39:04):
Absolutely, absolutely.
It's been great chatting with you, and I'd just like to start to draw us to a close here. Are there any final thoughts you've got for organizations that are, well, we'll call it people, not organizations, because it's a human decision, people that are considering how they might bring certain functions, people and

(39:26):
tech, or people and marketing, or tech and marketing, closer together? And then any final thoughts on what AI could mean for them as well?

Speaker 2 (39:35):
Oh, that's such a big question, isn't it?
Yes, yes. So, you know, I would say go for it if you want to bring those things together, whether you do that structurally or intellectually, because to be successful you need to think about that enterprise view, but also the customer, the employee

(39:57):
journey, and solve the problems for them. Either do it structurally, or do it by having good conversations and asking good questions and understanding the linkages outside of departments, so that you can get a better outcome for everyone. But it's working really well for us, having that team that is specifically focused on delivering capacity into the

(40:19):
organisation, and better experiences for our customers and our kaimahi. So I encourage you to think about, whether you do that through structure or not, how you can do that. And then I've forgotten the second part of your question, which was?

Speaker 1 (40:30):
The second part of the question was just where the future of AI, or the future of work in general, might be taking us as well.

Speaker 2 (40:40):
Oh, I think, on quite an adventure. So, you know, there's just wonderful opportunities. When I think about some of the big challenges, particularly in the wellbeing space, you know, there's significant mental health issues, loneliness issues. People are trying to do more with less across every sector,

(41:03):
every industry, so frame the AI as an opportunity. I do think it comes with a huge responsibility for leaders in the organization, though, to really think about what is their job in this, and do they want to shape how AI is used, and is it used for good, or do they want it to shape them? So,

(41:25):
you know, get ahead, understand, and then deploy all the really human skills around courage, curiosity and care consistently across your thinking, because, ultimately, the decisions that we make right now are going to have an impact on the next generation, so we really want to get that right. So I think leadership is always a responsibility.

(41:49):
Right now, AI has to be approached as a strategic capability that you need to have as a leader. You need to understand it. So your job is to understand, take calculated risks, be courageous, and then translate for the organisation why it's important and how people still matter.

Speaker 1 (42:06):
So thank you so much. And, Anna, thank you very much for your time. Where's the best place to find you if anyone wants to follow up with questions, particularly around the principles and some of the other stuff we talked on?

Speaker 2 (42:16):
Oh, good question.
So they can reach me at Whakarongarau, or they can contact me via LinkedIn if they want as well.

Speaker 1 (42:22):
Excellent, perfect.
Thank you very much, Anna.

Speaker 2 (42:25):
Great talk, thank you.