December 17, 2024 56 mins

Discover the future of HR through the lens of AI innovation with Annie Johnson, co-founder and chief product officer of Humaneer. Annie takes us on her incredible journey from a traditional HR role to spearheading AI-driven solutions that are reshaping how organizations operate. She uncovers the concept of purpose-led design, aligning technology with company values to create ethical and effective AI systems. Annie doesn't shy away from the tough topics either—addressing fears around AI, job displacement, and transparency, while offering practical strategies to mitigate these concerns.

As we continue, the conversation highlights the complexity of media's role in shaping public perception, particularly through fear-based narratives and information overload. By comparing industry-specific insights to mainstream media's sensationalism, we explore the balance needed to stay informed. With a keen eye on governmental and regulatory challenges, we delve into the importance of clear guidelines for privacy and security in this digital era. Annie shares how startups like Humaneer make strides in fostering an AI ecosystem that prioritizes safety and productivity.

Our discussion also reflects on the broader implications of technological advancements on job evolution, emphasizing the need for a thoughtful integration of AI that augments rather than replaces human roles. By examining historical shifts in job markets, we advocate for a strategic approach that aligns with organizational goals and enhances the human element in work. Join us as we celebrate the innovative spirit of New Zealand, where Annie and her team are making remarkable progress in crafting AI solutions for the HR industry. This episode is packed with insights, strategies, and a hopeful outlook on the collaborative future of AI in the workplace.

Music by arnaud136 from Pixabay


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Ant McMahon (00:00):
In this episode, I carry on the conversation on the world of AI and its transformative impact on the HR industry.
My guest this month is Annie Johnson, co-founder and chief product officer of Humaneer.
She shares her unique perspective on how AI is reshaping the future of work.
We explore the rapid evolution of AI, especially since the mainstream surge in late 2022, and discuss the critical

(00:22):
importance of ethical AI implementation in HR.
Annie sheds light on common fears surrounding AI adoption, such as job displacement and lack of transparency, and provides some good strategies for mitigating these concerns.
We delve into the concept of purpose-led design, a framework that empowers organizations to harness the power of AI while aligning with their core values and long-term goals.

(00:42):
Annie explains how startups like Humaneer are at the forefront of AI innovation, offering flexible and customer-centric solutions to address the unique needs of the HR industry.
Join us as we unpack the potential benefits and challenges of AI and discover how your organization can navigate this exciting and complex landscape.
Hey, Annie, welcome to the show.

(01:03):
Thanks so much for coming on board.

Annie Johnson (01:05):
Awesome, thanks, Ant.
It's so great to be here.

Ant McMahon (01:08):
Just before we get into some of your background and where you're going with Humaneer and purpose-led design, just for our listeners, give us a bit of an introduction about yourself.

Annie Johnson (01:18):
It's a big story.
It's quite convoluted in a way.
I've spent 20 years in the HR industry and, listening to Melissa's podcast with you, I went the opposite way.
So I studied psychology down here in Canterbury, went into the HR industry and ran a traditional career path, popped out into a commercial role and did a bit in

(01:42):
the general management type space and started to get into a little bit of tech and enablement.
So working with a business and ripping apart that ecosystem and rebuilding it to make sure that we were more efficient and that work was being done in the right way, that sort of gave me a healthy dose of passion around tech and human enablement.

(02:07):
I popped out of that role and then started running my own consultancy when my daughter was born, and so the Honest Human came from, you know, I need to start paddling my own canoe.
And I really wanted to work with SMEs in the HR space, but with a very commercial lens.
I'd learned a lot from being a general manager, and

(02:30):
so bringing a different flavor to how you create people strategy and how you create the right environment and culture for your people to enable them became what the Honest Human was about.
And then AI burst onto the scene in, what was it, November 2022?
And I was what you would call an early adopter.

(02:51):
My first prompt in there was, can you write me a position for a sales manager?
And it just blew my mind, and from there a light bulb went on.
It became a good sort of two years of intensive curiosity around the technology.
I could see where it could benefit so many industries and

(03:16):
teams and roles, and so I started to think about, you know, our industry in HR.
We suffer a lot of burnout in our industry.
We are heavily compliance driven.
We deal with a lot of administration, so it became a bit of a side passion trying to look at how we can use AI in our

(03:40):
roles as HR in a really ethical and safe way.
And so this year I met the girls at Humaneer, Kim and Corrine, who were building out a similar product in Australia whilst I was building here in New Zealand, and so we decided to join forces, and it's a really awesome story of connecting randomly on

(04:02):
LinkedIn.
Over several months we were having calls weekly, sometimes three times a week, and we realized we had a really aligned vision and a huge purpose around supporting our industry, HR, and building one community that supports the industry.
So that is the Humaneer Community App, where it's free to sign up

(04:24):
for community members.
That creates a really safe space for support and connection and development, something that a lot of us don't often have.
And the other side of it, which I lead as chief of product, is our HR AI productivity partner, and so that has been built around a deep understanding of what we find in HR takes a lot of our

(04:49):
time, which is not necessarily the awesome work or the work that contributes to helping an organization grow.
So a lot of what we're doing is really listening to the community to understand the fears around AI.
What do they need in terms of a safe and secure product?
What would they want to see in there in terms of how it helps them and augments their role, so that they can go back to, you

(05:13):
know, being the people people.
So we're on a really big journey and it's been super exciting.
Kim and Corrine are just amazing co-founders, hugely inspiring women, and so coming together, it's been such a dream.
A lot of hard work, and we're really excited for 2025 and what we're going to deliver.

Ant McMahon (05:34):
Nice, thank you for that background, and there's something in there... I was going to talk about purpose-led design, but we'll come back to that, because something you said I think is a really good point, and it's the fears around AI.
To me, looking at it from the tech lens, there's been a lot of hype, particularly in the last two to three years,

(05:54):
around what AI can do, what it might mean for us as humanity, and how it might take our jobs and take us away.
Do you feel that those fears are commonly held, and do you feel that they're accurate, from the conversations you've been picking up with the community?

Annie Johnson (06:10):
Yeah, that's a really good question, and it's a big question.
When I hear about fear, that can be on so many different levels.
There's a lot of fear around, you know, the narrative that AI is going to take your job.
You know, people are wearing t-shirts, and it does nothing

(06:30):
to support the good use of AIand and you know the ethical use
of AI.
So you know there is a lot offear around people's roles
disappearing and in the pace inwhich that might.
There's a lot of fear aroundthe transparency with AI.
When you start to get intothings like source credibility

(06:53):
and you actually start to talk to people about where that information is going and what's coming back, you can sort of see a bit of a light bulb go on: perhaps I've never fully understood this technology and what it's doing.
I think there's also a lot of fear around the pace at which this is iterating.
It's created an incredibly intensive time for individuals

(07:18):
and organizations, and that pace, when we don't have time to sit in that change and understand it and feel okay about it, and we're almost forced to move with it. When I talk to people, it can be a number of things that is creating that fear, and so talking to our

(07:50):
community, particularly in HR, we play a big role in this change, supporting employees and people to understand this tech and the risks, and manage that process.
You know, we are shepherding businesses and leaders and employees a lot of the time, when we don't necessarily understand the technology ourselves.

(08:11):
So that's a big ask.
But also within our industry alone, within any industry, you are racing to try and understand: what should I use?
You know, how do I use it?
What is the security?
Is this right?
You know, it's a lot of sort of personal questions that come into it.

(08:31):
So fear is a big one.
And look, I think with the hype curve, it's not going to go away, because the way in which this is iterating, that hype curve just seems to be ongoing.
Right, there's the original stage of gen AI and everything that starts to come over the top of that in this race to

(08:53):
AGI.
You know, people just don't get a break from it, and so it is something that we need to manage really carefully and really understand, sometimes at the individual level, which is quite hard when you're potentially an organisation of 500 people.

Ant McMahon (09:10):
Yeah, absolutely.
And really good mention of the hype curve there, because traditionally, if we follow the tech, technology goes through a point of what they call the trough of disillusionment.
And I feel, looking at where we've gone with AI, and particularly generative AI, is that it really jumped that trough pretty quickly.
But what happened was not so much that people got

(09:30):
disillusioned with it; that's not why it jumped it.
They didn't get disillusioned, they just realized it had limitations a lot quicker than what we've done with previous tech.
So ChatGPT hit the scene and everyone went, this is amazing, and then we realized it wasn't accurate.
Yes, and to me, what most people have done, and certainly what I've done myself and see others doing as well, is we've moved away from using ChatGPT or Copilot or Google Gemini as a

(09:53):
creation tool, and we're now using it to curate, so that prompts are becoming more specific and maybe more verbose, and we're trying to get it to just rewrite what we've already written in a way that's more consistent or in a slightly different tone. And I feel that's where it's jumped the hype curve pretty quickly for us, just the way we use it.

Annie Johnson (10:13):
Yeah, yeah, agreed, and a lot of that, particularly what I have found when the Honest Human started to lean into more AI education from a very human level, so not the tech speak: how do we talk about this on a level that people understand and in a language that doesn't sort of overwhelm people?

(10:33):
And you know, part of that is around training people to use it well.
That white box that you sit in front of, and you sit there thinking, what is the text that I need to put in there to get that prompt?
That's come a long way.
That's part of that jump: people are now understanding

(10:54):
more about the technology andits limitations, but they're
also picking up how that isdriven by your human interaction
.
So you know the prompting space.
You can very quicklyovercomplicate that, but it's
worth education and trainingbecause you're right, you know
it can really 10x your outputsand it becomes a tool that you

(11:17):
then understand, and the outputs are more accurate, getting you 90% of the way instead of 60% of the way.

Ant McMahon (11:23):
Yep, exactly, and certainly I've found for myself, the prompts have changed from "write an email that does this, this, this and this" to "here's an email I've drafted, rewrite it for conciseness", or "rewrite it for clarity", and therefore the content hasn't been generated, but it's been reformatted in a way that makes it clearer.

Annie Johnson (11:40):
Yes, yes, and you can get caught up in thinking that prompting in these general models is a science, and look, it can sometimes be that if it's a more complex type of task.
I think the big thing that we need to also remember is prompting is one thing, and trying to get words into that

(12:00):
query box in a way that is clear, and it's the right context, and it's got the right guardrails, it is an art, but once you get there, it does become a lot easier.
The other thing to consider, which a lot of people don't realize, is also the type of model you're using, and as chief of product in an AI platform, I'm learning huge amounts around

(12:21):
models and capability and use cases.
And for a lot of people just using it on a daily basis, ChatGPT is general, so you're going to get some pretty verbose and general statements back.
But as you get deeper into it, you start to understand that different models can have very different outcomes, because they are purpose-built.

Ant McMahon (12:42):
Yeah, and it's a really good point about the type of model you're working with.
It's something earlier this year, I caught up with both Asa Cox and then Tim Warren, who have been involved in AI startups as well, and one of the things that we talked about in those two podcasts with each of them is just that AI is a pretty blanket term for a lot of different technologies, and what

(13:09):
we're talking about here, and I won't try and pitch Humaneer into any of these, but what we're talking about with Copilot and ChatGPT is generative AI.
That's very different to predictive AI or analytical AI.
Some people will probably write in and tell me I'm wrong on this one, but it's even completely different to the AI in your toaster which controls the timer, because that's artificial intelligence as well.

Annie Johnson (13:27):
Completely, completely, and it's a really overwhelming space for any Joe Bloggs to try and understand.
I see a lot of people confusing types of AI as well, and in fact it was a key question when I spoke at the HRNZ conference recently, where, you know, asking the audience, do you

(13:48):
actually know what AI is, or what type of AI I'm talking about?
And you know, pretty standard, not many hands went up.
So there is a lot of context to understand in the background as well, which, coming back to your question around fear, the overwhelm of that information as well, you know, creates this

(14:09):
fear that you're going to be left behind, or I can't understand it, or it just goes over my head, it's too complex.
And so, you know, we at Humaneer are trying to find a really great way of explaining the use of AI.
It's not always going to be gen AI.
We definitely combine all types of AI, because bringing
(14:29):
that together gives you a broader scope.
So, yeah, it's a lot for people to take in.
Even my learning curve this year has been incredible, and if I say just one year, I could put it into dog years in terms of the synapse connections.
Yeah, yeah, it's just an incredible space to learn about.

Ant McMahon (14:50):
Yeah, and within that, you talked about the headlines that are driving some of this fear as well, and I think that there's a point that we've reached where the way we consume, not so much consume, but where we get our information from, has evolved quite rapidly as well.
The headlines we see, mainly in mainstream media, are written to

(15:12):
generate clicks and get people reading the article and drive revenue, so they are going to be tailored more towards that fear.
And I'm not going to say it's yellow journalism or the old "if it bleeds, it leads" kind of mentality, but it's certainly "we'll put this out there, we'll get people commenting and we'll get people debating".
But sometimes, actually, that information that the media are portraying isn't the right place to be consuming it, and that's where, like your blogs, or Asa

(15:35):
with Intela, they're blogging as well.
There's companies that are driving AI that are blogging, and maybe that information is more relevant to be consuming than the fear-based headline that's just there to drive revenue.

Annie Johnson (15:46):
Exactly, and you know, we saw it recently with airlines and turbulence.
You know, one example of a bit of a wobble, or a big wobble, and you can see how it just grows legs, doesn't it, with a fear of flying.
That didn't do any good for my travel plans.
But you're right, you know, the media plays a big role in this
(16:09):
and sometimes it's not responsible reporting.
I also think that there's a lot of people out there at the moment wanting more structure and guidance from our government.
You know, we've been waiting to see what New Zealand will put in place around policy.

(16:29):
It's been pretty fluid to date.
I think in 2025 there's more of a strategy coming out, so there's a lot of people sort of wanting to lean on: what is the guidance?
What are the guardrails for us?
And you know, one of those areas is around privacy.
We see a lot of horrible media and stories coming out about

(16:51):
privacy and security, and it's a very important consideration for organizations.
Some would argue that it's largely controllable if you're doing the right thing.
However, there's a lot of businesses, you know, really afraid of, you know, trust around privacy and security.
Is it smoke and mirrors?
Is it actually doing what I'm asking it to do?

(17:11):
Do we have data access in place?
And so you start to look at the guidance around the Privacy Act, and what sort of guardrails do we need to lean on to make sure we're doing the right thing?
So it is a very complex way to digest information on something that's so new and moving so fast.

(17:32):
People can feel genuinely, incredibly overwhelmed, and as humans, our natural tendency when that's happening is that it'll make us feel incredibly uncomfortable.
We're creatures of habit, and we like to know that things are, you know, routine and safe, and so there's a lot to consume.

(17:53):
You're right, it's just making sure that you do go to another source of truth, something that's going to give you more accurate answers.
Not saying the media is the wrong source of truth, but I think there's that balance there.

Ant McMahon (18:04):
I like your "forming another source of truth".
Let's find that balance: read as much as you can about AI and then form your opinions, rather than read a single article from someone and form an opinion off that.
The government piece you touched on, I think there's a really good point in there, because, arguably, governments haven't, or have struggled to, keep up with tech

(18:26):
regulations anyway.
All the various jurisdictions have brought out their privacy laws, and that talks more to the impact of a privacy breach rather than the guardrails at the start.
So do you feel, in your experience, that the regulators are still struggling to catch up, even with just general tech guidelines and tech regulations, and not just

(18:47):
specifically to AI?

Annie Johnson (18:48):
Yeah, I do, I do.
And you know, what you've touched on there is, a lot of what we're dealing with is the lag indicators.
You know, if you're facing a privacy breach, it's too late, right?
I absolutely see a need for us to move faster, but I also appreciate how hard it is to keep up with this.

(19:10):
And you know, even looking at the States and how models, or your broader organizations like an OpenAI or an Anthropic, are designing these models and testing these models, there's no proper guardrail.
That again feeds into this fear of, well, if nobody's got control

(19:34):
over this thing, once it's out, it's out.
So I can see how governments are really struggling to keep up and, unfortunately, we're going to need to wait for some of those test cases.
There's going to be that first business who is held to account for a privacy breach because of AI, and in HR we've learned to deal with that, because we're always looking at case law, so

(19:56):
someone will be that person.
But I do hope 2025 is going to bring more clarity from our government, and I believe that's on the cards.

Ant McMahon (20:05):
Yeah, yeah, hopefully, fingers crossed.
And within it, I think, and this comes back to the point about alternative sources of information, is I think that there's a big role for the startups like Humaneer, and like Contented, which is another New Zealand startup, coming through and leading this conversation.
And we've got Microsoft and AWS and Google at the top leading

(20:26):
the conversation, but their conversation tends to be more product-driven.
If you use our product, these are the outcomes you'll get and these are the skills you need.
But I think it's the startups that are coming through that can help really shape the conversation of, no, no, here's where AI will play its role, here's the guardrails, and here's the lessons we've learned by tinkering and by being intensively curious, to your

(20:49):
point.
These are the lessons we've learned and this is what we're going to share, rather than that big Microsoft "trust us, we've got this nailed" kind of mentality.

Annie Johnson (20:57):
Yeah, and you know, when you think of a Microsoft, it's just a beast, right?
I think the way startups think in this space, genuinely, we want to understand that problem statement.
You know, with Humaneer, we're not building a product on assumptions.

(21:18):
We have 60 years' collective experience across us three co-founders.
We've got a global community that we lean into to ask the questions and test our assumptions.
And when you're building from the ground up, you start to solve a lot of those fears and trust factors, because you are so

(21:40):
close to your customer.
Microsoft doesn't really want to go in right now and understand HR at a deep level.
And when you look at Microsoft Copilot, that's going to be an incredible product in two years' time.
But when you look at the large organizations, HR is often at the bottom of the barrel in terms of getting some proper tech expertise, or your IT teams are overwhelmed, so they don't

(22:03):
have time to, you know, map Copilot properly.
And then there's your data concerns.
And so, yeah, I think startups are a great space to be in, and it's super exciting because you can be nimble, so close to your customer that feedback feeds directly into your dev plan.

Ant McMahon (22:23):
Yeah, definitely.
And there's something in that where you talk about HR being at the bottom of the barrel with the AI.
There's also an element of the technology department can only do so much with AI tools, and particularly the Microsofts and the Copilots: yes, we can turn them on, yes, we can secure them, but we don't necessarily know the business problems that
(22:44):
they'll solve.
So to introduce, let's call them the top-shelf tools, Copilot, paid GPT, paid Gemini, to introduce those into the business is a collaborative effort that everyone needs to be in: the HR department, the security team, the tech team, the finance team, because it might be finance problems that

(23:05):
have been solved, the customer service, whatever.
But with the startups, they're bringing that whole package together and saying, well, here's the problem we solve.
And the CIO might be sitting to the side going, cool, we can fit that into our tool stack here.
But if the problem you're solving is for the HR department, then that's the people you should be talking to as well.

Annie Johnson (23:24):
Completely, and I call it your AI control tower.
You know, bringing that expertise together at the table to understand your holistic position and how each of your functions in your business will use it.
It is a big job, right?
If you even consider the analysis that goes with

(23:47):
understanding and ripping apart workflows, that needs analysis is huge, and not every business comes with amazing teams of BAs.
So, yeah, I think understanding the problem statements at the core, but also understanding the true statements around how you use it and why you use it in that function.
There's a very careful approach with Humaneer and HR in particular, because we are an industry that deals with some pretty hefty jobs, you know, and so we've got to be incredibly careful with our use

(24:33):
of AI.
And a lot of HR's fear is that when you go into a ChatGPT, it's such a general model, unless you're skilled enough to lock that down in your settings and you prompt well, or you can build GPTs.
Nobody's got time for that, you know.
When I speak to HR, it's: we want to be using AI, Annie.

(24:54):
We see the opportunity for us, we're excited by it, but tell us what to do.
Don't make me go in and build agents.
Don't make me learn how to be a prompt engineer.
You know, it's all of that, and I think that's where the space of startups is really leaning in: it's handing that solution straight to them.

Ant McMahon (25:16):
Perfect.
And as you were talking as well, the idea that I had springing to mind was performance reviews, and a tool to do the performance review.
So on the one hand, there's ChatGPT, where a manager sits down at the end of the year and says, oh, I've got to do a little performance review on Annie, and just types in all the stuff and says, write me my performance review, and they get an outcome.

(25:37):
And let's not dwell too long on what the outcome is going to be, because it's going to be less than ideal for everyone.
Versus a tool that's tailor-made for performance reviews, where through the year that manager is writing notes about Annie and is maybe being prompted with those notes, and at the end of the year they get a generated performance statement, and the statement that comes out is a lot more curated and factual

(26:01):
because it's built itself up over, I don't know, 52 weeks of prompts, or 26 weeks, or however long it is, it's just built itself off that.
So there's a history behind it that becomes more valuable, doesn't it?

Annie Johnson (26:12):
A hundred percent, and a lot of what we face in HR are old, traditional processes, you know, like your performance review process.
We are constantly pushing leaders to, you know, you need to have your conversations, you need to do your reviews, and we get the pushback going: I don't have time, your templates are hard, it's clunky.

(26:33):
So there is amazing opportunity to capture data on the go, but we also have to be cautious with that approach, because there's a lot of conversation out there at the moment, particularly when it comes to employee data or employee interaction with these types of tools, where it can be incredibly invasive.

(26:53):
So if you're using a tool, and you know there's tools out there at the moment that claim to capture your performance data across emails and Slack, and it's trying to bring in this ecosystem of all of this data and create a view for employees, that's quite invasive.
And if that feels invasive, is that going to change how we

(27:16):
interact and how I respond?
And so with any process using AI or automation, you have to understand the human interaction first, and what is going to build trust human to human, and how AI can underpin that and augment it, but not control it.
So there's a little bit of a role to play there in unpacking

(27:39):
how it's used.

Ant McMahon (27:41):
Correct, and it ties back to something you talked about in that intro right at the start, around using AI in an ethical and safe way.
Using AI in the way we've just described for performance reviews may be ethical, and there may have been a lot of thought gone into it, but it's that safe way, because safe is quite broad, isn't it?
We're now assessing everything and we're evaluating you based
(28:03):
on how many emails you send and receive.
It may not be safe.

Annie Johnson (28:07):
Yes, and we used to have an old saying in HR, what gets measured gets done, which was quite an archaic way to view the world.
But yeah, if you're an employee sitting there, and, I don't know, there's been some technology recently that claimed to measure how many times you smile as a supermarket checkout operator,
(28:28):
and there was a big article about that, because people were sitting there forcing their smiles to kind of jimmy the system a bit.
So you do have to be a bit careful, and that invasiveness also comes down to consent, which is a big question coming out for HR at the moment.
Employee data in the next few years, and how it's used, will change, and so, therefore, what is

(28:53):
the change we manage with that?
But also, what is the consent required?
And I've already had organizations approach me and say, Annie, I've got a lot of pushback from employees not wanting their data used with AI because they're fearful.
You know, that pushback is because they've heard about hallucinations and are worried it's going to be biased, which are very real concerns.

(29:13):
But because there's a fear there, they've really pushed back.
So that's going to challenge a lot of organisations to be really transparent, build the trust, and manage that change in consent well.

Ant McMahon (29:26):
Yes, and that consent piece is key.
I think there's something within there about what we're doing with the tool and what problems we're solving, and avoiding the use of AI when you talk about it.
When I say the use of AI, I'm not meaning avoid using AI, but avoid talking about it as AI.
It's almost more valuable to build that trust by actually talking about the problem that you're solving, or the solution
(29:49):
that's coming through, or the outcome, rather than just saying, oh, we're going to use AI to measure how many times you smile.
I don't even know what problem you're trying to solve with that one, because it's so vague.
Let's go back to performance reviews.
Instead of saying, we're using AI to run your performance reviews now, it's more about, hey look, we know that performance reviews take a lot of time, so we've now got tools that help speed up the process throughout the year, so that the final

(30:11):
performance review is a lot cleaner and measured.
And sure, it's powered by AI, but you don't need to mention that to people.

Annie Johnson (30:17):
Absolutely, and that all comes down to how we should be managing change with this beast of technology, right?
And I think this is an area where we've underestimated the level of change we need to manage, and it's an area I think we need to really double down on our investment in.
And I'm of the view, with all of this around consent and how
(30:39):
we're using your data, or how we use AI, it's all part of ensuring that we create a really trustworthy, transparent narrative around AI, and it's making sure that we don't put the full stop just after setting policy.
You know, like, there's a lot of organisations that will go: got our policy in place, oh, we're safe, everybody knows what

(31:02):
they're doing, off we go.
And then they don't recognise that the full stop happens much further down the line.
So once you've got your policy, that's great, those are standards.
Now we need to start looking at how we train our staff, how we educate our staff, and the change management of that alone

(31:22):
needs huge consideration: huge consideration for communication, the why, the impact.
And businesses have forgotten that that takes time and it takes investment.

Ant McMahon (31:37):
I mean, change management is probably one of the biggest things that's evolved alongside technology.
In the 20 years I've been working in my industry, it's probably been the same: the more we've brought tech into the way people do their jobs, the more important change has become, and how we run that process with them.

Annie Johnson (31:55):
I agree.
You know, I think the big thing around change is that when I've implemented big systems like HubSpot, there was always a really in-depth change program.
Melissa touched on a really great point in her chat with you: that this is technology that sits in the hands of employees.
And so with that, I think there's this dynamic that we

(32:18):
forget because it sits in the hands of employees.
We still need to run change, we still need to communicate.
It might not be a big HubSpot or a big payroll system, but it still requires a level of change management, and we've got to support our leaders and the exec team to understand that.
It's a big skill set to understand and learn and do, so it's just an added thing to the pile when it comes to AI.

Ant McMahon (32:43):
Exactly, and within there, and I can't remember if I talked to Melissa about this, but certainly a few episodes ago now, when I caught up with Christy Law and talked about change, we talked about how what's not changing as a result of this is just as important.
It might seem easy to us looking back and going, oh well, the job's not changing, so we don't need to tell them that.
Actually we do, because the absence of any information

(33:06):
drives the uncertainty, and if we tell them their job's not changing, that's a piece of information that removes uncertainty.

Annie Johnson (33:13):
Yes, 100%, and you've hit the nail on the head around that transparency.
There's a big parliamentary inquiry that's just come out in Australia that has looked at the use and adoption of AI, and one of its core recommendations is that the change management side of AI requires much more employee

(33:35):
consultation.
And that's on so many different levels: if it's going to change people's roles and how they interact, there should be a level of consultation with employees so that they've got that opportunity to feed into the process.
And that's particularly important when it comes to that

(33:55):
more invasive technology.
You know, where you've got some kind of AI running your engagement survey, looking at emails and Slack and messages.
That's going to require change management, because it's incredibly invasive for employees, and if you don't run that change, again, you're going to change the interaction, and that's not why we want to adopt AI.

(34:16):
You know, that's not the opportunity that sits in front of us.

Ant McMahon (34:20):
Absolutely, and there are some metrics that go with this that may seem only important to management but are very important to be sharing as well.
Usually with the clients I talk to about AI strategies, I talk to them about, well, what are you trying to be?
Are you trying to be a consumer of AI, where all you do is use someone else's tool?
Are you trying to be a creator, and this is Humaneer on the creator side, where you've created a

(34:42):
tool that drives revenue for your business?
Or are you somewhere in the middle, where you're going to create some tools that will enable revenue, but they're not the revenue generators on their own?
And whichever one of those you sit in, there are going to be some metrics that you're going to track to see: have we been successful, and should we continue to invest in this?
And some of those metrics are things you want to show the employees as well, in ways that they will understand.

(35:05):
It's not just a case of: oh, it's too complex and they're not going to understand these five KPIs, so we won't share them.
It's more like: actually, by bringing this in, these are the things we expect to see in the business, and we're going to share with you whether that's happening or not, so that the employees can understand it.
Okay, so they want to see improved customer outcomes or faster customer handling or whatever, and they can see that that's been enabled by that, 100 percent.

Annie Johnson (35:33):
And you can imagine the knock-on effect when you're implementing AI and there's this scaremongering, this narrative that it's going to take your job.
It should be about building that human-AI symphony, and being open and transparent around what's being measured and that return on investment absolutely should be how you design that.
You're showing your people that there are no smoke and mirrors here.
It's doing what it intended to do, we are getting a return on investment, and that's only going

(35:57):
to build more trust for greater innovation.
I think if we ignore this part of the hype curve and the change management required, our innovation in the future with AI will suffer, because we will have lost the ability to build that trust.
If we break it now, if we don't do it well, it's just going to become harder.

Ant McMahon (36:17):
We'll get more people sitting in that category of being highly sceptical, and when you're wanting to adopt this really quickly so you don't get left behind, you need those adopters understanding your why and your purpose.
Correct, correct. And the why and the purpose are huge, and it brings me back to what I was originally going to start the conversation on, and we've gone a long way around to get to here, and that's

(36:39):
purpose-led, which you recently posted on LinkedIn about, and you touched on some really good points in there.
But I just want to explore a bit more what you mean by purpose-led and how it differs from the traditional human-centric or other methods of design.

Annie Johnson (36:55):
Yeah, for me, I'm a non-tech person, right, and so I love talking about this type of technology with everyday people: leaders, employees, business owners.
And it's really interesting to get really curious about the level at which they understand this tech talk, and a lot of it has become this

(37:17):
human-centric design of recent years.
And so I just happened to ask someone the other day.
They came to me saying, Annie, we're ready to go out and build some proprietary technology.
Can you help us find someone to do that?
And we want to make sure it's human-centric.
And I said to them, what does that mean to you?
Is that just a term that you've picked up on, or

(37:39):
is there something in that that's really important to you?
And essentially, it boiled down to: it's just become that buzzword.
And saying human-centric, for them, gave their staff a sense of security that it wasn't robots.
And so it led me to a bigger conversation with Kim and Karine around our approach to

(38:01):
design.
We actually don't use the term human-centric.
I understand why it's there, but we lean into purpose-led design because it gives you a much broader approach to how you choose your technology, how you design it, how you implement it, why it's there.
And so, if you can take a broader approach, it

(38:24):
helps you to make the right decisions for your business and your people.
Purpose-led is all about understanding how any piece of technology, any change in your business, any goal is actually designed for a very specific reason.
And so, for us, it's about using that so that you can build that transparency and trust.

(38:45):
It has a core purpose in your business, and it goes back to your evaluation of it.

Ant McMahon (38:50):
And that's something that I see as hugely valuable as well, coming from my enterprise architecture background, where I've seen projects.
I've seen vanity projects, which I've touched on in the past, but I've also seen projects that haven't really had a clear purpose beyond: do something, anything.
But the projects, within enterprises particularly, though for any business, where

(39:12):
someone's taken the time to sit down at the start and talk about that purpose, what we're trying to achieve by this and why, what problems we're trying to solve, and that's been articulated throughout the project, are more likely to be successful.
I'm not saying it's 100% guaranteed, but it's more likely to be, because as scope starts to creep, or as things start to try and derail the project, you can always bring it back to

(39:32):
what's the purpose.
Why are we doing this?

Annie Johnson (39:34):
Absolutely.
You know, I'm a huge believer in those big why statements.
They anchor you back to what you were doing in the first place, and you're right: scope creep is one of those horrible monsters that are inevitable in tech implementation, as I've found.
But it's also about making sure you stay true to your why as an

(39:57):
organisation.
Because of that hype curve we've spoken about, a lot of organisations have rushed out.
They've gone: we're going to get left behind, so just throw an AI into our business, without any real consideration.
And so it is hard to measure the effectiveness of that.
How do you measure the return on investment when you've just kind of plonked it into some workflow, or you've just gone

(40:18):
with a general model?
It's really important to understand why you're using it, why it's here.
If you're going to have a big statement on your reception wall that one of your values is being human, and then you make hundreds of applicants interview via an AI chatbot, you start to

(40:41):
get cognitive incongruence there, where people go: what are we doing?
So that's the whole thing around purpose-led design for us: it's implemented in a way that supports you in your business to continue your why.
Yeah, yeah, absolutely.

Ant McMahon (40:58):
And you also start to lose the human connection from the process, the more technology you put around it as well.
So yes, there are processes where technology will take a hundred percent of the job and will do it faster and more efficiently than people, but those processes are very rare within an organisation.

(41:19):
Most processes that AI in particular can solve for people are going to be more augmentative.
The AI is going to come in with recommendations or with insights or something that will make the person better at the job they're doing.
It's not necessarily going to take the job away from them.

Annie Johnson (41:36):
I agree, and you've touched on a point there, and that goes back to that fear that I hear about and that's out there in the media, where a lot of this has been: it could take your job, it's going to replace jobs.
And look, it has. In some industries it has disrupted things hugely, right?

(41:56):
You only need to look at content design for that.
I am 100% all about reinventing our roles; there is a real opportunity to look at how we value our work.
Where did this 40-hour work week come from?
Why is it eight, nine hours a day, you know?
But with that, it also comes back to this trust model

(42:18):
as well with AI.
How much efficiency somebody is gaining day by day, and how transparent they are with you about that, is entirely dependent on your response.
So if it's going to augment your role and I'm going to win back an hour, two hours a day, are you going to be an

(42:39):
organisation that just tips more work in that bucket, or are you going to be one of the thought leaders that starts to challenge how we value our work?
Are we going to do what we said we would do and get more of our humanness back, rather than allowing AI to dehumanise us in a way?

Ant McMahon (42:56):
Absolutely, and let's just touch on the job loss there for a moment, because you talked about AI taking over content creation.
I did a presentation on this a few years ago, before generative AI really blew up, and it was about the job loss that's happened through technology.
One of the stats that I pulled up, and I've just brought it up now, is that technology has taken 90% of all

(43:16):
jobs ever in history to date, and this was by 2019, I think; it was pre-COVID when I did the presentation.
So 90%. But when you start looking back, a lot of those jobs not only were taken by technology, but they were created by technology in the first place and necessitated by technology.
Two examples that sprang to mind were

(43:36):
the telephone operator and the checkout operator at a supermarket.
So yes, technology took those jobs, but the telephone operator didn't exist before the middle of the 19th century, and the checkout operator didn't really exist before the middle of the 20th century either.
Yes, we had store clerks; yes, we had people behind the counter doing work, but it was at a different layer.
So technology sort of came in and brought all these jobs with
(43:58):
it, and then it took them away at the same time.
And there's probably an element there where you could see that the tasks technology is taking away are probably tasks that technology put there in the first place.

Annie Johnson (44:08):
Yeah, that is such a great point, and I've never actually thought about it that way.
One of the industries that I often reflect on is banking.
I used to watch my mum run payroll for my dad, and it was on a carbon copy, a big green bit of paper.
Every Thursday she would sit there handwriting it out.
(44:29):
We would traipse down to the bank.
I would go with her because Westpac at the time had a big jar of jelly beans.
Why they took those away is a story for another day.
But there was a bank teller, and that bank teller processed it and handed her copy back, and we left.
And you look at banking now, and that automation, and you

(44:49):
know the self-service that comeswith banking.
Now I walked in the other dayneeding to do some complex
payment and I needed a human,and so I panicked, um, but but
you're right, you knowtechnology, the disruption with
technology has been here for along time.
It feels intensive.
It feels like we're we'reripping the guts out of industry

(45:11):
at the moment because of how fast AI is moving.

Ant McMahon (45:16):
It has just been a wave of iteration that we have never seen in our lifetime, so that is what is making it feel hugely intense and disruptive.
Absolutely, and the element within there, and I'm just trying to bring up the stat, in fact I'll talk about this one because it was one

(45:36):
that fascinated me in the presentation and the research.
We're talking broad technology here, everything technological, but if you go back to the end of the 19th century, there were 11,000 hansom cabs in London.
Now that's 11,000 horse-drawn carriages, plus the horse-drawn

(45:57):
buses, which had 12 horses each.
They estimated, and this was over 100 years ago, obviously, that there were 50,000 horses transporting people every day through London.
What was then broken down in those stats is that an average horse produces between 15 and 35 pounds of manure and two pints

(46:18):
of urine per day.

Annie Johnson (46:19):
Wow.

Ant McMahon (46:20):
So that was going onto the streets, and London had some pretty bad diseases 150 years ago.
Horses had an average life expectancy of three years, and because of all this, because of all the disease and everything that was going on, it was predicted for most modern cities, London, New York, there are similar stats for New York, it's quite interesting to see them as well, but they felt that
(46:41):
most of those cities would be overwhelmed and unlivable within 20 years, so by 1920.
And it went away.
Not only did technology take those jobs, the taxi drivers, the stable hands that looked after the horses, everyone that dealt with supporting horses, but London, and this is very subjective as to what livable is, but London

(47:02):
became a livable city again as a result of technology.
And we can fast forward that 100 years, and okay, horse manure is not the problem it once was, but there are similar problems that have disappeared because technology started to bring better efficiencies.
I remember there was a Microsoft TechEd conference where one of the keynote speakers was talking about the
(47:23):
use of drones and machine learning and predictive analytics to identify areas where mosquitoes were breeding, and I think it might have been Haiti, or one of the Caribbean islands.
Mosquitoes obviously bring disease, and what they were using the drones for was to drop insect traps into those zones that were normally unreachable by humans, thereby

(47:43):
controlling the mosquito population and thereby controlling disease as well.
So we can talk about all the bad things that technology might do, but we can also focus on the things that we were never able to do before as a result.
Absolutely, and there's always the caveat of: well, it does need consideration.

Annie Johnson (48:01):
With such massive change, you need to try and anticipate the unintended consequences of that.
Absolutely, and one of the big things also around this purpose-led design is the sustainability aspect of AI, and that should be a consideration.
If that's important to your business and you have values
(48:24):
around that, then make sure that you are building your solution in line with that.
And there are some really amazing voices out there now starting to challenge us on that, and I think it's a good conversation to have.
It's an important conversation to have, because individually I might be saving time and it's good for me, but we sometimes forget the impact of that on a wider scale.
So that's going to be a big conversation.

Ant McMahon (48:52):
I don't have the answers, and I'm really interested to see more about that, because the stats and the numbers are pretty mind-blowing.
Yeah, they are, and you're right: individually we don't have the answers, but collectively, when we start to build on the conversations and look for those alternative points of view, as we talked about early on, that's when we can start to see the answers coming through and what
(49:14):
they might look like.
It's been part of a societal conversation on what does AI mean for us, what does trust mean, how do we build trust in the tools, and how do we put humans at the centre, people at the centre, rather than technology at the centre of all of this?
I think that only comes about when we actually start connecting and talking as well.

Annie Johnson (49:33):
Yeah, I agree, and this inquiry from Australia has raised some really important points: AI will disrupt industries, it'll remove types of roles and change how we do work.
And there is concern, because we're always saying that for every job that's lost, AI will create a job.

(49:55):
We just have to be careful around the pace of that and whether it's balanced.
So, you know, there's a lot of that fear-mongering around losing your job.
I'm not seeing that at the pace that people are fearful of, but we do need to be cognisant of it.
You've got to think medium to long term, and that's another thing around AI for businesses: don't be

(50:17):
short-sighted.
There are a lot of quick wins that will help you, but you've got to make sure that you've got that medium to long-term plan in place, because you can sink a huge amount of money into AI development today, and in six, twelve months' time

(50:37):
it will be out of date.
So you've got to be really careful about that, and I think that's a lot of pressure on businesses to stay ahead.
And, as we talked about earlier, it's having the right knowledge around what's happening: what's coming down the line, what's in beta, what's coming out, what's good, what's bad.

Ant McMahon (50:57):
It's all of that consideration as well.
Yeah, and within that as well is: do we need to build it ourselves, or is there someone out there already doing this?
And maybe, as a counter, if we think we need to build it ourselves, has someone already tried?
Because there's possibly data out there from other organisations who have tried to solve the same problem in the

(51:18):
same way and decided it just wasn't worth it, and therefore, if you can find that, you might save yourself a bit of time and pain along the way.
Absolutely, and we find that particularly in the HR industry, and we are HR tech, it can be a really frustrating space to navigate.

Annie Johnson (51:38):
We are. You know, system to system there might be slight differences, but nobody's really taking a step back and going: why do we keep building systems that we've got to keep bolting together, and are we listening to the right problem statement?
There are lots of lessons out there, and I

(52:02):
wish people were more transparent around: it failed for us, or it didn't work.
Because I agree, you've got to do your homework and understand those earlier learnings.
There'll be nuggets of epicness in that, right?
Absolutely, it might help us.

Ant McMahon (52:20):
Absolutely, and that's where experimentation is key through this as well.
Call it a proof of concept, call it a proof of value, it doesn't matter what you call it, but just experiment with some of the stuff before you go and sink that large amount of money into it.
You know, an AI project can be six months or more of time that's distracted from doing anything else, but

(52:42):
you might be able to run an experiment in two weeks, or even in two months, that will tell you this has legs or this doesn't have legs.

Annie Johnson (52:50):
Completely, and that is some of the best advice I received when sourcing devs: build a really tiny concept.
Think about workflows and the problem statement, then give it to a few mates to play with and break.
That's one of the best things I ever did, taking that advice

(53:12):
and playing with it first, and understanding: is this solving the problem I intended it to solve?
So yeah, you're right, it's definitely worth understanding that and getting the right expertise around the table to help you with that.

Ant McMahon (53:27):
But the needs analysis is really important.
Yeah, definitely, definitely, and I think it's been a great conversation.
It's probably a good time to stop on that note: getting people to reflect on what the need is, what they're trying to solve, and making sure they don't spend more time, money, blood, sweat and tears than is necessary.

Annie Johnson (53:48):
Yes, completely.
It becomes very complex very quickly, and it is expensive.
Devs are, ooh, it's something I've learned.

Ant McMahon (53:58):
The benefits might be there, but developers don't
come cheap.

Annie Johnson (54:02):
They don't, but they are very, very smart people, and every day, being deep in the dev space as chief of product, gosh, I'm just floored at how smart people are.
And actually a good nod to New Zealand: we've got so many smart people here in New Zealand that I have been amazed by in this journey.

(54:24):
We are incredibly lucky, and I think we should really back that innovation space.
There are a lot of great things happening to help explore that and raise it and celebrate it.

Ant McMahon (54:39):
Absolutely, Annie.
Thanks for your time, and thank you so much for coming on as a guest.

Annie Johnson (54:44):
Absolute pleasure, Ant.
Thank you so much.
No problem.

Ant McMahon (54:47):
I look forward to catching up again soon.

Annie Johnson (54:48):
Yeah, absolutely.
Thanks so much.

Ant McMahon (54:50):
See you.