
May 13, 2025 53 mins


"People are using AI tools whether they're being allowed to or not. More and more of large organisations’ ways of working are starting to operate out of these things organically."

Ben Le Ralph

Listen to the full episode for an in-depth look at how AI is changing the way teams work and why strategy may soon become the next big challenge in the age of AI.

In this episode, you’ll hear about:

  • What Ben means by the “strategy execution gap”
  • Why AI adoption is messy, fragmented, and very real
  • The comparison between AI and Excel in organisational usage
  • How teams are using AI tools unofficially to move faster
  • The personal vs organisational use of AI - what’s working and what’s not
  • The shift from AI hype to quiet productivity
  • Why grey user insight can still be useful
  • The risks (and benefits) of AI hallucinations
  • Why AI won’t replace jobs - but will shift how we work
  • How delivery bottlenecks are giving way to deeper strategy work
  • The future of AI as a practical, democratic tool for decision-making
  • The emerging value of strategy work in the wake of AI adoption
  • How AI is revealing deeper patterns that drive decision-making, even if not perfectly accurate


Key links

About our guest 

Ben Le Ralph is the founder of AI For Busy People and runs a small co-working space in Richmond called Meet and Gather.

Over the past 15 years, he has helped small teams, often within larger organisations, to achieve big things.

He specialises in supporting business owners and team leaders to align their teams on the right strategy and implement practical systems that supercharge delivery. Ben is passionate about helping teams work smarter and build things that actually move the needle and make an impact.

Before launching AI For Busy People, Ben co-founded and scaled a B-Corp certified consultancy, growing it to a team of 15+ and $6 million in revenue. His company partnered with some of Australia's most recognisable organisations and government departments to help them rethink how they tackle complex social challenges.

About our host

Our host, Chris Hudson, is an Intrapreneurship Coach, Teacher, Experience Designer and Founder of business transformation coaching and consultancy Company Road.

Company Road was founded by Chris Hudson, who saw over-niching and specialisation within corporates as a significant barrier to change.

Chris considers himself incredibly fortunate to have worked with some of the world’s most ambitious and successful companies, including Google, Mercedes-Benz, Accenture (Fjord) and Dulux, to name a small few. He continues to teach at the University of Melbourne in Innovation, and at Academy Xi in CX, Product Management, Design Thinking and Service Design, and mentors many business leaders internationally.


For weekly updates and to hear about the latest episodes, please subscribe to The Company Road Podcast at https://companyroad.co/podcast/

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Chris Hudson (00:07):
Okay. Hey everyone, and welcome back to the Company Road Podcast, where we explore what it takes intrapreneurs to change a company from the inside out. And I'm your host, Chris Hudson, and today we're gonna be diving into how AI is revolutionizing the way that entrepreneurs can drive change within their organizations. So joining us today is Ben Le Ralph, founder of AI for Busy People.

(00:27):
And he's the mind behind the Meet and Gather coworking space that was set up in Richmond here in Melbourne. Ben has spent 15 years helping small teams achieve outsized impacts, often within larger organizations. And before his current ventures, he co-founded and scaled a B Corp certified consultancy to about $6 million in revenue, partnering with Australia's most recognizable organizations.

(00:49):
And today, Ben's gonna share how AI can bridge what he calls a strategy execution gap. We're gonna get into some of that in a moment, but it's about giving smaller teams, and people really like us, the leverage to accomplish what previously required huge amounts of resources and massive teams. So whether you're leading transformation efforts or you're looking to drive innovation from within in some sort of way, this conversation is gonna give you loads of practical insights on

(01:11):
harnessing AI as your secret weapon for change. And if you've seen any of Ben's clips, he's on TikTok and all the different channels. He's a prolific content creator and shares some amazing stuff. So tune into what he has to say. And Ben, a very warm welcome to the show. Thanks for coming on.

Ben Le Ralph (01:25):
Thanks for having me, Chris.

Chris Hudson (01:26):
Great. And now Ben, you're no stranger to content creation yourself. So you're on all the socials, and obviously you're putting out videos and hints and tips and tricks all the time. You work deeply in the area of AI, and it's been a bit of a head turner, obviously, on LinkedIn and across much of the media for a while now. But have we grown bored of it? Has it grown bored of us? I don't know. Are we doing things with it anymore?

(01:47):
There are a thousand people every minute, it seems, making themselves into Mattel action figures right now. But what have been some of the hottest topics in AI at the moment, do you think?

Ben Le Ralph (01:56):
Look, I think you're so right in that it's becoming very normalized very quickly. I think it went from something that was very out there and, like, peak interesting, to something that people are tackling within their companies, like, oh, actually, this is something that I better start doing something about. And I think the Mattel action figures thing is also interesting in

(02:18):
that it is amazing how fast a fad can go from something that's, like, wildly interesting to, like, so over, and we're talking, like, maybe in an afternoon. Yeah, but that's how easy the AI tools are these days, right? Like, you can literally just type, with no pre-knowledge of anything about machine learning, and create a very impressive image of yourself doing something wacky.

(02:41):
And that learning curve fall-off has been... it's very interesting to see.

Chris Hudson (02:46):
Yeah. Yeah. So there's stuff popping up like that all the time. There was the one before, it was like the week before or whatever, I can't really remember what it was called, but it was a certain illustration style that people were using. So you probably know the one, but yeah, it just feels like there's more, more things being tried out, more things being pushed, and yeah, people are getting comfy. Right. So is it getting more interesting, or is it just

(03:07):
plateaued into this creative pool of...

Ben Le Ralph (03:11):
Yeah. Yeah. It's certainly a position where you can either choose to be pessimistic about the future or optimistic about the future, and I'm always quite optimistic about the future. I think what's interesting about this moment, or what I harness my content around, is this sense that, like, there are lots of announcements coming out at a very high rate. Like, what actually got me starting my channel back just

(03:33):
before Christmas was OpenAI did something like the 12 days of Christmas, and they released new features every single day. And I'm like, in a regular instance, or in a regular world, just one of those announcements would've been something that they built up to for, like, three or four months. Hmm. And, like, that's happening at not one company, but...

(03:53):
At 50 or a hundred companies, where someone's finding these new features, whether it's creating images or it's writing code, or creating apps, like, these little things that it can do. They're being released and announced every day, and it's almost gone from 'wow, this is interesting' to just trying to keep up with the various ways that you can

(04:16):
implement it has become almost like a part-time job. So yeah, I think it's exciting, but in a different way.

Chris Hudson (04:22):
Because of the, yeah, the fragmented nature of that, obviously it's kind of splintering, right? It feels like there are many, many, many, many options now, and everyone knows ChatGPT, and they think they're impressive when they mention ChatGPT. Yeah. But there are thousands and thousands of things that you could try. Right. And it's becoming a harder space to navigate, and some of

(04:44):
it's good and some of it's not good. Right. So some of it's still picking up speed. So where do you think some of the challenges are, and some of the ways to navigate, that kind of help? Where can it be easily done, and where is it helpful to start, do you think?

Ben Le Ralph (04:56):
I certainly agree that you can get lost in all the possibilities, right? Like, you can just literally look at all of the announcements, very hypey, very frothy, and you can just get stuck in: should I try this? Like, should I try ChatGPT, or should I try Claude, or should I do an agent? Or, literally, things that you can just get lost in.

(05:17):
And I think particularly in larger organizations, where you can't necessarily just download something and start using it right away in, like, a professional sense. I used to work on, like, old-school technology projects, where you would go through and you'd have the ideation and you'd do the strategy and you'd go through a full design

(05:37):
implementation process, and then you'd pick technology, and like all of that, even if you're doing it quickly, takes six months. And then you get into an agile world and you're like, oh, maybe we can break that down into, like, a discovery phase and then some sprints. But all of that just feels like it takes too long in this current environment. And so I think the challenge that people are facing is

(05:58):
finding these new ways of working that allow you to work and experiment at a faster rate, while also, I guess, balancing the need for safety and security and planning and strategy. Like, all of these things have to come together, and they move at very different speeds.

Chris Hudson (06:18):
Yeah.
It's kind of interesting to see the number of organizations at different stages, almost. It feels like the stages of maturity are wild and different. Some teams are running rogue within an isolated unit within an organization, because they're able to do so. Or a lot of people are just using their personal laptops as well, which, yeah, that may be another thing which kind of goes under

(06:39):
the radar, but they're just trying to see what they can output. And then maybe it's just a case of then business-casing it or rolling it out. But yeah, this notion of kind of guerrilla AI is kind of interesting, because, you know, you want to think about, okay, well, where is it official? Where is it not official? If you are waiting for the official lines, then what are you having to wait for, and what are the signals for it being okay to use? But yeah, is there a way of just getting around that, and there's

(07:01):
some guerrilla stuff, and getting the work done, as you say. 'Cause people are just using it on the phone, on the home laptop, you know, whatever it is.

Ben Le Ralph (07:08):
A hundred percent. I forget where I got this from, but I heard someone recently talking about this idea that ChatGPT, and these assistants in particular, like, there's lots of different ways AI can be used, but that assistant use in particular, is being implemented in organizations the same way Excel was. Right, right. And so this idea of a spreadsheet, where people would

(07:30):
create them for literally anything. And, like, in large organizations, there is so much data that's just jam-packed outside of all of the other systems that they've got. It just, like, came out, and then someone's put it in a shit spreadsheet. And people would often, probably not people listening to this podcast, but people out there in the real world would be very

(07:51):
surprised about how much critical infrastructure is actually just being run out of Excel, because it is just so easy to spin up a sheet. It's quite powerful, and people can just, like, get something that they're being asked to do, done. And I think that's the niche that ChatGPT, or these AI assistants, is filling: people are using them,

(08:11):
whether they're being allowed to or not. More and more of large organizations' ways of working are starting to operate out of these things organically. Yeah. And so then the question really becomes: how do you scale that across teams, across the organization, and kind of utilize that in a more formal way?

Chris Hudson (08:32):
Yeah. Yeah. Which you think you always have to do, but you don't always, really, because, you know, with the adoption curve as it is, some people might not get round to it, or need to, whereas, like, a 10% or a 20%, I dunno what the percentage is, but there'll be some forerunners and some people behind that. Right. So the people that need to use it will, and for the people that won't, it probably will just be an Excel version.

Ben Le Ralph (08:53):
Yeah, I think it would be interesting. I think Anthropic, who makes Claude, came out with a new, like, a new paid tier, which is like 200 bucks, and what they're finding is that the people who are buying it are developers at large organizations using their personal cash to buy it. Like, it's proven that it can help them so much that they're spending their own money on it, just because it makes their day

(09:15):
easier.

Chris Hudson (09:17):
Oh yeah, that happened with Adobe as well. Like, people were just buying the licenses for themselves. And yeah, maybe not with Figma, that was a bit more legit, but it felt like there were workarounds. People were just finding ways, not in a software sense, but just work stationery or other, like, basic things that you would just bring into work, because people wanted to have it a certain way. Yeah, it's been something that's been hacked a little bit for a

(09:39):
while, I think.

Ben Le Ralph (09:40):
Certainly. I think there's more of that, like, groundswell usage of it. Like, that actually feels relatively mature, almost, within organizations. It's the how do you then build it into your products to improve the user experience of the business that people are more nervous around, I feel like. How can I use it in my own personal workflow is a lot safer than

(10:02):
How do I restructure our service offering to take advantage of AI? Yeah, and I think, if the part of the curve that's 'AI is here and it can help me personally' is maturing, the next wave is how can we improve our internal processes by using it, to do admin, project support, writing content. Like, there's

(10:22):
thousands of use cases, but it's within your internal teams and organization. And then the last lagging factor would be: how do we change the product in a world that has AI in it?

Chris Hudson (10:35):
Yeah. It's an interesting point. It presents a fairly open-ended challenge or question, which is around how those individuals that are using it in their own way, in a fairly nuanced way, how does that become scaled and systemized? And how do people actually make sense of it and work together as a team, in collaboration with it, as well? Because it has been, you know, I'm gonna input this email and

(10:57):
ask it to do something better. Make this email sound more formal, make it sound more funny, and do it in the tone of voice of Tom Hanks, or whatever you wanna use. But, you know, you can see how it's become very individualistic and quite sort of introverted, inward-facing, I wanna say, even though it's an outward expression in

(11:17):
the end, because it's creating and outputting something as generative AI. But it feels like that ownership of it sits with the individual, and it needs to then stretch into an organization. So that could be interesting, to see how it plays out.

Ben Le Ralph (11:29):
Very. Like, at the moment, we're still in a, I mean, it depends on the organization, it depends on the people, but I still feel like we're in a moment now where people feel like if they can catch you using AI, they've got something over on you. They're like, oh, there's an en dash, there's a little dash in the words there, I'm pretty sure that ChatGPT has written that, or whatever it is. And I think we all just need to get over

(11:53):
that, as like a... those go. That's it. Yeah.

Chris Hudson (11:58):
My wife's a designer. She was talking about typography, and was like, the only people that know about en dashes are creatives and art directors that worked in the nineties, or people that have come from a serious publishing background. Nobody else would have a clue as to how to use it. So yeah, it pops up, right. And yeah, other things, the spelling, capitalization, you

(12:18):
can soon see what words it's using quite frequently. So...

Ben Le Ralph (12:21):
Totally. And there's parts of it where it's like, your job, like you as a person, your job is to make the content better, 'cause ChatGPT can create some generic stuff. Yeah. But then there's the element of, like, if it's just polishing the words or helping you think, the fact that it did that shouldn't be something that you have to have as a secret.

(12:43):
It always reminds me that there was a time when Wikipedia was seen as, like, trash. If someone caught you referencing Wikipedia, they're like, oh, you are not a serious person. And in, like, the span of 10, 15 years, I think if someone's citing Wikipedia, they're citing it because they're like, this is

(13:04):
fact, and there's so many other news sources. So I think these things change.

Chris Hudson (13:11):
Fake news and all that. The fact that I've got a 13-year-old daughter and she's watching YouTube, and that is all the truth, right? That is, like, the truth. And it's just somebody like you and I on this podcast, we're just talking about what we think. But that's taken to mean something, the associated meaning of an expression. It could be verbal content, written content, anything. But

(13:32):
as soon as it's made and it's out there in the world, it's considered to be real. That's right. Yeah. It's a hard thing, that sense of truth. It feels like the fact that it's a pooled, crowdsourced thing, it's bringing together a lot of sources of information, and obviously the more it's being fed, the more it understands and the more nuanced it becomes. And that sense of crowd knowledge, and the sense of the wider

(13:55):
shared knowledge pool. It feels like you would, in the end, get to a definition of the truth that's evolving and evolving and evolving, but it feels like it could become quite credible. What do you think?

Ben Le Ralph (14:07):
Oh, hard agree. I think, yeah, definitely. I think there used to be this concept of, like, wisdom of the crowds, and I think it kind of plays on that a bit, right? There's probably a few ways to think about it. The first part of it is that they crawled a bunch of random content on the internet, and just by taking that large

(14:28):
snapshot, taught these models how to speak English in a well-defined way. And within that 'it can speak English' thing, there was some fact and there was some truth, but there was also some craziness, and there was just wild speculation about putting, like, glue on pizza or whatnot.

(14:49):
Right. But then, as people use it, provide feedback... people can add much better data sources. They can remove things that don't work. There's a lot of reinforcement learning that happens just from people using it, and I think that is what notionally makes this thing smarter and smarter over time, is that it wasn't created by one

(15:10):
person. It's being used by so many people, and getting so much feedback on a daily basis, that it's impossible for it to... It is not the truth, but it's what we think the truth is as a society right now, which can get a little philosophical, right? Like, what is the...

Chris Hudson (15:29):
It can. What's driving what? I mean, it feels like if you've got smart people putting smart things in, and feeding it, giving it feedback, then it would educate it. If you've got dumb people putting... I can't call people dumb people, but if you put in, like... 'cause I was just running some research about this the other day, and with AI, we were talking a little bit about the difference between

(15:51):
interaction with a conversational AI and a human, and people are just abrupt, to the point, direct, borderline rude with the AI. You know, some people say you've gotta be nice to it. That's another topic of conversation. But yeah, that gets better results, very direct. Like, can you gimme this? I've got these items in my fridge, what am I cooking? You know, it's kinda like you talk to it like you wouldn't talk to anybody else in your life.

(16:13):
So it's kind of, the blandness of it could lead to a lesser version of the AI, but obviously that could be balanced with the people that are pushing its capability and feeding it with a lot richer content, so that will always be in balance, I'm sure, as a tension. What do you think?

Ben Le Ralph (16:30):
There's a few things that I think in there, and I'm not an expert at model design or how these models are put together, by any means. So everything that I say is with a grain of salt. But what I find interesting is: we are kind of moving as a society, I guess, as all communication has, to just more. Like, the amount of photos we

(16:52):
have, the amount of interactions we have, like, all of that is becoming more and more and more and more. And even, like, the printing press, before the printing press and after was just more people being able to share their experience. And I think there's something to the notion of smart people putting smart things in, and trying to protect from the dumb people putting dumb things in, that is almost balanced,

(17:14):
where it's like, if you just get lots and lots and lots of stuff from lots and lots of different people. And the 'lots of different people' bit is, I think, the most important part. Which is that you just get so many different experiences of the world that don't necessarily usually get picked up in things that were being published or talked about. And so it's this balancing factor of, yeah, there's people

(17:36):
who are saying things that are clearly, like, untrue by any scientific standard, but often those people are talking to an experience that they have, or they think about things in a particular way, or, like, all of that stuff. As long as the models can be tuned in a way to operate the way we

(17:57):
expect, actually, we are getting a much better breadth and depth of what's actually happening in the world than we were probably having before, when we were trusting a small amount of people to publish very specifically accurate things.

Chris Hudson (18:12):
Yeah. Yeah. So from a representation point of view, it would be more visibly and accurately represented, for better or for worse. It's what TikTok or YouTube is.

Ben Le Ralph (18:25):
That's... it's certainly a for-better-or-for-worse situation. We're

Chris Hudson (18:27):
thinking, we want to create, and we, this is what we think people wanna see and read. So that's how it's gonna be.

Ben Le Ralph (18:33):
Look, I would base this just on looking at the transition of, like, going from no websites, where all you could reference was books. And I was old enough to just catch the generation where, like, half of my schooling was done purely going to the library and looking at physical books. Yeah. And then the other half was like, now we've got websites and access. Yeah. And I think AI, if it's just another order of magnitude of

(18:55):
that, it's like, there's more crap, but there's also a lot more genuine, valuable stuff. Yeah. That would be my...

Chris Hudson (19:02):
So it's like having... that's the, yeah.

Ben Le Ralph (19:04):
What I would hold to in my framework, not knowing how the numbers or the math work under the hood.

Chris Hudson (19:09):
Yeah. The days of the encyclopedia on a CD-ROM.

Ben Le Ralph (19:13):
Yeah. That used to be the gold standard.

Chris Hudson (19:14):
That's all the knowledge in the world, apparently. It's on one disc, and I'm gonna put it in my machine, and I'm gonna ask it whatever I need, you know? Interesting stuff. So that will be the trend, more and more. So for navigating that, more and more, it feels like we'd need some discretion, right? Like, we need some judgment around how to handle that, what to take, what's credible, what's not.

(19:36):
It's gonna be hard, isn't it? Yeah, recommendations. It's a bit like choosing whether to go with this airline or that, or with this shop or that shop. But it's much harder than that.

Ben Le Ralph (19:46):
Certainly. Yeah, like, where it becomes interesting, like, when you actually get AI to do stuff for you, right? Like, they're acting as these AI agents: book a flight, or something slightly more complicated than that. Like, actually seeing where the rubber hits the road there, it shows you where they're dumb in very particular ways. Like, I've always thought there's that saying of, like, users

(20:07):
aren't good at telling you what they want, but they're good at telling you what they don't want once you show it to them. I think there's a notion of that when you are working with an assistant and you are just asking it for ideas, or you're talking with it and it's all just language. It's a little bit easy to just be overwhelmed by how realistic it sounds, or how human it sounds, and be like, oh

(20:27):
yeah, that sounds credible, that sounds great. But then, when you get it to do an actual task where you can see that there's a right or a wrong, like, get it to code something for you, or book a flight, and then seeing it struggle, it probably gives you a healthy dose of: this isn't magic, this is just a computer that's easier

(20:49):
to interact with.

Chris Hudson (20:50):
Yeah, it's falling from grace. You put in this whole thing, and, you know, people do, to some extent, go quite in depth with their prompts now, right, around what you're asking it to do. So if you've written your masterpiece in your instruction manual, and you've given it all the information you think it needs, and you get served a turd, or you get something bad that you weren't expecting, then that must be part of the learning

(21:14):
experience. People have gotta be okay with that.

Ben Le Ralph (21:16):
Yeah, a hundred percent. Learning how to be a prompt engineer, this is something that's changed, I think, dramatically in the last six months. Which is, I think there used to be this idea that you were gonna have people in your organization who were gonna be prompt engineers, or, like, they were gonna be the people who use AI, or everyone was gonna have to learn this skill. Whereas now, I think everyone kind of has this realization where it's like, you're not gonna have one person there

(21:39):
using the AI. Everyone needs to learn how to ask a question and get an accurate response. Yeah. As part of their job. Like, that's gonna be something we are all gonna have to get on with.

Chris Hudson (21:48):
Yeah. Kind of like a waiter in a restaurant, you know, taking the order to the kitchen, and the AI's gonna give you the food, and then you're gonna take it back to the table. It's a gated version, which would've resulted in a different AI in the end as well, presumably, if that had been curated by a certain set number of people. Whereas now it's open for everybody. So, yeah.

Ben Le Ralph (22:09):
Can I... kind of prompted from a question you asked earlier, and this is potentially controversial. You might not think it's controversial at all, but I've had... yeah, so we're, like, both user research people. Yeah. But one thing that I've found very interesting with AI is this idea of, like, grey user insight. You know, if you've got, like, grey water, it's like water.

(22:29):
But I wouldn't necessarily drink it; I'd put it on the garden, right? Like, it's this water, it's a bit tainted, but it's okay. I've started seeing, when I work with organizations, the quality improve, and actually the quality improved by making the customer ever-present, but not necessarily having that customer data be tied back to a specific user. Or the same with,

(22:51):
like, analytics within an organization. You know, like, there would be this whole thing about, like, how do we use our analytics within the organization with an LLM if we can't guarantee the accuracy end to end? Whereas what I've seen is that the decisions in an organization are being made irrespective of whether they're getting data or user research at all. And so, taking a bunch of research reports that have already been

(23:15):
conducted, feeding them into a bot, doing some clever prompting, and then just giving it to an exec and saying, yeah, run your thinking through this voice-of-customer bot. Yeah. It significantly improves the output of their decision making. One, because the AI is good. But two, because really what's happening is, every time they go

(23:36):
to do something, they're thinking about the customer. Like, the AI could literally do nothing, but the fact that they're having to engage and frame their decision in a way that you would ask a customer, just that improves it. And so I think my controversial opinion is that AI lies, but...

(23:56):
So what? Like, there are degrees in which I don't necessarily think that that matters.

Chris Hudson (24:00):
Hang on, hang on.
So where are the lies?
That was the controversial bit.
Where are they?

Ben Le Ralph (24:05):
Oh, so I think the controversial bit is, like, when I hear people in organizations talk about AI, their biggest thing is they're like, oh, but doesn't it hallucinate? Doesn't it make stuff up?
Yeah,

Chris Hudson (24:12):
Yeah, yeah. That's very interesting. I mean, I work a lot in research, as we were talking about just before the show. Yeah, the point of view, you know, the dynamics around leadership, and obviously how close the opinions are held, you know, and how that can be liberated, often it's needing to be backed up by user research or customer research, and people

(24:32):
are going to some lengths, you know. Sometimes you're talking to thousands of people and surveying them to get the data that supports or, you know, disproves somebody's highly important point of view. You know, that can be the case too. Yeah. So yeah, we're thinking about that. But yeah, if it's democratized, you know, the AI, if the CEO had this point of view, and was

(24:54):
asking the AI for the answer, and the AI was, you know, obviously not really gonna care about what the CEO thinks, because it's gonna give the answer that it thinks is the right answer. Um, then that will change political dynamics, you know, within team structures and decisioning and all of that, which could be interesting.

(25:15):
Certainly, good decisioning comes out of the mix from an intrapreneurship point of view, from an organization point of view, because it can be neutralized as something that is, you know, crowd-learned, and, you know, all the data that's feeding into it, it's the collective wisdom, as we were saying. Then, you know, what's that gonna free us up to do? What are we gonna be doing if we're not talking about what the

(25:36):
decision should be?

Ben Le Ralph (25:37):
Yeah, totally.

Chris Hudson (25:40):
Yeah.
I mean, yeah.
Sorry, do you have a follow-on?

Ben Le Ralph (25:43):
I could talk about that forever, but yeah.
Okay, you go.

Chris Hudson (25:48):
Yeah. How far away from that do you think we are? Do you think we're close to that? Or are we just learning the baby steps at the moment?

Ben Le Ralph (25:56):
I'm not necessarily convinced that AI is gonna take anybody's job, to be frank. The way I see it playing out realistically is that there are a lot of people trying to figure out how to reduce headcount using AI, and I haven't seen a single version of that that has worked to meaningfully reduce the amount of people that they have.

(26:19):
Yeah.
While keeping a level of quality.
Like, there are actually plenty of high-profile examples where people fired a bunch of their support staff and then started hiring them back.
Or they said they weren't gonna hire any more developers, and then you just look at their LinkedIn profile and there are developer jobs in there for days.
Mm.
And so I think what will happen is that anything AI can do

(26:42):
becomes commoditized right away.
It's like, by definition, if AI can do it, any organization can do it.
And so I think what's more likely to happen is that it raises the bar for everything, and so there are certainly things that you used to spend a lot of time in your day doing, whether you're a researcher or a developer or any of these kind

(27:03):
of knowledge work professions.
What you do during the day will change.
But I'd be very surprised if what happened was it did all the work, and then we all sat there and we were like, I dunno what we're gonna do.
Yeah.
Mostly because, I don't know what your experience is like, but my experience within organizations is that

(27:25):
there are a lot of people who have a bunch of stuff that they say they should be doing.
Talking to customers is like a great example.
Yeah.
But they never actually have the time to do that stuff, and so I don't think you ever get to the end of a list of tasks.

Chris Hudson (27:40):
So all they're gonna do is talk to customers?
Is that what you're thinking?
They'll be out, they'll be around their homes having tea every day.

Ben Le Ralph (27:49):
If that's the competitive advantage, then that's what you'd end up doing.
Yeah.
No, it's a bit facetious, but I don't actually think that that's all that unlikely.
Particularly because I think probably what will happen is that AI allows us to, and this is the difference with the strategy execution gap, right?
Which is that, yeah, you used to want to do things within an

(28:11):
organization.
You'd have all of these brainstorming sessions, you'd have a lot of strategy, and they would all come down to this one bottleneck, which was delivery.
Yeah.
And you just could not do that much.
Yeah.
Whereas what I'm saying is that delivery is starting to become much less of a bottleneck, and so more ideas can flow through, which requires a lot more strategy, research, customer insight.

(28:36):
To be able to actually create good stuff, because "can we do it?" is no longer the bottleneck that forces decision-making, if that makes sense.

Chris Hudson (28:44):
Yeah, yeah.
I mean, that is an interesting point in itself, because I wondered where it would leave our friend strategy in the mix of this conversation.
Because what we've seen, and I don't know if this is relating to AI or not, but certainly the last 12 months, it feels like the senior management in a lot of cases, big corporates, was being trimmed back.
So there are a lot of redundancies here and there.
There's a real focus on shipping the work and delivery, it's

(29:08):
engineering, it's dev. Obviously work needs doing from a product point of view; if you're running a software business or whatever, you just need to get the design done and out there into the world, so that you can see whether it works or not.
But
then strategy, where does it sit?
Is it just being directed from above?
The implementation, the design, that can often now with AI just be driven with a few

(29:30):
simple prompts, and people are just getting to a version of something creative and it ends up being evolved a little bit.
But there's either a danger of the strategy being missed, or a danger of the executional aspects just being a bit grey water, as you were describing before.
Can we get around some of that and still keep it joined up, do you think?

Ben Le Ralph (29:49):
Yeah, I think strategy is about to have a renaissance, to be honest.
I think there are certainly gonna be companies...

Chris Hudson (29:56):
Is renaissance the right word? It feels like the right, traditional word to use.

Ben Le Ralph (30:00):
Yeah.
And this is a bit rogue, but I think it will be, yeah, traditional strategy.
Like, it's hard to speak in such broad terms, but what I've experienced is that there are a lot of people who do strategy where that isn't how I would describe what they do.

Chris Hudson (30:16):
Yeah.
Okay.

Ben Le Ralph (30:17):
Yeah.
I just don't necessarily think that often those roles are being all that strategic, and it's not their fault.
It's just that what they're being asked to do is: we've got all these requests coming in and we've got one development team, could you run the strategy of what essentially gets prioritized through that pipe?
Yeah.
And if the pipe gets bigger, you don't need that job quite so much.
Yeah.
But if you are executing at three or four, or five or 10x

(30:41):
the amount that you were doing before, if that's not all directed, then you're gonna see organizations just splinter at 2, 5, 10, 50 times the rate that they were doing before.
Like, I'm always amazed at just how much waste is happening within organizations, because there are teams who are executing things unaligned or unrelated

(31:03):
to each other.
Yeah.
And without good solid traditional strategy that's aligning these teams, you won't be able to hire a bunch of people to sit between those teams to do the gatekeeping anymore.
Like, it just won't be something that a person is physically able to do.
And a specific example of thisthat I found interesting, and

(31:25):
it's more of an experiment that I run with people than anything else.
Obviously you've got your organizational strategy and you've got your organizational values, and they're defined to various degrees of clarity.
And then what you've got is, say you're a manager and you're working across a department, you've got all the things that

(31:45):
they're typing into an LLM like ChatGPT, and you've got all the emails that they're sending on a daily basis.
And running their emails and their chats through an LLM, essentially saying, can you articulate the strategy this person has?
Like, what are they actually executing against?

(32:05):
Yeah.
It is amazing how, like, polar opposite they can often be.
Yeah.
Like, they're just disconnected.
And I think ideally where LLMs can play a really interesting role in strategy is keeping that aligned.
Is what we're executing actually what we're saying we're gonna do?
Because we don't have to have a meeting to figure that out.

(32:28):
We can use other metrics.
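As a rough sketch of the experiment Ben describes (entirely illustrative: `build_drift_prompt`, `check_strategy_drift`, and `ask_llm` are hypothetical names, not a tool from the episode), you could assemble the stated strategy and a person's recent messages into one prompt and hand it to whatever LLM client you use:

```python
# Hypothetical sketch of the "strategy drift" check described above:
# give an LLM the stated strategy plus someone's day-to-day messages,
# and ask it to articulate the strategy they're actually executing against.

def build_drift_prompt(stated_strategy: str, messages: list[str]) -> str:
    """Assemble one prompt asking the model to infer the de facto strategy
    from daily communications and compare it with the stated one."""
    joined = "\n---\n".join(messages)
    return (
        "Here is our stated strategy:\n"
        f"{stated_strategy}\n\n"
        "Below are recent emails and chat messages from one manager:\n"
        f"{joined}\n\n"
        "1. Articulate the strategy this person is actually executing against.\n"
        "2. List where it diverges from the stated strategy."
    )

def check_strategy_drift(stated_strategy, messages, ask_llm):
    """ask_llm is a placeholder: any function that takes a prompt string
    and returns the model's completion (e.g. a chat API wrapper)."""
    return ask_llm(build_drift_prompt(stated_strategy, messages))
```

The interesting part is the comparison step at the end of the prompt: the model is asked what the messages collectively optimize for, not whether any single message is on-message.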

Chris Hudson (32:31):
Yeah, it presents an interesting ethical question as well, around what it's got, and whether it's just answering the question that you've asked it about the strategy.
But it could by the same token answer the question, like, what is that person actually doing for the company?
And how are they spending their time?
All of these taboo-type questions that come up.
So I think from a leadership point of view, that data access

(32:54):
and consent, some of the privacy conversation comes in.
It's not just about the outside world and giving your data away to a bot or an AI that is learning from it, 'cause in this case it could influence either yours or somebody else's career.
Right?

Ben Le Ralph (33:09):
I think that's right.
I think the difficulty is, if I was coming at it without ever having worked in corporate, I would come out pretty hard to say: if you are at work and you're writing an email, and someone comes to you and asks, like, what do you do on a daily basis, and they can't answer that question,

(33:29):
I think that's a problem, and I don't feel like that's necessarily unethical; it feels fundamental, 101.
I don't think that that is ethically problematic.
Yeah.
We both have worked in large corporates, and so we both know how that gets manipulated or used, like screen capturing what people are physically doing, or tracking mouse clicks, or

(33:49):
like, all of that becomes gross, and I think it stems from not fundamentally understanding the value that people are creating.
Like, I think it actually comes from a place of not understanding what people do.
Yeah.
Like, if you are trying to quantify what someone does by hours spent or emails

(34:10):
written or whatever, then you've kind of missed the point.

Chris Hudson (34:13):
Oh yeah.
But there's a lot of that going on.
You know, the witch hunt, it's always happening, it feels like.
Whereas the point you were making about, oh yeah, that was written by AI, wasn't it? So what did you do with the rest of your day?
Yeah, that's gonna be a conversation.
So if you're on the receiving end of that sort of feedback, how should we be preparing for it, do you think?

Ben Le Ralph (34:32):
Probably not the right person to ask

Chris Hudson (34:34):
necessarily.
You're an evangelist for AI, so

Ben Le Ralph (34:37):
Yeah, and I would also, I guess before that, be kind of an evangelist for not measuring output, or not measuring someone's value within an organization based on output or time.
Like, it always seemed like crazy metrics to me, and I'm not the right person to ask because I think that's a stupid thing to do.
And so I acknowledge that there are management out there that

(35:01):
think that that's the best way to get the performance out of their team, and I think in the long term that will prove to be incorrect.
Yeah, to the extent that they use AI to do that, in the same way that they've used other tech to do that, I think there needs to be good solid frameworks, rules, laws, from a government who understands this stuff, to protect people from, like, blatant misuse of this stuff.

(35:23):
Like, I think that that's probably very important.

Chris Hudson (35:25):
Yeah.
Alright, maybe I'll ask you a more comfortable question.
Yeah.
Which is probably leaning more into the positive sides of AI, and not maybe some of the negative at this point, but it's also in regard to that witch hunt aspect.
So if you're an intrapreneur within an organization, maybe using some of the guerrilla tactics or maybe not, how do you positively create good news and groundswell around AI in a

(35:49):
way that you've seen has worked within organizations?

Ben Le Ralph (35:51):
I think fundamentally AI is a real good word to be able to attract some funding for an initiative that you've got going on.
Like, honestly, with so many of the clients that I work with, we are not really rolling out AI.
Notionally, what you are doing is you come in and you look at a business and you say, well, what are you trying to achieve?

(36:14):
What are all the things that you don't want to do, or that are taking you away from what you're trying to achieve?
And then from that you say, great, how can we use this new suite of technologies that have come out to enable those business outcomes?
And so probably what I would say is, you use the term AI to the extent that it is helpful to you, to the extent that it

(36:36):
allows the organization to give you some money to experiment, or to try something new, or to go through a different approval process.
Like, it's good at, particularly these days, getting some funding for something that's interesting.
But if the project is just about AI, it's probably not gonna succeed in the long term.
Like, it needs to be tied to some kind of customer outcome,

(36:58):
yeah, a business outcome or employee efficiency outcome.
Like, it's gotta be a process attached to something real, not something AI.

Chris Hudson (37:08):
Yeah.
Because I reckon that when it first came out, there were all sorts of projects that were just going up and out because it had AI just stamped on it, right?
Yeah.
But what you're saying is that there's more to it now.
So with the business casing around AI, you'll still get the money, but you're gonna have to put in a proper case for it now.
So what do you think has changed there?

Ben Le Ralph (37:28):
I think it's maturing and people are becoming smarter about it.
I think we've gone through a phase where there's enough case studies now to see where it works and where it doesn't work.
I think within larger organizations, things like AI, and like Agile is a very good example.
I don't know if you're familiar with Agile?
Yeah, yeah, yeah, yeah.
So I think Agile, its success

(37:52):
as a fad, that is certainly dying off now.
I would say its success came from organizations having this core problem that they were rigid in how they worked.
Like, they felt like people couldn't move, like they weren't literally agile, like someone needed to come and throw all their processes out and come in with a bunch of new processes.
And where it succeeded was Agile was used as an excuse to

(38:16):
spend some money on internal reorganization, to be able to create an environment where people could have all of these other things that I wouldn't necessarily say were agile.
Like, if you could use an Agile project to improve psychological safety, big tick.
If you could use Agile to come in and get rid of a whole bunch of hierarchical approval, big tick.
Yeah.

(38:36):
If you came in and used Agile to just set up a bunch of Scrum ceremonies, then that didn't really help anybody.
And the reason I say all of that is that I think companies, within their technology at the moment, have kind of got a bit stale.
Like, people have their apps, or they've got their website, or they've got their business set up on a bunch of technology, and if someone came through with some budget and said, how could

(38:59):
you significantly improve these things, people would have a bunch of answers.
Like, there's plenty of research that has been done about how all this stuff could be improved.
And so to the extent that you can use AI to focus that internally.
That's where I think organizations are succeeding.
Where organizations are like, oh, we're gonna spin up our own

(39:22):
LLM and we're gonna pump it through a whole bunch of data, and we're gonna do a whole bunch of stuff, which is just AI without being attached to any kind of user outcome or business outcome, then I think those are failing.
And each one of them that fails gets talked about, and is just another reason that people are becoming much more analytical about how these projects should work.

Chris Hudson (39:44):
Yeah.
Yeah.
Wow.
Alright.
So Agile's gonna be killed by AI.
You heard it here first, but

Ben Le Ralph (39:51):
yeah, that's probably the most unfair take. If I get any heat from this podcast, it'll probably be from talking bad about Agile.

Chris Hudson (39:57):
They're gonna be coming for you.
You know, there are some passionate folk out there that love a bit of Agile.

Ben Le Ralph (40:02):
Yeah.
Look, I've taught many, many people Agile, so...

Chris Hudson (40:07):
yeah.
Well, yeah.
You've turned, you've changed.

Ben Le Ralph (40:12):
I've changed, I know.
I've jumped into the new van.

Chris Hudson (40:16):
Oh, it's funny.
I mean, the fact that a generative knowledge-based tool, as it was first probably understood in a mass context, now, from what you're saying, has applications for collaboration, to the extent that it could fix some of that, is kind of interesting.
Because you're not just using Wikipedia,

(40:38):
you're trying to make it change the way in which the organization runs.
So from an operational point of view, what's it gonna take to make that bridge possible, do you think?

Ben Le Ralph (40:48):
What I'm pretty excited about, what works well, is when small groups of cross-functional people, like a small group of people, work together on an end-to-end delivery, right?
Like, it just feels right, and there's probably research that backs this up, that I've read and consumed and am kind of summarizing badly, but I think that there is this sense that

(41:09):
like, people work better when they're interacting with a small group of other people and have a direct idea about what they're actually trying to do.
And the thing that I'm excited about when it comes to AI is that there is only so much bandwidth that people can actually store within one of those teams.
Like, traditionally you couldn't have legal and marketing and customer research and development and all

(41:32):
of that in one team, and so you would have to branch out, and then these projects would get bigger than anyone could understand, and that's what leads to failure.
Whereas I like the idea of small teams within an organization who have access to, like, 70% of the information they need from legal, and 50% of the information that they'd need from marketing.

(41:53):
And so what that ends up looking like is you've got a marketing team and a legal team and an ops team who are there enabling the business, and their job is to, one, make sure that their LLM knowledge base tool works as well as it can, is as helpful as it can be, is delivering value.
Yeah.
And then their other job is doing, like, the 20%: what actually

(42:15):
is the legal expertise that you're paying this person for?
Yeah.
And having that person have enough time to actually be able to get into the team that needed their expertise at that moment.
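A tiny sketch of that split (entirely illustrative, not anything Ben names): the team's knowledge base fields routine questions, and only low-confidence ones escalate to the human expert. Keyword overlap stands in for real retrieval just to keep the example self-contained; in practice this would be an LLM over the team's documents.

```python
# Illustrative sketch of the 70%-from-the-knowledge-base pattern:
# routine questions get answered from the team's stored guidance,
# and only low-confidence questions escalate to the human expert.
# Scoring is naive word overlap purely for demonstration.

def score(question: str, doc: str) -> float:
    """Fraction of question words that appear in the document text."""
    q = set(question.lower().split())
    d = set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def answer_or_escalate(question, knowledge_base, threshold=0.5):
    """Return ('kb', answer) when the knowledge base covers the question
    well enough, else ('escalate', note) to route it to the expert."""
    best_topic, best = None, 0.0
    for topic, doc in knowledge_base.items():
        s = score(question, topic + " " + doc)
        if s > best:
            best_topic, best = topic, s
    if best >= threshold:
        return ("kb", knowledge_base[best_topic])
    return ("escalate", "Route to the team's expert")

# Hypothetical team knowledge base.
kb = {
    "marketing approval": "All external copy needs brand review before publishing.",
    "contract review": "Standard NDAs can use the pre-approved template.",
}
```

The threshold is the design choice: set it high and the expert sees more questions; set it low and the knowledge base answers more on its own.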

Chris Hudson (42:27):
Yeah.
Right.
It's two days a week now, not five days a week they've gotta be in the office.
I don't know, it could change some of those conversations maybe.
Yeah.
Well

Ben Le Ralph (42:34):
that's a question about value, right?
Yeah.
Like, say you're a lawyer with a particular skill set in acquisitions, right?
Yeah.
The company needs to have you on as an employee because they're gonna do acquisitions.
Yeah.
Do they literally need you five days a week when they're not doing that process?

(42:54):
No, probably not. And this is where you can be pessimistic or optimistic, either way.
The optimistic read is we end up doing higher-quality work, or working four days a week.
And the pessimistic read is all of those people become subcontractors and have to jump from organization to organization to make that work.

Chris Hudson (43:11):
Yeah, I mean, fractional roles have been quite popular from a gig economy point of view, but also from the point of view of just having shared resources and access to experts as and when you need them.
So a decentralized model can work for a lot of companies; it just depends on whether that's right.
And obviously there are those that need it in-house.
If you need your legal counsel, or if you're in IP or whatever you're doing, you're gonna have to have people at a

(43:33):
point of escalation at any point.
So I think it just depends a little bit on the business model.
But yeah, it raises an interesting question, and I think, at the rate things are changing, some of these behaviors and changes and trends will probably start to take shape, right, in the next six to 12 months.
What do you think?

Ben Le Ralph (43:50):
I think so. You go through a whole blockchain technology revolution where everyone tells you the next six months blockchain's gonna change everything, and you're like, yeah, yeah, yeah, and it never comes.
And then someone comes through with AI and they're like, AI is gonna change everything, and you wanna have a bit of hesitancy about whether this is actually gonna be as transformative as the technologists suggest.

(44:12):
But I'm always actually surprised by how slowly organizations move.
So yeah, I think it's gonna change quickly, but I'm often wrong about exactly how fast.

Chris Hudson (44:21):
Yeah.
That's the thing for all the people out there listening to this, right: how fast do you wanna go? How brave are you?
I think there's a question around risk.
You know, risk often comes up, and these sorts of things are thrown about; they want to mitigate risk, but also manage any kind of issues with continuity, basically, in the service model and the operational model.

(44:42):
You can't just turn it all off and throw something else in overnight, but there'll be ways in which that could be tried out, presumably quite easily.
So, yeah.
Interesting, certainly.

Ben Le Ralph (44:53):
Yeah, I think that's probably the biggest thing that's changed in the last three months, three to six months: people have gone from "the risk of doing things with AI is too high" to the risk of not doing, or not understanding, AI starting to build up on them.

Chris Hudson (45:07):
Oh, yeah.

Ben Le Ralph (45:07):
Like, organizations are starting to be like, look, if we miss the boat on this, that risk is starting to outweigh the "what happens if it's bad" risk.

Chris Hudson (45:16):
Yeah, yeah.
And why is that, do you think?
What's behind that being a risk, do you feel?

Ben Le Ralph (45:22):
I'd love to give you a much more academic reason, but I think it probably is just, like, call it lizard brain, just being like, I'm seeing more and more people take this thing up.
And I think that uptake is genuine, but there's only so much of that you can see in the media and not have it cement itself in there, that it's something that

(45:42):
you should think about.

Chris Hudson (45:44):
Yeah, yeah.
A couple of other questions.
What about the argument around it being detrimental to the experience of learning, more of a pedagogy question: this kind of understanding of, if it's all out there and written now, is it teaching us not to think as much, or not to think as laterally?
You know, it's particularly relevant, obviously, not just within a corporate context, but within schools and universities,

(46:08):
and is it just spitting out the answers, and what does that mean for us all?
Yeah.

Ben Le Ralph (46:14):
I really grapple, or wrestle, with this.
Look, there's research out there that proves that people who use these tools think less deeply and are, by some measures, dumber.
Yeah.
So that's hard to refute, I guess.
On the other hand, it's never been my experience that a technology that allows you to do something more has resulted in

(46:39):
me understanding it less.
And often what you see is, the more people can get their hands on the thing, the more they go down into the rabbit hole of how does this thing actually work.
And I wonder what's gonna shake out in the wash: if we look back 10 years from now, maybe what we say is that, turns out, technology wasn't as bad as

(47:01):
we thought, but social media was five times worse than we thought it was gonna be.
Like, it's hard to unpick necessarily, but when you talk about the research around what's making us dumber or what's making us more unhappy, a lot of that kind of isolates down to this weird dynamic that we've created for ourselves, where we put out only the best stuff.

(47:24):
Yeah.
Don't put out any of the bad stuff, and, like, create these weird microcultures.
So yeah, I would worry about that in particular.
I think probably the other nuance, and look, I'm just talking, I should ask what you think, but the other nuance that I would have, or the experience, is that I'm dyslexic.
And so I feel like I was born at exactly the right time, because

(47:45):
computers have always done a lot to be able to get me from being awful at school and learning, and not being able to participate really at all, to actually being someone who I feel is quite well educated, because I was able to bring the technology together.
And just even on the smallest things, being able to smash

(48:05):
out some thoughts and have AI put all the commas in the right spot and then send it out has been, yeah, amazing.
Or turning text into audio is also, man, amazing.
And so, I don't know, there's a whole massive answer in there.
What do you make of it?
This is probably an area where you might understand a little better than I,

Chris Hudson (48:22):
well, that puts me on the spot. I think there are obviously two different schools of thought.
You know, one is that the process of learning is training your brain and giving you that sense of critical thinking, so what should be taught at the schools, what should be taught at the universities, what should professional development in the end look like?

(48:42):
You know, they're all big questions that could be taken one way or another.
Some would say you should be using pen and paper; some would say it's okay to use an iPad or an AI.
You know, it's all being used almost for the same purpose, but there are different methods.
So I think it'll probably just come down to preference, and parents are always gonna have a preference around where and how their kids are gonna get educated on that basis.

(49:05):
Yeah.
But yeah, I just wonder if it puts the brakes on.
Like, you've gotta wonder, when you've got kids, whether they're just consuming created content, and that content is so much more readily available and so much more frequently available.
How much more can we take of that?
And is it putting the brakes on them creating for themselves, or

(49:28):
is it inspiring them?
I mean, it's completely subjective and probably down to the individual as to what impact that would have on you longer term.
The person that isn't in any way connected to technology, on a nice island somewhere, they might end up learning in different ways too.
So it feels like it's all formative, right?
It feels like you would go through

(49:50):
your walk of life understanding and seeking out, probably, what is interesting to you.
And if you wanna learn differently, then you can learn differently; if you don't wanna do it, then fine.
But I think from a more democratic point of view, it feels like, particularly within an organizational context, there should be some sense of choice, ultimately, as to whether you want to create your work in a certain way or in another.

(50:14):
And the comparison of those doesn't need to be made.
It's just about whether the work can be done, whether it can be done to a good enough standard, whether it's exciting, whether it's inspiring.
So I feel there's a lot to kind of figure out.
And obviously you mentioned social media; you can't just separate that from AI or from technology, because it's all interrelated now.
So I don't feel like

(50:35):
we can really avoid it.
But we can avoid the metaverse, right?

Ben Le Ralph (50:40):
Yeah, let's all just, as a society, let's all just agree that that's not something that we have to do.
Yeah.

Chris Hudson (50:47):
Yeah.
Super interesting.
So Ben, I really just want to say thank you so much for coming onto the show.
It's been a fascinating discussion.
We've talked a bit about your own personal work and the work that you do within your consulting practice around AI, but obviously you're going into organizations and you're helping to set up some of these systems and fixes for what people would need.
So I'm sure people will have questions and would like to get in

(51:10):
touch, so what's the best way for people to get in touch with you?

Ben Le Ralph (51:13):
Yeah, great question.
Look, LinkedIn's probably the best.
So LinkedIn, any of the social media platforms are great.
Yeah, like LinkedIn, TikTok, Instagram: AI For Busy People, or Ben Le Ralph.
The algorithms will help you find me on any of those platforms.
Yeah, that's probably the easiest way to get in contact with me.

Chris Hudson (51:30):
Good stuff.
Thanks so much.
And yeah, LinkedIn's in trouble, right?
What do you think?
There's a lot of content being pushed out there that's just no good anymore.

Ben Le Ralph (51:38):
Oh my God.
Yeah.
Do you know what, actually I have a very different answer about this.
Yeah.
Go ahead.
A month ago I was on, like, a big push of, like, LinkedIn is the literal worst, and I would complain about it to everyone.
I'm in this phase where I'm trying to get myself out there, I'm trying to market, and I just hated LinkedIn.
And an

(51:58):
old friend just, like, sat me down.
He's been in the corporate world for forever, and he just looked at me.
He's like, generally speaking, I don't know much about social media, but generally speaking, networks are as good as you make them.
Yeah.
And so I've actually had more success recently with the perspective around, like,

(52:19):
I don't know, the main feed is necessarily full of a bunch of people who are making pictures of themselves that all look the same, but surrounding yourself with, like, good people on LinkedIn, or small groups or whatever.
I've actually, I don't know, it's been a real benefit

Chris Hudson (52:33):
Yeah.
Yeah.

Ben Le Ralph (52:34):
to me personally, so I can't talk too badly of it, actually.

Chris Hudson (52:37):
Yeah.
Okay.
Good, good.
There's some nice campfire moments out there for those that are looking, so if you need to, you've gotta

Ben Le Ralph (52:43):
wade through a lot of sludge.

Chris Hudson (52:45):
Yeah.
Bring it together yourself.
Yeah.
Cool.
Well, we'll leave it there.
Thanks so much, Ben.
Really appreciate you coming on the show, and thanks so much for sharing your knowledge and your wisdom.
We'll let you get on with your evening.

Ben Le Ralph (52:59):
Thanks so much, Chris.
It was, yeah, great to be on.
Really appreciate it.