June 18, 2025 28 mins

The gap between casual ChatGPT users and organizations with massive AI teams seems unbridgeable for most departments. But what about that middle ground where small teams can leverage AI effectively without specialized expertise?

Dr. Tim Scarfe, CEO of Machine Learning Street Talk, discusses practical AI implementation for smaller organizations with hosts Pauline James and David Creelman in this HRchat conversation. 

Running a sophisticated content production operation with just 15 team members and spending $1,500-2,000 monthly on AI tools, Tim offers a realistic roadmap for departments looking to move beyond basic AI usage.

"ChatGPT is a reflection of you," Tim explains. "It makes dumb people dumber and smart people smarter." This insight highlights why some users remain frustrated with AI while others create remarkable value – the difference lies in approaching AI conversations as iterative journeys rather than one-shot interactions.

Most surprisingly, Tim suggests that building internal AI systems doesn't necessarily require specialized AI expertise. Rather, curiosity and experimentation can take departments far, especially when leaders understand that AI itself can help explain how to use AI more effectively. Tim cautions against waiting for "perfect" technology before diving in, warning that we face a potential digital divide similar to what occurred during the 1980s computing revolution.

For HR leaders and department managers, the conversation offers a practical middle path between doing nothing and pursuing enterprise-wide AI transformation. By starting small, experimenting continuously, and focusing on specific use cases, even modestly-sized teams can create significant value with today's AI tools.

Support the show

Feature Your Brand on the HRchat Podcast

The HRchat show has had 100,000s of downloads and is frequently listed as one of the most popular global podcasts for HR pros, Talent execs and leaders. It is ranked in the top ten in the world based on traffic, social media followers, domain authority & freshness. The podcast is also ranked as the Best Canadian HR Podcast by FeedSpot and one of the top 10% most popular shows by Listen Score.

Want to share the story of how your business is helping to shape the world of work? We offer sponsored episodes, audio adverts, email campaigns, and a host of other options. Check out packages here.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:03):
Welcome to the HR Chat Show, one of the world's most downloaded and shared podcasts designed for HR pros, talent execs, tech enthusiasts and business leaders. For hundreds more episodes and what's new in the world of work, subscribe to the show, follow us on social media and visit HRGazette.com.

Speaker 2 (00:26):
Hello and welcome to the HR Chat Podcast. I'm Pauline James, founder and CEO of Anchor HR and Associate Editor of the HR Gazette. It's my pleasure to be your host. Along with David Creelman, CEO of Creelman Research, we're partnering with the HR Chat Podcast on a series to help HR professionals and leaders navigate AI's impact on

(00:46):
organizations, jobs and people. In this episode, we speak with Dr. Tim Scarfe, CEO of Machine Learning Street Talk, a platform known for deep, technically rigorous conversations with some of the world's top AI researchers. Tim shares how he's using AI day-to-day to scale his work and offers us practical insights on how we can move beyond casual

(01:08):
use. We explore how to get started without a technical background, what trade-offs to expect and why experimenting now matters.

Speaker 3 (01:17):
Thanks for listening to this episode of the HR Chat Podcast. If you enjoy the audio content we produce, you'll love our articles on the HR Gazette. Learn more at HRGazette.com. And now back to the show.

Speaker 2 (01:32):
Tim, so pleased to have you with us today. Can you briefly introduce yourself for our audience?

Speaker 3 (01:39):
Thank you very much for inviting me on, Pauline. I'm Tim Scarfe and I run the Machine Learning Street Talk podcast, which is probably the most galaxy-brain of the very large technical podcasts. I get to interview some of the best AI scientists in the world, we have a wonderful community, and I have a background building several startups.

(02:01):
I've worked in big corporations like Microsoft and so on.

Speaker 2 (02:05):
Thank you. We're really excited about this conversation and, to help us educate ourselves and our audience, could you begin by telling us a bit about your own organization and how you use AI?

Speaker 3 (02:18):
Yes, so I'm the founder of Machine Learning Street Talk and we use AI pretty much for everything.
I think that AI helps founders more than large corporations at the moment, because you know what it's like when you

(02:39):
have big teams and you build sophisticated software. You get bottlenecks: there's this knowledge-sharing bottleneck, essentially, where you have to explain your work to everyone else, and they have to review your check-ins, and they have to understand everything. Ironically, it's faster if you're on your own. Doing what I do, I have to essentially wear 20 hats as one person. So I'm an expert audio engineer and motion graphics designer

(03:01):
and video editor, and I'm reading research and I'm doing interviews. I'm doing all of these things and it's just a lot for one person to do. But it's cheaper for me to use AI in many cases than it is to hire a separate expert, because of the sparsity problem. You know, why would I hire an audio engineer? I mean, don't get me wrong, it'd be great if I could pay them loads of money and get a really good one. But there's always this problem that the best people wouldn't

(03:24):
want to work for me, because they would make their own YouTube channel and they would be earning some massive salary somewhere else. So there's a huge gap, and AI fills that gap.

Speaker 2 (03:34):
Thank you.
What is the size of your organization?

Speaker 3 (03:37):
Well, we're very small, so we have a team of about 15 video editors, and that's the bulk of the team, to be honest, and then I'm doing most of the other stuff.

Speaker 2 (03:46):
Thank you, and could you tell us how much you're spending on AI a month? How much of an investment is this financially?

Speaker 3 (03:53):
Probably at the moment around $1,500 to $2,000 a month.

Speaker 4 (03:58):
And that would be US dollars? Yes. And the reason that I wanted to hear all that that Pauline's been digging into is that many of our listeners have some personal experience with large language models like ChatGPT, so they know the sort of basic use. They also read about what some giant companies with huge teams

(04:19):
are doing, but your experience is actually probably more in the ballpark of, say, their department or their part of the organization. And so it's really interesting to think: well, what can we do where we want to be more advanced than just making casual use of ChatGPT, but we don't have some huge team

(04:40):
to support us in our applications of AI? Why don't we talk about some of your uses? What's one of the uses that you'd like to start with?

Speaker 3 (04:50):
Well, I think a lot of people, when they use ChatGPT... and don't get me wrong, the amazing thing about large language models is their flexibility. You can use them for literally anything. They're multimodal: you can feed videos into them, you can feed audio into them, images, or any combination thereof. You can use them for writing your social posts on Twitter.

(05:11):
You could use them for planning your shopping, or even financial trading if you want. It's overwhelming and, of course, it works better for some things than others. And I think a lot of people at the beginning get trapped in the beginner's mindset, where they're using chatgpt.com and they're just doing formulaic things, cookie-cutter posts, and unfortunately, because of the way the

(05:32):
technology is, especially if you're not using the foundation models, you get cookie-cutter answers. So to a certain extent, ChatGPT is a reflection of you. I joke that it makes dumb people dumber and smart people smarter. So if you're very creative, you can make it sing. And what I mean by being creative is really becoming acquainted with what

(05:54):
it gives you back, because this is called prompt engineering. There's a whole field of prompt engineering where you become a large language model whisperer: you learn when it's going well, you quickly detect when it's not going well, and you adapt and you iterate. Importantly, you iterate. You don't do it in one shot. You kind of create this graph of interactions and you go many,

(06:16):
many steps, because if you just do it in one shot it'll give you a banal answer. But coming to your question, David, the real step forward is, rather than just thinking of it as a chatbot, you start to integrate it into your systems and you build software around it, and it's surprisingly reliable. So, rather than getting text back, you ask it to give you JSON, which is, you know, a software-schematized

(06:39):
object, and you wire it into your existing software stack. So, rather than it just being a wall of text, it's now actual entities in your system. You can put a user interface against it and you can start to build on top of it. And you can build very far, very quickly.

Speaker 4 (06:52):
And let's talk about what you built to help you edit your video interviews.

Speaker 3 (06:58):
Yeah, so I've built a fairly sophisticated stack. I can put in an MP3 recording of an interview and it will be transcribed. The transcription process is quite sophisticated; my previous startup was a transcription startup, so it does many layers of transcription and diarization and, you know, post-hoc transcription refinement with language models

(07:19):
and so on. And there's a research stage before that. So I'll use OpenAI Deep Research and I'll get a vocabulary, which helps the transcription, and I'll get lots of grounding information about all of the papers that the guests are talking about. I mean, this is another thing with language models: you have to ground them on useful information to mitigate hallucinations. So we have a big transcription, and then I have this multi-agent

(07:40):
system that will go and read all of the research papers, and it'll figure out what questions I asked, and it'll, you know, kind of create this entire map of the conversation, if you like. And then I have some other features that will rank the podcast, a bit like the ELO algorithm in chess, you know, where this guy plays this guy, and if this guy wins and it was

(08:02):
a surprise, then the ELO will go up. Well, I do that with fragments of the podcast. So I have language models as judges, and I rank, you know, all of the pairs of fragments based on how interesting and engaging they are, and I use that for clip selection. I can automatically generate timelines that go to my editors for creating clips for the shows. I can automatically generate a professional-looking PDF

(08:25):
document with all of the show notes, and I can automatically export all of this to the video editors. The problem with video editors is they don't understand the content, and this is very highly technical content. Before language models, it would have taken me probably two years to write this software, and I wrote it in about a month. Now, I don't want to, you know, be overly zealous about this.

(08:52):
I mean, of course, there are problems with it. It sometimes hallucinates, and that's problematic, and that's why it's very important to create software that has a human in the loop, so you can kind of robustify and verify as you go.
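The Elo-style clip ranking Tim describes can be sketched in a few lines. The K-factor and starting ratings below are conventional chess defaults, not values from the episode, and the judge's verdict is hard-coded; in Tim's system a language model decides which fragment of each pair is more engaging.

```python
# Elo update for pairwise clip ranking. K=32 and the 1200 start rating
# are illustrative chess conventions, not values from the episode.
K = 32

def expected(r_a: float, r_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(r_winner: float, r_loser: float) -> tuple[float, float]:
    """A surprise win (low-rated winner) moves ratings more than an expected one."""
    delta = K * (1.0 - expected(r_winner, r_loser))
    return r_winner + delta, r_loser - delta

ratings = {"clip_a": 1200.0, "clip_b": 1200.0}
# Suppose the LLM judge picks clip_b as the more engaging fragment:
ratings["clip_b"], ratings["clip_a"] = update(ratings["clip_b"], ratings["clip_a"])
print(round(ratings["clip_b"]), round(ratings["clip_a"]))  # → 1216 1184
```

Sorting fragments by rating after many judged pairs gives a clip-selection order.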

Speaker 4 (09:00):
And for the listeners: so basically, we're talking about building a very useful tool for a particular use case that Machine Learning Street Talk has, and it took a month of programmer time to do it. And, as you said, it was very sort of iterative.

(09:21):
As you think, well, I need to do this as the individual responsible for this area, maybe I can get the AI tool to do this part for me, and then you keep just adding parts and fixing them as you go. Exactly.

Speaker 2 (09:29):
I'd also be interested in where you've saved time, as opposed to improved the quality of the output: where in the past maybe you would have accepted a limitation based on your resources, but now you can fix it, or, additionally, where you've saved hours of your time.

Speaker 3 (09:46):
Well, there's a couple of angles to this. I mean, you could argue in some ways it's worse. There's no substitute for me. I understand the entire life cycle of the podcast, and in the olden days I would go through and painstakingly edit everything. And putting motivated visuals in is very important in a podcast: you understand the content, you actually show a visual that matches it, and it makes sense.

(10:08):
And increasingly, when you start to systematize the process, you're not paying attention to everything, because of course you've just scaled this thing up a hundred times over, so you don't have time to pay attention to everything, and that creates a disconnection, and that's not entirely a good thing. But that's just the reality of building businesses. You have to scale them up, so that's the way it goes.

Speaker 4 (10:25):
If I'm thinking as a manager, maybe I do have access to some technical resources. I guess I think, you know, we've got some good technical people on our IT team, or maybe even within my own department, but they're not really experts in using large language models. And I understand that, in fact, you can use large language models to

(10:47):
help you use large language models. Maybe one of your use cases is using AI to help program AI. Perhaps you can talk about that.

Speaker 3 (10:58):
Yeah, this goes to the flexibility of language models: they can actually teach you how to use the language models, and you can use the language models to reflexively improve the solution that you've created. This thing that you're pointing to is something I'm very excited about, because software at the moment is very linear: you write it, you test it, you put it in production. You probably have some business requirements first. And

(11:20):
potentially, the new generation of software, it's referred to as agents, although I think a lot of people, when they talk about agents, aren't really talking about it with all of the nuance that the subject deserves. But potentially, the new generation of software is more like a living thing. It's a living ecosystem. You never put it into production; it's just alive and it's always there.

(11:41):
It's much more biologically inspired, if that makes sense, and the agents are just autonomous units of computation that do a particular thing. And so, rather than thinking about the software stack as this big monolithic blob, it's now composed of this, you know, panoply of agents. And what that means is that you can have different versions of the agents in play at the same time, and

(12:03):
the system can test the agents to see if they're performing correctly. And the system can also do metaprogramming to fix bugs and to adapt to failures without explicit involvement from humans. So, essentially, an agential system is a system that has its own goals and has less human supervision in its

(12:24):
operation.

Speaker 5 (12:29):
Hi everybody, this is Bob Goodwin, president at Career Club. Imagine with me for a minute a workplace where leaders and employees are energized, engaged and operating at their very best. At Career Club, we work with both individuals and organizations to help combat the stress and burnout that lead to attrition, disengagement and higher health care costs.

(12:51):
We can help your organization and your workforce thrive, boosting both productivity and morale across the board. To learn more about how we might help you and your company, visit us at careerclub.

Speaker 4 (13:08):
So if I'm a manager and I want to get started with my department, where would I look for the kind of technical resource to help me? How do I get started?

Speaker 3 (13:19):
I think the best thing is just to develop an obsession with language models, and it's easy to do, because it's very fun just playing around with them and seeing what you can do on your own. So I use a program called Cursor, for example, which is a Visual Studio Code fork. You just install it on your system, and in Cursor you can just say: I want to build a Tetris game, or I want to build

(13:42):
an HR database. And again, having a little bit of technical knowledge goes a long way. So if you understand how to frame the architecture and the technology stacks, you can say: I want it to be a React app, I want it to be an AngularJS app, I want it to be a Python app, I want it to have this architecture. So, you know, at the moment some technical knowledge is required. But the art of using language models is about the unknown

(14:03):
unknowns: understanding when the language models don't understand something, and, when you don't understand something, prompting the language models to tell you what you don't know and following that thought train. Because it's always a thought train. It's never one shot; it's always taking a trajectory, taking a path through many steps to actually get to a useful solution.

Speaker 4 (14:22):
Yeah, and just to sort of play that back: the kind of support I need as a manager is some programming background if I'm going to build my own system, but their sort of curiosity and interest in LLMs is going to be critical to them being a useful aid in building this internal software.

Speaker 3 (14:41):
Absolutely. I mean, it really is a technology which is going to change a lot of things, and I fear that it might trigger a digital divide much like what we saw in the 1980s, where people who gained their skills in the 70s were basically removed from the workforce and became very disconnected from technology. And I feel that if folks don't really embrace this technology,

(15:04):
they will find themselves on the wrong side of the next digital divide. And the best way to mitigate that is just to play around with the technology, because it kind of teaches you as you go. And I see some folks are very skeptical, and they're right to be skeptical. The technology is very problematic for lots of different reasons. But there are some folks who just focus on the negatives, and they say: don't use this, it's unreliable, it's

(15:28):
never going to work. And unfortunately, that strategy is a failing strategy. It's becoming clearer by the day that this technology is the future, and I recommend folks get accustomed to it.

Speaker 2 (15:40):
Thank you. I'd like to lean in on your comment there that the technology will teach you if you lean in and, you know, begin by playing with it, which I think is apt: experiment, see how it works, see what it can do. What are your suggestions on how you don't stop there? I welcome your perspective on how you go from being a casual

(16:01):
user to being able to integrate more extensively, and how you get over that hurdle, or that fear potentially. Because I don't think you're saying you need to become a technical expert, but you need to understand the foundations of the technology to leverage it effectively.

Speaker 3 (16:15):
Yes, it's very interesting. Part of the reason for the disparity in opinion is that very technical people in Silicon Valley, who have software engineering and computer science backgrounds, are hyping this technology up and making it do amazing things, and then a lot of other folks, when they use it... and to be honest, this is a problem with language models, it's just

(16:35):
creating this slopification of the internet. So you look on LinkedIn and everyone's generating their posts and they all look the same, and I can understand why people would have the perspective that this is just a bad technology and it's creating slop everywhere. And part of the problem is the technology is so deceptive: it will just give you very confidently wrong answers, and if you ask the wrong questions, you get the wrong

(16:57):
answers. And that's why you always have to see this as a journey and an exchange. You have to recognize when it steps into the world of banality, and it happens very often. The thing is, I spot it instantly, and many other people in Silicon Valley probably spot it easily, and because it only takes them about 0.9 seconds to spot it, they don't even

(17:18):
cognitively register it. A lot of people get stuck on that and they don't get any further, which is why they have that perspective. So I think a big part of this is just curiosity and just spending the time with the technology and understanding how to make it work well.

Speaker 2 (17:33):
Thank you. On the flip side, I'd also welcome your perspective on where it makes sense to lean in and build your own, and where it makes sense to wait, because the systems improve every day. I was experimenting with deep research the other day and I was so impressed with how much further ahead it was than other

(17:54):
models that I had used previously. A platform we use for learning has now automated AI voiceovers, and they sound great, and that just showed up in my system. So I welcome your perspective on that, and also just the risk that we sit back and we wait for everything to be solved and easy to use, as opposed to leaning in more.

Speaker 3 (18:16):
I definitely wouldn't recommend waiting. I think there was a landmark moment in June last year, and that's when Anthropic released their Claude 3.5 Sonnet model. That was the first model which was incredibly reliable. Because, you know, people said: oh, this technology, it doesn't work, it's not reliable, it hallucinates and so on. And Sonnet still has problems, it does hallucinate and so on. But you can now build software that uses tools, which means you

(18:40):
can actually program it to integrate with your existing systems, and it actually works, and it doesn't hallucinate, and it's very reliable. And when it's not reliable, you can fix it very easily. All of these models have trade-offs. So o1 Pro, for example, is very, very clever. It's actually an order of magnitude smarter than any of the other

(19:01):
models, but you have to wait 10 minutes to get an answer. And Google's Flash Thinking is incredibly fast, it's 200 tokens a second, but it's very unreliable with tools. And the o3-mini model is incredibly good at mathematics, but it's incredibly bad at following instructions, and you can't really build it into applications yet. So in a year, or maybe two years, we'll have one model which does all of these things together. But I would definitely recommend playing around with

(19:26):
all of these different models, even though they do different things well, just to give you a step up in the future.

Speaker 4 (19:32):
Right now, all the big AI companies are talking about AI agents, and sometimes it seems like, oh, we're just going to get this AI agent and it's going to replace a person. Other times it just sounds like a feature added to the software.

(19:52):
So where are we on that spectrum, from "the agent is an interesting feature, it lets us schedule a task" to "no, I'm going to replace a whole human being with this new AI tool"?

Speaker 3 (19:58):
So there are many ways of designing agent-based systems. Deep research is a great example of a multi-agent system, but a true agential system is something that has its own goals and does what it wants to do. So with deep research, for example, you tell it what to do, it breaks down the research agenda into different subtopics, and then it parallelizes that across different agents, and

(20:21):
they go and find all of the stuff and they aggregate it together to give you the answer. Now, the true agential system is what I discussed earlier, where it's actually a living, breathing system that has its own goals, and it can heal itself and repair itself, and it's divergent and it's unsteerable. We're nowhere near that. I mean, that's a long way away.

Speaker 2 (20:38):
What are your thoughts on where we will be with this technology, say, in two or three years, Tim?

Speaker 3 (20:49):
I think we're going to see just improvements in robustness and autonomy.
Over the last three years... I mean, I was a huge skeptic of large language models. I've had Gary Marcus on the show many times; he's the most famous skeptic. And I must admit that every year, all of the things that I thought the technology could never do, it now does, you know, whether it's creativity or reasoning. We're now starting to see improvements in

(21:11):
autonomy. There are still loads and loads of problems with it. So I expect to see just an improvement in capabilities, and one of the main things driving that is just the amount of computation that we have at our disposal. So you need to have very, very powerful GPUs and data centers to run this technology, and right now we have a centralized

(21:31):
model, which means OpenAI and Anthropic, et cetera, et cetera. They do a huge pre-training run and they build this massive monolithic model, and then they just copy it around onto all of these servers everywhere and people use it. And what we're going to see is more of a distributed version of the AI, and part of this is called test-time inference, which means either they, or you on your machine, use the foundation

(21:54):
model, but you also do some extra computation when you do the prompt, and that dramatically improves the answer. And we're going to see these kinds of active updating systems where you're essentially generating data: when your model, in respect of your prompt, is doing reasoning and creating additional data, that will be fed back into the base model, and the whole system will just

(22:15):
develop a kind of, you know, living property, perhaps, that it doesn't have now, and the system as a whole will become far more adaptable. I think that's what we're going to see over the next couple of years.

Speaker 2 (22:26):
Do you have thoughts on how that will impact the workforce, how that will impact leaders' roles? I think about the skills we endeavor to teach leaders around the importance of supporting their team's development, and how this relates to this technology. Also the importance of delegation, and how leaders will need skills both for delegating to their teams effectively,

(22:48):
upskilling them, and for delegating to technology effectively: process-mapping skills that may not have been a core requirement previously.

Speaker 3 (22:57):
That's the million-dollar question. Nobody really knows. So, as exciting as the technology is, the only thing I know for sure is that it's going to initially help founders like me build software very quickly and do things very quickly. Unfortunately, in the real world, in large corporations, you know how it is. We have to deliberately remove agency from processes so one

(23:18):
person can't launch a nuclear bomb: we deliberately have five people approve it, and we have the codes over here. And it's the same thing in software. We don't just let people put software into production. We have release gating, we have ethics boards and we have advisory boards. And maybe some of this stuff is just bureaucracy and it shouldn't be there, but most of the time, when you have these processes in

(23:40):
corporations, they've been designed for good reason, and these things, of course, will be a bottleneck to AI technology. So, as smart and as good as the AI technology is, these things will form a bottleneck. So the million-dollar question is: what impact, if any, will this have in the corporate world in the next five years? I honestly don't know. I think that's what everyone's thinking about.

Speaker 4 (24:02):
So if you're an HR manager... I mean, you've worked in big corporations, so you know what the human resources function is all about. Any advice for an HR leader today?

Speaker 3 (24:13):
Well, I mean, we spoke before we hit the record button about some of the problems with automated decisions and bias in hiring and stuff like that. There was a news piece in the UK today about retail brands: they do personality tests before they hire people, and it excludes neurodiverse folks with autism. And, just as I've done, the temptation is to build software to systematize your

(24:36):
entire system, to remove that human subjective intuition, and the risk of that is you create a monolithic, biased system that sometimes makes the wrong decisions. And don't get me wrong, sometimes we need bias in hiring. Maybe there's a good reason for it. But I think what we should use this technology for is to

(24:58):
maintain a degree of diversity and subjectivity in our corporate life, and the best way to do that is to get folks using AI themselves to empower what they do, to maintain that diversity. Because if we build centralized automated decision systems everywhere, then everything just gets a little

(25:19):
bit boring and quite cookie-cutter. So, yeah, I would be thinking a lot about the risks of using this technology en masse.

Speaker 2 (25:28):
Thank you. And it's interesting, because it's very different from how we've typically thought about technology: very centralized, enterprise-level. Here we think about this being a much more enabling technology, allowing employees across the organization to leverage it in small ways.

Speaker 3 (25:46):
Yeah. Well, I think, when it comes to building effective organizations, this is the perennial problem of whether you should empower people and give them more agency, or take it away. And, of course, the corporation itself has an agenda, right? It's trying to make money and it's got a strategy, and you want people to stay in their swim lane to a certain extent.

(26:08):
But by the same token, you also want to maintain a degree of innovation, and you want folks trying new and interesting things. And coming up with that perfect topology to manage an organization is very mysterious.

Speaker 2 (26:27):
I come back to just how goal-oriented we are within organizations. We're always looking to identify the ROI before we've begun the journey, which can really limit experimentation and the kinds of enablement we're trying to encourage.

Speaker 3 (26:36):
That's exactly correct. I did a great interview with Kenneth Stanley. One of my favorite books ever is called Why Greatness Cannot Be Planned. I definitely recommend reading that book. And it's exactly that, you know: if you apply for a research grant, you have to say, well, what's the objective? Children, when they go out and play, don't have any objectives, and serendipity plays an outsized role in our

(26:56):
lives. But we live in a kind of virtual reality where we like to think that everything we do is in service of an objective. And from a computer science point of view, that's the most naive thing possible to do in search, because your objective actually biases you away from discovering all the interesting stepping stones. But by the same token, you can argue it both ways. You're an organization and you need people rowing in the same

(27:19):
direction. So the perennial problem is: how do you balance that exploration and exploitation?

Speaker 2 (27:25):
Thank you, which is certainly something organizations need to wrestle with when they think about enabling and upskilling their teams to be able to leverage this technology effectively. Absolutely. Thank you. I'm really grateful for the conversation. You've given me some inspiration as well, and some new things to search today, and to keep playing with my deep research, my new best friend this week as well. So it's been really,

(27:47):
really interesting to connect. Thank you.

Speaker 3 (27:50):
Thank you very much, and I love deep research.
I've been using it a lot.
I'm so impressed with it.
It's amazing.

Speaker 4 (27:56):
Thank you very much, Tim.
If people want to follow your work or get in touch, how should they do that?

Speaker 3 (28:02):
Well, they can subscribe to Machine Learning Street Talk. We're on YouTube, and they can join our Discord server. We have lots of great chats on there. Thank you, my pleasure. Thanks for inviting me on.

Speaker 2 (28:13):
Thank you.

Speaker 1 (28:24):
Thanks for listening to the HR Chat Show. If you enjoyed this episode, why not subscribe and listen to some of the hundreds of episodes published by HR Gazette? And remember, for what's new in the world of work, subscribe to the show, follow us on social media and visit HRGazette.com.