
February 18, 2025 33 mins

AI is transforming the workplace—but how do we separate real, practical use cases from overhyped trends that don’t deliver results? And what ethical risks should we watch out for?

Host David Rice talks with Jonathan Conradt—former Principal AI Scientist at Amazon and Management Board Advisor at Synerise—about AI’s real impact on HR and leadership. They explore how AI is shaping hiring, employee wellness, and decision-making, plus the crucial role of responsible AI. Jonathan also reveals why companies often mis-invest in AI—and how leaders can make smarter choices.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Jonathan Conradt (00:00):
At its core, an AI is a function call. It's like in Excel. You have the sum function. And you all use that, and you give the list of columns or rows that you want to add, and it takes that as an input and it produces an output. An AI, that is all it is.

David Rice (00:19):
Welcome to the People Managing People podcast. We're on a mission to build a better world of work and to help you create happy, healthy, and productive workplaces. I'm your host, David Rice. My guest today is Jonathan Conradt. He recently left Amazon, where he worked as a principal AI scientist. We're going to be talking about AI and machine learning today, and what needs to happen in terms of education around

(00:42):
these technologies that are being folded into the work people are doing every day. Jonathan, welcome.

Jonathan Conradt (00:47):
Thanks.

David Rice (00:48):
So, first, tell us a little bit about you. You know, how you got to where you are, and what's the biggest challenge you're working on tackling right now?

Jonathan Conradt (00:55):
Yeah, so I've been in technology for a long time. I was on the original Chrome team at Google, so I joined the team when there was about 40 of us. And if you've ever used the Mac or Linux versions of Chrome, that was my baby. I then was at eBay, where I managed their marketing and brought AI and machine learning into that, both on site and off site.

(01:18):
And then for the last 12 years, I was at Amazon. So eventually at Amazon, what I discovered was that there were great AI projects that weren't being approved. And there were kind of bad AI projects that were being approved. And the principal problem is that Amazon has really smart people, really smart vice presidents.

(01:39):
They're well educated. They come from great backgrounds. They know their business inside and out. But AI was coming out of left field to them. They didn't really know how to deal with it or what it involved, and they were struggling to make good decisions about what should we invest in. And so they were missing out on great ideas and sometimes investing in things that weren't going to work out very well.

(02:00):
So in about my last year at Amazon, most of what I did was prepare curriculum and teach vice presidents and directors worldwide at Amazon about machine learning and AI. And then ultimately I decided: if it's this big of a mess inside of Amazon, with these people inside of a tech company, what is it like outside of Amazon? And so I decided to leave Amazon and do things like

(02:21):
this, try to help people understand what AI is, where they need to be cautious, where they need to be aggressive, and help them understand how to make the best of it.

David Rice:
It's an interesting and very poignant time to do that. There's a lot of things that I think AI and machine learning can do, and some of what companies have cast their eye on is sort of tied to it being the shiny new thing, right?

(02:43):
Folks have some really big ideas, but as you and I were talking about before we got on this call, there's a lot of solutions that are much more practical and can help companies tackle immediate needs. I was wondering, can you give us some good examples of this, and where can HR implement something for immediate impact?

Jonathan Conradt:
Yeah.

(03:03):
So generative AI is that kind of shiny ball that's catching everybody's eye, right? But it turns out traditional machine learning has also made big advances in the last few years. One of those things is called automatic machine learning. Basically what happened is scientists were looking at the work that they were doing, and they realized that they were basically following

(03:23):
the same steps every time. And there was a potential to just automate all of that. And there's a really good open source package called AutoGluon. And it's quite remarkable. When I looked at it, I had a bit of an existential crisis, because I realized that here was a package where someone who had decent SQL skills, who knew a little bit of Python,

(03:43):
they could produce a world class model overnight that would have taken me potentially months to create. That's really useful.
So in HR, there's a variety of things you need to do. Like, you might want to classify things. For example, you might want to identify people that would benefit from, say, leadership training, or employees who are at risk of attriting.

(04:06):
Those kinds of things. And these models can do that for you. There's also types of predictions you want to make that relate to numbers, like how many resumes do we expect to have, or how many people do we need to man the warehouse in two weeks, right? Those kinds of predictions of numbers, it also does really well. And it can also do time series.

(04:28):
Time series is useful for things like understanding, you know, the seasonality of data. So I'm not sure how that relates directly to HR, but I'm sure there are some things that are seasonal in HR that they want to be aware of. So what's remarkable about this is, in three lines of code, someone who has the data can basically create a world class model.

(04:49):
I think that's just remarkable.
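For context, a minimal sketch of what those three lines look like with AutoGluon's tabular API; the file and column names here are hypothetical:

    # pip install autogluon.tabular
    from autogluon.tabular import TabularPredictor
    import pandas as pd

    # Hypothetical HR dataset: one row per employee, with an "attrited" label column.
    train = pd.read_csv("hr_train.csv")

    # AutoGluon tries many model families and ensembles the best ones automatically.
    predictor = TabularPredictor(label="attrited").fit(train)

    # Score current employees by predicted attrition risk.
    risk = predictor.predict_proba(pd.read_csv("hr_current.csv"))

The same TabularPredictor interface covers the classification and numeric-prediction cases he mentions, and AutoGluon ships a separate TimeSeriesPredictor for the seasonal forecasting case.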

David Rice (04:51):
Yeah, that's super interesting. Of course, like, HR is getting more and more data all the time, right? Like, especially as we keep implementing all these new tools. And I know you've spoken a little bit in the past about responsible AI, and, you know, you've kind of said that this is tricky when it comes to HR. Talk to us a little bit about the biggest ethical considerations with AI and

(05:11):
HR, and what are some of the things we have to solve for in these models before getting it more involved in what HR does?

Jonathan Conradt (05:16):
Yeah, in Amazon, we have HR business partners, right? So they're tied to your team, and they're great people to go to for advice. As we started working on some machine learning models, these models were going to use customer data to predict things about customers, and HR and legal turned out to be really important business partners for us, because computer

(05:38):
scientists and scientists, you know, we maybe have taken like a human research class a long time ago in college. For HR, these kinds of laws and concerns are top of mind. And so there's a unique opportunity for scientists and people doing models to incorporate HR and ask them good questions, like things about how do we want to manage

(06:01):
gender and what sensitivities do we have around race. Those kinds of things, HR people know inside and out, right?
But the other thing that could happen is within the HR project itself. So let's say an HR organization wants to go and do something. All of those issues are very sensitive. So the classic example is, you want to automate

(06:21):
resumes as they're coming into the organization. And you want to classify them, or you want to sort them in some way to be more efficient. And there's dangers there, because machine learning models, they cheat. They're just incessant cheaters. They're going to find the shortest path to the answer that they can get. So you can see things like, the model will learn that if

(06:42):
they mention in their resume that in high school they played lacrosse, those people get hired and promoted within the company historically. And that could be a historic bias. And it's something that you don't want as part of your filters for, uh, hiring or interviewing people. And so, you know, you wouldn't necessarily expect that word to end up being important to a model.

(07:03):
But like I said, models cheat. And they're going to find all of those little things.
Responsible AI is a whole suite of ideas around how do we best manage the relationship between our goals and the people that are involved, right, maybe they're customers or employees or job applicants. How do we manage

(07:23):
their data, how are we fair across subgroups, how do we, you know, expose to them honestly and transparently what the machine learning model is doing. And so this is an interesting area of ethics, and again, HR professionals, they spend a lot of time thinking about ethics and how to treat people well, you know, how to treat them correctly.

(07:45):
So responsible AI is a fascinating area for HR, and it touches upon a lot of HR solutions. Like, let's say you wanted to build a model that was going to predict who of your current employees is going to become a vice president in six years. That's a useful model, right? Because you want to retain those people. You want to develop those people.
That's a good model. At the same time, as you're

(08:06):
creating that model, you need to be very careful, because let's say all of your existing vice presidents that are your training data mostly graduated from Ivy League schools or, you know, maybe they're male, and all of those kinds of factors are in your training data. And so to do the right thing in terms of responsible AI,

(08:26):
you need to remove the factors that shouldn't be considered. Don't let the model see or make inference on things you wouldn't consider in an actual promotion.
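As a minimal sketch of that last point, with hypothetical column names, keeping protected attributes out of the feature set might look like this; note that proxies, like the lacrosse example above, can still leak bias, so auditing the trained model across subgroups is still needed:

    import pandas as pd

    # Hypothetical promotion-history data; "promoted" is the label to predict.
    df = pd.read_csv("promotion_history.csv")

    # Columns you wouldn't consider in an actual promotion decision.
    PROTECTED = ["gender", "age", "alma_mater"]

    X = df.drop(columns=PROTECTED + ["promoted"])  # features the model may see
    y = df["promoted"]                             # label for training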
Yeah, responsible AI is really interesting, and there's classes about it online. There's lots of papers about it online, and regulation is coming. It's already in the EU.

(08:47):
It's coming to the U.S. quite possibly. You know, it's been introduced in Congress a couple times. And this is an area, again, where HR is probably ahead of the curve compared to a lot of the scientists. And so you can be aware of it, and you can help them understand.

David Rice (08:59):
Yeah, I think that's, you know, super important. Like you said, it's, you know, certain data perpetuating bias. I was talking to someone recently and they were referencing a study that said, um, women were like 75 percent less likely to experiment with AI than men. And then my first thought was, well, what data does that create that it could then learn from and almost, like, perpetuate

(09:21):
this bias that, like, women don't have certain skills? I don't know. It's like everything, every action has a reaction, right?

Jonathan Conradt (09:28):
Yeah, it's interesting. So take the large models, let's say ChatGPT, OpenAI's large language model. They don't use the things that you type for training the model directly, because people type all kinds of ridiculous, terrible, awful things, right? And your training data is your most valuable resource as an AI company. So you don't want to dilute it or add garbage to it. So you're very careful with it.

(09:49):
But there's signals that they pick up from how people interact. Like, one of the signals is: did you restate the question? Let's say you ask OpenAI a question about, I don't know, the Korean War, and it gave you an answer, and then you basically ask the same question in a slightly different way. That's a good signal to them that, oh, we

(10:10):
didn't do a good job. And then eventually the person asked the question and got an answer that they appeared to like. So that's a good signal to them too. But of course what happens is, if there's a gender bias, or income bias, or geographic bias in those interactions with the system, the system is naturally learning to do the thing that the dominant group likes. A great example

(10:34):
of that is actually, and I'm not sure this is true, but this is the rumor: OpenAI's models had some peculiar language that they were using. Like, there were words that they were using that were happening more frequently in their responses than they would if you and I were having a normal conversation, and someone finally said, hey, this looks like the

(10:54):
way people speak in Kenya. I think it was Kenya, but it might have been another African country, but it was some African country. What happened was, during the process of creating these models, there's a process called reinforcement learning with human feedback.
And so what you do is you ask a question, you generate five

(11:15):
responses from the AI, and then you choose which one is best. And a couple of things were happening. One was people were choosing the longest one, which wasn't necessarily the best, but it just seemed like it probably should be the best. And so what happened was the model started producing longer and longer responses. The other thing that happened was people would naturally click on the ones that sounded like the way they speak.

(11:38):
And so we had this large group of contractors in an African country, and they appeared to be responding to the language usage and tone that was familiar to them in their own country, which is slightly different than American English and slightly different than British English. And so this got picked up, and people were starting to create tools that could pick out AI responses

(11:59):
based on this weird distribution of words. I mean, weird within the U.S. context, right? So there you have an example of, well, biases can creep into AI in a whole variety of ways, right? So if more men are using it, it might end up responding in a way that is more satisfying to men than women, perhaps. Or it could take a deeper interest in responding to

(12:21):
things that the more popular, the more common population is talking about, or it could end up mimicking back to you the dialects of the trainers.
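To make that mechanism concrete, here is a hypothetical sketch of what one human-feedback record in that process might look like; the field names are illustrative, not any lab's actual format:

    # One preference record from RLHF labeling: a prompt, several candidate
    # responses, and the index the human labeler picked as best.
    preference_record = {
        "prompt": "Explain the causes of the Korean War.",
        "candidates": ["short answer...", "longer answer...", "longest answer..."],
        "chosen": 2,  # labelers often favored length or familiar-sounding phrasing
    }

    # A reward model is trained to score responses the way labelers did, so any
    # systematic labeler preference (length, dialect) becomes model behavior.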

David Rice (12:30):
Fascinating. Uh, you know, one of the things that comes up all the time is, like, guardrails, right? So in terms of how people are using it, guardrails come up a lot around employees particularly, but sometimes I wonder if there's more guardrails that need to be put in place around sort of executives. I don't know, maybe in the sense of preventing

(12:51):
them from overinvesting in the wrong things. So in terms of the tech itself, I guess my question to you is how much can we actually put guardrails on it with employees? And is there an element of executive education around these tools that needs to happen before anything gets implemented and handed to employees, essentially?

Jonathan Conradt (13:09):
There's a lot to unpack there. First of all, I think it's useful to tell you that on the science side, we talk about guardrails as well. There'll be guard models, so that when someone types in, you know, what is the best formula for building a bomb with fertilizer, when that question comes in, there are guard models, and they're small, they're fast, and they're highly tuned to,

(13:32):
uh, identifying violence, or illegal things, or unethical things, and they just take the question and drop it on the floor, because there's no point doing the expensive processing after that, because we're not allowed to answer those types of questions, right? And then there's another guard after the AI is responding. And this guard's job is to make sure that you didn't trick the

(13:53):
AI into telling you something it shouldn't tell you, right, like how to build a bomb. And so there it attempts to identify the fact that the AI has gone off the rails, and drop it on the floor. So there's kind of a technical term for this.
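A minimal sketch of that two-guard pipeline; the keyword check below is a toy stand-in for the small, fast guard models he describes, and all names are hypothetical:

    UNSAFE_TOPICS = ("bomb", "weapon")  # real guards are trained classifiers
    REFUSAL = "Sorry, I can't help with that."

    def guard_flags(text):
        # Tuned to catch violent, illegal, or unethical content.
        return any(topic in text.lower() for topic in UNSAFE_TOPICS)

    def answer(question, model):
        if guard_flags(question):
            return REFUSAL    # drop it before paying for expensive inference
        response = model(question)
        if guard_flags(response):
            return REFUSAL    # the model was tricked; drop the response
        return response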
Yeah, guardrails for employees is an interesting problem, because everybody has a phone. And in my own life, at home, I've built a machine that can

(14:14):
run an AI, and I've got a way that I can get to it on my phone, and I can do anything I want with that machine. I could talk to it about anything. I could send it any documents. And of course, if you completely cut off employees, if you provide them with no way to get to these types of tools, they will just find a way, because the tools are too valuable for them. It turns the 20 minutes of writing an email into

(14:34):
five minutes of writing an email, because they can, you know, get it started and then go back and edit it.
People are going to want to do that. The big problem with cutting people off completely and not providing any outlet is also that they're going to use models that you have no access to or control over or visibility into. And they're going to do really dumb stuff, like they're going to upload documents from your company

(14:57):
to these models. If you go look at the terms and conditions for OpenAI and Google's model, and probably all the other ones as well, it says anything that you enter into this model can be used by the company in perpetuity, for free, for improving or developing the model.
So you really don't want, you know, someone uploading your business plans for the

(15:18):
next year, or your budget, or a list of employees. That would be a nightmare. Fortunately, there are ways to provide nice guardrails. Like, you can get corporate accounts now for most of these AIs. My former employer, Amazon, they have a really nice way for you to have basically your own AI that everybody can get to, and it's sectioned off, so things aren't being uploaded and shared with Amazon.

(15:42):
I think that's a really good way forward. Most companies are going to provide the same thing. But what about executives?
So now we've got the problem of executives. Well, first of all, we know that executives are just as likely to upload sensitive documents to AI as everybody else. In fact, they might be worse than others at it. And, you know, there's a limited understanding of, like, where

(16:03):
is that AI actually sitting? It can be hard for people to tell whether it's, say, within our own IT, protected and safe, or it's a random third party on the internet, right? It can be hard for executives to figure that out because, you know, they're experts at their work. They're well trained, they're smart, they're well educated, but they're not IT people, right?
So then you run into the issue of, okay, once

(16:25):
we've kind of trained them, like the phishing training that we all have to go through, right? Once you train them (don't click on that link, don't upload that document), it's a really good idea to try to provide them access to something. Something that they can use that's useful for them, that's, you know, well protected, that's official, that's, you know, logged, and all that kind of good stuff. But those aren't projects. Projects are, hey, we're going to replace our

(16:47):
entire customer service department with this AI.
Those kinds of initiatives, some companies have already attempted, right? So there's those early adopter companies that ran really fast, spent an enormous amount of money, had enormous difficulties, probably weren't successful, but they were early adopters. The companies that are thinking about these projects

(17:07):
now, I would term to be more fast followers, right? So they have an opportunity to look at, okay, what is working in the marketplace. The prices have come down dramatically, so they're not going to spend quite as much, and there's been some good research about how do we measure the impact of these kinds of things. But, you know, let's take the customer service thing. I don't know anybody that's like,

(17:29):
I really hope they push me off to chat so I can talk to a machine about my problem. I don't know anybody that is looking forward to that, and definitely not your best customers.
And so while there is an opportunity to maybe make some things more efficient and use AI in those realms, you're going to have to be careful about it. And I would suggest that one of the first things you want to do

(17:49):
is have a good model in place where you can differentiate between an expensive, serious problem that you can recover from, and these calls that are costing you a lot of money but, you know, are really simple things that are easy to answer. Like, how do I change my password? Those kinds of things.
But yeah, executive guardrails, that's a real problem. And that's one of the reasons, again, why I left

(18:11):
Amazon: so I could have those conversations with companies and have that deep conversation about, okay, well, you know, what data do you have? What experience do you have? What problem are you trying to solve? Are you trying to make more money with this? Are you trying to save money with this? What are your goals? And, you know, have you tried something simpler? You know, that's also a possibility. And so I invite people to, you know, put 15

(18:33):
minutes on my schedule. We can have those conversations and decide if that's something that I can help them with.
But yeah, it's gonna be very hard. I think there's three kinds of companies. Like I said, there's the early adopters. They've been trying stuff, spending a huge amount of money, taking big risks, and probably not succeeding. The second group are the never-evers. They're the ones that are convinced that, oh, this is another crypto.

(18:54):
It's like, you know, it's a big fad. It's not going to work out. We've avoided these things in the past, and we're going to keep doing everything the way we've been doing it. Those companies in particular are in big trouble, because it's not that; this is more like the coming of electricity. And then there's the fast-follow organizations, who are just now looking at the data, figuring out how to measure

(19:15):
what they're going to do, you know, starting pilots. They saved a lot of money because the prices have come down, and now they're kind of getting into it. I think those are the interesting groups that are going to succeed and do really well. And by the way, they're going to catch up with all of those early adopters, because the performance of the models is going to hit a peak. Okay, this is as good as they're going to get

(19:35):
for a long time, right? There's going to be this plateau they're going to hit. The early adopters are going to get to that plateau first. Even if you started late, the rate at which these things are improving is so steep that someone who is a fast-follow company is going to get there in plenty of time.

David Rice (19:52):
Yeah, the, uh, the learning curve and sort of the growing pains with it won't be quite as severe, right? Because a lot of that will have already happened. So sometimes I wonder, you know, do people have enough of a fundamental understanding of how machine learning, for example, works to understand how to make the best use of it? Would you say that's true? And what can we do to change that in the near

(20:13):
term when it comes to training and development?

Jonathan Conradt (20:16):
Yeah, absolutely. They don't have a great understanding of it. I mean, there's a lot of technical people in companies that don't have a great understanding of it. At the University of Pennsylvania, I give a talk to their graduate students about, you know, what is machine learning? And it's about a half hour long. And I get to a point where I explain that, look,

(20:37):
at its core, uh, an AI is a function call. It's like in Excel, you have the sum function. And you all use that, and you give the list of columns or rows that you want to add, and it takes that as an input, and it produces an output. In AI, that is all it is.
So if you think about that sum function, in between

(20:58):
times that you use it, it's not wondering about what it said most recently. It's not plotting against you. It's not daydreaming. It, you know, doesn't exist in some ways, right? It only exists in that moment that it's being used, and then it goes away. AI is fundamentally a mathematical object that's a function. And so it takes an input, processes it,

(21:20):
produces an output. A lot of the amazing, magical things that we see that it can do are actually augmentations and code that we've written around the AI to organize data and things for it. And that's all complicated.
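In code, the analogy reads something like this; llm_generate is a hypothetical stand-in for whatever model API is being called:

    # Excel's SUM: input in, output out, nothing persists between calls.
    def excel_sum(values):
        return sum(values)

    # A model call has the same shape; chat history, memory, and tools are
    # extra code wrapped around it, not part of the function itself.
    def ai(prompt, llm_generate):
        return llm_generate(prompt)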
It's a little bit like expecting everyone to understand exactly how a fuel injection system works.

(21:42):
Most people don't even know what that means. And yet they can drive, and they can do their lives, and they've got these reliable systems. And so I think what we need to do is, for each group or subgroup within an organization, we need to help them get to the right level of understanding.
Right? Some people are going to be users, some people are going to be technicians, some people are going to be deep in the weeds with these things, and some people are going

(22:03):
to be just beneficiaries. They never interact with the system, but it's just doing good things on their behalf. So that's going to be tricky. HR has got a lot of this; you know, HR is usually responsible for training, right? You know, they've got to figure out who to bring in to train and what kind of training to invest in. And it's going to be tricky, but you know, having a peanut butter solution across the organization, like everybody's going to get AI trained and we're all going to provide

(22:25):
the same thing, won't work.
When I was at Amazon, one of the things I worked on, you know, I'm teaching these vice presidents and these senior vice presidents. And one of the questions was, what can I do for the people that drive our trucks, the people that put the products in boxes? You know, a lot of these people don't necessarily have a college degree. They're hardworking,

(22:46):
they're good people, they're smart people, but you know, they're not technical. How do I bring them up to speed on AI so they don't, um, number one, so they don't fear it, right? The company needs to adopt machine learning and AI, right? You don't want to be left behind because you were scared of it or your workers were terrified of it, right?

(23:07):
And so to get over that, there has to be training that's appropriate for them, that meets their needs and answers their questions. But also, you know, there's that training at the executive level, which is quite a bit different.

David Rice (23:17):
One of the things, you know, where I think there's some real interest in deploying this technology is employee wellness. I spoke with somebody recently, and they had an interesting use of voice technology to flag burnout or stress. And we've got all kinds of data that could provide behavioral indicators, I'd say. Where do you see this changing the way we approach employee wellness in the coming years?

Jonathan Conradt (23:36):
Yeah, I'm particularly interested in that. Now I'm working with two researchers on a way to not only measure employee wellness, but also to help improve it over time. One of the great things about AI is it's very patient, right? It can be a valuable source of information, because you can provide it with accurate and actionable information,

(23:57):
and the AI can respond to people and provide them with those kinds of inputs. A long time ago, I was at the Gallup organization and I helped develop StrengthsFinder. And so this was an instrument that was about helping managers understand their employees. And through that deeper relationship, the goal was that everybody in the organization would benefit from that.

(24:18):
There would just be a better understanding of each other, greater support for each other. And a lot of goodness comes out of that, that today I think we would refer to as part of wellness.
But yeah, what we're trying to do with AI is, first of all, in a more interactive way, you know, learn from people about how they're currently doing. One of the things that's frustrating in a lot of

(24:39):
assessments is, you know, you have to click the boxes. And at the end of the day, I usually feel like I'm not sure they're going to interpret my clicks the way I was thinking about it, right? And so what we're doing is we're opening it up where, besides the kind of forced-choice data, which is still useful, we give employees an opportunity to just express themselves.

(25:01):
It's like, okay, as you went through that last section, kind of talk to me. What is it you were thinking about? What struck you as most important? What do you wish we were learning from what you were just saying?
And this text is super valuable. One of the ways that I saw how text was super valuable was I created a machine learning model that would take research papers, and it would try to predict whether

(25:23):
I would find them interesting. And so I had several hundred papers that I'd marked as interesting out of thousands, and it worked okay. And then I used a part of an AI. So AIs don't understand words, so you have to take the words and convert them into basically a mathematical object called a vector. And you might remember that from high school: a

(25:44):
vector is a mathematical object that has a direction and a magnitude, right? And so you take words, or you take sentences or even entire paragraphs, and convert them into these mathematical things. With basically no effort on my part, I just took all the titles and abstracts from all these papers, and I used the first part of the AI to create a vector,

(26:07):
and now I had more data, and so I provided that to my same model, and it improved by 14%. And so that's a big jump. And likewise, if you think about it, if we have an employee satisfaction survey, which is part of what we're working on, and you can understand their open-ended responses, and it gives you 14 percent better understanding of your employees,

(26:29):
that could be a really big deal.
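A minimal sketch of that workflow using an open source sentence-embedding library; the model name and data shapes are illustrative, not necessarily what he used:

    # pip install sentence-transformers scikit-learn
    from sentence_transformers import SentenceTransformer
    from sklearn.linear_model import LogisticRegression

    encoder = SentenceTransformer("all-MiniLM-L6-v2")

    # papers: assumed list of dicts with "title", "abstract", and a 0/1
    # "interesting" label; new_texts: title+abstract strings of unread papers.
    texts = [p["title"] + " " + p["abstract"] for p in papers]
    labels = [p["interesting"] for p in papers]

    X = encoder.encode(texts)  # each paper becomes a fixed-length vector
    clf = LogisticRegression(max_iter=1000).fit(X, labels)

    # Rank unread papers by predicted interest.
    scores = clf.predict_proba(encoder.encode(new_texts))[:, 1]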

David Rice (26:31):
Right. I guess with that in mind, you know, what is some of the untapped potential in terms of the data that we have, the tools that people are gaining more access to? I guess the untapped potential really in helping people in how they view their jobs, the company, their experience, those kinds of things. Can we use it to change how leaders approach those things?

Jonathan Conradt (26:53):
One of the most common questions I get as I talk to people is, can I outsource HR or legal to an AI agent? And the answer that I give is, I think the people that are most likely to use HR agents are going to be existing HR professionals. An HR professional's job is very complicated.

(27:15):
There's a lot of stuff that's going on. They often need to, like, look up more information, consolidate a lot of information. An agent is an AI that has the ability to go and do things on its own: ask for information, maybe hit a database, hit an application, do a web search, right, that kind of stuff. And so what I think is going to happen is people are

(27:35):
going to gain essentially executive assistants. Like, imagine everybody in the whole company has an executive assistant. And that executive assistant manages all of the kind of overhead of being at work and simplifies things. Like, when you get an email that's making a request, the agent reads the email, organizes the information you need to respond,

(27:55):
and says, here are some things to think about in the response, and maybe you could say something like this. But you, as a human, make the decisions, right? You can look at the data and say, okay, I want to describe it a different way, or there's a reason that we want to say something different, or, you know, the AI misunderstood it, whatever. I think those are going to be really powerful instruments.
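A minimal sketch of the agent loop he's describing, with hypothetical stub tools standing in for a real HR database, applications, and web search:

    # Stub tools; a real agent would hit a database, an application, or the web.
    TOOLS = {
        "lookup_policy": lambda q: f"policy text matching {q!r}",
        "web_search": lambda q: f"search results for {q!r}",
    }

    def run_agent(task, llm, max_steps=5):
        # llm is assumed to return {"tool": ..., "input": ...} or {"answer": ...}.
        context = task
        for _ in range(max_steps):
            step = llm(context)
            if "answer" in step:
                return step["answer"]  # the human still reviews and decides
            result = TOOLS[step["tool"]](step["input"])
            context += f"\n{step['tool']} -> {result}"
        return "Stopped after too many tool steps."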
And here's the funny thing.

(28:18):
AIs are best at replacing vice presidents. Because if you think about a vice president's job, it's about information synthesis. There's the entire organization that's gathering information, and that information bubbles up through the organization. And then you have this vice president who has to make kind of these strategic decisions based on this really wide view

(28:39):
of the entire organization. And it turns out that type of information synthesis is where AIs are much stronger than humans. I think we start off with these, Microsoft calls them copilots. You might call them an executive assistant or an AI assistant. I think they're going to be really useful and powerful. And they're going to save you time and allow you to have a better understanding of the business.

(29:01):
By the way, I think it's a mistake for an organization to say, okay, we gained this efficiency, right? So now let's look at computer programmers. If my computer programmers use AI to help them write their code, their efficiency goes up pretty dramatically. And there's two ways you could kind of respond to that. One is you could say, whoa, okay, we don't need as

(29:22):
many programmers to get the same amount of work done. The other way is to recognize that in every tech company I've ever worked at, there comes a time when you prioritize everything you want the programmers to do. And then you have to draw a line and say, well, we don't have the time or money to do anything below this line. So you have to get rid of things that would benefit customers,

(29:43):
would benefit the company. Well, if AI increases productivity, essentially what it does is it pushes that line down. So the smart companies are going to be the ones that are like, look, let's make everybody more efficient. Let's help them get their stuff done faster. Let's help them reduce the overhead of just being at work. And let's produce more for our customers. And those companies are going to accelerate

(30:05):
past their competitors. So the companies that choose to downsize and remove people and stay at the same level of work with fewer people are going to get crushed by the companies that keep the same number of people but dramatically improve the amount of work that gets done and the amount of things they can do for the customers.

David Rice (30:25):
Jonathan, I want to thank you for joining us today. Before we go, there's always just two things I like to do with every guest. The first is I want to give you a chance to kind of tell everybody where they can connect with you. You know, if there's anything that you're working on that you want to plug, feel free.

Jonathan Conradt (30:38):
Yeah, so we'll provide a link where you can just have a 15 minute conversation with me about anything you want. Tell me about the company you work for and the problems you're trying to solve. And I'd love to talk to you about it. I love learning about new companies. And this was a big part of what I did at Amazon. Amazon is this gigantic company that does all kinds of different things. So I would go and meet with random teams about

(30:58):
what they're doing, and help them understand: this is a machine learning process that you could use to make your life better. This is how you can apply AI. So I'd love for people to go on Calendly and schedule some time with me.

David Rice (31:10):
That'll be linked in the description for this episode, so feel free to check that out. Get some time with John. And of course, connect with him on LinkedIn. The second thing is, uh, we've started a little tradition here on the podcast. You get to ask me a question. Could be anything you want. Could be about the topic. It could be about something random. It's up to you. So I'll just turn it over to you.

Jonathan Conradt (31:29):
Yeah. So I'm curious, AI does a great job with video editing and things like that. How has AI impacted you and the podcast?

David Rice (31:37):
Um, with the podcast, I mean, it's crazy what it can do with audio. Sometimes, you know, it can really, like, help us streamline the audio a bit. It does have, and I don't know if it's the AI that's doing it or some of our editors, but sometimes it has a weird effect on somebody's voice. The biggest thing with my job is just in content creation, how quickly it's changed and how we sort of organize that,

(32:00):
from the base level of, like, here's an idea, here's how we're going to outline it, here's the process that you're going to go through to do basically the whole thing. I mean, that's really the biggest thing that has changed. It's sort of changed the nature of content creation. In some ways, I wouldn't say it's easier, because we're still in, like, that learning phase of figuring out how to get the most out of it and what you would

(32:22):
actually want it to say, or particularly with your audience profile, depending on what it is, getting it to match that. I cannot believe how far it's come since the first time I typed something into ChatGPT. I mean, it's remarkable, isn't it? It's changed

Jonathan Conradt (32:37):
So fast. I guess, like, soap opera editing has been just revolutionized seemingly overnight, because they record so much video, and editing it is such an enormous job because of all the video. I suppose reality TV is the same way. And these tools just dramatically reduce the amount of time it takes.

(32:58):
Basically, they said that the AI can do the first rough edit automatically.

David Rice (33:03):
That's amazing.
That's incredible.
Yeah.
I mean, I'm sure that, uh, every major film studio is investing heavily.

Jonathan Conradt (33:11):
Yeah.
Well, it was a pleasure meeting you, David.

David Rice (33:14):
Yeah. It was a great chat. I appreciate you coming on. Hopefully we'll get to do it again sometime. Yeah. Thank you. All right, listeners. If you haven't already, head on over to peoplemanagingpeople.com/subscribe, get signed up for the newsletter, and until next time, keep experimenting.