
October 8, 2025 · 38 mins

Generative AI has crossed the threshold from novelty to necessity—but most organizations still haven’t caught up. In this episode, I talk with Kenneth Corrêa, global AI educator and author of Cognitive Organizations: Leveraging the Full Power of Generative AI and Intelligent Agents, about what it actually means to be AI fluent. Kenneth breaks down how leaders can move from scattered experimentation to systems-level adoption, why uploading your financials to a free chatbot isn’t “innovation,” and how education—not fear—is the key to responsible implementation.

We unpack the shift from predictive to generative AI, the cultural lag that keeps leaders from seeing tangible ROI, and why the real competitive advantage comes from empowered humans—not replaced ones. For anyone trying to make AI a force multiplier rather than a security nightmare, this episode’s a roadmap.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
David Rice (00:00):
What are the risks when it comes to AI that leaders are underestimating just because, frankly, it's easy for people to understand how to use it now?

Kenneth Corrêa (00:08):
I think the major mistake I see companies making is not exactly using the tool itself. It's actually sharing private, confidential files. Somebody just had an idea, so the person uses her own phone to upload a spreadsheet with the financial results for the quarter to the free version of ChatGPT.

David Rice (00:29):
You noted your own company saw a 15% increase in productivity and you didn't increase head count at all. Why would you say more companies aren't following suit?

Kenneth Corrêa (00:37):
People are teaching and talking about AI as if it is this 70-year-old technology that still needs lots of data and databases and data scientists. Now, generative AI — that's something that is easy to use, very easy to implement, very easy to get.

(00:57):
Just a 20% to 40% increase in productivity.

David Rice (01:00):
What are some of the pitfalls you typically see in orgs looking to become AI fluent? How do you avoid unreliable and overpromising output? Welcome to the People Managing People Podcast, the show where we help leaders keep work human in the era of AI. I'm your host, David Rice.

(01:22):
On today's episode, I'm joined by Kenneth Corrêa, a global AI educator, speaker, and author of the book Cognitive Organizations: Leveraging the Full Power of Generative AI and Intelligent Agents. In this conversation, Kenneth lays out a path for companies to move from AI curiosity to true AI fluency: breaking down what that means, why most leaders are still on the sidelines, and how to avoid the Frankenstack trap of tool overload.

(01:50):
We talk about the shift from predictive to generative AI, what leaders are getting wrong about implementation, and why the future of team leadership may look more like workflow orchestration than task delegation. If you're trying to empower your people instead of replace them, and you want a more grounded, systems-level approach to AI, this one's for you.

(02:11):
So without further ado, let's get into it. Alright. So Ken, welcome!

Kenneth Corrêa (02:16):
Alright, thank you very much, David. It's a pleasure to be here with you.

David Rice (02:19):
Yeah, absolutely.
So you're in Brazil, right?

Kenneth Corrêa (02:22):
Yeah, definitely — in São Paulo, the largest city in South America.

David Rice (02:26):
Oh wow. Yeah. That's cool. I've got some colleagues down that way. I always wanted to visit.

Kenneth Corrêa (02:30):
It's a big city — not as big as New York, but it's our own New York down here in the south.

David Rice (02:37):
So yeah, we're gonna be talking about AI fluency and the path to get there, essentially, 'cause I think a lot of us are headed down that path already, but not really sure what all the steps are. Right. And like, how do you measure fluency, so?

Kenneth Corrêa (02:50):
We're still kind of building the stairs, so there will be steps that we'll find out halfway through it, right? The airplane's being built while it's flying. Those are the metaphors we use nowadays, because everything happens so fast, right?

David Rice (03:02):
Absolutely. I think one of the things that is just so different about this, right, is, like, you think about the modern AI interface. It's kind of removed a lot of barriers to experimentation. Like, you don't have to be a computer scientist to interact with it and to test what it can do. Thinking about when we look back at different times in history — like, Excel comes out, people had to take courses at first

(03:26):
for, like, how do I use Excel? And now it's pretty much sort of common knowledge. By the time you leave college, you've got some level of experience with Excel. But that's also led to, like, misuse of spreadsheets, right? Like, I know people who have spreadsheets to track the amount of cleaning supplies they have in a cupboard. But I'm curious, what are the risks — and when we come back

(03:46):
to AI — that leaders maybe are underestimating: the complexity of org-wide AI usage, just because, quite frankly, it's easy for people to understand how to use it now.

Kenneth Corrêa (03:57):
Like point and click, or you can even start talking to the tool and it will reply back to you with natural language. So yes — but again, I really like the way you put it, because I would always advertise that as a feature, but maybe you're trying to point out where it may be a bug, right? In the sense that people can jump right in. I think having a good interface, making it easy

(04:18):
for the person to interact — it's amazing to get a person to jump in. But I think the major mistake I see companies making is not exactly the act of using the tool itself; it's actually sharing private, confidential files. Right? We're talking about information from the company that should be kept secret, that is usually behind lots of

(04:41):
firewalls and passwords and user authentications. And now somebody just had an idea — and maybe because the company blocked the use of ChatGPT, the person uses her own phone, his own phone, to upload this spreadsheet with the financial results for the quarter to the free version of ChatGPT.

(05:01):
So I think probably the major risk is regarding private information being spread out without the user even knowing.

David Rice (05:09):
Are there ways to sort of audit how people are using it, or, like, kind of create some compliance checklist — maybe a sort of framework for your people to understand how not to use it? And what are some of the things that you've seen really work in that area?

Kenneth Corrêa (05:24):
Yeah, I don't see any sort of let's-check-people's-prompts or let's-read-their-conversations — we don't have space for that in 2025. But on my side, what I try to do with my own company, also doing that for other companies as well, is education. It's the same issue with cybersecurity, right? We're not going to be able to block it all.

(05:45):
There are new URLs and new tricks that are used to hack people's accounts and all that. So we hope that educating people — on how to use it, on the limits of the tool, on what it means to use the free version on your own cell phone and how that's different from the corporate version that has blocks — keeps information inside those fences.

(06:08):
So I think education should be the major path in that sense, because of course we don't have enough time to check everybody's answers. What some companies still try to do, whenever they create their own assistant — so some companies will get the technology, GPT or Gemini or that, but create an assistant with that company's name — is now they can create extra guardrails.

(06:30):
As regarding whatever is asked of the assistant: "I'm not able to help you with that," for example — this is the kind of answer you could get, or this is the kind of information that you are not allowed to share. So if the company comes to creating their own assistant, they could install those extra guardrails to try to keep that from happening.

David Rice (06:49):
It's interesting you brought up the cybersecurity angle, you know, the kind of comparison there.

Kenneth Corrêa (06:54):
Usually when we're talking about innovation, we try to connect it to whatever we knew before, right, David? Because usually, if I'm talking about something that is totally new and totally different, and I have to get your head up from your neck and then start talking to you as if it's totally new, people will never adopt it. That's something we learned back in the sixties. There was this guy called, I think it was, Raymond Loewy —

(07:16):
he created a concept called MAYA. I dunno if you've heard of it. It's "most advanced yet acceptable." So when you're creating a new product, you try to keep it the most advanced you can — of course, new tech and the possibilities and features and all that. But you have to create some sort of familiarity, for the person to connect to that and understand.

(07:37):
So I was in San Francisco two weeks ago and I was riding Waymo cars. So now, if somebody asks me, what's a Waymo? Waymo is an Uber without a driver. So I make it familiar. So, you know, you can get an app, you can call for a ride, but the driver will not be in front of the wheel. That's the difference. And the rest is all the same.

(07:57):
You put in a credit card, you pay for the distance that you travel and all that. So I think creating that sort of familiarity always makes it easy to sink in new information, ideas.

David Rice (08:07):
We just got those Waymos here in Atlanta, and I don't think they're ready for this city. It's just, they keep getting stuck.

Kenneth Corrêa (08:14):
They're getting stuck. I saw some violence in, I think, Los Angeles, right? There was this...

David Rice (08:19):
Oh yeah, when people were lighting 'em on fire and all that. Yeah.

Kenneth Corrêa (08:22):
So I don't know if the car is not ready for the city or the city is not ready for the car. Some sort of paradox, you know? Right.

David Rice (08:28):
Might be a little bit of both. Leaders are wanting teams to get more done, right? Like, productivity is really driving a lot of this, what people are aiming for with AI, and they're largely getting that, I think. A 20 to 40% boost is sort of the norm from some of the studies that we've seen, and I believe that you noted when we spoke before that in your own company you saw a 15% increase in productivity and you didn't increase head count at all.

(08:50):
So why would you say more companies aren't following suit? Like, what's holding folks back? Is it just a lack of understanding of AI? Is it sort of a scope creep problem, where the AI gets used so much that no one is using it for the same things, and then it just creates a lot of confusion or differing levels of effectiveness? What is the big hurdle, I guess, in your opinion?

Kenneth Corrêa (09:10):
Yeah, I think it sounds absolutely crazy that companies are not jumping right in, because I'm doing that for my own company, so I'm seeing the results. I found money on the table — it's how I like to put it. But I think there are many factors coming in. The first one is people are teaching and talking about AI as if it is this 70-year-old technology that still needs

(09:36):
lots of data and databases and data lakes and data scientists. So it scares the hell out of people that don't have the technical expertise for that. So I think one of the responsibilities is we're still not making it clear that whatever AI was before November '22 is different from what has happened in almost

(09:59):
three years now. That was classical AI, predictive AI: hard, technical, amazing, but it costs a lot and takes a lot of time to implement. Now, generative AI — that's something that is easy to use, as you've mentioned before. So easy that people are uploading data they shouldn't.

(10:20):
Very easy to implement, very easy to get this 20 to 40% increase in productivity. Which means — remember, if we're talking about five days a week of work, 20% is like one day off. So we're talking about accomplishing the same production, the same results, but with one or two fewer days in the week. So what happens is companies, the leaders,

(10:44):
people in charge — they are not using it personally. And the first tip that I would like to give your audience, and that's something that I tell leaders all around the world: you have to put your hands on it. You only see the value once you start using the apps. So there's ChatGPT, there's DeepSeek, there's Gemini, there's Claude — you can download the app and you can

(11:07):
use it for personal stuff. So for example, you get home, it's Sunday night, you don't wanna cook anything, but you gotta eat. You can take a photo of your refrigerator, of your kitchen cabinet, and say, these are the ingredients that I have — what can I cook in five minutes or less? And now you're gonna see that this tool is able to see a photo, to understand the objects,

(11:27):
sometimes count the objects as well, reach into a database of recipes, and suggest whatever you could cook combining those items. And this usually sparks something — we say, like, in Brazil, the head explodes, right? Possibilities and ideas. And then, when you're back in your own business and you look at your processes, you look at the activities that

(11:48):
people were doing, like, every day, I'm pretty sure a lot of possibilities will pop out. And again, David, now is the first time that we have a technology where the technology itself can teach you how to use it. So you can say: ChatGPT, I lead a team of 14 people. Those are all salespeople. We're visiting clients in their offices, and I want to know

(12:09):
how ChatGPT could help me out. So now this tool will not give you, like, this generic answer, as long as you explain the dynamics of your team: whatever you sell, what's your strategy, where you're going. You're gonna get, like, really good advice on what to do. So I think really starting to use the tools is one of the steps that should be taken, and companies haven't

(12:31):
done it yet, of course. When you look at the media, it's all about the fear and the risks and how AGI and Skynet and AI are gonna destroy us all. Again, I dunno if we're reading the news or a science fiction book, but that's how it feels sometimes. So I think that's probably one of the main issues why companies haven't jumped on it yet.

(12:53):
Now, it's a technology; it has its limitations. I'm pretty sure it's important for us to understand the limitations. So this tool will hallucinate — will create content which is not based or grounded in reality. And this is like a big no, right? I don't wanna use a tool that

(13:15):
is making stuff up. So that's why I suggest people start by using ChatGPT, but try to incorporate Perplexity, for example. So now you get answers grounded in sources. We're not talking about absolute truths, but at least there is some grounding involved. In that sense, I think you can mature, you can evolve into better ways of working.

(13:36):
You can learn the best practices, just as long as you are on board.

David Rice (13:40):
Yeah, absolutely. Yeah. It's interesting — there you'd mentioned people feeling like maybe it's not safe, or they're not qualified to use it. And I think we've seen some research that shows, like, folks in HR operations, sometimes they do have a little bit of fear attached, that they don't feel like technical experts; you know, they are maybe thinking of sort of those predictive AI solutions. In some cases, other folks have maybe experimented a

(14:03):
little bit with Gen AI, but they don't really trust it yet. And you know, I'm curious what you think about, like — because what those folks will inherently derive value from, in terms of what they can do with Gen AI, is maybe a little bit different from other teams. You know, where do you think somebody working in an HR function, for example, or operations — what are

(14:24):
some of their biggest areas of opportunity?

Kenneth Corrêa (14:26):
That's awesome. I'll jump right into that, but just before, I'll mention — do you know why HR people are afraid of using Gen AI?

David Rice (14:32):
Why is that?

Kenneth Corrêa (14:33):
2016 — there was this paper that came out, and then every other media outlet talking about human resources mentioned it, about algorithmic bias. I hope I'm saying that right. Because there were companies using predictive AI to help them select through resumes. And when they found out that actually the predictive

(14:56):
AI tool saw the data on IT teams, and it said, okay, only men are working in IT. So, as a predictive tool, I predict that only men are good at working in IT. That's a horrible thing to say, right? But again, AI has no ethics, has no morals. It's only working on the data which was provided.

(15:18):
So that's a real issue. It's still not a hundred percent solved. So there's a lot of steps you have to take regarding responsible AI, and this is one of those. But a lot of the folks in the HR space, they created a thing with the term AI. So whenever you say AI, they're like, oh no, there's bias there. And of course, if I use ChatGPT, of course there's bias.

(15:39):
Every person that I know has their own bias. I have my own bias as well, and software trained on human data will carry human biases. So that's just one thing that I wanted to mention about why HR people are not really into it. But I've spoken at two large events in Brazil — one of those with 3,000 people from HR — and I was showing and demoing some of the use cases that we've done with AI.

(16:02):
One of those was a case study that I did with a company, with a solutions provider I work with, because my company is 95 people — that's the size of the team — and we use it to evaluate and to poll our employees about their performance, and regarding how they're feeling about the company and how we're treating them, how their bosses are treating them.

(16:24):
So it's 360 feedback, analysis and all that. So now get it: 95 people are evaluating 95 people. We're talking about 95 times 95 — it's about 9,000 pieces of content to analyze. It's impossible to have time. If we started that on January 1st, we'd still be halfway through it, right?

(16:44):
But we have to give people feedback as well. So what we did, using Gemini, which has a large context window of 2 million tokens: the 9,000 answers to the form could be inputted as data. Now we're using that to analyze patterns, to check out outliers. We asked Gemini to pinpoint to

(17:06):
us whichever topics were more urgent, so we could act on them. So one of the examples for the HR people: sometimes you have dashboards, you have data, you have answers and forms, but you don't have the time to analyze it all. Some people in HR don't know Microsoft Excel, are not good with formulas, not good with numbers.

(17:26):
Gemini can do a pretty decent job of organizing and analyzing data and numbers for you. So that's something that we have been doing for HR as well.
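The long-context workflow Kenneth describes — packing every 360-feedback answer into one prompt for a 2-million-token model — could be sketched roughly like this. This is a minimal sketch, not his actual pipeline: the prompt wording is illustrative, and the 4-characters-per-token figure is a rough rule of thumb, not a real tokenizer.

```python
# Sketch of the 360-feedback analysis described above: assemble all
# responses into a single long-context prompt. Hypothetical names;
# the ~4 chars/token estimate is an assumption, not an exact count.

TEAM_SIZE = 95
CONTEXT_WINDOW_TOKENS = 2_000_000  # the 2M-token window mentioned above

def build_feedback_prompt(responses: list[str]) -> str:
    """Combine every 360-feedback answer into one analysis prompt."""
    header = (
        "You will receive 360-degree feedback responses from our team.\n"
        "Identify recurring patterns, flag outliers, and rank the most "
        "urgent topics we should act on first.\n\n"
    )
    body = "\n".join(f"Response {i + 1}: {r}" for i, r in enumerate(responses))
    return header + body

def estimated_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token for English."""
    return len(text) // 4

# 95 reviewers x 95 reviewees ≈ 9,000 pieces of content (9,025 exactly).
responses = [
    f"Feedback from person {i} about person {j}: ..."
    for i in range(TEAM_SIZE) for j in range(TEAM_SIZE)
]
prompt = build_feedback_prompt(responses)

print(len(responses))                                    # 9025
print(estimated_tokens(prompt) < CONTEXT_WINDOW_TOKENS)  # True
```

Even at roughly nine thousand responses, the assembled text sits well under a 2M-token window, which is why a single-prompt pass is feasible here at all.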
So this is the first example for HR people. The second would be: if you think of an HR workflow, you've got to have a job description, so you know who you're hiring.

(17:47):
Then you have the resumes, so you know who you're interviewing, and then you have the selection process. In that sense, again, we have to worry about the bias that I mentioned before. But still, if you create a prompt with the job description and you say, I'm Kenneth, I'm the person in charge of hiring for this job description — and then you paste the whole job description.

(18:08):
This could be done in any tool: ChatGPT, Gemini, DeepSeek, Claude — you paste it there — or Microsoft Copilot. Now you upload the first resume and say, okay, so this is the first candidate. How does that person fit into this job? And you can do that for every resume. But sometimes you're going to do interviews — you're gonna talk to these people — and if

(18:29):
you get their authorization, which is sometimes a sensitive point, you can get a transcription of that interview. So now you say: I have the job description, this is the resume, this is the transcript of the interview. Now, please, Gemini, help me understand the positives and negatives of hiring this person for this position.

(18:52):
Now you're doing that for every interview that you make. Now you get summaries. Now you get comparisons. Of course, you're gonna have your own impressions — you ran the interview. You can even ask Gemini or ChatGPT to suggest questions you could ask that very candidate, based on the resume they sent you. And now you're gonna get feedback.

(19:14):
Now, when you add your own perceptions to whatever the AI came out with from this resume, job description, and transcription, I'm pretty sure you are way more empowered to make a better selection in that case. So this is thinking of the whole talent acquisition space for HR. Those are all things that I've seen companies

(19:35):
doing already in America — the United States — plus Brazil, Chile, and also India are working with that already.
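The screening prompt Kenneth walks through — job description, then resume, then interview transcript, then a request for pros and cons — could be assembled like this. A sketch only: the function name and wording are illustrative, not his exact prompt.

```python
# Sketch of the candidate-evaluation prompt described above:
# job description + resume + interview transcript, then a request
# for positives, negatives, and follow-up questions.
# All names and wording here are illustrative assumptions.

def build_screening_prompt(job_description: str,
                           resume: str,
                           transcript: str) -> str:
    """Combine the three hiring inputs into one evaluation prompt."""
    return (
        "I'm the person in charge of hiring for this job description:\n"
        f"{job_description}\n\n"
        f"Candidate resume:\n{resume}\n\n"
        f"Interview transcript:\n{transcript}\n\n"
        "Help me understand the positives and negatives of hiring "
        "this person for this position, and suggest follow-up "
        "questions based on the resume."
    )

prompt = build_screening_prompt(
    job_description="Senior account executive, health tech, São Paulo.",
    resume="8 years of field sales; fluent Portuguese and English.",
    transcript="Q: Walk me through a recent deal. A: ...",
)
print("Interview transcript:" in prompt)  # True
```

The same assembled prompt can be pasted into any of the assistants he names; the point is keeping all three inputs together so the model compares candidates on the same basis.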

David Rice (19:42):
For a lot of companies, where they've started with AI, especially Gen AI, you know, they've basically been automating content and communications, right? A lot of folks, they're not touching the ops. They're not touching planning or internal decision workflows. Leaders aren't necessarily using AI to gain better visibility or clarify priorities or support faster decision

(20:03):
cycles, in a lot of cases. So where should they start if they wanna apply AI beyond these sort of normal tasks? I mean, we touched on HR there, but let's expand that out into leadership overall. Maybe it's the CEO, maybe it's operations. What internal workflows would you say are kind of low-hanging fruit?

Kenneth Corrêa (20:20):
Awesome. Awesome question. I really like — remember, the name of my company is 80 20, so that's based off the Pareto rule, right? We really like the idea of prioritizing, the ability to choose where you should act first, because surely there's a lot of things to do. You could act, or you could attack, on many different fronts.

(20:41):
But once you think of prioritization — and this is a matrix that I created in the book, where I say you have simple tasks, you have complex tasks, you have unique tasks and repetitive tasks — the first place you should go is the tasks that are both simple and repetitive.
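Kenneth's two-by-two could be sketched as a tiny classifier. The ranking logic is an illustrative assumption drawn from what he says here — simple-and-repetitive first — not code from the book.

```python
# Sketch of the simple/complex x unique/repetitive prioritization
# matrix described above. The scoring is an illustrative assumption:
# simple + repetitive tasks are attacked first.

def priority(simple: bool, repetitive: bool) -> int:
    """Lower number = automate first. Simple, repetitive tasks win."""
    rank = 0
    if not simple:
        rank += 1  # complex tasks cost more to automate
    if not repetitive:
        rank += 1  # one-off tasks pay back only once
    return rank

# (task, simple?, repetitive?) — examples taken from the conversation
tasks = [
    ("accounts payable", True, True),
    ("strategic planning", False, False),
    ("customer call analysis", True, True),
]
ordered = sorted(tasks, key=lambda t: priority(t[1], t[2]))
print(ordered[0][0])   # "accounts payable"
print(ordered[-1][0])  # "strategic planning"
```

Sorting is stable, so among equally simple-and-repetitive tasks the original order is kept; the complex, one-off work (strategic planning) lands last, exactly the "not where you start" quadrant he describes next.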

(21:03):
So we're not talking about creating an AI solution to make better strategic planning. Okay, that's fine. But if you can start with accounts payable and accounts receivable, if you can start with analyzing customer phone calls, if you can start with job interview transcriptions — this is stuff that's happening a lot of times every day.

(21:28):
So if you get a 5%, 10% increase in productivity on those tasks, you're gonna collect the results the very next day. So try to go — the low-hanging fruit is in the simple and repetitive tasks. And that's not something new; that's not only for AI. Every time we're talking about automation, as a more general

(21:50):
word, we are using this very same matrix. But now, with the large language models, what happened, David, is they're able to — the models, the AI agents — they're able to understand information provided to them. They are able to read text. They can open CSVs — those are spreadsheet files. They can look at images. Or — there's a company I work with

(22:12):
that uses AI to look at the dashboards, because now every company has their own dashboards with all those indicators, KPIs, and information about what's going on. But the employees never take the time to look at the dashboard. So what good is a dashboard if you're not looking at it? But now we found out that a lot of people, like you mentioned in HR, don't know technically how to analyze this data.

(22:35):
What does this KPI mean? How do I make a different decision regarding what I saw on the dashboard? Now, if you explain to Microsoft Copilot, for example, what every indicator there is — if you have a manual, a description; sometimes the dashboard itself has a description of the indicators — now you only take a photo. You click

(22:56):
a photo of the dashboard, and Copilot will read it to you. So I think this is the sort of automation that really makes our lives a lot easier. I have a sales team of 45 people in another company that I own. It's a health tech back in Brazil where we have people visiting doctors, doctors' offices. So they're in waiting rooms. When they get into the doctor's office, they

(23:18):
have, like, four to five minutes to make a sale. So it's a very dynamic market we work with. The United States has 50-something states; Brazil has 27, so it's half of the United States. We are in 17 states. So we have people spread all around. So it's very hard to know what's going on in a company this big. So now, analyzing data through the dashboard —

(23:40):
AI has no problem. AI is not lazy at all. It will look into every detail in the dashboard.

David Rice (23:45):
Oh — when we talk about this idea of fluency, right? I think there's this little part of me that thinks it means knowing where tools fit into your operations, right? And how to design a workflow around it. Most orgs, they wanna jump on a cool tool when they see it, but they maybe never build the connective tissue between teams, tools, tasks, right? So I'm curious, you know, in your opinion, what does a basic

(24:08):
but functional AI stack look like for a company that's under a hundred people, let's say? How do you avoid that sort of Frankenstack that tends to form after a minute?

Kenneth Corrêa (24:20):
Love it. Love it. Frankenstack. Yeah. There are a lot of companies that have a Frankenstack, in the sense that they are working with maybe 10, 20 different tools, and every tool has a different way to prompt it, a different result that comes out of it. So — you asked me about companies with fewer than a hundred employees, but what I see

(24:41):
from larger companies is they try to keep it to one or two. So if they are a Microsoft company, their stack is Microsoft — they're going for Copilot. If they're a Google company, they're going for Gemini. If they are more tech savvy — let's say a software development company, if they're creating digital products — they'll probably go with either Anthropic, so that's Claude, or they're going with OpenAI for GPT. And I'd

(25:06):
recommend that for a start: trying to go for one single tool, and train the team — education and education. Because really, when you compare, I think now, at this very minute we're talking, there are seven tools that I consider, like, state of the art, and that means that it doesn't really matter if

(25:27):
you're using ChatGPT or Gemini; they both have the same interface you can interact with and the same power, let's say. So I think for now, mid-'25: Copilot from Microsoft, ChatGPT from OpenAI, Gemini from Google, Grok 4 from X, Llama from Meta.

(25:47):
DeepSeek would be the sixth here, and there's one extra, which is Perplexity, that I really like. I think those seven AI assistants are now at the very same level, so for a company it's gonna be a lot easier to manage risks, a lot easier to control access, and a lot easier to train

(26:08):
if you're focused on going with one single tool. But that's regarding, like, the end user — every employee in the company having a copilot itself. But when you go to the backend, when you're talking about really automated processes — you're implementing networks of agents and orchestrating them — then you have to go to about three

(26:29):
or four different providers. It is very hard to stick to one single provider, because there's so much stuff going on that you don't want to be left behind. Every time there's a new model or a new tech, you have to be able to switch between those seven. But this is all the more technical side — a CTO-side conversation.

David Rice (26:48):
Gotcha. So, big question right now that I think a lot of leaders have when it comes to our people is around skills. How do we transition folks into what comes next, right? Right now it still feels like we're having to reframe efficiency to mean not fewer humans but better-leveraged humans, right? So people becoming orchestrators and validators of AI outputs,

(27:13):
not necessarily task monkeys. It seems like the future role of a team leader might be focused more on quality control of agents and strategic routing of workflows, right? So what new roles or skill sets should leadership be developing now to sort of manage AI-literate teams?

Kenneth Corrêa (27:30):
Yes. The idea of being AI literate, in the sense of generative AI, has all to do with using the tools. That's amazing. But for leaders it's very important to have — and this is hard — what we call judgment. Yeah. The ability to choose wisely: is this task supposed to be executed by a human,

(27:50):
or is this task something that a computer would do best? And there's no problem with that. We're not talking about any taboos here. The idea is, there are tasks that computers are already doing better. So if you have to check stuff on a checklist, if you have to analyze imagery in any sense, if you have to

(28:10):
check specific information in documents, if you have to scout for parts of text or a piece of information, if you want to summarize — those are all things that computers, at the very moment we're talking, are doing best. So as a leader, if you understand that and you use that as a starting point — like, this is something that

(28:32):
I don't want my humans to be working on, right? Because on the other side, you have creativity; you have judgment in the sense of making decisions, making the call, right, and being responsible for that decision as well. Also being able to see paradox, being able to understand complexity in the decision-making.

(28:54):
Those are all human abilities where humans are thriving a lot more than AI, and AI is in this space. You could try to use AI to create a plan for you. It will suggest maybe dates and a schedule for that, but it's very low quality when compared to human work. So I think trying to decide which way to go for every task — computers

(29:19):
or humans — is one of the abilities that these managers will need. And I have to say, David, that's what inspired me to write the book Cognitive Organizations, because I wanted to give a handbook for managers, so they're able to keep aligned with the vision that computers are always

(29:40):
getting more and more powerful. Computers are getting better by the day. Every day, every week there's a new solution out. So this is all right — we are not able to keep up with everything. Of course, it's impossible to follow. I try to keep up with what's going on, but we are already 30 minutes in here, so I don't know, maybe in the last 30 minutes something big happened. So you're not

(30:00):
keeping up with everything. But if you understand that this is a reality, if you understand that a 20 to 40% increase in productivity is non-ignorable — there's no way you can ignore that, in the sense that your competitors are already using it — I think managers will make good decisions in that sense.

(30:22):
And again, remember that we have a discussion, an ongoing discussion, in the world about job displacement, about people being replaced by AI. Actually, when I speak to companies, their problem is the reverse, actually: they are not finding talent willing to work.

(30:43):
So now, what I need to do — and that's what I did for my own company; you mentioned it earlier — is I kept my talent. I have 95 people in the company, and for the past three years I've grown 15% per year with that very same 95-person team. And I'm doing that because I am empowering my own people. And again, that's the mindset I had to change when I saw

(31:06):
that this is a reality. I need to get everybody on board. And I'll be honest with you — not everybody jumped on board. A lot of people still wanted to work like they did before. But I don't think you can work in '25 the same way you worked in '21, because something happened in that period. I'm not talking about COVID, right?

(31:27):
I'm talking about the wake of generative AI. So I think when you change your mindset to, I have talent in my company that I need to empower, to leverage the best I can out of what they can do with AI tools, that's when you get the icing on the cake, you know?

David Rice (31:44):
Yeah, I agree. You know, there's a lot of people who maybe struggle to come along on this journey, but the way I've been putting it lately is: we can't be attached to our tasks. The task is not what gives us value; it's our abilities, it's our knowledge, it's our unique human traits, like you mentioned before, that are really where our value is derived from. So maybe it's just about letting go of the tasks as

(32:07):
the engine for those traits to come out, you know? I'm curious, what are some of the pitfalls you typically see in orgs looking to become AI fluent? Especially as they're in those curious stages, starting to experiment. And how do you avoid getting, I guess, the quote-unquote rug pulled out from under you,

(32:28):
basically by unreliable outputs and overpromising on what the tech can do?

Kenneth Corrêa (32:34):
Yeah, I think this is a big problem with AI, because usually we say, AI is, and then you fill in the blanks. AI does this, AI does that. AI can do this, AI can do that. And the thing with that is we start to think of AI as if it's a single entity.

(32:55):
Just like it's something that's floating around the globe and understands everybody, and that's not true, right? We're talking about multiple different companies, about different technologies. All those models are trained with different data. They have different guardrails, because each company is deciding how they're going to block everything that a model can

(33:17):
do that it's not supposed to do. And sometimes that's even the problem. So if we're talking about AI, we have to remember that it's just a tool. It's a tool that's helping ignite a revolution, yes, a revolution in productivity, a revolution in how we do business, but it's still just a tool. So the way to get fluent is repetition, is usage, is

(33:41):
trying and making mistakes. Because again, remember that when AI tells you to do something, you don't need to do it. You just need to read it, and you decide what you do with it. The more you use it, the easier it gets. Again, back to the judgment call, to understand where it will fit and work fine. There's a large company in North America called C.H. Robinson.

(34:04):
They operate all around the world in third-party logistics, and those guys have to handle like 3,000 emails per hour. Those are emails from clients that want to move packages from point A to point B. So: I need to move something from Beaverton, Oregon, all the way to Austin, Texas. How much does it cost?

(34:25):
How soon can you get to the spot to pick it up? How much would it cost me? All these interactions were previously done by human operators, people taking calls, reading short messages or emails on their computers. And now AI is doing wonderful work at the triage.
So it's selecting: this is a simple routine call with every

(34:50):
specific data point already organized; I can generate a quote on top of that automatically, no human interaction needed. Okay. Now this one is a little bit different: this has to be taken in a refrigerated truck, and it has to leave in the next six hours, which is below the time we usually do. So, okay, now let's take that to a human.
So you start developing fluency when you see

(35:12):
more and more use cases. Remember, C.H. Robinson has been working on that for a year now, and they're only handling 10% of those calls with AI; 90% of the calls are still handled by humans.
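The routing pattern Kenneth describes, routine requests quoted automatically while exceptions (special equipment, tight deadlines) go to a human, could be sketched roughly like this. This is an illustrative sketch only: the class, field names, and the 12-hour cutoff are hypothetical, not C.H. Robinson's actual system.

```python
from dataclasses import dataclass

@dataclass
class ShipmentRequest:
    origin: str
    destination: str
    refrigerated: bool        # needs a refrigerated truck?
    hours_until_pickup: float # how soon the shipment must move

# Hypothetical cutoff: anything faster than the normal lead time is an exception.
NORMAL_LEAD_TIME_HOURS = 12

def triage(request: ShipmentRequest) -> str:
    """Route routine requests to auto-quoting; escalate exceptions to a human."""
    if request.refrigerated or request.hours_until_pickup < NORMAL_LEAD_TIME_HOURS:
        return "human"       # special equipment or rush job -> human operator
    return "auto_quote"      # routine lane -> quote generated automatically

# The two cases from the episode: a routine lane vs. a rush, refrigerated load.
routine = ShipmentRequest("Beaverton, OR", "Austin, TX", False, 48)
rush = ShipmentRequest("Beaverton, OR", "Austin, TX", True, 6)
```

In practice the hard part is upstream of this: a language model reads the free-form email and extracts these structured fields before a simple rule like this decides who handles the request.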
There's another story with Klarna, the Swedish company that does customer assistance services, right? Those guys went all in on Gen AI.

(35:32):
When they saw the technology, they said, okay guys, this is gonna save us. We're gonna lay off everybody, right? That was their approach to it. But now it kicked back, because they realized that AI is not a human; it is not going to be able to deal with a hundred percent of customer service situations. And now they've switched

(35:53):
to a more balanced approach, and I'm not making the numbers up here. It's 80/20: 80% of the work is handled by AI, and the other 20%, those are the exceptions, the specific cases. Those are triaged and taken to human assistants. So I think, again, it's impossible to read a book about that. We were talking about fluency.

(36:13):
There is no grammar book you can read. You have to talk to other people, right? That's how you get more fluent in a language. I hope my English is not bad today, so.

David Rice (36:20):
It's ingrained.

Kenneth Corrêa (36:21):
Fluent, right, when we're talking about it. So you could talk to your folks, to your fellows, to people in other companies that are already using these tools, and, especially, again, very important, get your hands dirty. I think I made my point on that. You get fluent the more you speak the language.

David Rice (36:37):
Excellent.
Well, Kenneth, thankyou for coming on today.
I really appreciate you givingus some of your insights.

Kenneth Corrêa (36:42):
That's awesome, man. I'm glad you liked it; I liked it as well. I loved your very complex questions. You're trying to get me, and I have to find a way to bring out the side from which I look at things, because sometimes it will feel like I'm all excited and you should go all in and there are no limits or problems with that. But there are downfalls, there are points
But there is, there aredownfalls, there are points

(37:03):
that we have to take care of. But we will only learn when we jump on board. So that's what I'm trying to help people understand: that 20% productivity boost is impossible to ignore.

David Rice (37:14):
Well, again, thank you, and looking forward to seeing how your conversations evolve over the next couple years. I think it's gonna be interesting.

Kenneth Corrêa (37:20):
That's awesome.

David Rice (37:22):
Listeners, if you wanna check out Kenneth's book, it's called Cognitive Organizations: Leveraging the Full Power of Generative AI and Intelligent Agents. You can pick it up on Amazon; be sure to check that out. And until next time, get your hands dirty.