Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
There is an old adage, one that you
might have heard from a grandparent or village
wise person. The one that says, you get
out what you put in, meaning your efforts
are matched to some degree by the results
or the output.
Now take our nonprofit, Pioneer Knowledge Services, which delivers this cool program that you're listening to
(00:21):
right now. It takes a bunch of effort that
you don't even see. We hope that you
obtain value from our efforts to deliver it
to your powers of reason. Here is where
you come in. You. Yeah. The listeners.
Make our efforts rewarded.
Consider donating to keep us moving forward. Visit
pioneer-ks.org
(00:42):
and click on donate.
Welcome, everyone.
This is because you need to know.
Forward thinkers, please note that the content you're
about to hear is dated, and the content
(01:03):
that Jean-Claude talks about with the book
is on the back burner.
Bonjour. Je m'appelle Jean-Claude Monney. So my name is Jean-Claude Monney. I'm a Swiss, French, and American citizen. I also happened to have an Italian mother.
(01:24):
This is why I said it. Now, I live in a beautiful little town north of San Francisco called Mill Valley, which is about 20 minutes north of San Francisco.
The most interesting thing about where I live is that I'm at the top of a national park called Muir Woods. And what is interesting about this place
(01:45):
is that it was one of the first national monuments, created by President Theodore Roosevelt on January 9, 1908. And for those of you who are not familiar with Roosevelt, he was one of the key architects of the United Nations.
So
during the war, there was a big effort to, you know, unite nations
(02:07):
to avoid any world war. And, actually, in 1945, 49 countries met in San Francisco. Unfortunately, Roosevelt died that year, but they came here to the park to pay respects at a memorial. So there's a special place in the park reserved for that. So it's a place I go often, because
(02:28):
we have those hundreds-of-years-old trees, and it's absolutely magnificent.
I think one thing that, you might want
to know about me is that I'm passionate
about electronics
and software technology
as a key enablers
to world progress. And if you go on
my LinkedIn profile, you will see that's what
(02:49):
I said about me and
did some interesting things in my career. I started as an entrepreneur to pay for my studies, and then I moved to what I call being an intrapreneur, because this entrepreneurship spirit basically drove my career
(03:10):
in major corporations, like Motorola,
Digital Equipment Corporation, and finally, Microsoft.
In 2017, I decided, at retirement age, to actually reinvent myself. I went to teach at Columbia. I created a course on digital transformation that I taught. I also finally
(03:32):
started to write a book. And I think
it's very relevant to the field of,
knowledge management. It's actually not a book on
knowledge management. The
title of the book is gonna be Amplifying Minds, and it's how to cultivate personal intelligence
in the age of AI, which I think
is gonna be a very important subject as
(03:54):
we go. I'm happily married, and I have
4 children and 4 grandchildren.
I'm very well surrounded on the family front. It's a really nice
thing to be able to give back to,
your grandchildren
and to actually learn from them. And the
key question I have for me, which I
use in the book is, what should my
(04:16):
grandchildren
learn? That's a good question,
and I'm sure there's a lot of variations
on what that would be. So, Jean-Claude,
why write a book? Who is the audience?
Who do you think is gonna be compelled
to pick this up? Well, it's always a
story. First of all, I'm not a good writer, and it is why I never wrote a
(04:37):
book, probably.
Many of my friends, like Stan Garfield, he's
an excellent writer.
I was actually doing a webcast at KMWorld 2 years ago, which was organized by Zach Wahl of Enterprise
Knowledge. And I found Zach extremely interesting in
the way he questioned people. So I decided
(04:59):
to listen to some of his podcast,
and I discovered a lady called Moe Weinhardt,
who, is a director of knowledge management for
a company called Mac 49, which is a
VC company.
I think she was extremely eloquent.
So I reached out to her. She, basically
surprised me by the fact that,
(05:20):
she originally was a teacher and went into KM, you know, and there are many ways to get into KM. Yeah. And she asked me, Mister Jean-Claude, you've done so much in this field. Why don't you write a book? And I said, because I'm not a good writer.
And she said, would you be willing to partner with me on this? And I said, yeah. But are you a good writer? And I found out
she was extremely good. Of course, writing a
(05:43):
book, it was my first book, is like a new project.
For me, the audience is what I call
the rest of us. The audience, I can tell you what it's not. It's not a book for knowledge managers. It's for knowledge managers, but on their personal side. The question is what happens after you leave education. In education, you have guided learning.
(06:05):
Once you leave education,
you might get assisted learning from enterprises.
So for example, when I joined Motorola in
1977,
I went into the Motorola Management Program. Thank
god, because I had no idea about management,
marketing. I'm a nuclear physicist and, electronic engineer.
So marketing was not my, core competency,
(06:28):
but then I took a job of product
marketing manager.
Motorola gave me that education, enterprise
assisted education. But then, what do you do
for your self guided education?
How do you continue to learn and grow?
What methods do you have?
What personal hygiene do you have?
(06:48):
It's something that, basically, through my career, I met some people that ignited that passion in me. Yep. And this will to get organized, to have a personal hygiene. I talk about it in the book. The person was Mike Cammie, basically the one that made me think about this. What I hear is personal mission.
(07:09):
Your personal drive. What feeds you and what doesn't. Does that sum it up? Sorry, the word is personal knowledge, continuous learning hygiene. The reason why I use the word hygiene is because we all do, you know, have a personal hygiene. You wash, etcetera, for your health, some form of hygiene. But
(07:30):
how do you
do things regularly
to learn? For example,
do you learn from different source regularly?
Do you curate some of the things that
you read?
Do you summarize?
Do you apply Yeah. In a systematic way?
That right there, you just described to me
what I think leadership is.
Well, that's a whole different subject.
(07:54):
Leadership is the ability to bring people to do things and give the best of themselves. There are many different definitions of that. My wife is an expert on it because she's an executive coach. I'm not an expert on leadership. I just want to say it contributes
to your ability to lead, and I like
(08:15):
to take small example. So let's talk about,
AI, artificial
intelligence. Right? And we've all seen the revolution
of generative AI. AI literacy
is a must.
You cannot be a leader if you don't
have literacy about a new subject. Yeah. What
I mean by literacy, at least understand the
basics. There are still people that are so
(08:38):
confused. Yeah. You know, I see that every
day. I
became the vice president of the Muir Woods Community Association.
And I'm gonna produce
a free course
every month to the community
on generative
AI because I realize people don't understand what
it is.
They may have misconception
(08:58):
and
worse, they don't use it. You're talking about the social structures of the Luddites, the ones that will not accept new technology because they're afraid it's gonna take over. It'll wipe out the economy as they know it because people are gonna lose their jobs. So there are those that are dead set against progress.
(09:19):
So how do you bridge that? And that's
what you're talking about. You try to provide
materials,
education, learning opportunities for them to start seeing
the bigger picture. You know, it's not an either-or. It's also the people who don't have time or are ignorant. I want to, again, make a parallel with the Internet. Okay. I was fortunate
(09:40):
to go through major technological
changes through,
my professional life and personal life. Mhmm. In
the end of the nineties,
when the web came out, I was already
on the Internet since
1981.
So my first email was in 1981.
In the end of the nineties, I was
working at STMicroelectronics.
(10:01):
I was the vice president of IT. I
had
to make the company
understand
the Internet.
So I started with the executive vice presidents, and I sat down with every single executive vice president, including our CEO, Pasquale Pistorio, and I made them touch the Internet.
(10:23):
So we opened a browser on their laptop,
and for each of them, I did something
that was relevant to their functional domain. That
opened their eyes, and then we were able
to create a course
in the ST University. We have our own
internal university
to teach people.
(10:43):
By 1998, 99, we created an open-systems center where we were teaching product managers search engine marketing, how to use keywords, because we had just launched the first website.
I mean, everybody takes for granted website and
all this. We created the first website, and
(11:04):
then suddenly, we had to tell the person
who was doing the data sheet that now
you have to use the keyword field and
put this because the search engine will index
these things. Right? Fast forward
to today, we have generative AI
where the knowledge is democratized. So in 2000,
(11:25):
the information was democratized.
You know, The World Is Flat, you remember the famous book from Thomas Friedman, The World Is Flat. Okay. Now the knowledge is there. So that means that all the explicit knowledge is available worldwide in many, many, many languages.
This is an expansion of function. If we
(11:46):
go back to your example where you sat down with somebody and showed them value that means something to them, hand-held them through the experience in order to build: oh, okay. This is good. Oh, I can use this. Oh, okay.
So the fast forward motion has exploded
from that first experience of handholding somebody to
(12:07):
a website
and showing value to now
where you're saying the generative AI will produce
results beyond your own capacity
that you didn't know of
that could elevate everything. Yes. And you know
what is fascinating?
History repeats itself, with the risks. So
I'm gonna be very transparent here and tell
(12:29):
you about what happened. Okay. We created the first website. We created the intranet, and then I wanted to install a search engine. So we did an experiment. We did a search engine, and I had to present to the executive committee. And the CFO at the time, Maurizio Ghirga, asked me a very, very nasty question. He typed the words
(12:49):
company confidential,
and guess what happened?
A bunch of documents came up marked company confidential. And he pointed at me, and he said, this guy is dangerous. He's gonna, you know, blah blah blah. And I turned back to him. I said, no, I'm not the manager of your people. Right. You know? You know,
today, we talk about,
hallucination.
It's the same story. Garbage in, garbage out
(13:12):
at the time. So it took us 2 years to clean up our mess before we could install the search engine. In
that example, though, he bird dogged and pinpointed
a huge issue.
It's like anything new. You're gonna have nothing
but issues you have to kinda reconfig
on the fly. Oh, we didn't think of
that. Oh, okay. Yeah. We gotta fix that.
(13:32):
Alright. That that's how things work. Right? I
mean, it's an iterative process any way you
look at it. But I I hear what
you're saying, and I don't wanna lose the
the listeners because I wanna get back to
something I had pulled up when you talked
about the literacy piece.
I wanna define for the folks that, in the Cambridge Dictionary, literacy means,
the first definition, the ability to read and
(13:53):
write. The second one is knowledge of a
particular subject or a particular type of knowledge.
So when you're talking about raising literacy,
and I wanna say comprehension, but literacy,
right, someone's abilities,
that is a construct that is really
evolutionary in itself because if you don't have
(14:15):
that mentality
of increasing your literacy,
then you're gonna soon become a dinosaur.
You're soon to become outdated, outgrown,
if you're not literate.
I hear you saying that the key ingredient
is
constant evolution. Is is that is that a
fair statement? It's constant learning.
(14:37):
I think you bring a good point about
literacy,
and I I want to maybe go back
to some of that. Okay. There is this word intelligence, but let's go human first. Intelligences, with an s, and I think this is an interesting thing to think about. There are many different human forms of intelligence.
(15:00):
I mean, think about
Marie Curie,
scientist. Think about an architect.
And so there are
all these different kind of of intelligence. In
fact, in 1983, there's a guy named Howard Gardner, a psychologist, that basically defined 8 distinct intelligences.
(15:20):
Humans have different forms of intelligence depending on who you are. Artificial intelligence has different kinds of intelligence. Pattern recognition is a particular domain of artificial intelligence. Robotics is another domain.
So the first thing is when I talk
about literacy or AI is to understand that
(15:41):
there are different domains of intelligence. And then
for the one that is really hitting us
today, the generative AI,
is to understand that what comes out of a generative AI solution is generated. It is not a regurgitation of a piece of text. Okay. It's not.
(16:02):
So every piece
of text
or image or video that comes out of
a generative AI system
is uniquely created. By the way, that's why
you can't copyright
things from generative AI because the US copyright
law is very clear, and it was tested,
you know, in the federal,
(16:22):
Supreme Court is that you can only copyright
material that is human generated. Uh-huh. I think
it's important that literacy
we have some basics there and then help
the people understand that the training of the
data, which was initially the Internet and books,
etcetera,
contained
wrong things.
(16:43):
And so hallucination is just a normal output.
What I think is probably a good thing to do is to apply the same rules when you seek knowledge from a human as when you seek knowledge from a system. So let me
give you an example.
If I'm asking you a question
on brain surgery,
(17:04):
knowing that you're not a brain surgeon,
I would really question your answer. And so
the ability
to apply critical thinking is there, but you
could tell me a wrong thing. Yeah. Right?
Well, that happens with generative AI. What is
the message behind this? The message behind this
is that, number 1, you need to master
(17:26):
the art of questioning, and I can tell
you a little bit more about that. 2nd
is generative AI is gonna give you some
proposed answers. You need to apply critical thinking
to that answer
like you're doing with a human. Once you get those values understood, you can become very good at it, because
(17:46):
the better you are at the art of questioning, you know, the better your output. In fact, in the future,
the questions are the answers.
Think really hard about this. I totally agree
with you because that is the only essence
of
comprehension
and judgment
that the human in the loop is the
(18:07):
mechanism for. And I think that is an
absolute skill, and I wanna go back to
what we
originally had talked about. And I wanna bring
this to a a small scope here, is
that what we're getting to is the intersection
of personal knowledge
and AI or tech. I mean, either way.
But in generative AI,
everything is suspect
(18:28):
just as any information from anything should be.
But how do you develop better critical thinking?
Yeah. So that's a chapter in my book,
and there will be different
answers for different
stages of your life.
And in fact, I just published an article on my LinkedIn newsletter, which is called
(18:48):
The Art of Possible. It's about my grandchild who was learning about art, and I was invited just to see the art they are producing. And
I found out that my 4 year old
grandson
knew about cubism and
pointillism
and realism.
(19:09):
You know, I was like, his way of questioning me when we looked at something, it reflected how he had already developed some critical thinking, because I do a lot of photography. I show him things, and he was asking me questions, which I did not really understand. Where is this coming from? Yeah. How do you even
know this? Yeah. I think, there are ways
(19:31):
of developing critical thinking
at different stages. I mean, Jean Piaget, a famous Swiss psychologist, developed a methodology for that, which is applied by some schools. You can basically research how to apply critical thinking depending on your stage in life, because I think that's important, the more you know about it. In fact, a
(19:53):
couple of things that are important: this notion of common sense is extremely important. There's
a famous story you can see on the Internet right now regarding generative AI: if you ask generative AI how to make coffee. Generative AI is a software construct that is in one dimension right now. We have to bridge
(20:15):
the physical world and the digital world of
knowledge.
This will be coming so that
the system will understand
the environment
around it. Is there a coffee machine? Is there coffee? So you don't get the same answer if the system doesn't know the physical context. What you're saying is there's gonna
be a spatial element to consider all facets
(20:38):
of the environment
that will be interfaced somehow
into the system. Yeah. In fact, here's what
I predict
will happen, and here's what is already happening.
When I present generative AI, I show the state of the art today. And let me give you some points which are very important. ChatGPT-4 takes the
(20:59):
UBE, which is the National Conference of Bar Examiners' test for becoming a lawyer in the United States. Okay? ChatGPT-4 passes 90% of the test. It was passing 15% of the test a few months before, with GPT-3.5.
So by GPT 5, it will pass 100%
(21:20):
of the test.
It passed 98% of the US Biology Olympiad test, which covers all the biology disciplines.
So you already have
more knowledge
into the system there than any human can
have. So the next thing is that the next experiment was done with an fMRI.
(21:41):
An fMRI is a machine that captures signals from your brain. The experiment was to show an individual a picture, which was a giraffe, capture the signal, feed this as an input to a generative AI system, and ask the generative AI to reconstruct an
image. And guess what happened?
(22:01):
The image is a giraffe. Not exactly the
same, but close enough. So now, fast forward from this, my dreams will become movies during my lifetime.
So what is the thing that I predict
will happen? The biggest transformation
is that physical world to the digital world
through sensors.
There is a revolution
(22:23):
that is about to start
about how sensors
of all kind
are gonna be the input of generative AI systems. And that's, for me, the biggest transformation we're gonna see after generative AI itself: that environment. So we will have that physical environment. We could have a camera. We could have
(22:44):
a sensor for pressure, for temperature, for anything. All these sensors are gonna feed the machine.
That's where we're gonna have a whole new
world. You're creating an environmental computing schema. A digitized landscape where the human is not the center
(23:05):
of the universe anymore.
We are a player in the game. Yes.
It's human intelligence and artificial intelligence in symbiosis.
We have to create
that symbiosis
for the good of the humanity.
But is that the tipping point to where
technology
has the upper hand? Is that a tipping
point to where the power shift will go
(23:26):
from the human in the seat, the human
in the loop,
to the digital
is driving? Okay. Well, so here, you're touching
the question of consciousness
or not. I don't think we are, at least not in my lifetime, gonna see systems that will have consciousness.
But we will see
systems that would be
(23:48):
knowledgeable enough to help us as humans there. We need to find what is the right question to ask. Is the question, will artificial intelligence replace humans? The answer is no. Okay. Why?
Because
the human is far more than intelligence.
A human is more than intelligence.
(24:09):
Now can an artificial intelligence solution replace some human tasks?
And the answer is yes. When we talk
about the fear of AI on a job,
and, again, I invite you to go to
one of my newsletter,
there are three things.
Every job is a sum of 3 kinds of tasks.
(24:30):
So there are tasks that are repetitive.
Those
are primary candidate
to be
replaced. And who likes to do repetitive tasks? I worked in a factory when I was 16 years old, during my vacation, where I was putting in a little piece of brass, and I was just doing something to that piece of brass. For 2 weeks, I
(24:51):
did the same thing. Okay. The automatable tasks will go. Second thing is that your tasks will get augmented.
So think about the power of the first
draft of generative AI. You want to write
a letter. You want to make sure it is very inclusive, for example. So you ask generative AI to
(25:12):
generate a very inclusive note. That's your first draft, and then it's up to you. And you want a picture. I wanted a picture for my Christmas letter. It created a picture through a prompt in one second with DALL-E. Right? There is augmentation of the tasks, and then there are new tasks.
I did something interesting: I
(25:33):
was looking at how many jobs for prompt engineering existed on LinkedIn, and there were, let's say, 10,000 in the US. There were none the year before. Right? What we see happening, like with every technology, is a lot of brand new tasks. Yep. That are created. So, again, three things. Tasks
(25:53):
that will be replaced, like, with any technology,
you know, steam engine electricity,
any new technology
makes replacement of tasks, augmentation of tasks, and
then creation of new tasks. So
if you are a leader in a company
right now and worrying about the impact of
AI, just apply this rule.
(26:13):
Which of the tasks that my company is performing can be automated? Yep. And this
is why
the prime impact of generative AI right now is in customer service. That's
the number one function that is impacted by
generative AI.
Because, in customer service, the turnover
(26:33):
of the customer service agents is very high,
and the knowledge half life is also very
short. That's another component that is happening right
now. We have 2 phenomena. One, we create an enormous amount of data daily. I think it's 300,000,000
(26:53):
terabytes of data created daily, which means 90% of all the data that we have was created in the last 2 years. It's just absolutely enormous.
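As a rough illustration of what those two quoted figures imply together, here is a minimal back-of-the-envelope sketch, assuming the total data volume grows smoothly and exponentially (the growth model and the rounding are assumptions, not something stated in the episode):

```python
import math

# Back-of-the-envelope check of the quoted figures, assuming the total volume
# of data grows smoothly and exponentially: A(t) = A0 * exp(r * t).
# The share created in the last T years is then 1 - exp(-r * T).

T = 2.0            # window quoted in the episode, in years
share = 0.90       # share of all data said to have been created in that window

r = -math.log(1.0 - share) / T              # implied growth rate, per year
doubling_months = math.log(2.0) / r * 12.0  # implied doubling time, in months

daily_tb = 300_000_000                      # ~300 million terabytes per day, as quoted
yearly_zb = daily_tb * 365 / 1e9            # 1 zettabyte = 1e9 terabytes

print(f"implied growth rate: {r:.2f} per year")                 # ~1.15 per year
print(f"implied doubling time: {doubling_months:.1f} months")   # ~7.2 months
print(f"quoted creation rate: ~{yearly_zb:.0f} ZB per year")    # ~110 ZB per year
```

Both outputs are only as good as the quoted figures, but they make "enormous" a bit more concrete.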
Then, at the same time, the half-life of the knowledge is decreasing. Explain what you're saying. What are you saying: how
(27:14):
do you mean the value, the credibility, the usability?
The rate of innovation
is exponential. Let's take a practical example. Let's
say that you are a customer service agent.
You are supporting iPhone 15, 15.1, 15.2, 15.3.
You see, there is new knowledge created, so
(27:34):
the half life of the knowledge, if you
have 15.3,
what was on 15.2
is no longer important.
Need that. Not important. Not usable. Nobody cares.
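Purely as an illustration of the half-life idea behind that example (the half-life value itself is an assumption that has to be estimated per domain, for instance from a product's release cycle), the usual decay relation can be written as:

```latex
% Illustrative only: usefulness K of a piece of knowledge after time t,
% given an estimated domain half-life t_half.
K(t) = K_0 \left(\tfrac{1}{2}\right)^{t / t_{1/2}}
```

With a half-life of a few months, as in the support example above, most of what was captured for one release is effectively stale by the next.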
Moving on. Half-life of knowledge. That's why, when you design a knowledge management system, you have to do a matrix, which is strategic versus operational,
(27:55):
and you have to put in the half-life of the knowledge. Are you saying we need
retention policies in order to keep our data? Somehow we have to filter. Somehow we have to delete old stuff. Right? Yeah. So I think you're touching the point of knowledge. So what is knowledge management for me? Knowledge management is a process.
The fact is that it's
(28:19):
still not considered as a function. Why? When
you create a start up company, you create
a marketing job, a CTO job, a sales
job. You don't create a CKO job. It's
not there. Now is it important? Yeah. It's
across all these functions. Right. Second of all, what is that process, what are the key elements of this process?
(28:40):
One is creating, reusing, growing, and retiring. You said, you know, knowledge retention. Yeah.
So I think you have this life cycle.
I mean, it's interesting that when I was
at Microsoft, I spent a lot of time
with the person in my team responsible for search, because I was really into the
(29:01):
statistics on search, to understand what people are searching for. Right. But, also, what is the knowledge? And we had data that had never been cleaned, that was more than 10 years old.
So we went through an exercise, and if
you leave that data, it pollutes the system.
So one of the pieces of advice that I give
(29:22):
people who are focusing on explicit knowledge in the enterprise, and especially as they go with RAG, retrieval-augmented generation, is to make sure that they have a data quality program so that, on an ongoing basis, they clean the data. I think a lot of organizations skip that part or don't do it well.
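A minimal sketch of what such an ongoing cleaning pass could look like before content is fed to a RAG index; the document fields, domains, half-life values, and duplicate rule below are hypothetical illustrations, not the guest's system or any particular product:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from hashlib import sha256

# Hypothetical sketch of an ongoing data-quality pass run before documents are
# (re)indexed for retrieval-augmented generation (RAG). Every name and threshold
# here is an illustrative assumption.

@dataclass
class Doc:
    doc_id: str
    text: str
    updated: datetime
    domain: str                      # e.g. "product-support", "policy"

# Assumed knowledge half-lives per domain, in days.
HALF_LIFE_DAYS = {"product-support": 90, "policy": 730}

def clean_for_indexing(docs: list[Doc], now: datetime) -> list[Doc]:
    """Drop stale and duplicate documents so they never pollute the index."""
    seen: set[str] = set()
    kept: list[Doc] = []
    for doc in docs:
        max_age = timedelta(days=2 * HALF_LIFE_DAYS.get(doc.domain, 365))
        if now - doc.updated > max_age:
            continue                 # stale: candidate for archiving, not indexing
        digest = sha256(doc.text.strip().lower().encode()).hexdigest()
        if digest in seen:
            continue                 # near-verbatim duplicate
        seen.add(digest)
        kept.append(doc)
    return kept

# Example: only the recent, unique support note survives the pass.
now = datetime(2024, 6, 1)
docs = [
    Doc("a", "Reset steps for release 15.3", datetime(2024, 5, 20), "product-support"),
    Doc("b", "Reset steps for release 14.0", datetime(2022, 1, 5), "product-support"),
    Doc("c", "Reset steps for release 15.3", datetime(2024, 5, 21), "product-support"),
]
print([d.doc_id for d in clean_for_indexing(docs, now)])   # -> ['a']
```

A real data-quality program would also check ownership, accuracy, and access rights; the point here is only that the cleanup is scheduled and automatic rather than a one-off migration.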
If everything is live data, if everything is
(29:44):
at the top shelf, then you're right. Absolutely, two thirds of it could be chucked and
nobody would ever miss it. Yeah. But conversely,
somebody has to make a data decision
if this should be archived
or not. There's gotta be something that is
in place that has a management
and, like you say, life cycle
process in order to keep it from piling
(30:07):
up, or we're gonna be covered with old
stuff that has no relevance. Yeah. It's interesting.
So as an IT person, a responsible leader, I had to adhere to archiving policies, you know, European Union, US, etcetera.
In my whole career, I'm talking about, you know, at least 35
(30:28):
years, 40 years, okay, I had 3 or 4 times a request to retrieve information, actually, for legal requirements.
So, yes, you can do this kind of
thing and keep the data for 10
years and so on. The cost of storage
is minimal.
(30:49):
The important thing for me is to put the emphasis on strategic knowledge in the future. So that's the knowledge that you build innovation from, because innovation is the reuse of existing knowledge in a
different domain. So innovation
creates value,
creates new sales, creates new markets.
(31:10):
Operational knowledge basically reduces cost, because you reuse something that you know how to do. It's more predictable. Like, if you do a project, you would have a better ability to say it's gonna be done on that specific date, at that specific quality, and so on. So we need to separate the two, and
(31:32):
that's one thing that I see many times: people are getting confused, or they don't apply the differentiation between strategic and operational. Knowing that, for operational knowledge with generative AI, the bar has been reset. So if
you are a consulting company today,
your bar is reset. Let me give
you an example. I'm having a hard time
(31:53):
tracking the difference. I understand what you're saying.
There's a difference between operational and strategic.
But gee whiz, who's got the slide rule
to figure that out? I mean, it the
relevancy
of what is operational versus strategic can shift
on a dime, I'm thinking. Well, all I'm saying is, who's gonna determine, what's gonna determine, what's operational and what's strategic?
(32:16):
Because I think you could reach into operational
data to get strategic insights.
Operational data to get strategic insight. Yeah. So it becomes strategic when the solution becomes the question. Uh-huh. It's operational when the answer is the solution. It's how the human
(32:38):
questions the system again that makes it strategic versus operational.
But
the ability
to bring the knowledge is what needs to
change.
We have been focusing, as an industry, on a pull mechanism, which is: we want the users to go to the knowledge. That's what you
(32:59):
do when you use a search engine. What I think the value is, is to be able to push the knowledge in context.
When you do that,
you are
really accelerating
both strategic and operational knowledge.
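A rough sketch of that pull-versus-push distinction, assuming a hypothetical knowledge base keyed by topic tags (nothing here reflects a specific product or API):

```python
# Pull versus push access to knowledge. The knowledge base, the tags, and the
# "working context" below are hypothetical illustrations.

KNOWLEDGE_BASE = [
    {"title": "Copilot tenant configuration checklist", "tags": {"copilot", "configuration"}},
    {"title": "SAP rollout lessons learned",            "tags": {"sap", "erp"}},
    {"title": "Search relevance tuning notes",          "tags": {"search", "ranking"}},
]

def pull(query_terms: set[str]) -> list[str]:
    """Pull: the user goes to the knowledge, e.g. by typing a search query."""
    return [item["title"] for item in KNOWLEDGE_BASE if query_terms & item["tags"]]

def push(working_context: set[str]) -> list[str]:
    """Push: the system watches the user's current context and volunteers
    matching knowledge without being asked."""
    return [item["title"] for item in KNOWLEDGE_BASE if working_context & item["tags"]]

# Pull: an engineer searches explicitly.
print(pull({"copilot"}))
# Push: the same engineer opens a configuration task and relevant notes surface on their own.
print(push({"configuration", "tenant"}))
```

The retrieval logic is identical in both functions; the only difference is who initiates it, which is the point about pushing knowledge in context.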
For some reason, I see many companies that
use
(33:20):
excuses
of privacy
policy to not do that. And what they
confuse
is the what versus the how. Let me
take an example. Let's say that I'm a
consulting company, and I have a project with,
Bank of America. And I have another project
with another team with Wells Fargo. And the two projects are about
(33:42):
installing Copilot, to take something very up to date. The question is, when you install Copilot and you are an engineer and you have to learn the configuration and so on, that's not competitive. That know-how should be able, yeah, to be shared. Not the fact that they're gonna use Copilot
(34:02):
to generate automatic profiles of the consultants.
It's important we put these elements, strategic versus operational and push versus pull, into a matrix and start to make some decisions about where you invest.
I work with large companies, and when I look at the investments, and I had this discussion with a client who became a
(34:24):
friend of mine. Mhmm. It is about search engines. Every company has enterprise search, right? And all the search engine vendors are doing this now. But you can use generative AI. You can use RAG, okay? What's the incremental value versus the cost? That is unproven at this point. That's the question you need to ask yourself,
(34:46):
because if you think that, through generative AI, the knowledge level has risen to be more available everywhere,
the tacit knowledge
is the one you should be focusing on.
Alright. I think in the enterprise, we're not focusing enough on tacit knowledge. And I think as an industry, there are not enough solutions
(35:08):
that are innovative enough to do that, because, as you know, people are busy. They don't want to share. A
query is only good if the data and
information is expressed.
That has no value for tacit knowledge unless
we get a system in place that builds
tacit knowledge.
And that, I think, I agree with you
is that most organizations will not put resources
(35:29):
towards
building a tacit knowledge bank in a good
way. I just don't think they see the ROI anywhere in the near future, so they
just say, yeah. Well, you know, people come,
people go, you know. There's a lost revenue
there
that I think is bleeding most companies dry.
Yeah. And, you know, I would say, fortunately,
communities of practice, the human solution is still
(35:51):
the best solution today. Yeah. And especially when
you go to the retirement. I mean, 10
years ago, there was a big retirement problem
in the oil and gas industry, and these people were retiring and going with the knowledge. So I think community of practice is still extremely important for the enterprise to capture
(36:12):
the tacit knowledge. I'll say that a community
of practice, if that's your crutch of collecting
tacit knowledge, you're not doing enough.
Community of practice is as good as the
people that participate
in it. Not everyone that participates has the
golden critical knowledge that the organization needs
on some occasions.
(36:32):
I think the community of practice is a
great buffer to help, but I don't think
it's a solution because I'll couch this, see
what you think. I'll propose that unless you
have a protagonist,
unless you have somebody
that, in my case, is not an interrogator,
but definitely somebody that picks apart what people
are saying to get to the deeper
(36:54):
knowledge, the stuff that they don't have even
on the surface of their own brain.
Somebody needs to poke and prod in order
to pull out
the real tacit knowledge that could be
proven to be a critical piece. Okay. So
I, was fortunate
to take a job as a chief knowledge
officer at Microsoft
when the knowledge management program was, already in
(37:17):
place for more than 13 years.
And the strongest part of it was the
community of practice.
When I left, we had, at Microsoft,
100 communities of practice with 56,000 people.
The median time to respond
was less than 1 hour for 30 of
those communities
(37:38):
from people that don't know each other. Yeah.
Yeah. So APQC has defined 3 categories of human knowledge, to make it simple. Levels of knowledge. One, you are a novice. You don't know anything about the subject, so you have to ask. Second of all is the expert, and then you have in the middle what they call the nextpert, the people that know enough about
(38:01):
something and can answer the novices.
And the nextpert can ask the question,
the intelligent
question to the expert. So, again, we go
back to questioning.
It's a fundamental
domain
to master, and it's extremely important. In fact, on questioning, I would recommend listening to Dana Kanze.
(38:23):
Dana is an assistant professor at the London Business School. She has a talk where she made an analysis of 2,000 entrepreneurs' questions for VC funding. What is really interesting is that 67% of the questions posed to the male entrepreneurs
(38:45):
were promotion-focused questions, versus 66% of those to the female entrepreneurs, which were prevention questions. As a result, male entrepreneurs got 7 times more funding than female entrepreneurs.
Her talk is fascinating. I think this whole questioning,
(39:08):
for me, is a fascinating thing. You
keep leading like questioning is the answer for
all things, but I gotta say, if you're
not listening, it doesn't matter what the question
is. Oh, okay. Right? In order to generate
new questions, you gotta have the comprehension and
just the juice flowing up here in order
to create a question that applies or digs
(39:28):
deeper. Yeah?
You have to have that interchange. In conversation theory, that's what it's all about: you're gonna generate new knowledge just in the art of that conversation. You're
absolutely right, and it's about another quality related to this. I talk about critical thinking, but also active listening.
(39:49):
One of the things that helped me throughout my career is humility. So not being afraid to say, honestly, I don't know. I don't know.
Could you please explain? I think this is
so important.
If you want to continuously
learn and grow, be humble, and don't be
afraid to say you don't know. People are
(40:11):
willing to help. I think that's a good
address to all of society
because it is a human function that we
desperately need more
of. Humility and being humble in your place
and being fair to yourself
and to others to say,
I don't have the answers. And so so
now we're talking collaboration. Right? So now we
(40:32):
have to be open to collaborate in order
to do any of the above. It's interesting.
I developed a little model
about access to knowledge. It's a little visual
where one arrow goes to systems, you know, like, which system do we use, etcetera, like browsing, searching, etcetera.
And the other arrow goes to the people.
(40:54):
There are 3 categories
of people.
The community of practice of your company or
enterprise social network, whatever you call it. Okay?
So you seek knowledge through these groups.
The second one is the communities of practice or interest of the industry.
And the third one is your own personal
network.
(41:14):
What I said about this is that you
are as strong
as your personal
trusted network.
Because when you need something
real deep, you're gonna pick the phone and
call a friend of yours that you trust
or not even necessarily a friend, but a
person that you trust. When I was VP of IT in my early days at ST,
(41:37):
we had to implement an SAP system, which I had no knowledge of. But I called my counterpart CIO at Siemens, who I knew was implementing SAP, and I learned from that CIO all the experiences and what to ask of my vendor. That
ability to have a network and be able
(41:57):
to call somebody
in your network and cultivate that network because
cultivating network is important.
Cultivating the network means that you have to
be willing to give knowledge to them
continuously.
You see something that you think this person
could use, send them an email, send them
a text message with a link to that
(42:19):
video and so on for no reason, for
just the reason of sharing.
But maybe 10 years down the road, you
need that person, and you're gonna call that
person. They'll remember you Yeah. And they're willing
to help. So what you're talking about is
being the good Samaritan
or a good steward. This is stewardship
practices. You're feeding
as you eat. You are sharing as you
(42:41):
learn. You are a combination of things,
and you're in community
with your community.
And that's where I think you're hitting gold
because I think a lot of organizations don't
foster that. Yeah. But I think at the
end of the day,
you,
should take responsibility
for your own life and in particular for
your own learning.
(43:03):
That's something I do with my network, and I'm a pack rat. Every time I meet somebody, I put the name of the place and the date into the notes section. And I have people going back to the start of my professional career in 1977. So you can imagine how many names I have in the system. Take responsibility
(43:24):
for building your network. You don't need a
company to tell you what to do. So
you're talking about personal responsibility.
That is a piece of good leadership. If you are, and I hate to use the term, if you are a master of your domain,
then you can be better prepared
and ready to roll. Yeah. It's like any
situation in life. We've seen that
(43:46):
preparation is a good thing. It reduces stress. It reduces mistakes. It does not avoid them all the time, but at least it is a great factor in reducing those negative effects in life. We've wrapped up the show
with a whole bunch of elements
all balled up into just
(44:07):
progress. I'll just label this as personal progress. It takes a lot of work, and it's not
easy. And if you don't continue it, if
you don't share it, if you don't develop
it, then progress will be beyond your reach.
Yes.
I left him speechless. Yeah. I I go
back to my passion. Okay. I'm passionate about
(44:28):
electronic and software technologies as key enablers of world progress. And this is why I'm so hungry for learning every day, to actually feed myself in terms of passion. And for me, it's just,
you know, being able
to reinvent yourself in your career. The top domain
(44:49):
right now, which is AI, is something that happened because I'm constantly able to reinvent myself and adapt, learn, and grow with it. My way of being happy in life professionally is to feed that intellect with continuous learning and new things. And
(45:10):
and I read not only about AI. I
read about biology. I read about many domains.
Actually, one of my recommendation
is that when you do develop your personal
knowledge
routine or knowledge hygiene,
do it in a way that is very
diverse.
I have something called TED Tuesday. So every
(45:31):
Tuesday at lunch, I watch a TED talk.
I don't select the TED talk. I go
into TED, the application, and it has a surprise-me option. Well, it does ask me how much time I have: 5 minutes, 10 minutes, 15 minutes. And then I learn about things that I would never have... Something totally... yeah. Exactly.
You're playing knowledge roulette. Right? You're just
(45:52):
like, I'll take what you send me. I'm open. I think that there is
a value,
a very, very
important value of diversity of thought
And being able to apply also this to
knowledge
access and knowledge acquisition and learning
is key. The world is so complex right
now. Think about
(46:13):
what's happening in the war. I have friends who are Russian, and I talk with them
to understand how they view things.
You know, they they're not very happy right
now as you can imagine, but think about
what's happening in Palestine
and Israel.
Understanding the different points of view of those people also, and not jumping to conclusions
(46:35):
immediately,
being able to form your own opinion based
on diversity
of thoughts is very important. I can't agree more, my friend, and thank you very much for being an integral part of our concepts and education and learning around knowledge.
Well, thank you for what you're doing in spreading diversity of thoughts from those people
(46:56):
that you interview, and I hope I helped,
add water to your well. Oh.
I'm liking that.
You've added good water to this well.
Thank you for joining this extraordinary journey, and
(47:18):
we hope the experiences gained add value to
you and yours. If you'd like to contact
us, please email
bynpk@pioneer-ks.org, or find us on LinkedIn.