Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:13):
Welcome in to another Because You Need
to Know. This podcast is designed to
share learnings and understandings and experiences from those
in knowledge management, nonprofit work, and innovation. Today,
we're gonna step into something that's pretty sci-fi.
I have proposed the idea: how do
I create a digital twin of me
(00:35):
doing what I do? Today, we're gonna find
out. I brought in two guests from the
European Space Agency: Marcel Henriques and Andrew Hurd
will be leading this conversation
as we all learn how we can do
this.
Welcome to the conversation.
(00:56):
Say your name and your title. Andrew Hurd.
I'm a senior knowledge management engineer at the
European Space Agency. Marcel Henriques,
managing director at Red Data Solutions.
In this conversation,
I really wanna touch on what's possible,
what might be likely
based on your experience
doing this kind of stuff already.
(01:17):
I do a podcast with folks all over
the tech and knowledge management and entrepreneurial
kind of world and nonprofit.
So I touch on different things with different
capacities
as I have been doing this series,
Because You Need to Know, it has been
mentioned
and commented on multiple times
(01:39):
that I have an ability,
a capacity
to get into
deeper, critical knowledge that
may almost be unconscious
to the guest.
The art that I bring to conversation
is elucidation,
meaning I can hear and
(02:02):
reformat
whatever was just said in a different way
in order to confirm or deny that's what
they mean
and, ultimately,
to make it clearer
for the listener. So I am an active
component. It's not an interview where I just ask
a question and they give me an answer in
this whole back-and-forth thing. It's not
a deposition.
(02:22):
As Andrew said very clearly, I jump around
a lot. And I think that adds to
a level of
authenticity
to the conversation
because
they don't know what to expect,
and I can kind of drive them verbally
into an
avenue
they're not quite seeing because they're on the
stage. But I can kinda steer them towards
(02:44):
something
that may be an aha moment. My intent,
as a legacy piece, something futuristic for
Pioneer Knowledge Services,
is to have the capacity to build a
model
of how I operate: a digital twin,
an artificial intelligence, an avatar,
something that will be suitable
(03:07):
not just to do the function,
but to build the trust in order to
gather
that tacit knowledge. That's the first and foremost step.
But the interactivity
to have levity,
to have elucidation
capability, and not just to repeat, not
to mimic and just repeat what they say,
but to reformulate it. Build a metaphor out
(03:29):
of what they just said and really play
with it like Plato and build something different.
I'm open for who wants to talk next.
Okay. If you don't mind, I'll kinda jump
in, because the way you described what
you do,
I would propose that it's almost the art
of what you do.
And I think that's actually quite an important
(03:49):
consideration.
You know? Is what you do an art,
or is it, like you say, a process?
Is it discernible? Can we make it scientific?
Can we create the Edwin equation
rather than it just being stylistic?
You know, there is an art
to what you do, and there's probably
both. One of the things, when you
raise this topic, that I
(04:10):
try to go back to is, you
know, what do we have at the moment?
What do you have at the moment at
your fingertips? And that would be the series
of podcasts that you have, and how might
that be used in order to find this
style? I think it's an interesting challenge. I
mean, that's obviously why we're here today. I
think what both, Marcel and myself here, you
(04:30):
know, we will try to bridge that gap
between your vision
and the reality that we know and also
what you already have, and that clearly one
of those things is the podcast archive. I
think there's a good basis there to get
something at least formulated in a learning profile.
Yeah. I agree. What I would think is
that I would not use anything I've produced.
(04:51):
I would go to all the raw material
because I edited out a lot of stuff
to the finished product.
And I think you would want the whole
enchilada
in order to get all the intonations,
where we went in conversation.
The word that came up and I've used
it constantly since we published last year with
this idea of the podcast in an organization
(05:12):
to generate tacit knowledge.
I consider myself a protagonist.
That's my job.
I'm not an interviewer. I'm a protagonist. I
listen. I dig. I poke and drive the
conversations.
Take that with a grain of salt as to
what I think would be needed for something
to be super effective
in a digital space
(05:33):
that could actually bring goodness in an organization
or
with data outside of an organization. What I
find interesting is what you're saying
about,
who you are,
how you work, how you get people to
make other
assumptions or get that.
(05:53):
I think what's interesting for me, as, well,
mainly the one translating something
of an idea to an actual working software
or system is what is actually happening there.
I think you are able to do that
because you have
experience with talking to hundreds of people
(06:13):
from all different kinds of areas, all different
kinds of work capabilities, etcetera, etcetera.
And that's, for me, that's basically the bridge
to the system that we actually built for
Andrew because our system also has
not one data source in it. It has
nine different data sources in it, which makes
it possible
to start
(06:34):
or at least, I'm not gonna say it
makes it possible to have an actual conversation
with you and to poke
or to ask the question in a different
way or whatever,
but it has the ability to
let you search on. Like,
I don't know if you noticed, but if
you go to YouTube, you always start with
one video and you end up three hours
(06:56):
later,
in a completely different place, because it allows you
to keep on
going. And I guess that's something that you
also do. It makes you a good podcaster,
and that's something that, in the long run,
you would also like your system to be
doing. Are you saying that it would probably
just be easier to digitally hook me up and
map my brain and then just use that
as the cortex?
(07:17):
No. Because you have processed
so many
interpretations,
opinions from people. You have all processed them.
And those are all somewhere where you can
say, okay. Now I need to ask this
question
in order to get that,
flow going on the other side.
It's, I mean,
(07:38):
sometimes you don't even have to. Oh,
okay. So hearing you both, one of the
things that Clark does,
and you don't necessarily do, Edwin,
is know
who you're talking to. But Clark
does. It knows
the individual's name. It knows the individual's
(07:59):
line manager. It knows their job function. It
also knows something about their profile.
And in a way, that gives the machine,
if you like, some insight
to answer questions that haven't even been asked
yet, but provide useful and related knowledge. That's
the first thing.
So in principle, we could say, you
know, part of this capability
(08:20):
would maybe be best served by having a
short, I don't know, bio or something fed
into the machine that says,
it will help me interact with you if
there are some basic things I know about
you, and that will make it seem more
natural and and flow easier.
The other is that when we started this
Clark development, which is basically a development that
(08:42):
uses knowledge graph and chatbot
to link a user with a knowledge base,
and it tries to establish, is that knowledge
base, let's say, more knowledgeable? Does it have
content
that the the user does not have, or
indeed, does the user have content that the
system doesn't have, and can we create that
two way dialogue?
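The two-way gap check Andrew describes, whether the knowledge base holds content the user lacks and vice versa, can be sketched minimally. The function name and topic data below are invented for illustration; Clark's actual knowledge graph would be far richer than plain sets.

```python
# Minimal sketch (invented names/data) of the two-way gap check:
# which topics the knowledge base covers that the user doesn't,
# and which the user could contribute back.

def knowledge_gaps(base_topics, user_topics):
    """Return (base_only, user_only) topic sets."""
    base, user = set(base_topics), set(user_topics)
    return base - user, user - base

base_only, user_only = knowledge_gaps(
    ["lessons learned", "taxonomies", "knowledge graphs"],
    ["taxonomies", "communities of practice"],
)
# base_only: topics the system could teach the user
# user_only: topics the system could capture from the user
```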
(09:03):
In trying to come to that solution,
Marcel from Red Data asked me a very basic
question, and that is, how do you define
AI? And in this case, I think
we would also need to do the same,
answer the same question
for the podcast. You know, you're saying, okay.
Give me AI in podcast because I want
basically, let's say, an intelligent being. Well,
(09:25):
what we had to decide was would we
be driven by
intelligence as we know it, or
would the system be well informed?
And that is two sides of a coin.
So when you think about it, you could
have a very intelligent system that's very good
at asking questions,
but those questions may be quite, let's say,
(09:48):
poor in terms of content. So it has
to be both. And what
we found with Clark
was we could overcompensate
with being well informed, and that's what Marcel
means about these nine different databases.
We made sure it was very well informed,
and therefore, we weren't so reliant on the
intelligence of the system to somehow derive knowledge.
(10:11):
It was more accessible. And
I don't think it's necessarily that you go one
way or the other,
but I think it's important
when you're looking at a solution deciding whether
or not it's suitable.
I think that's a good criterion to judge
it on.
Is Clark considered a generative AI, or is
it just a search and find kind of
(10:32):
thing? At this point, it's a search and
find. Yeah. And that's also one very
clear
requirement set by ESA
because we need the users to get the
actual knowledge. No fabrication.
No statistically generated answers.
Exactly. It needs to be the actual knowledge.
Okay. Do you think that this application
(10:52):
is more of a generative
AI approach versus what Clark has a traditional
feel for? I would say it's interesting
just to hear you both talking. Edwin,
you mentioned something in your sort of
prologue, which was actually really
significant, and it touched on on a few
things. But
one of them is trust, and I think
that's really important. So I think, ultimately, what
(11:14):
you want is a trusted conversation.
The solution that you present or indeed even
the dialogue that you have with the user,
I think, fundamentally
needs to be trust based.
And in that regard, you can actually use
the retrieval,
just simple retrieval of existing knowledge, say Mhmm.
(11:35):
To create that trust, but also then to
further expand. And let me give you a
classic example of how you work, Edwin. You
know, you and I have worked on multiple
podcasts.
One of the things you do is you
listen for keywords,
and you go and find a definition.
And that definition
then creates a basis of trust. You have
gone away to go and get the definition.
(11:56):
You are quoting the definition,
and it promotes further exploration.
So that is a simple retrieval, but it's
a really important way
of creating sort of factual dialogue,
but also then stimulating a further conversation
maybe about
the terminology that's used in that
(12:16):
or the range of,
let's say, aspects within the definition Mhmm. That
then either clarify,
as in refine,
or expand. Sometimes the definition can cause you
to expand the dialogue,
and that is just simple retrieval. But I
think in the sort of, let's say, paradigm
of what Edwin is, I think that's really
(12:36):
important. Yeah. Yeah. I'll give you the tie-back
to where that kinda is established
from. So I'm an old US Army soldier;
then I got into military intelligence with the
US Army.
In every doctrinal piece that the US Army
has and produces and trains soldiers with,
they have a glossary. There's a glossary for
everything.
And it is such a basic thing that
(13:00):
I see most organizations
don't bother with
that you get confusion just in conversation because
words don't mean the same thing to each
person.
And in the work that I do,
it's a selfish thing. When Andrew says
that I go fishing for what the hell
does that mean, what is the actual
meaning of that? I think I know what
I think it means,
(13:21):
but I use it to clarify for myself
that, oh, okay, this is what we're
talking about. So I like that idea of
having that
cast back to a source to help drive.
And I like the way you said it,
Andrew, is that you either clarify or expand.
And they usually end up being a pivot
point
for the next jog in the conversation, which
(13:42):
I love doing because it brings a point
of, oh, I guess I didn't realize it
meant exactly that.
For me,
another nice thing about it is it brings
a third perspective into the dialogue.
So the dialogue is typically between, you know,
the interviewer or protagonist as you say and
then the expert. But if
you bring in a third perspective
(14:02):
Mhmm. And the way that it's done is
this is new to me and it's new
to you, so there's no ownership. It's not
Edwin saying this is the way things shall
be understood.
It's offered. So there is an element of
humility in there, and I think that's also
part of what the tool should be. And
we know from research, even back in the
(14:23):
sixties,
that human beings, we
are very
easily and very willing to attribute human
attributes to
talking robots. We look for that,
and therefore, we're actually seeking that dialogue, and
we are open to it. So I think
that's something as long as, you know, you
(14:43):
don't stretch
the authority concept that the protagonist is
somehow the authority, but they are simply trying
to enable
knowledge to be talked about. And, yeah, I
think this concept of humility and offering opinion
to expand thought, and it really is this
concept of, you know, how can you create
dialogue to expand thought. I think that's important.
(15:07):
And basically, you are,
of course, talking about two different things. So
generative AI
is the way it produces an answer.
And then you have another part which says,
I want to mimic the way Edwin does
his interview. Okay. So those are, in my
opinion, two different things. You can also
have, let's say, a separate system or a
(15:27):
separate model that says, I want my bot
to talk exactly like and mimic
Edwin's way of interviewing,
exactly like he's doing now, complete
with wording, with intonation, etcetera,
but still
be completely clear about
where is my answer coming from. Is it
(15:48):
generated? Is it statistical
or whatever? Those are, in my opinion,
you can see those as two
different things. I wanna bring up a piece.
I was teaching a class once at a
university,
and I came across for the first time
I follow tech. I'm not a tech
person per se, but I look at
what's coming five years from now. And there
(16:08):
was a service, and I think it was
out of England,
that provided digital ghosts for people. Have you
heard of this? No. Go ahead. So it's
freaking phenomenal. And this was probably seven years
ago. And I don't know if I got
the right term. I just looked for it
and I can't find it. But the concept
is this, somebody you love passes. You can
hire the service to ingest all their texts,
(16:30):
all their emails, all their voice mails, anything
recorded
from that, and it will create a generative, I don't
know if it's generative, but it's responsive. It's
somehow responsive. So you can text your digital
ghost and it creates this whole link in
your head of Right. Oh, he's still here.
You know? Because it uses the same language,
uses the same style,
(16:51):
and that really kind of blows
me away.
And just as long as we're talking sci-fi
stuff, and I guess that's not sci-fi,
because that's real. And I don't know
how good it is, but still, you know,
somebody's doing stuff like this. The other piece
is that there is a sci-fi book
called Altered Carbon.
I've read the book and I've watched the
series.
(17:11):
Altered Carbon has the precept that you can
upload
your being to a disk and then the
human bodies are just shells.
So you could basically live forever. You
can reproduce the same body, or,
the technology is that you can needlecast
to a foreign country like that, because it's all digits,
and then occupy a shell of a body
over there, do your business, and then needle
(17:33):
cast back home. And the concepts are just
phenomenal.
And I'm way off track here. Just,
I got excited.
But do you think there's a possibility,
as long as we're talking this, can you
duplicate
a person's
personality Yes. Digitally? As long as you have
enough data, you can duplicate. Yep. Interesting.
And and with the new technology, you can
(17:55):
also have the same voice.
So now there are specific
services that can really create your voice. Okay.
And there's also technology that can make it
as an image, either an avatar or
a real-looking image
where you probably have to have someone
mimicking how you are,
(18:16):
so it's recorded, and then they
put your face on it.
So that's all possible already. And I think
those are,
interfaces,
and I think those interfaces are going to
be more mainstream within a couple of years.
Yes. The intent of the conversation is to
prompt knowledge to be expressed by the expert,
(18:37):
and that's actually a relatively
limited conversation
in reality. It's actually quite a niche
conversation.
So I think the triggers that you're talking
about, that's important.
You know, some triggers might be from this
bio.
You might imagine that, you know, in some
cases, individuals might have written documents or papers
(18:59):
or you know? And all of that could
be pre ingested.
No. They might not. There might be a
blank page, but, you know,
there there may be a way of sort
of pre ingesting.
And and I liked your idea of this
sort of, you know, digital ghost. You know?
There may be a way of doing that
to trigger or allow the the bot, the
avatar,
to have this because all it needs to
(19:20):
be is
trustful and meaningful.
You can have conversations with people, you know,
in the bus stop or something. You have
a conversation. You say, you know, that was
a great conversation. They learned about 0.001%
of what you know, what you are, and
do you know what I mean? So if
the expectation is this dialogue,
and I think the concept of protagonist
(19:43):
is really just the way that you formulate
the questions
and you respond to the answers. So I
think it's language used.
And for sure, as Marcel says, you know,
the kind of verbal techniques
that you use, I think, could be copied.
You know, you might be disappointed with
that, but I think they could be. I
dug a little more. They're called grief
(20:05):
bots.
Grief bots
for your dear, passed-on loved ones. So I
guess
what we've got right now, what would be
an experiment
that could be at least looked at? Now
we're a small nonprofit. It's not like I've
got a war chest of funds. Could you
give me an idea? Could we take
written material,
audio material, and I don't know how
(20:27):
you would extract all your social media posts
and all the text that you write, and
I don't know how you get all that
stuff.
But if it were to be a consumption
of everything
that is
and I'm just thinking, so maybe maybe a
first iteration step would be to build an
interface for Pioneer Knowledge Services just to access
(20:48):
all of our stuff, all of our content.
And maybe the interface is me or my
voice, since, you know, I become the
trusted agent to get to all this recorded
content that nobody's gonna listen to in its
entirety, looking for answers. You could make a
map because you
have all those podcasts where you have people
that you are interviewing. When they introduce themselves,
(21:09):
they always say, I'm from this and this
country.
You can make sort of a profile
of everybody that's been interviewed, not
as in writing a profile, but you can let
some software get some data points labeled to
them. Yeah. And when you're looking for someone
who's into knowledge management at NGOs,
you can find the right person on the
(21:30):
map. Okay. I like that. I also
like your idea, Edwin, because I
think that's where I would recommend starting.
I mean, I remember when we first started
the Clark project,
we were working with one particular organization that
was new to us, and the first thing
they did was they gave me a brochure
of what they do. And in there was
a big section. The very first section was
get to know your data.
(21:51):
I think that fundamentally what you're saying is
totally true: get your podcasts
in a place
where you can interrogate them, where something can
interrogate them.
There is interesting content in that. So for
instance, if you simply extracted
everything that you have said, forget the other
party, but everything that you have said, if
(22:12):
you've got your text, your transcripts you know,
I know when we've talked, we've got these
transcripts,
then you would find, and maybe disappointingly,
that you do have specific techniques.
You know, you might like to think that,
yes, I'm an artist,
but when you actually analyze it, you may
find that you have key questions, techniques.
(22:32):
You know, we all like to have places
in a conversation where we can think. So
there may be some things that you say
to kind of maybe rephrase things or recap
on things or,
reflect on things.
So I think that's, let's say, possible
for sure, but it may be one of
those things where you need to explore and
see what you've actually got. But I think
(22:53):
the fundamental, and Marcel is probably better placed to
talk about this, is actually
putting your data in a place where it's
better retrievable.
That may help you with user access. So
if a machine can access it better, it
also means that a person can access it
better.
Transcribing
all your podcasts
can also give them specific time stamps.
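A minimal sketch of this time-stamped search idea, with an invented segment format (start time in seconds plus text):

```python
# Sketch of time-stamped transcript search (segment format invented):
# find a term in transcript segments and return a playback position
# a few seconds before the matching segment starts.

def find_playback_position(segments, term, lead_in=5.0):
    """segments: list of (start_seconds, text); returns seek time or None."""
    for start, text in segments:
        if term.lower() in text.lower():
            return max(0.0, start - lead_in)  # back up a few seconds
    return None

segments = [
    (0.0, "Welcome to the show."),
    (42.5, "Let's talk about tacit knowledge capture."),
]
pos = find_playback_position(segments, "tacit knowledge")  # 37.5
```

A real implementation would query a proper index rather than scanning linearly, but the seek-a-few-seconds-early behavior is the same.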
(23:14):
So you could say, okay. I want to
search through all my podcasts,
and then it basically opens the podcast a
few seconds before a specific
type of knowledge is shared. Does the technology
allow for not creating transcripts, but to actually
read audio and video files directly? What do you
mean exactly by read audio files directly? As
(23:36):
Andrew said, you know, I generate transcripts from
these.
It's not an exact science. You get a
lot of garble, and then it mixes
up who's talking, and it's not a great
transcription service, any that I've used. But
does the technology have the capacity to just
skip transcription
and just use the audio and the video
as the What they're saying about ChatGPT,
(23:58):
I don't think it's called ChatGPT anymore,
is that they are developing this, like, audio
so that you can basically talk to ChatGPT,
and then it responds.
It does work, but it slows the system
down. But it's a very big buck. Ultimately,
whatever we're doing
so you're saying, can I go from audio
basically, can I go from audio to audio?
(24:21):
The answer is yes, but it will always
get into bits and bytes. It'll always get
into zeros and ones. It will ultimately be
as machine readable text. It might not be
human-readable text. So that
term, the word I was thinking of,
was natural language processing? Is it NLP? Is
that right? NLP. Alright. Well, so what do
(24:42):
we do next?
You're the customer. So
Well, I would propose an interface. That's
basically what you're gonna do next. We're gonna
create an Edwin avatar,
which is gonna speak like Edwin and,
give answers,
like you would do. Okay. I mean,
just going to Clark, you know,
Clark is interesting to me because
(25:04):
it sort of shows the two faces
of a chatbot.
The first face is when you start with
Clark, it does two things immediately.
One is it tells you it knows who
it is, and it knows who you are.
We've already pre logged it in. So it
knows, and it's telling you, I know what
I am. These are my lessons in
(25:25):
my systems.
I know you. I can give you information.
That's an important point. So that's the first
part. The sort of second part of the
introduction
is it asks you how you are. It
kind of gets a sense. Now it doesn't
use that data, but it has that dialogue.
That's not getting any knowledge from the person.
That's establishing a relationship. It's establishing a connection.
(25:49):
It's establishing a dialogue. And these things
are all important.
When you say to us, oh, I want
a protagonist,
well, actually, you will probably want three or
four personalities.
When I say personalities,
I mean types of dialogue.
You will want an introductory
dialogue for sure. Mhmm. Hello. I'm Edwin's digital
(26:10):
double. I hope, you know, you can understand.
And and then the the, obviously, the body
of it is getting the knowledge, getting the
dialogue to get the knowledge. But then there's
also the end of the conversation,
which is thank you for your time. All
your data has been recorded. We found this
very useful. I mean, there's something like a
follow-up you wanna build in there. Do you
(26:31):
want us to follow you up? Do you
want us to send you something or whatever?
That's also one of the possibilities.
I think one of the other things, do
you want it to be generative,
or do you want it to be actual
knowledge? I think also in this case,
I would say it needs to be something
that's actual knowledge. And it depends
(26:51):
a little bit. It also
depends on your user and what they are
actually looking for.
But I would, right, I would say it's
probably best to focus on actual knowledge,
being completely transparent.
This is what's been said. This is what
So in that definition, though,
if this is all conversational,
is it any of it actual knowledge?
I mean, what's what defines actual knowledge other
(27:12):
than me pulling up a Merriam Webster's definition
of whatever? So I'm just curious. What are
you saying is actual knowledge? Typically,
as you stated earlier, your glossary,
your definition. Like you said, we always have
a glossary in our document because that's the
one thing. That's one thing I don't
have in the conversational
format unless I pull it.
But in a system,
(27:34):
we could easily establish that to
add validity and clarity and just all
that. Interesting. I like that. Clearly, there's a
skill in bringing it in at the right
time. There is that skill. The other thing
to sort of think about, which is very
Clark like, is that this
entity,
this digital entity we're talking about, could use
(27:55):
the dialogue
to make connections with previous,
in this case, podcasts,
that you, Edwin, maybe are not able to
do. You know, we may end up with
a capability
that is more able in some areas
and less able in others.
So for instance,
you know, that dialogue could be,
(28:16):
you know, I've got three podcasts
from the past five years that talk about this topic.
As an internal
business intelligence tool, that would be awesome.
This is the person, and this is where
they're based. I have a lot of content,
and I couldn't recall any of it off
the top of my head. So I think
(28:37):
that would be an extremely
useful retrieval and reuse, repurpose of content we've
already generated. Yep. So there, you're actually using
the podcast
as a source of truth at the time
it was truthful, and you can rely on
it. You know, you can say, this person
said that. That's that's the truth.
So you could actually use them a little
(28:59):
bit like you use your terms and definitions.
You know? It says, okay. This is an
interesting term. I'm gonna find a definition. But,
also, we could say, I can find a
podcast
that uses that term. And the tool
in principle could also, well, it would
probably be preassessed.
So each podcast would be preassessed for, say,
keywords.
(29:20):
So we could also say, yes. This podcast
applies, but it was talking about
this other topic or this related term. That
is very much how Clark works, and that's
how the knowledge graph works. You know, we
search on a term, but these are the
related terms. These are the related people.
Now whether visually you present the knowledge graph
(29:40):
to the person in real time,
that's possible.
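The preassessment idea, tagging each episode with keywords up front so that a search can surface matching episodes and the related terms they also mention, can be sketched as follows. The episode titles and keywords are invented for illustration:

```python
# Sketch of keyword preassessment (episode titles/keywords invented):
# build a keyword -> episodes index, then surface related terms that
# the matching episodes also mention.

from collections import defaultdict

def build_index(episodes):
    """episodes: {title: set of keywords}. Returns keyword -> set of titles."""
    index = defaultdict(set)
    for title, keywords in episodes.items():
        for kw in keywords:
            index[kw].add(title)
    return index

episodes = {
    "Ep12": {"tacit knowledge", "podcasting"},
    "Ep31": {"tacit knowledge", "knowledge graph"},
}
index = build_index(episodes)
matches = index["tacit knowledge"]  # episodes that use the search term
related = set().union(*(episodes[t] for t in matches)) - {"tacit knowledge"}
# related terms a knowledge-graph view could show next to the results
```

This is the flat-index version of what a knowledge graph does with explicit relations between terms and people.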
Yeah. When we talk about the interaction,
it doesn't necessarily
just have to be
a face talking.
Because it's digital, you could bring up so
much more on the screen on the interface,
I should say, not the screen, but the
interface. Yeah. Alright. Well, let's do that then.
We get that next week.
(30:04):
I like all this. It helps me formulate
in my mind what exactly
is the use case. And right now, I'm
sitting on two of them. One is an
external face and one's an internal face.
And they're both interesting. I don't know how
I would approach either or both
as a project. Yeah. That's that's probably another
conversation. But,
(30:25):
what are your bits of wisdom in that
aspect of where to go next?
Okay. So I think you have a very
interesting use case, and there's no reason it
couldn't be some academic study or project
from someone at university.
If you were looking for a research project
to which way to go, that would be
that would be one way. Mhmm. I'll I'll
(30:46):
hand over to Marcel for the kind of
what the business case might be. I think
in order to start, you need to,
on the one hand, have, let's say, a
number of scenarios that this is what I
would like to reach in the end,
regardless of technology you need, regardless of data
you need, etcetera, etcetera, and then work your
way back.
So let's say, okay. I want a chatbot
(31:09):
to talk like Edwin
to users and help them with their knowledge
management challenges.
Okay. Which data do do we need for
that? Is it one source? Are we focusing
only on the podcast, and can we transcribe
those?
Are they built up in the same way
all the time, or are they completely different
(31:29):
from each other? Those are all things that,
that have an impact on your solution. And
from there on, you start working your way
to a data model,
testing with the data,
dropping it in some sort of, repository where
you start working on your knowledge graph because,
basically, the knowledge graph is part of
(31:49):
every system that that you are building AI
on. And then you can decide on, okay,
do we want to use an interface like
we did with Clark? We have three different
types of interfaces, three different types of personas
that can use it, or do I want
one interface which can do everything? For instance,
a chatbot like you have, well, mostly
(32:10):
on customer service
type channels, which, okay, starts asking a
question and starts giving you an answer. Within
there, there's so much possible. I appreciate the
input and the interest as you're speaking of
the use cases. I mean, ultimately,
for future-proofing what Pioneer Knowledge Services is
for,
if I had a digital
(32:32):
twin of me to do this level of
conversation,
then Pioneer Knowledge Services would just deploy me
in multiple places
and uses. And maybe one will be
to continue the podcast. Two would be to
be hired by a company to
drill down into a team or individual to
get that tacit knowledge and to make it
(32:52):
a usable product for the organization
and or, you know, as you
were speaking of the multiple facets of where
do you pull the content from, you know,
I'm thinking of the ISO for knowledge management.
I mean, we could tie in
most published material with a paycheck,
I'm sure,
in order to help constitute
(33:12):
a really
what's your word for the bona fide knowledge,
actual knowledge? Is that how you say it?
What's your bona fide word for bona
fide knowledge? I just use the word
truth. So, sources of truth. Sources
of truth.
Well,
that's definitive.
If we go to the
epistemologist's
definition of knowledge, there has to be belief,
(33:35):
and truth can still be truth
in time.
So it doesn't have to be truthful
for all time
as long as the temporal aspect is included.
In 1988,
this was true. So there's always
metadata associated with truth, for sure.
I have a penchant for old English, using
(33:57):
old English,
and my wife hates it because I'll use
words that she has never heard of in
playing Scrabble.
But I'll be like, oh, yeah. That's a
word. Sixteen hundreds. They used it all the
time in the sixteen hundreds. How do I know
that? I don't know, but I know it's
a word. So Very good. Temporal. Alright, my
friends. Any final words before we wrap? If
(34:17):
you wanna start with something like Clark or
something AI,
there's just a ton of possibilities at this
point. I guess
either start by digging into what the
possibilities are. So, like, one other instance is,
if you are indexing your podcast,
there's also an AI summarizer, which works really
(34:38):
nicely and helps you
get your podcast
more searchable, but on a smaller scale.
But those are also possibilities.
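To make the summarization idea concrete: the tool Marcel has in mind is not named here, so below is a crude stand-in, a word-frequency extractive summarizer that keeps the highest-scoring sentences of a transcript. Everything in it is an illustrative assumption, not an actual product:

```python
import re
from collections import Counter

def extractive_summary(transcript: str, n_sentences: int = 2) -> str:
    """Keep the sentences whose words are most frequent overall,
    preserving their original order. A crude, dependency-free sketch
    of what an AI summarizer does far better."""
    sentences = re.split(r"(?<=[.!?])\s+", transcript.strip())
    freq = Counter(re.findall(r"[a-z']+", transcript.lower()))
    # Rank sentence indices by total word frequency (longer sentences
    # score higher; real summarizers normalize and do much more).
    ranked = sorted(
        range(len(sentences)),
        key=lambda i: sum(freq[w] for w in re.findall(r"[a-z']+", sentences[i].lower())),
        reverse=True,
    )
    keep = sorted(ranked[:n_sentences])
    return " ".join(sentences[i] for i in keep)
```

Running something like this over each episode's transcript yields short blurbs that can be indexed alongside the audio, which is the "more searchable" payoff described above.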
There's just a ton of possibilities
coming around. Try to start small, think big.
Those are the typical things
always mentioned.
It's kind of an open door, but still.
And then see it through. This is one
(35:00):
discussion we always have with Andrew. It
starts with the data.
Do you have the data? Can we model
it? Can we get it sourced? Can we
parse it? Can we index it? It starts
with the data. That's one of the things
that AI just starts with the data. And
that's probably my least focused ability.
That would be the first big hit.
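Marcel's "can we source it, parse it, index it" checklist can be sketched as a toy inverted index over podcast episodes. The episode snippets here are made up for illustration:

```python
def build_index(documents: dict[str, str]) -> dict[str, set[str]]:
    """Toy version of the parse-and-index steps: split each document
    into tokens and invert tokens -> document ids, so free text
    becomes searchable by keyword."""
    index: dict[str, set[str]] = {}
    for doc_id, text in documents.items():
        for token in set(text.lower().split()):
            index.setdefault(token, set()).add(doc_id)
    return index

# Hypothetical episode snippets standing in for real transcript data.
episodes = {
    "ep1": "digital twin of a podcast host",
    "ep2": "knowledge graph for customer service",
}
idx = build_index(episodes)
print(sorted(idx["knowledge"]))  # ['ep2']
```

Everything downstream, a chatbot, a digital twin, a summarizer, depends on the data being reachable in some structured form like this first.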
(35:20):
Go ahead, Andrew. You were gonna say? What
data are you going to consider, and what
are you going to do with it? What I
mean by that is that if you own
the podcasts,
that's your data to sell, and nobody's going
to charge you for using it. There is
great potential. You said, you know, there's knowledge
management
texts out there you could include
in order to become
the authority on knowledge management.
(35:42):
But whether it's really accessible, open, you know,
I have thoughts from our conversations, you know,
producing a free KM
kind of website that says these are the
things that are actually free out there that you
may not know about.
A lot of people don't want to dig
through the text, and that's where machines are
wonderful. But I think that idea
of saying,
like Marcel said, start small. Yeah. And useful.
(36:05):
For instance,
really digitalizing
your podcasts,
if that has multiple uses, so it helps you
step forward, but it also allows what you've
got already to be more attractive and usable
and marketable, then I would go there. The
world is your oyster,
and AI sounds wonderful, but, yeah, to get
to know your data. Okay. And I think
(36:25):
it flows from there. I appreciate both of
those perspectives; they're gonna help.
I'm gonna kinda
I'm gonna compost
a bit
on all of this
and see what ferments.
Marcel, I've been dealing with some health
issues. Andrew knows all about it. And so
part of the instigation
of why I'm heavy into this idea is
(36:46):
that
I don't know how many more suns I
have on the planet.
I really would like to figure out a
way that I could model something useful, you
know, for the future
that could add value. I'm excited about all
this, and I thank you for the time
you've given to help foster this concept. Yep.
I think there's probably lots of people around
the globe having similar conversations,
(37:09):
maybe not particular to this conversation theory
and building that trust piece. As you said,
Andrew, that introductory
piece has to be the critical,
absolute, hundred percent
right-on first
attempt
before the next piece. Because if you don't
like, we've talked multiple times. That trust piece
(37:29):
allows everything else to happen. Yep. And trusting
a machine
may not be the easiest thing for people
to digest. It might be easier than trusting
people. It may just be the magic ticket
everybody needed because they hate people. Yeah. I
do have a follow on, which I didn't
think I would have Okay. But I do.
I'm being introduced to someone who does podcasts.
Scarily,
(37:49):
they do podcasts
on AI.
I did think that we should bring him
in. You may have to repeat all of
the stuff you've said. Yeah. I think it
might be interesting. Okay. He works in the
scientific domain and trying to use scientific data
to promote writing. To me, it's very close
and not much of a jump between writing
and talking because you're really constructing data and
(38:11):
you're using the data in the right sequence.
I offer that. If it sounds interesting and
if it's not overloading,
then I I offer that. I will take
that lead, and why don't you connect me
to that person
at least to start a relationship there? And
then
we say, hey. You know, we're kinda kicking
this whole thing around
and see where it goes. Maybe he would
be a complete player in the Absolutely. The more
(38:32):
the merrier. That's what I say. Yeah. It's
been fun, Edwin. Bye bye. Thank you for
your time and commitment and concern for the
future.
There is an old adage, one that you
might have heard from a grandparent
or village wise person. The one that says,
you get out what you put
in, meaning your efforts are matched to some
(38:53):
degree by the results or the output.
Now take our nonprofit, Pioneer Knowledge Services, which
delivers this cool program that you're listening to
right now; it takes a bunch of effort that
you don't even see. We hope that you
obtain value from our efforts to deliver it
to your powers of reason. Here is where
you come in. You. Yeah. The listeners.
(39:15):
Make our efforts rewarded.
Consider donating to keep us moving forward.
Visit pioneer-ks.org
and click on donate.