Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Hi, Levin, it's great to have you on our latest episode of the Leadership Espresso podcast. You are one of the experts in terms of coaching and coach bots. You started a startup a couple of months ago and you are at the forefront of connecting AI to coaching, and today we want to talk about the impact on leadership.
(00:24):
Welcome to the show.
Speaker 2 (00:27):
Thank you so much, Stefan, excited to be here.
Speaker 1 (00:30):
So let's jump right into it. We all know that AI is discussed as completely devastating the landscape of coaching. We already have the power to have easy cases of conversations, and I want to talk more about the capacities,
(00:53):
on the opportunities, on how this can shape a new leadership, bringing more value to clients, to customers. So here we go. What is your basic assumption on how this will turn out into a more value-driven approach?
Speaker 2 (01:14):
Yeah, great question. I like the positive sentiment on it. So I believe AI is here for a reason. Many of us appreciate it because it's already impacting their abilities, their skills, their potential in a positive way.
(01:35):
Others might not yet have had the chance to access AI in such a way. Others might be just led by fears and concerns, which are very valid at that point as well. So, no matter where you are on your AI adoption journey, there are certain, you know, kind of points of learning when it comes
(01:56):
to AI that we need to, let's say, go through when it comes to AI literacy, in order to make sense of AI and then be able to apply it to our lives, to our business.
So if you sit someone in front of ChatGPT and say, here, go and use it, they will look at you and say, what should I use it
(02:17):
for?
What is it?
Is it a search engine?
So you need to understand how it is built, what it is capable of, where there are maybe also some risks in terms of hallucinations and so forth, in order to then use it.
But once you understood the power of a ChatGPT, oh my gosh, I'm just going to feed in, you know, whatever funny character style, because then it's much more fun for me to
(02:47):
digest the content, then the possibilities are endless, absolutely. And actually, I have had so many conversations with world-class companies where I realized the adoption rate is fast.
Speaker 1 (03:02):
So we don't talk about people maybe in the lower end of adoption. Let's talk about those guys who really need to make a turn in the company to produce value. So if the adoption rate would be just having high literacy, knowing what can I use it for?
(03:27):
Now, at that point, what are your assumptions, the main assumptions, the maybe three assumptions, on how this will shape leadership at that adoption rate?
Speaker 2 (03:44):
Again, big question, and you're jumping the gun. You want to go straight into it. What will the future look like? I'm not that oracle. However, my assumption here is: SoftBank, as an example, is already implementing AI agents at a large scale.
(04:05):
We're talking about 10,000 AI agents, and they're using those AI agents to, A, power their employees with more capabilities, more resources, because an agent is only 23 cents per month in their understanding.
Now, agent is a very vague term. Right, we all have an understanding more or less of what an agent is, but is this agent now having access to the
(04:27):
World Wide Web so it can research the latest information, or is it a past version of the World Wide Web? Can it also do agentic use cases where it triggers smart actions like writing an email, triggering a workflow, validating certain information? Let's keep this vague for now, but an agent is something that can follow a task that you decide, and, yeah, most of the
(04:47):
times we're using large language models, LLMs, and we use prompt engineering to instruct them on a certain task. Yeah, do this summary, answer in this tone, with this template, whatever.
Now, if we would assume the future is: energy is cheap, because Meta is building data centers the size of Manhattan. We have endless energy.
(05:10):
We have endless compute because we can find all those resources we need to build those, you know, kind of computer centers, all these data centers around the world, and then with all this chip and computing power we can power our LLMs. Currently the biggest bottleneck over there: we need to find more data to train the AI on more stuff,
(05:31):
because we've explored and exploited all the data there is already, and it's really difficult now to get more data, and this data is needed to bring the models to even more mature levels.
Let's assume all of this is happening and has happened over the years to come, and now an employee has at their disposal a set of 1,000 agents. Think of 1,000 interns that you can manage
(05:59):
with the click of a button, and those interns are already today each acting on the level of a PhD in the respective subject matter. So already today we have ChatGPT with its Deep Research mode, and so forth: the level of a PhD at our fingertips.
(06:21):
There was an interesting interview with Sam Altman where he talked about how he would have expected the world to be a very different place if you had asked him in the past what will change once this is available. It's not that much of a change now compared to three, four years ago. Why is that? Because we as humans, no matter if we are private people or in
(06:44):
an organization, have not yet fully understood the potential of how to use the technology wisely.
But with AI literacy and AI fluency, our ability to communicate and collaborate effectively on the topic of AI, there will come the maturity, and with the maturity we will know what to use an AI agent for and what not.
(07:05):
Because here we have a high error rate, here we have a high bias, here we better have the human lead with their understanding of emotions, ethics and so forth. But then, once we have figured all this out, we can deploy agents across many different areas.
We're building workflows just by saying out the workflow to a voice interface: hey, can
(07:27):
you create a scraper that is always checking the latest weather data and, based on this weather data, you're increasing the production of umbrellas in my offshore factory over there. Hey, then you're speeding up, you know, whatever logistical times. And then I want, in this warehouse, to have a stock of this size depending on the weather forecast, and, you know, boom, boom, boom, and this is then happening fully automated.
Speaker 1 (07:50):
I can see that, like a dashboard. Yeah, I'm a person with a cockpit of the main agents I use: maybe a sports agent, I have my financial agent, I have my workflow agent. So if I put myself into that picture, being an employee, this
(08:14):
would give me a lot more power towards my leadership, because it's like, as you said, there is a PhD at my fingertips. That is a very complex and powerful tool.
Now, I guess there are a lot of factors.
(08:38):
The first is that literacy we need to build up, people wanting to choose those agents, which is a step. I guess there's a lot of fear in it, because it could replace me. Now my question is: how do we address these kinds of fears?
Speaker 2 (09:00):
Yeah, very good point. And you know, when I look at the SoftBank case that I talked about, it's not only, you know, enabling their employees with thousands of agents, it's also being able to save, you know, costs for human resources when there is an agent that can do the job potentially much faster, with a much lower error rate,
(09:22):
with a much higher ability to analyze, spot trends and so forth.
So, what you're saying there about leadership: leadership will no longer just apply to the people that have power, formal power in the organization. It will no longer just apply to the people who maybe have the skills because they have been trained on certain soft skills
(09:44):
that are needed for leadership. It will no longer just apply to the people, you know, who have intrinsic motivation and feel empowered to take responsibility as a leader. As you say, it will be this universal power that we will all have as employees, no matter if we're juniors just entering the organization. We will enter an organization through agents.
(10:06):
We will onboard through agents. We will get to know the organizations through agents. It will all be conversational interfaces, no matter if they're text, voice, video, whatsoever, and those agents are here to serve us. As we instruct them, the memory of the agent about us, our preferences, our needs, wants, the way we process
(10:27):
language.
It will all be customized so that it's most easy for us to just look at our dashboard and make decisions.
So what you're describing, of having already today maybe ChatGPT or whatever console you're using, and having different agents, your workflow agent, your marketing agent and so forth: they all have different tasks.
This is the future, where you wake up and then your agent is
(10:52):
already waiting for you.
Last night, you received 44 messages on LinkedIn. You have 12 emails. Six of them are highly relevant. I'd love to escalate one to you even before you have your breakfast. Are you okay for me to suggest an answer?
It was your colleague, Brian, who has flagged this issue,
(11:13):
and I would now do this and this and this, and all you're doing as a human is processing on the abstraction layer. That is where you're still needed.
Meaning the future of leadership is not so much about what we know of leadership today. The future of leadership, as I see it, is: you need to be able to understand what is human-directed, where do we need to be
(11:35):
involved, and what is AI-agent-automated. So automated is like: I kind of create a rule and give my instruction. I verify, I validate, I refine with the contextual data that I'm collecting, and this is a system that learns on its own and gets better and better.
But there are a few areas that we discussed earlier where, you know, human emotional processing or, you know, our value and belief
(11:58):
system can be at times very complex, or at least we feel like it's hard to put this in words so a machine could follow it. Probably it's going to be possible very soon. But let's say the human mind and its complexity still wants to take ownership of certain tasks.
And this is also, I think, from a philosophical point of view, the beauty of why AI is coming to us now.
(12:19):
We're in such a crazy time. Things are getting faster and faster, we're more addicted than ever to our digital companionship, our devices and so forth. And now AI is here, and really it's going so crazy and so fast that probably many of us will at some point just burn out and switch off.
This is the point where I think we have to decide now: what is human-led and what is AI-led?
Speaker 1 (12:44):
I love that, I love that approach and, uh, you know, when I'm just listening and receiving your message, it's kind of creating some temptations, but it's also frightening me. You know, my heartbeat goes up, like: do I really want that
(13:04):
world, and where am I gonna stay as a human? And I believe in the human, I believe in the creativity, in the original presence, and, you know, I think we are the best. If there's such a thing as an AI machine, we are somehow
(13:26):
superior, I still believe. Maybe I'm a dinosaur.
Right. So, if we believe in a human-centric way forward,
(13:46):
companioned by AI, what is the beauty of the human-centric approach? Where do we stay? Actually, is there a way where the human-centric, AI-assisted approach could be superior to what we have now?
(14:09):
Could you design a picture, or what is your assumption about that potential approach?
Speaker 2 (14:20):
Yeah, I like that human-assisted part, because, you know, especially at CoachBot, we always explain to coaches, trainers, managers, experts that, hey, we're not here to replace you as a human expert with AI, we're here to enhance. You know, this could be: we scale your impact, we scale your reach, we scale your income. We kind of automate workflows to free up more time.
(14:43):
So these are all the benefits of having AI at our fingertips. And then we need to learn, with our literacy, fluency and maturity, how to really use it, so it's doing good for us, for our stakeholders, clients, our environment and so forth.
So we're talking now about AI enhancing the human. We're not going to talk today about robotics, because robotics
(15:05):
is the second crazy thing that will also change the world massively, next to the super technology cycle that we're having. This will be our second episode, another topic. Let's just focus on the AI part.
So it doesn't matter if in the morning I wake up and next to me there is my robot already waiting and greeting me with,
(15:26):
you know, the fresh towel, yeah, the nice towel that I love so much. On a Tuesday morning, I would call my wife doing that, actually, using the phone or whatever gadget, the AI pin. But let's assume we have now AI at our fingertips. And your question of where do we want
(15:48):
to stay human: this is actually an important one. Where do we want to stay human? And where do we need to stay human, because of potential impacts that are dangerous or frightening or uncontrollable?
So, in that example of the weather forecast and the short-term production of umbrellas, and then how you want
(16:11):
to ship them and distribute them and so forth: what is it that I want to control as a human? Because maybe it has a crazy cost impact.
For example, we're having, you know, rain scheduled in Europe. We want to produce another hundred thousand rain umbrellas that we want to distribute through robots at the metro stations or whatsoever, and anything that goes above a hundred thousand orders
(16:32):
I want to control as a human, because I feel this is having a financial impact that I want to control.
Speaker 1 (16:39):
One topic is where do we want to control, in terms of taking decisions? Exactly. So decision points are going to be human, okay.
Speaker 2 (16:52):
But also, more important than decision points is actually design, architecture design. We usually know this only so far from anything that we're constructing physically, or from software architectures, but in those software architectures, obviously the most important nodes are the decision points. You know, what happens if we receive this data and the data
(17:13):
is this way, and then this happens. So when we build those workflows, it's about how we design the process, and then the decision-making can be fully automated, it can be fully human-led, or it can be a combination where we say: up to this data point it's going to be AI-automated, then it's human, or in this zone, whatever.
(17:34):
We want to have the human in the loop. So when we design those systems, we need to carefully think about when to bring in the human, and for what reasons. Right, because control is one reason. Another one is ethical concerns. I'm talking here about, let's say, we have police robots that automatically are imprisoning certain people. Yeah, they have whatever biometrical understanding of the
(17:56):
person that they're looking for.
We know there is an error rate, and then we just let this police dog, police robot, do its work. Again, here: do we maybe want to accompany it with a human police escort, who in the moment also judges: oh wait, the person, the suspect that we're looking for, is currently here with his underage kids. We don't want them to have this traumatic experience of this
(18:18):
robot kind of taking out this person.
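The design principle in this turn, deciding in the architecture which decisions are AI-automated, which are human-led, and which escalate to a human beyond a threshold or on ethical grounds, can be sketched as a simple router. All names, fields, and thresholds below are hypothetical illustrations, not code from CoachBot, SoftBank, or any system named in the episode:

```python
# Hypothetical human-in-the-loop router: which decisions may the AI take alone?
# All names and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Decision:
    action: str          # what the workflow wants to do
    cost_impact: float   # estimated financial impact
    ethical_risk: bool   # does it directly affect a person's rights or safety?

def route(decision: Decision, auto_cost_limit: float = 100_000.0) -> str:
    """Return who handles the decision: the AI alone, or a human in the loop."""
    # Ethical concerns always go to a human, regardless of cost.
    if decision.ethical_risk:
        return "human_in_the_loop"
    # Below the cost threshold, the AI may act autonomously.
    if decision.cost_impact <= auto_cost_limit:
        return "ai_automated"
    return "human_in_the_loop"

print(route(Decision("produce 100,000 umbrellas", 80_000.0, False)))  # ai_automated
print(route(Decision("detain a suspect", 0.0, True)))                 # human_in_the_loop
```

The point of putting the rule in the architecture, rather than in each individual decision, is exactly the one made above: the human designs where the reflection points sit, and the automation runs freely everywhere else.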
Speaker 1 (18:21):
And so, yeah. Okay, what I understand: it's like the enhancement, the assistant, that makes my decision better. Okay, understood, which is great.
Speaker 2 (18:39):
Exactly, and maybe it's philosophical sometimes. Sometimes it will be easy, because it's connected to our values, yes, and sometimes it's going far beyond that and we're like, whoa, are we even asking ourselves the right questions? And this is now coming to the point that you just made: how is AI empowering us as humans to go deeper?
I want to kind of quickly reflect on the fact that I've
(19:01):
been working over the last 10 years in technology startups, scale-ups, big companies at Google, and I haven't had many people talking to me about philosophers or philosophy in general. Some people are kind of a bit of a nerd and read up on the Stoics, but in the end, philosophy has not been a big thing in my generation, for, you know, my peers.
(19:25):
Lately, in the last year, I've heard a lot of people come to me talking about the need for philosophers, and looking into, you know, history in terms of how we can make sense of what's currently happening, because what is happening is oftentimes beyond our imagination, our understanding and so forth.
And suddenly philosophy is again very relevant as a science
(19:47):
and as a subject to make senseof things.
And this is, I believe, a great point in time. Me as an employee, me as a person: what can I get rid of with AI, knowing it will do the job exactly as I wanted, or maybe even better at times? Now I have freed-up time on my side; what do I want to do with it,
(20:09):
right?
Do I want to design new architectures and workflows? Do I want to spend time on creative things? You said, you know, humans are super creative, and I agree. So do I want to make more music? Do I want to write, and so forth? Or do I want to ask
Speaker 1 (20:22):
deeper questions? Understood.
So it will. So the second leverage of AI will be: it frees us up from tasks that are repetitive, or that, you know, can just be done quicker and faster, researched by somebody other than me. So you are actually opening the box
(20:47):
of what is the value of being human?
Speaker 2 (20:53):
That is, yeah, what many people today are talking about when they talk about AI. And, you know, some are afraid and others say: wait, this is a great opportunity, because it allows us to become more human. And some, I would say maybe one third of the people, agree, and two thirds are like: what is this person talking about? AI is so dangerous, I'm so afraid of it.
(21:13):
It kills 300 billion jobs in the next years to come. Yeah, yeah, because it makes us more human now. Right, so what is our place in the future?
Speaker 1 (21:21):
Exactly. Why is it good to be human?
What does it bring to the table?
What is your take?
Speaker 2 (21:28):
Yeah, honestly, it depends a lot on the environment that we're going to live in in the future: climate change, employment, certain crises, war and so forth. So this will depend a lot on whether we are getting our stuff together as humanity to help each other, you know, not starve
(21:51):
from hunger, not die from rockets, and so forth. Nevertheless, let's assume we're just looking at the typical, you know, professional here in the westernized world who already has a good income, who has, you know, maybe some savings that could allow them to be off work for a year.
And now they found a few smartAI workflows where agents are
(22:14):
actually delivering the income that they previously had with a nine-to-five job or whatsoever. So suddenly this person is, like, unemployed, but in wanted unemployment, because there's still money coming in, with agents doing the work.
Okay, what's next? What do I focus my time and energy on? And then I'm likely ending up asking myself very existential
(22:37):
questions about what is the purpose of life. You know, what is my journey here? What can I do?
And the funny thing is, when we talk about, you know, civilizations and development of big state organizations, only the developed countries come to a certain point of consciousness that allows them to ask deeper questions and care about things
(23:00):
like the environment, because we first need to have that luxury, to have the time, the resources available, to even open our minds and be conscious about it. And this is pretty much the same thing, I believe: more people will now get to a point where they're being fed, where their basic needs, and also sometimes extended needs, are met.
(23:21):
They now have a lot of time andopportunity at their
fingerprints, meaning they haveall these PhD agents that they
can say, hey, do some researchon this topic or on this
challenge, or, you know, buildme this workflow here.
And what do we now do with allthat stuff?
And this goes back to theexistential questions that we
all need to ask ourselves whatis our mission here?
You know, why are we workingfor this organization?
(23:44):
Why are we in this domain? What is our purpose in this life? What do we still want to do in this decade, in this time of my life? And, yeah, this is bringing a lot of opportunity, but, I think, also a lot of challenge, and positive challenge, because we need to now undergo a personal transformation
(24:06):
ourselves, no matter if we are leaders in our families or, as I take it in a more general term: it's about consciousness.
Speaker 1 (24:25):
So if being human means to ask deeper questions about the meaning of life, about mission, about our purpose,
(24:47):
this, I reckon, is deeply human, and I think that's the difference to a machine or to any other species, and it potentially could help us where we are as humanity, as a nation, as an organization. Because if you look at the automotive industry right now in Germany, we're stuck. We're stuck because we're repeating the patterns of the
(25:10):
past and trying to do them faster.
Speaker 2 (25:14):
So would AI liberate us from repetition, from research, from some types of decision-making, and open up a new space of asking deeper, meaningful questions, to raise
(25:38):
consciousness? What resonates for me, you know, is: I don't
(25:58):
that we are not doing it, but,as you said, at the end, I
wanted to challenge us to godeeper and deeper and deeper,
right?
So, when we think about automation, let's build even better automation, with more decision-making points, where we have the perfect understanding between human and AI. When we do research, let's use even more data points, even, you
(26:20):
know, more recent data, or, you know, more context-rich data, or whatever, in order to understand the problem, maybe from different angles. And then, yes, I believe AI can be very helpful to know what the options are.
However, the final decisions, you know, this is still on us. And, yeah, consciousness, you know,
(26:46):
I think it's very difficult to make decisions when we're being rushed. We all know this from personal experience. Right, it's very difficult to understand what my gut feeling, my instinct is, and also to really go deep into data points.
However, now with AI, we have the opportunity, if we build those systems right, to understand and digest the data much
(27:08):
faster and then, hopefully, this is my personal hope, have more time for the: okay, let's just be pregnant with that idea, with that path, with that decision. So we're buying ourselves some time back for the stuff that, as you describe it, makes us human: the creative part, the intuition,
(27:28):
the connection, the oneness that some of us feel.
Speaker 1 (27:32):
I like that picture, that AI could bring us into a position, a new space of liberation, of creating more meaning, of asking more meaningful questions, to start more meaningful conversations, to have more meaningful
(27:54):
relationships, because that's truly human: that deep sense of connectedness, of transformational power, and being liberated, you know, time-wise, money-wise, whatever, so that we are, you know, pushed, potentially pushed, to be more human.
And this could be an answer or a picture that could kind of
(28:20):
balance being freaked out by what the potential could be, by the fear that, you know, I'm not part of it anymore, but that I could become more, even in that system.
So I have found, by experience, that for somebody who is a CEO or starts
(28:42):
a startup, in that sense, there's always a personal story behind it. So, if you allow, I would like to end with a little bit about how come you drive AI in that direction. What is your
(29:03):
personal story behind it? And I heard that some major change happened, maybe when you became a father or so. So maybe you want to share a little bit about that foundation, that truly human foundation, that drives your startup.
Speaker 2 (29:23):
Yeah, happy to. So our mission is to create more access to coaching, especially AI coaching. Um, meaning AI coaching is a mix of what we know as coaching: non-directive questions and, as a coach, sending our attention and our presence to that person to bring them from point A, through
(29:44):
self-reflection and effective questioning, to point B. And it's not kind of ingested like doctrine: you need to go, you need to do this, like in religion or in a mentorship. It's more like: hey, what is it that is important to you?
So I found this very powerful for myself. My background is, you know, I've been raised by a single mom most of the time. With two younger brothers, I took a lot of responsibility early on,
(30:06):
which, you know, was hard in my childhood, because I didn't get to play as much as others. But then later on, it was also good, something I could use, because it was just very natural for me to take over responsibility, and responsibility at some point no longer was, you know, something that feels hard, that feels it
(30:27):
brings liability.
It was like the ability to respond. It was like, well, it's much easier for me, you know. I was serving nine months in the army, and when I was picked to lead an operation, I was like: nice, I enjoy this, because I can, you know, stay calm even when it's getting very hectic, and make rational decisions, both on the data I collect and the gut feeling that is guiding me.
(30:49):
So, when I became a father, I made very conscious choices. I've transitioned from, I would say, you know, a party animal that was a frequent flyer, very unaware of the environmental impact that I'm having, eating meat and fish every day, you know, just living myself through life by enjoyment, to, you know, a plant-based, centered
(31:12):
person that is, you know, very much focused on their mindfulness and meditation and sports.
I did full distance.
So I really changed my physicality and my mindset in order to, yeah, somewhat be most effective and most loving and
(31:35):
kind, as I wanted to be, in this new life episode of mine. And I'm currently in this episode where, A, I want to take care of my three kids and my wife. I want to be that family man, while I also serve my social environment, my community, and this is, for me, a global one. I don't see my community just as the neighborhood here around me, looking at the houses next door.
(31:56):
No, I see this as all human beings, and I have this deep sense and kind of wish to serve people, especially those who do not have access to, let's say, support when it's most needed. So I believe, especially when we're going into our adulthood, so, you know, something like age 13, 14 until 30, men are usually a
(32:20):
little later than women when they really feel adult, because the brain works a little differently there. Nevertheless, I feel in this time it's so important to have the right kind of support in your life to figure things out, like productivity, career administration, you know, even well-being, and how do I formulate goals for me that
(32:40):
serve me, how do I achieve them, how do I build on that success further, and so forth.
And I believe coaching is a very powerful tool. It's highly inaccessible at the minute: with the about 100,000 professional coaches that we have, the average coaching session is $244. That's affordable for only about 15% of the global adult population, and only once.
(33:02):
So for those who want recurring sessions, it's an even smaller circle, so it's super exclusive.
We do know it works much better than traditional learning and training, where we're just going through a set of static content. And now we're putting the power of AI on top of it, meaning we're making it super available: 24/7 access
(33:24):
through AI companions, kind of interfaces through text and voice, for example, that can bring you the power of those reflective questions, no matter if you're on your phone, on your computer, on your smart device, in your car, at your home.
That can be super powerful, no matter if you are in your private life, as an individual, when it comes to your relations,
(33:44):
to your life purpose and so forth, or at the workplace. Imagine, sometimes, you know, you would just have that perfect voice with that perfect question in that perfect moment, making you reflect: is this really the best thing you can spend your time on right now? Is this really the best approach that you've taken, by just copying that slide set, versus thinking about the structure first?
And this is what we want to build with CoachBot: not to
(34:08):
create the final product for you; actually, we allow experts that are really good in their domain and their subject and their niche, automotive, you know, to bring the best out of the humans that they're working with. And this is what we do kind of on an everyday basis: helping human experts to build technology that can scale impact.
(34:30):
Right.
Speaker 1 (34:31):
Now, how did you build this human-centric approach into the heart of your system? How can we see what we discussed alive in your system?
Speaker 2 (34:44):
We haven't discussed this today, because we focused on the opportunity around AI, but, as we know, there are risks. We quickly touched on topics like hallucinations, bias. There is this risk of privacy and IP loss, meaning that we only have two, three big technology companies in the world that power all the data. And we just learned it from Sam Altman as well: whatever you put into OpenAI, no matter if you
(35:07):
even archive or delete the thread, in a court case OpenAI is forced to hand this data over, so they retain everything that you've ever talked to the AI about, no matter if you delete it.
It's pretty crazy, by the way.
So in this world, where a few technology companies are storing
(35:29):
including your most sensitivetopics and so forth, we had made
a few choices about how do wewant to play that game and we
decided we do not believe thatall the data needs to go into an
AI model, meaning we have aso-called zero trust AI policy
and we're not fine tuning our AImodels on the sensitive client
(35:51):
data.
We're collecting your workplacesecret, what you told us about
your colleague.
We're also not training the models on the coach's, the expert's, knowledge: their frameworks, methodologies, subject matter expertise, personality that they build into the AI. Because the moment it's in the model, it's no longer going to be extracted and removed, right,
(36:11):
it's then in the model.
It's a black box even for the developer.
It's really difficult to identify certain points and move them out, and oftentimes we're training a model on a model on a model.
So we have designed ourcoach-oriented reasoning
algorithm, short name CORA, in away that it's only built on
memory.
This is a client's informationthat we're extracting from the
conversation.
That is relevant and systemlogic, very old school software
(36:35):
engineering, you know, no kindof big fuss in terms of AI gets
to decide.
No, the humans build the waythey want to create context.
So we're doing a similaritysearch where we're making sense
of the words that we're learning, and then we have a semantic
search where we're making senseof the context.
What does it mean?
This employee seeking for morewell-being?
(36:56):
Did we have previousconversations that are connected
to the area of well-being?
Oh, maybe there is somethinggoing on at home with the kids
or the partner.
Oh, there is a certain healthsituation.
Oh, there is stress becausethere has been no promotion and
then, based on all the memory information that we have and the context understanding that the AI does, we can bring out one
(37:20):
very powerful question that is helping you.
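[Editor's note: the retrieval step described here, memory extracted from conversations plus a similarity search over it, could be sketched roughly as below. All names and the word-overlap scoring are illustrative assumptions, not the actual CORA implementation; a real semantic search would use embeddings rather than simple word overlap.]

```python
# Illustrative sketch of a memory store with a simple similarity search.
# CORA itself is proprietary; the names and overlap-based scoring here
# are assumptions for demonstration only.
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    entries: list = field(default_factory=list)  # past conversation notes

    def remember(self, text: str) -> None:
        """Store a relevant piece of client information from a conversation."""
        self.entries.append(text.lower())

    def related(self, query: str, top_k: int = 2) -> list:
        """Similarity search: rank memories by word overlap with the query."""
        q_words = set(query.lower().split())
        scored = [(len(q_words & set(e.split())), e) for e in self.entries]
        scored = [(s, e) for s, e in scored if s > 0]  # drop unrelated memories
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [e for _, e in scored[:top_k]]

store = MemoryStore()
store.remember("client worried about well-being and stress at home")
store.remember("client missed a promotion last quarter")
store.remember("client enjoys hiking on weekends")

# Retrieve memories connected to the current topic, as context for one question.
context = store.related("employee seeking more well-being and less stress")
```

The point of the sketch is the design choice described in the interview: the context-building logic stays in inspectable application code rather than being baked into a fine-tuned model.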
Speaker 1 (37:23):
You are very cautious about data, about the ethical issues of that data, yeah. Um, and how does your system assist the coach to come up with more
(37:46):
value?
Speaker 2 (37:46):
Yeah, that's a very important point, because if we don't have a communication between the AI and the human, we're creating silos, and suddenly maybe the AI is smarter than the human and making decisions where the human is like: what are you doing? This is nonsense, I don't like this, this is dangerous. But the AI has data and just acts as per the instruction
(38:08):
that you put in in the first place. But your thinking, your understanding, is outdated, because the AI is much ahead. So we need a constant understanding between the two worlds, in a way that the human can control and follow the processing of the AI. And this is the challenge with these black-box, fine-tuned AI models: they're getting so good that it's really hard for us to make
(38:29):
sure.
Are they still working ethically?
So what we put in there is that, A, we're controlling the output at all times. That means hallucinations, which come up about 30% of the time when using certain GPTs, need to be put into boundaries. We need to put the AI into chains, basically, to not have
(38:51):
those. So we hard-code a certain set of rules: don't go into professional health advice, financial advice, legal advice. Don't go into directive coaching, for example, where we are giving clear advice like a GPT would do. Rather, ask smart questions; don't stack questions to overwhelm the user.
And it's us that create, let's say, two-thirds of that list of
(39:14):
basic requirements and rules, and then one-third is added by the expert, where they believe: oh yeah, I want to do more of this or less of this, I want this use case but not that use case. And together we're creating this long list of exclusions and rules that we instruct the AI not to violate. But even then, through hallucination, meaning the AI has conflicting
(39:37):
rules or is overwhelmed by the amount of rules or whatever, something can slip through. And when we have such a situation, we're double-checking with a hard-coded content moderator, as we call it, to remove those from the answers before they go out to the client.
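[Editor's note: the guardrail layer described here, a hard-coded rule list checked by a content moderator before any answer reaches the client, might look roughly like the sketch below. The blocked topics come from the interview; the trigger phrases, function name, and fallback wording are purely illustrative assumptions.]

```python
# Illustrative sketch of a hard-coded content moderator: a final,
# rule-based check on the AI's answer before it goes to the client.
# The topic list follows the interview; the trigger phrases are assumptions.
BLOCKED_TOPICS = {
    "health advice": ["diagnosis", "medication", "dosage"],
    "financial advice": ["buy this stock", "investment tip"],
    "legal advice": ["you should sue", "file a claim"],
}

def moderate(answer: str) -> str:
    """Return the answer unchanged, or a safe coaching-style fallback
    if it drifts into one of the excluded topic areas."""
    lowered = answer.lower()
    for topic, phrases in BLOCKED_TOPICS.items():
        if any(phrase in lowered for phrase in phrases):
            # Replace the risky directive answer with an open question.
            return (f"I can't offer {topic}, but what feels most "
                    "important to you about this situation?")
    return answer

ok = moderate("What would a good next step look like for you?")
blocked = moderate("You should adjust your medication before the talk.")
```

Keeping the rule set as plain data makes it inspectable and editable by the human experts, matching the two-thirds platform, one-third expert split described above.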
Speaker 1 (39:53):
I'm a little bit shocked, or irritated, because this would mean it really replaces the coach. And as the model gets better, it would replace even the senior coach, so it's no longer an enhancement.
Speaker 2 (40:11):
Yeah.
So the thing is, AI is already so pretty awesome that I would say, with confidence, it can pass a PCC marker test. For those who know coaching a little bit, that's a professional standard that is already quite senior. Yeah, it's not the master level, but it's already quite senior. And it's probably only a question of time until AI could
(40:33):
also pass the relevant marker tests of a master coach certification. Obviously it cannot be, you know, in the emotional context; it has certain limitations because it cannot experience emotions like us. It's just imitating the understanding by having this amazing contextual text understanding.
Nevertheless, yes, I think it's very dangerous in terms of, if
(40:58):
you use it for that, it can already imitate what a human coach, a senior coach even, would do. We rather say no, we're creating something that, in its combined power, enhancing the human with AI, is a better
(41:21):
outcome than the AI only, which is aiming to replace the coach. And here our coaches are using AI to follow up with clients where they don't have time and so forth, having the human when it's most needed, for a deep transformational session where emotional connection and presence matter, and then having, at other points in time, at three at night, when I feel maybe a
(41:42):
little worried about the presentation tomorrow or the financial issues that I'm having, an AI coach that says: do this breathing exercise that we discussed, you know, follow the principles that we know. Then this can be a powerful combination.
Speaker 1 (41:54):
I mean, this sounds like the question is not if I enhance my capacities as a coach, as a senior coach, with AI. The question is more: with what kind of, uh, ethics, with what kind of integrity do I build a model, or choose a coach bot or
(42:21):
a system, that will enhance me and come up with a product of integrity. Is that the sum of it all?
Speaker 2 (42:33):
I think that could be one of the conclusions that we draw, not just for coaches and relevant experts, but for all of us, because, let's face it, the future is going to be supported by agents all around. There will be agents from the governments monitoring the traffic, already happening today, making sure the traffic lights work on time, and everything, from the infrastructure to our
(42:53):
private lives, our phones and all the apps we're using, is agentic and automated. So the question now is for us: which are the systems, which are the workflows, we're choosing to be AI-powered? Some will be fully AI-led, some will be hybrid, as I say, between human and AI, and some will be human-only. And it's only our understanding, our ethical understanding,
(43:16):
starting in the conversations that we have with our parents at home, going through school, going through experiences that we have with apps that we build, because in the future we just build apps with, you know, voice, that will shape our understanding of how and when to use AI.
Speaker 1 (43:35):
Oh brilliant, I love it. So my key takeaways are: it's all about your personal adoption, to become literate, AI literate, and then overcoming your fears of being replaced and thinking more about the
(43:58):
enhancement.
The second one is: once, and it's already there, you know, you have chosen your kind of agents of integrity that stay with your values, it will liberate you from workflows that can be done
(44:20):
better by AI, and it will liberate you to open up a space of freedom, of choices, of new choices.
And the third one is what I like most about it: it will kind of help us be more human, asking deeper questions, more meaningful questions about what
(44:43):
our mission here is, and make better choices. And then leadership is more about raising that kind of consciousness of what the better choices to take are.
(45:05):
So it's becoming a leader of your own personal transformation story, where you raise your own consciousness. So the discussions we potentially have in the future are not about: you do this or that, you reach goals. They're more about what a more meaningful way forward is.
(45:26):
It's more of a forum, it's more of liberation, it's more of accountability, it's more of creating, uh. So leadership must undergo its own personal transformation as well. So it's a side-by-side development, and it could
(45:49):
potentially turn the fear into the opportunity and help us maybe start going on this journey and experiment and find out how we can become more human and have a guided way with AI. Is that a potential sum-up of what we discussed?
Speaker 2 (46:15):
I think I want to give one last example for those who feel this is too abstract. Think of an employee that is being asked to reach out to new clients and bring forward the value proposition of the company so that we start a sales conversation. So this business development rep so far has spent a lot of their time, about 80%, on the what: what am I writing?
(46:37):
What is the list of people that I reach out to? And then they're stuck in the process of doing that: writing an email, sending it out, checking the answer, thinking, writing. The doing part, yeah, the doing part, right.
And then we have about 20% scheduled time with our managers, our retrospective meetings: oh, what is the how? How can we do that better? Is there maybe a little increment that we can do?
(46:58):
Imagine the future. You're using 10 AI agents. You know, some are writing your posts, some are writing in different styles, with different individual, like different, audiences in mind, doing tests, A/B tests, giving you the data points, and you see: oh, my audience rather responds to this, you know, this brings this conversion rate, and so forth.
(47:20):
So this is just done by one employee who can maybe instruct 10, 100 or 1,000 agents at one time.
If you're a hiring manager and you have a person who says, hey, I've done business development for the last 10 years, I'm really good at opening up opportunities, here's my track record, versus another employee, maybe like a Gen Z, who says, yeah, I've been working with AI for two years now, I've built armies of agents collaborating among each other,
(47:43):
we can basically do whatever you want me to do. Who would you hire? Yeah, and most of the hiring managers, I think, would now go with the person who can instruct the agents.
So what you said about, you know, having fears of being replaced: it's not helpful at the moment, because the world keeps spinning faster and faster, and only the ones who are AI mature
(48:05):
will have a chance to really, you know, use AI. And only the ones who free up the time, this was your second point, will be able to spend more time on stuff that is human, that is creative, that is, you know, ethical, that is philosophical. And this is the third point: AI will allow those who join the hype train, who start with curiosity but also healthy
(48:26):
skepticism, staying loyal to their concerns and really understanding what the data priorities are, to master the game and then eventually uplift their human consciousness in a way that can solve some of the biggest problems on earth, potentially.
Speaker 1 (48:46):
Nothing more to add. It was fantastic, very inspiring. Levin, I wish you all the success with your startup, and maybe I have to join and find out for myself, as a dinosaur, what we talked about, so I will stay AI mature. So thank you. Thank you for today.
(49:06):
We'll do another one, maybe about robotics, later. For now, thank you for being on the show today, and all the best for your future.
Thank you, Stefan. Have a good one. Bye.