Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
Welcome, ChangeMakers, to the DEX Show with Tim
Flower and Tom McGrath. Let's get into it.
Speaker 2 (00:09):
Hello, ChangeMakers. We're in the two hundreds, Tim;
we are a two-something episode. We're in our third century,
our third century of podcasts. And you know, look,
we had a few people point out to me, Tim,
smart alecks that our listeners are, that when we
celebrated our two hundredth episode, it was
actually our two hundred and third episode.
Speaker 3 (00:30):
Somebody's counting other than us.
Speaker 2 (00:32):
We're so sorry to everybody who's counting. Quite so.
But we've got an excellent
guest to kick off our third century, as you say, Tim. Today,
we're joined by Robb Wilson, CEO and co-founder of
OneReach.ai, which is an award-winning platform
(00:54):
pioneering enterprise-grade conversational AI and automation. But we know
Robb quite well for his writing and his thought
leadership around one phrase he uses:
human-centric AI design. He's the co-author of a fascinating
book called The Age of Invisible Machines, which, as much
as anything, has the most wonderful and appropriate title for
(01:17):
one of the dominant themes, I feel, of the show.
Speaker 3 (01:20):
At the moment.
Speaker 2 (01:21):
Welcome on board, Robb Wilson. Great to have you.
Speaker 4 (01:24):
Yeah, thanks. Two hundred and three episodes. I'm feeling old
just looking at you guys, just
listening to how long you've been doing this.
Can you imagine?
Speaker 2 (01:38):
Maybe the best place to start would be
if you could define that for us, and maybe explain
the broader significance of something which is obviously so
pivotal to the age that we live in, which is
the conversational interface. How do you see it
connected to the agentic runtime concept?
Speaker 4 (01:56):
Yeah, I'm super guilty all the time of hating that
we anthropomorphize this stuff, and then doing it anyway to
make things more understandable to people. So I'm going to
do it again. If you think of a runtime
for agents, a lot of people can relate to the
idea that agents need a place to work, just
(02:19):
like people need a place to work. They need tools
to work with, just like we need a desktop and
tools to work with, and they're only as good as
the tools they have and the knowledge they're supplied
with in that workplace, right? So that's kind of
what it is. But then, of course, it completely breaks
(02:40):
that paradigm when you realize that these things are self-replicating,
that they're self-learning, that they work twenty-four-seven,
and that they never forget anything and read
things instantaneously. So then you start seeing all the nuances,
which aren't so nuanced, about how different they are from
people, and you end up with a runtime
that's not quite like our desktop.
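[Editor's note: a minimal Python sketch of the "runtime as a workplace" idea Robb describes above: the runtime gives an agent a place to work, tools, and supplied knowledge. All names here (AgentRuntime, Tool, handle) are illustrative assumptions, not OneReach.ai's actual API.]

    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class Tool:
        name: str
        run: Callable[[str], str]  # the agent is only as good as its tools

    @dataclass
    class AgentRuntime:
        tools: dict = field(default_factory=dict)      # tools to work with
        knowledge: dict = field(default_factory=dict)  # knowledge it is supplied with

        def register(self, tool: Tool) -> None:
            self.tools[tool.name] = tool

        def handle(self, task: str) -> str:
            # Naive dispatch: pick the first tool named in the task. A real
            # runtime would let a model choose tools and consult knowledge.
            for name, tool in self.tools.items():
                if name in task:
                    return tool.run(task)
            return self.knowledge.get(task, "no tool or knowledge for this task")

    runtime = AgentRuntime()
    runtime.register(Tool("reset_password", lambda t: "reset link sent"))
    print(runtime.handle("please reset_password for jane"))  # -> reset link sent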
Speaker 2 (03:12):
Intriguing. And you know, the conversational interface and its
significance feature so much in your work, and we're going to,
I think, dive into it; Tim's got a few
questions that drill into it in a bit
more detail. But I was just interested, almost in passing:
given the growing significance of that conversational interface,
(03:34):
where does it leave the humble screen? I mean, we
have been...
Speaker 5 (03:37):
A screen-dominated species now for decades, right? We
can't even imagine life without screens, let alone tech. Or
rather, we can't imagine technology without screens, let alone life
without screens.
Speaker 2 (03:49):
Are we approaching a life without screens? Or is
the definition or role of the screen just going to
fundamentally change?
Speaker 4 (03:55):
Are we entering the post-screen world? I don't know.
I'm not like a futurist, but I did grow
up with Marshall McLuhan. I played in his front yard
and my mom meditated with his wife.
Speaker 2 (04:12):
Yeah, hey, so we're moving from a hot medium
to a cool medium.
Speaker 1 (04:18):
What are we?
Speaker 3 (04:19):
What are we doing here?
Speaker 6 (04:20):
Yeah?
Speaker 4 (04:20):
Well, a lot of people don't know this about him,
because he kind of kept it a secret: he
always claimed that he didn't have anything against media, that
he was neutral, but it wasn't really true if you
knew the backstory. And the backstory is, he would sit
in his living room listening to his wife talking on
the telephone to her sister or other people,
(04:42):
completely tuned in to what was going on in some other
town, with people she didn't know, for hours, and it
irritated him so badly. He's like, what
is going on? What is happening?
Speaker 1 (04:55):
Like?
Speaker 4 (04:55):
Where is she right now?
Speaker 2 (04:57):
Where?
Speaker 4 (04:58):
She's not here with me; she's somewhere else. Where is she?
I'm going to spend a bunch of time understanding this. It
came from an irritation standpoint, not because he was
on the phone or wanted to use the phone, but
because he couldn't understand why she did and why she
wanted to. And that is sort of an unknown story.
(05:20):
And I think our screens are
just an extension of that. Where was she? Why wasn't
she there? Why was it more interesting to be in
that other place? And how convenient was it
that she didn't have to leave her house to get there?
Speaker 2 (05:37):
Right?
Speaker 4 (05:39):
And I don't think we're ever going to stop
transporting ourselves to other places in that convenient way until
we can actually do the Star Trek thing, where we
move around instantaneously. And I don't think that having
more senses somewhere else is going to
(05:59):
reduce that. You know, screens will become higher
fidelity, potentially, but I think the screen is a super
convenient way for our eyes and our ears to go
somewhere else really fast. And so I think we'll always
have screens of different sizes, just more of them.
People talk about the screen going away, but there are just
(06:19):
going to be different screens everywhere, in different sizes, I think.
I mean, I have a hard time seeing them go away.
But I don't see them as phones; I think
it's a funny thing that we call our computer a phone.
I thought computers were going to take over, but
it turns out phones did.
Speaker 2 (06:40):
The humble phone. You know, that ties it together:
from Marshall McLuhan listening to his wife
on the phone to where we are now. That's
pretty incredible. I love that. And is there
anything else we can flesh out here? Because you going
on to be a sort of agentic
conversational interface expert, as indeed you are,
(07:00):
having had that childhood, playing in Marshall McLuhan's backyard, feels
incredibly appropriate. Any other influence or touch point there
we could pass by on our
way to Tim's questions?
Speaker 4 (07:14):
I don't know. I find that his writing
is still being understood today. It's fascinating. I'll
just give you another point
on him that people don't know. Well, I'll give you two.
One is, The Medium Is the Massage,
(07:38):
right, was actually an accidental misspelling, and then he
went along with it.
Speaker 3 (07:44):
He's like.
Speaker 4 (07:47):
So, you know, The Medium Is the Massage, which
is how it's spelled. It was "message," right, obviously,
but there was a misspelling, and instead of correcting it,
he just went along with it. It's "massage," like that.
So that wasn't intentional. And the other thing that
(08:07):
is super interesting is this: when he
wrote a book, his publisher was always like, no
one's going to read this. And he's like, why? And
the publisher said, well, because when you write textbooks, there
was kind of a rule, and
(08:27):
there still is: you can only make ten percent of
a textbook new ideas, and the rest has to
regurgitate old ideas, or it's too much for people. So
this was just kind of a rule of
textbooks: ten percent new ideas, ninety percent old ideas,
(08:48):
because people can't handle more than ten percent. And I
think we see now, with the adoption of AI, that
it can do so much more and that we're holding
it back. And it's because we're iterating
ten percent different from our old software just to keep
it familiar. And as we
(09:10):
talk more today, we'll see this software is really,
you know, one hundred percent different from the old software.
But when we see how it's being implemented in the
companies that are coming out with solutions, each one
is like ten percent different.
Speaker 3 (09:25):
You know.
Speaker 4 (09:25):
We put a button in here to summarize the conversation,
you know. Why do we even have an interface?
So there's two more points for you.
Speaker 3 (09:36):
Love it. So, Robb, let's bring it to life a
little bit. We'll start with a question around
a hypothetical example. Give us an illustration, and it can
be real or hypothetical, of an organization where these agent
runtime environments could actually be profoundly transformational. Apply it to
(09:57):
kind of daily life in an example for us.
Speaker 4 (10:01):
Okay, so the first thing is to realize
that you kind of go through a few things, right?
I think you have to go
through the evolution of machines and code for a second,
just to mark where we are.
Speaker 3 (10:19):
Right.
Speaker 4 (10:19):
So, in the early early days for software developers, they
liked and I was one of these. You sat there
looking at a screen with a blank cursor just blinking,
and you're like, Okay, write your application right from nothing.
Everything you had to create was from nothing, every component, everything,
(10:40):
And then that changed, and all of a sudden, there
were all these pre written components. There were APIs and
NPM packages and and and we went from like writing
software to orchestrating software. Even though we still call it
writing software, when you look at most applications, it's not
writing software more. It's orchestrating software. We're like, okay, now
(11:03):
now we're orchestrating software, and and that is done in
advance and predicting, like oh, if this, then that, we'll
call this, we'll call that, and and then we get
to like an LLM, which can write orchestration code in
(11:27):
real time. So now now we're now we don't need
to orchestrate software anymore because software can orchestrate itself. And
you're like, great, it can even right itself. So lllms
can write their own prompts. Lms can tune their own prompts.
Lms can evaluate their own outputs and decide if they're
(11:49):
appropriate or not, and and if they're not, they can
go and fix it and they can rewrite the code.
So what you have now is a self adaptive system.
You have a system that is that is adaptive, almost
like economics. You know, when you look at a city
or most economic environments, they're like super adaptive. You know,
(12:15):
a coffee shop goes out of business, another coffee shop
comes in. There's no like centralized orchestrator. The mayor is
not deciding and master planning out the city. You have
this adaptive system. Now, imagine that all of our software
is like that, each agent constantly evolving as you, as
you change your policies as you. I think of it
(12:37):
as the GPS, like it says turn right, you turn left,
it says recalculating, it's rewriting itself. So now you're like
you're like a gardener. You're like planting seeds and creating
an ecosystem, and you're never going back. You're not fixing bugs,
you're not deploying software. It's just all of these little
tiny systems with objectives adapting to everything that you're doing
(13:02):
or want to do. Hence, like then visible machines like
these things will just work in the background, trying to
get you a better deal on your health insurance, moving
your you know, balances around from one introductory offer to another,
playing for jobs at every company that can offer you,
(13:23):
you know, more money than the one you're in blah
blah blah blah blah getting out of parking ticket. It's like,
we're just gonna have little armies of these systems out
there advocating for us in the world against other little
systems advocating. So that's just a crazy, crazy, crazy idea
when you and this is all possible right now. This
(13:45):
isn't like, oh, in the future.
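[Editor's note: a minimal Python sketch of the self-adaptive loop Robb describes: the model writes, evaluates its own output, and rewrites its own prompt until the result passes, like the GPS "recalculating." call_llm and evaluate are stand-ins, not any specific vendor's API.]

    def call_llm(prompt: str) -> str:
        # Stand-in for a real model call (an HTTP request to your provider).
        return f"draft for: {prompt}"

    def evaluate(output: str, objective: str) -> bool:
        # Stand-in self-evaluation; in practice this is itself an LLM call
        # scoring the output against the objective.
        return objective in output

    def self_adaptive(objective: str, max_rounds: int = 3) -> str:
        prompt = objective
        for _ in range(max_rounds):
            output = call_llm(prompt)
            if evaluate(output, objective):
                return output  # good enough, stop
            # "Recalculating": the system rewrites its own prompt and retries.
            prompt = f"{objective}\nPrevious attempt was rejected:\n{output}"
        return output

    print(self_adaptive("summarize the meeting"))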
Speaker 3 (13:47):
Yeah, yeah. To hear that the art
of the possible is real, and the different scenarios that
are coming to life, it definitely feels on the
verge of, if not in many ways like, science fiction,
(14:09):
and it's here, right? So apply it to some real-world
scenarios. IT operations, for example, can tend to be
very stagnant over time. They're doing the same thing over
and over again. They're just keeping the lights on. A
lot of people would relate it to just being on
a treadmill, and it's not a very satisfying kind of
role. In your view, what role should IT be
(14:29):
playing to help drive and enable real organizational transformation, to
get off that treadmill? Does IT play a role here,
or is it more of a business function
telling IT what they need for outcomes? Who's driving the ship?
Speaker 4 (14:44):
Yeah, that's so interesting. There's a great book out there
called Prediction Machines, and it talks about the transformation
from coal-powered steam engines to electric motors.
I think it really helps, when you look at strategies
and how companies evolved, to split it into
two things: what they call
(15:06):
point-solution change, which was replacing the coal engine with
a big electric motor, but all the belts that
radiated through the factory are still there, and all the
machines still circle the big motor in the middle; and then
systemic change, where every machine gets a small motor, they
put it in a line, and they call it an assembly line, right?
So let's look at IT through that. IT has their
(15:30):
ten percent changes that they're going to keep doing, right?
And all of these systems can easily handle getting
ten percent better on everything; you just name the use case.
It can monitor logs, it can monitor systems. When
a bug happens in a piece of software, it
can go repair the bug, even recode the bug, on
(15:51):
the fly for the user.
Speaker 2 (15:52):
They don't.
Speaker 4 (15:52):
You don't have to open a ticket and send it to IT.
It can just solve itself.
If an API call is malformed, it can reform it.
Speaker 3 (16:02):
Right.
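[Editor's note: a minimal Python sketch of the "malformed API call reforms itself" idea: when a request fails validation, the agent repairs the payload and retries instead of a user opening a ticket. The reform step is stubbed; in practice it would be a model given the error and the schema. All names are illustrative.]

    REQUIRED = {"user_id", "action"}

    def api_call(payload: dict) -> str:
        missing = REQUIRED - payload.keys()
        if missing:
            raise ValueError(f"missing fields: {sorted(missing)}")
        return f"ok: {payload['action']} for {payload['user_id']}"

    def reform(payload: dict, error: str) -> dict:
        # Stand-in for a model repairing the payload from the error message.
        fixed = dict(payload)
        fixed.setdefault("user_id", "unknown")
        fixed.setdefault("action", "noop")
        return fixed

    def self_healing_call(payload: dict, retries: int = 2) -> str:
        for _ in range(retries + 1):
            try:
                return api_call(payload)
            except ValueError as err:
                payload = reform(payload, str(err))  # no ticket, just repair
        raise RuntimeError("could not repair the call")

    print(self_healing_call({"action": "reset_password"}))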
Speaker 4 (16:03):
So it's hard to say, because we're holding it back.
The question is what people will implement at their pace,
right, that's comfortable to them, which is really hard for
me to predict. And then systemic change: why do we have
an IT department? We used to have a computer
department in companies, and if you worked on a computer,
(16:24):
you were in the computer department, and you were probably a woman,
by the way, in nineteen seventy. Now if you work
on a computer, you're not in the computer department. You're
just an employee. So is there an IT department? Is
there a development department five years from now? I don't
think so. I think everyone's going to be in IT,
or no one, whatever your definition of IT is.
(16:46):
So I find that if you just look at any
project in IT, you just pick anything, you can apply
an agentic approach to it and improve what it is.
So the question is where do you begin? And to me,
it's: begin at the points of high pain. But
there is a point I'd like to make which is,
(17:07):
I think, super, super important. At this moment, the adoption
of AI is not happening at the company level.
It's happening at the individual level.
That's where the money is, that's where all the
money is being made. That's who's adopting at light speed.
Employees are like, oh, this company isn't using AI.
(17:31):
Yes, they are. Every employee, if they can save time,
they're using it. Just because it wasn't sanctioned, it's like
an illusion: ah, we don't use AI. Like, you're wrong.
Look around. And if you factor that in, you realize, oh wait,
what happens when that AI goes outbound?
(17:51):
When they unleash that onto companies, to apply for jobs,
to, you know, get better deals, to, in
bad cases, add mold to a candy bar and send
an email and try to get a free box of candy.
Speaker 3 (18:06):
Right.
Speaker 4 (18:08):
What happens when teenagers and everybody else start putting these
things outbound? This idea that IT can move at ten percent,
and call centers can be like, oh, let's get ten
percent better: they're going to be panic buying. They're
going to be like, we can no longer
handle this. We didn't expect the exponential curve of adoption
(18:31):
on the other side. So I think we're in this
moment where, yeah, look at all your current projects, look
at your pain points, and incrementally change at your
leisure, until a year from now, or eighteen months,
you'll be in a panic to get systems in place. And
that's what I'm worried about, actually. I'm worried about the
(18:52):
panic buying. While people are taking their time learning this stuff,
soon you'll have a bunch of people who don't know
what they're doing, implementing in a panicked way.
Speaker 3 (19:02):
We've talked several times on the show about the
evolution of shadow IT to shadow AI: people doing things
at the grassroots level that are unbeknownst to IT. Not
only are they doing it for their own benefit,
their own reduction of time, but I would argue, more importantly,
(19:23):
more directly, a reduction of their own effort. I
think they're looking at it as an easy button.
Why should I put in the effort if I can
just go have AI do it? And that shadow AI,
I think giving it a term like that almost
minimizes the impact and the risk of that happening in
(19:44):
an enterprise. So let's look at the impact in an enterprise, right?
My background is IT engineering and end-user compute; I
ran a desktop engineering team for many years. The
first time we saw an orchestration tool, it was awe-inspiring.
It was, oh my god, look at what we're going
(20:06):
to be able to do. The possibilities seemed endless, and
that notion of ten percent really rings true to me,
because we implemented it and we saw so many possibilities
and so many new things that we could do. Over
a very short period of time, it
actually died, because we didn't know who owned it. We
(20:28):
couldn't handle the inbound workload asking for new things, and
we said, you know what, there's too much that we
can do with it; we need to shut it down.
So talk a bit about the implications for IT's traditional
service models when we introduce so much change. Now
that we're managing IT with agents, does that help
manage that volume of change, or are the agents themselves
(20:53):
introducing too much change to be able to manage? How
do we see the nature of IT support evolving over
time to be able to accommodate it?
Speaker 4 (21:00):
Yeah, it's so hard. Again, we go back to Marshall's book,
the one we're still digesting, and he's dead. So
are we going to, ten
years from now, still be adopting what's possible today?
Are we going to slow this down? Or is this different?
Is it that all of us, as individuals, personally,
(21:22):
are adopting so fast, doing the greater-than-ten-percent, that
it's going to push companies to go faster? Very,
very hard for me to predict. That shadow IT,
you're right: it sounds minimal when
it's completely transformational. It's basically saying,
(21:43):
you know, people are automating their jobs and getting second jobs.
They're already doing that, and if they're not, they're
thinking about it. And they're getting away with it
because the company hasn't
quite realized they've automated their job yet. Why? Because
the company thinks it's in control: we haven't given
them the tools. Of course they have them. So where
(22:06):
is IT in that place? Do they think
they're still in control, and that people are just using
the tools that they're providing? Are they going to
keep providing tools that people aren't going to use,
because they're too slow to provide the
(22:27):
latest tools? I really don't know.
I think there's definitely an issue right now where you
have more restrictions on what companies
can do to employees and to consumers than consumers have
back toward companies, meaning companies can't robocall people, but
(22:51):
people can robocall companies. And so you have
this lopsided scenario where companies have more liability for
adopting the tools than people do. And now we're
going to have companies that need tools to react to
all the people adopting them. So will IT
(23:12):
just get overwhelmed, or does this get democratized? The
other thing I've seen folks talking about is this
idea that if IT doesn't move fast, if it doesn't
adopt, you know, at the pace of consumers and employees,
some companies will, and we'll see this
(23:34):
massive roll-up, where companies that do adopt will buy
the other ones that don't, because they'll have the leverage
to do so. So we're not necessarily going to
see them all go out of business; it'll just
be financially feasible to purchase them and
(23:56):
roll them up. Because it's not
like, we'll take your ten-person accounting team
and turn it into five and add it to
our accounting team. It's, well, our accounting team is
fully automated, so we'll just get rid of all ten,
and that gives us money to purchase you.
Speaker 3 (24:10):
It's an interesting paradox to think about: a
company introducing AI agents to their employees in a
top-down model and telling people, we need you to
automate your job so we don't need you anymore. It
would be pandemonium. But it feels like employees
are going to do it organically. They're going to automate their own jobs.
Speaker 4 (24:32):
That's what's crazy.
Speaker 3 (24:32):
Until they're not needed anymore.
Speaker 4 (24:34):
They're all sitting there going, my company's going to
automate me out of a job, and you're like, you
know you're the one doing it, right?
Speaker 3 (24:42):
What happens to IT support? You talked about this
notion of IT just organically being consumed into the business,
that we don't need those support channels anymore. We still,
and I think because it's the reality in most places,
talk about a classic tier-one, tier-two,
tier-three support escalation, because there are people and skill sets
(25:05):
and knowledge that are needed in that tier system. What
happens to that? Do the teams automate their way out
of it?
Speaker 4 (25:13):
Or...
Speaker 3 (25:14):
What are your thoughts on that?
Speaker 4 (25:16):
Yeah, it's kind of weird to think about. So Omar Santos,
he kind of leads the Coalition for Secure AI,
and he's from Cisco. He's at, like, the center
of security; you know, he's going to all these giant
data centers and consulting with Cisco. We just had
a conversation recently on my podcast, and he's connected
(25:40):
to a good friend of mine. And I think
he would tell you, like, security, for example: there's
no security department. Everyone's in the security department. These
silos break down. And if you're going to call
IT, what are you going to tell them in a future
of self-creating software that's been generated on the fly?
Like, your interface was generated right there in real
(26:03):
time as you're using it. It disappears; it's just part of
a transcript. It was never a CI/CD pipeline. The
bug was fixed when it happened. So are you
just going to call and say, hey, my
chatbot called me a bully, I don't like that? And
they're like, oh well, I'll have a talk with that
chatbot of yours? Is it just a place
(26:24):
to go, you know, vent about the personalities of
whoever made these things?
Speaker 3 (26:29):
You know, there's got to be humans somewhere, though,
right? If we let technology just manage, maintain, fix, deploy, upgrade, change,
where do the humans come in to play?
Speaker 4 (26:42):
It's a great question. It's a funny thing, like,
where do the humans come in to play? It's
like your dogs. You know, let's say you go
get three dogs, right, and they're running around, and you're like,
where does the human come in? I don't know,
these guys are playing with each other. But
they're there for you. You know, the dogs have
no reason to be in the house and to do
(27:05):
what they do if not for you, right? So I
think we think of human-in-the-loop a little
bit off the point of humans in the workplace. And
one of the economic reasons we pay
bosses more money than we pay employees is that we need
to give them more to lose, so that if they
(27:28):
make mistakes or are negligent, losing their job is
meaningful to them, and it's damaging to them.
And then the more responsibility you have, the more money
we give you, so the more you have to lose,
so the more seriously you take those decisions. This paradigm remains.
(27:51):
People have to take responsibility for these systems, and they
have to have something to lose when these systems,
or their objectives, are negligently ignored. So think: all
of their objectives have to be signed off, almost like
you're a CEO sitting there and you're getting this paperwork:
(28:11):
this one wants to lower our electricity bill by
doing X, Y, and Z. Okay, yes, I approve this.
I approve this. Oh, you approved this? You're fired. If
a car runs into somebody, it can't be the car's fault, right?
So with all of these machines, some human has to
take responsibility. It's not about them being needed to get
(28:33):
the job done; that's just an old paradigm. They're
needed to take responsibility for the objectives of these machines,
which means they have to pay attention. That's why you
hear guys like Elon Musk say it'll be
one guy and twelve robots, and he'll be watching over
those robots to take responsibility for what those robots are doing.
(28:56):
Because without that, we don't have a system. Our entire
system breaks down. Robots don't have anything to lose
when they make mistakes. There has to be a human.
We see this with Amazon Q,
as they've been working on migrating mainframe workloads over
(29:17):
to Amazon's services. That's been a hugely successful project,
and the principal designer on it
was a guy who used to work with me. You look
at what it used to be like trying to recode
that stuff into, you know, Amazon's services, or really open source, right,
and you look at what that looks like now,
(29:41):
and it's this event log. You wake up
in the morning and there's all these: yes, oh yes, okay, no,
oh yes, okay, no. And I almost think of
the Obama thing: are we all going to have,
like, only gray and blue suits, because
we're going to have to make so many decisions all day
long, with our systems going, should I do this?
Speaker 3 (30:02):
Should I do that?
Speaker 4 (30:03):
And you have to pay attention and read it.
You want to, like, diminish the number of choices you
have to make in day-to-day decision making.
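[Editor's note: a minimal Python sketch of the sign-off paradigm Robb describes: every agent objective lands in a queue, and a named human approves or rejects it, so responsibility attaches to a person. Names and structure are illustrative, not Amazon Q's actual workflow.]

    from dataclasses import dataclass

    @dataclass
    class Objective:
        agent: str
        description: str
        approved_by: str = ""  # the accountable human, once signed off

    def review(queue: list, decisions: dict, approver: str) -> list:
        # The morning "yes, okay, no" event log: each proposal gets a human
        # decision, and approval records who is responsible for the outcome.
        approved = []
        for obj in queue:
            if decisions.get(obj.description) == "yes":
                obj.approved_by = approver
                approved.append(obj)
        return approved

    queue = [Objective("energy-agent", "lower the electricity bill")]
    print(review(queue, {"lower the electricity bill": "yes"}, "ceo@example.com"))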
Speaker 3 (30:08):
Yeah, yeah, there's a whole socioeconomic show that we could have;
we don't have time for it today, right? The revenue
of a company will no longer be flowing into the
pockets of employees. It's flowing into the pockets of the
tech bros who own the AI platforms, right? They're the digital
employees now. So what's the impact there for society? But
that's a tangent that we're not going to go on.
(30:29):
For employees, especially in IT: how do you future-proof
your career? Talk about the skills, the habits, the
mindsets that will help them stay relevant as intelligent
automation reshapes their work. To
put it plainly, how do you stay employed in IT?
Speaker 4 (30:47):
Yeah, I think one of the challenges with IT
is that your boss and the apparatus above you
think they're in control of the adoption of AI,
and so they're slowing it down and
they're slowing you down, right? And now you're
(31:08):
falling behind, you know. So it's about not putting your
learning and your career in the hands of the people
that you work for. You're going to have to
lean into this stuff on your own. Don't wait for
your boss, you know, to approve the use of tools.
(31:28):
You're going to have to go tinker. You're going to have to
play with this stuff on your own, because you may
find that the rest of the company is more proficient
at technology than you are before you know it.
Speaker 6 (31:43):
It's so interesting. You know,
sometimes when you have a podcast you suffer, right?
Sometimes you've got questions and they get answered in the
course of the conversation, because the questions look a
little bit redundant by the time you get to them.
Speaker 2 (31:58):
But, you know, a couple of things I would like to
pick up on, actually. One
is, you talk about, like, the illusoriness of control in
this era, or you allude to it: the
impossibility of control, whether it's on an organizational level or
(32:18):
an individual level, or even a collective level. That would
seem to be a big transition for our society. And
what kind of a transition do you see that as?
Fundamentally, like, an existential, psychological one, because there's not a
pragmatic approach you could take?
Speaker 4 (32:42):
I had a conversation with Matthew Barzun, the former
ambassador for the US in Sweden
and then the UK, and he
talks a lot about decentralization and decentralized orgs, and
(33:04):
he points out that there are companies, like Visa,
that are decentralized. He talks about how cities and towns
are decentralized: in general, you don't have the mayor making
all of these decisions. So decentralization isn't necessarily this existential
new idea. It's just not common in companies. We still
(33:25):
run companies like dictatorships, and they're very centralized. And what
he would say is, you either decentralize or die, and
that means you're going to have to defer personal
responsibility onto your employees. You're going to have to let them
make decisions. You're going to have to trust them, just
(33:46):
like in cities we trust people to design their own
coffee shops. And so you've got to kind of rethink
your organization to run more like a city than,
you know, like Iraq used to be run.
Speaker 2 (34:03):
Mm hmm. And conversely, as individuals, rather than
necessarily entrepreneurs and business owners, or let's
say as employees, the same
ethos can be taken on board, but in a very different way,
right? Like, you have to consider yourself a
(34:25):
fully decentralized unit of economic production for your own life, effectively,
and any expectation of the traditional security offered by the
corporation, offered by your employer, that's
fast becoming an illusion you would do well to shed
(34:45):
on an individual level. Correct?
Speaker 4 (34:48):
Yeah. I think if there's something existential to a lot of people,
it's that they grew up with an apparatus, you know,
parents that took care of them. Then they went to college,
where really, you know, they were given rules and structure,
and then from college they went to a corporation that
doles out just enough work, not too much, not too little,
(35:08):
and pays them. And now they have to actually be adults
and take responsibility. They can't just sit there and wait
for parenting from their boss. And that is
existential for a lot of people. They're institutionalized. You know,
they can go through life, all the way to retirement,
(35:29):
being parented.
Speaker 2 (35:31):
Well, we've structured society with the idea that autonomy
is a rare quality, right? It's
the captains of industry, you know, it's the entrepreneur.
And everybody else, the rest of us...
Speaker 7 (35:48):
We don't have to exercise that characteristic per se.
Speaker 3 (35:53):
We are...
Speaker 2 (35:55):
We are the supporting cast, if you will.
Speaker 4 (35:57):
Yeah, it might be the one downside, or one of
the few downsides, of the middle class. You know, a
lot of people that are poor, they don't
have that structure and apparatus. You know, that's why
we see a lot of them become entrepreneurs, right? So
I think your class definitely can
(36:18):
contribute to how institutionalized you are, and probably that's
the class that gets disrupted the most psychologically, by
feeling like, wait a second, everything's just been provided to me.
What is this where I have to take ownership?
Speaker 2 (36:39):
The final thing, I thought we could tie it back
to more of the McLuhan, maybe, actually.
Speaker 8 (36:44):
Because actually, what you're describing, I think McLuhan talks
about: the movement from the literary society to the oral society,
from the directional to the instantaneous, from
the hierarchical to the integrated.
Speaker 2 (37:02):
And this almost sounds like the complete realization of that.
And I remember one image from his writing,
which was about the factory line resembling the line of
text, right? And that is
the fundamental basis not only...
Speaker 7 (37:25):
Of forms of work, but of the career itself, right?
The progressive career: you enter it, you move through
successive stages, left to right, and...
Speaker 4 (37:38):
You know, like, yeah, the linear versus the
circular, right?
Speaker 2 (37:44):
Yeah.
Speaker 4 (37:45):
The same with software: it was linear and now
it's circular. It fixes itself. Like, what
do I do in a circular world? We're used to
the assembly line, cradle to grave: it
starts here, it ends there. Now it never ends. What
are you talking about? And I think as designers, right,
especially designing things, that's where things need to go to
(38:07):
create less waste in the world.
If we designed more circular things, that would be better.
And circular thinking is definitely, you know,
a little more difficult and challenging, but it's kind
of where we're all headed. When I think of Marshall
(38:28):
McLuhan's global village, as you're talking about, right, the whole
we're-all-connected idea, there's something I wish I could
go back for. I wish I could talk to him, because
I have this new idea. Well, not
a new idea, but I wanted to blend an idea,
I think. You know, I'm always fascinated by society
(38:52):
somewhere in time. Like, we were anatomically the same,
generally the same, brains the same size, you know,
ten fingers, ten toes. But then something happens,
way, way, way down the line, where
we just suddenly start advancing, you know, Egypt, and you're like,
(39:16):
whoa, what? We didn't change. What happened? Right? There's
this puzzle as to what the heck happened
then, to turn us from hunter-gatherers to
where we are right now, where our lives
are so different, while we haven't changed.
And I think, you know, most will just tell you
that it was the concentration of people,
the ability for people to reach a certain populace where
ideas could easily travel and we could build upon old ideas, right?
So it's almost like earshot: oh, I
(39:59):
learned something, I tell you that, then
you learn something new and you tell the next person. And
so maybe it was just that every time we got
close to the density that would perpetuate this,
we'd get a virus or some natural disaster, or a comet
hits, and we're like, ah, man. But finally we
(40:21):
hit this density, and we could now talk and build
on ideas and iterate on ideas fast enough. And
maybe LLMs... I was thinking about the tribe in
New Zealand that took their language, which they
(40:42):
were losing, and trained an LLM on it, and now their
language will be preserved forever, even as the elders were dying.
And maybe knowledge has now taken this
leap where we can build upon it at an exponential rate.
So his whole idea of the global village probably ties
to this idea that concentration no longer
(41:06):
needs to be physical, because we have digital connectivity. So
now, even if there were a million people all spread out,
from a conversation standpoint, from a knowledge standpoint, it
would be as if they were all in the same location.
And now what do LLMs do to that, when not
(41:29):
only are we not going to lose old ideas
that were popular and that we knew about, but we're
also not going to lose old ideas that haven't become
popular yet, the ones that weren't popular?
Speaker 2 (41:44):
And you know, one last thing. It
reminded me of our conversation with Pedro last week, Tim,
where he mentioned that AI doesn't have skin in
the game, a phrase I thought was
very nice, but one Robb really fleshed out when he spoke about
taking on human responsibility being a kind of professional exigency,
(42:06):
if you will, or something that sustains the idea of
the professional in the future. An absolute pleasure to have you on,
Robb. Really enjoyed the conversation. Anything you'd like to
put out before we say goodbye,
Speaker 4 (42:19):
In terms of ways people can...
Speaker 2 (42:20):
Reach you, or read your material, et cetera?
Speaker 4 (42:25):
Not really. I think there's two things that
are coming that people should pay attention to. One I've
been working on for ten years, and that's the agent
runtime. Building agents is easy; we show third
graders building agents. But giving them
a place to work, a safe place, I think that's
(42:46):
a super important thing folks should pay a lot
of close attention to. And then I think
also understanding what's next after agents. If
you look at Nvidia and you look at these other folks: simulation.
I think it's really important to factor that in, to understand
that these things become more reliable when they can simulate
(43:08):
what they do before they do it. And I think
a lot of people are missing that. You know,
oh, AI is going to do X, Y, and Z
and make mistakes. No, actually, once they're superintelligent,
they'll be super good at predicting and simulating what they'll
do before they do it. They'll be better
at it than we are. So I think simulation is
(43:32):
a fascinating next step that we're all going to take here,
and it's a good one to factor in.
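[Editor's note: a minimal Python sketch of "simulate what they do before they do it": the agent dry-runs a plan against a model of the world and only executes if the projected outcome is safe. The simulator here is a trivial stub; the point is the dry-run-first pattern, not any specific product.]

    def simulate(plan: list, state: dict) -> dict:
        # Dry-run the plan on a copy of the state; never touches the real world.
        projected = dict(state)
        for step in plan:
            if step == "migrate_workload":
                projected["migrated"] = True
            elif step == "delete_old_system" and not projected.get("migrated"):
                projected["data_lost"] = True  # the simulation catches the mistake
        return projected

    def execute_if_safe(plan: list, state: dict) -> str:
        if simulate(plan, state).get("data_lost"):
            return "plan rejected in simulation; nothing executed"
        return "plan executed"  # a real agent would act on the world here

    print(execute_if_safe(["delete_old_system", "migrate_workload"], {}))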
Speaker 2 (43:39):
Love it, Robb. Great pleasure. It's been wonderful, man.
Speaker 4 (43:43):
Thanks for having me.
Speaker 3 (43:44):
Thanks, Robb.
Speaker 1 (43:47):
To make sure that you never miss an episode, subscribe
to the show in Apple Podcasts, Spotify, or your favorite
podcast player. And if you're listening on Apple Podcasts, make
sure to leave a rating of the show: just tap
the number of stars you think the podcast deserves.
If you'd like to learn more about how Nexthink
can help you improve your digital employee experience, head over
to nexthink.com. Thank you so much for listening.
(44:09):
Until next time.