Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Mark Smith (00:01):
Welcome to the Ecosystem Show. We're thrilled to have you with us here. We challenge traditional mindsets and explore innovative approaches to maximizing the value of your software estate. We don't expect you to agree with everything. Challenge us, share your thoughts and let's grow together. Now let's dive in. It's showtime.
Welcome back, everybody.
(00:22):
I'm excited to be on the show. It's like sparrow's fart my time, meaning it's very early in the morning with all the time zone changes, but I'm excited to see Mr Dorrington is in the house, one of my favorite, favorite, favorite, favorite people in the world. And Will, I am keen to hear the update on my new gym equipment that I'll
(00:46):
be buying later in the year, to follow in the steps of your new gym equipment. What are you up to?
William Dorrington (00:52):
So on that alone, yeah, I've been using Speediance and, honestly, it's phenomenal. It's the first time I've seen a bit of innovation in a home gym for a long time. So we'll be sure to drop the link to the YouTube video there, but we don't get endorsed by that, I want to add. Feel free for them to reach out to us.
Mark Smith (01:09):
But what are you finding?
William Dorrington (01:11):
The resistance training works well. Like, you know, it's phenomenal, it really is, and it just feels like weights. I mean, Chris came over and did the neighborly thing and helped me lift it up and had a go on it, and it was fantastic, you know. It's really good. Chris, don't you have any thoughts on that, with you being the gym king that you are?
Chris Huntingford (01:30):
Mate, honestly, I thought it was going to be sketch. I was like, okay, resistance training is okay. I always thought, felt like, resistance training is a bit lame because it's kind of cheating, like I pick up heavy things and put them down again. But after doing this I was like, actually, this is very freaking cool. I like it, and the thing that I love about it is that it
(01:51):
kind of auto-adjusts, as you can set it to auto-adjust as you go. So you know, when you're busting out that, you know, 100 kilogram bench press that Andrew does regularly, then you realize that and you can't get the last rep in. I don't even know.
Andrew Welch (02:06):
I don't really even have a firm grasp of how much 100 kilograms weighs.
William Dorrington (02:12):
So I just... that's how strong he is.
Chris Huntingford (02:14):
It means nothing to him. Andrew, do you remember those times we used to run and jump and cuddle me? Imagine I did that to you. Yes, ooh. Well, that's not good, that's not good. Yeah, it is. Ooh, that's so good, that's so good. Yeah, it is. Yeah. Anyway, it did, Mark. Short term, it works.
Mark Smith (02:29):
Nice.
Andrew Welch (02:31):
Well, to quote the great Forrest Gump, I have been spending my time exercising my arms. Oh, wow, wow.
William Dorrington (02:42):
What is great, to bring it back to that, is that although it's resistance training, all it does is mimic the weight, right. It goes up to 100 kilos, and obviously it knows the amount of resistance that equates to a kilogram. So you've got this ability. Yeah, you have a home gym that doesn't weigh a ton, that you don't have to have a load of plates or weights with, that goes up to a 100 kilogram difference. You have a ring that goes on your hand that you can adjust, so
(03:04):
if you're drop setting or up setting, you can literally just adjust the weight as you go. Any workout you can do at a gym is there. I mean, it is a little bit expensive, but for actual innovation... and it does a lot of AI coaching, adjusts to your goals, what you're up to, your fatigue, et cetera. It's just next level. It's really good.
Mark Smith (03:21):
You sold me on AI.
Chris Huntingford (03:23):
It is cool. The thing is, though, honestly, there's nothing like a good thumbnail.
Mark Smith (03:27):
What I found last year, in hindsight, was that the Black Friday sale is just ridiculous, and it's their only sale a year. And so I'm like, I've already measured up for where it's going and I'm ready for the Black Friday sale, and then I'll be on.
William Dorrington (03:45):
My biggest advice is go harass them, honestly. Their sales team want to sell, and if you said, look, if you offer it to me for the Black Friday price, I'll buy it today, they'll give you a link. It gets attributed to you. I'll tell you another thing about AI, though, and a nice little segue: have you seen what the US has published recently around their driving usage within federal agencies? In the last two days, right? Will
(04:10):
has said it in the last four minutes, since we discussed it just before the show.
Andrew Welch (04:14):
Have you seen it?
William Dorrington (04:15):
I was trying to get us back on track.
Mark Smith (04:20):
Nice, nice, I have seen it. I read it as bedtime reading last night. It came out on the 7th, which was the 8th our time, and it's the 10th today, so it's in the last 48 hours. It's very interesting because, for all the public bluster, it's actually got a bit of good guidance in there, I feel,
(04:40):
around responsible AI, and a lot of "benefiting America, benefiting America, benefiting America" all the way through. And I always just find, when countries do that, it reminds me of a dog pissing on a street light: this is my territory. And it's something like when I went to Belfast in Ireland.
(05:02):
The British have their bunting flags up everywhere, and I don't see it anywhere in Britain but in Belfast. And I said to the cab driver, man, it's like the Brits are here pissing on every flag post to make sure you're really clear it's British. Well, he was massively offended. "I'm British."
Chris Huntingford (05:19):
He said, well, I'm British.
Mark Smith (05:20):
"What do you mean? I'm not Irish." I'm like, mate, you sound Irish to me. And wow, did that reach back and grab me.
Andrew Welch (05:29):
Mark, I remember you telling the story. That was an absolutely stupid thing to say to the cab driver. I have no other way around it.
Mark Smith (05:36):
Yes, yes, I know, but you know, I'm a little... see how you... oh, you're irritating.
Andrew Welch (05:44):
Color me shocked. This is my shocked face here.
Mark Smith (05:49):
Exactly. So, you know, thoughts on what the US have said they're doing?
Andrew Welch (05:54):
Yeah, so it's really... it's interesting. You know, first of all, the fact that responsible AI even landed in this thing as a phrase is the first indication we have that Elon Musk obviously didn't read it.
(06:15):
So I joke, I joke, I joke.
Chris Huntingford (06:16):
I'm sure he's very responsible. You're going to land yourself in border control, my friend.
Andrew Welch (06:22):
I know, I know. No, but it's actually quite interesting. First of all, you do have to get... so I'm reading off of the fact sheet they did, and I'm sure we can put this thing in the show notes. They did a fact sheet, and there's some funny things in here, right? Like, there's a line in the first paragraph:
(06:43):
"These policies fundamentally shift perspectives and direction from the prior administration, focusing now on utilizing emerging technologies to modernize the federal government. The executive branch is shifting to a forward-leaning, pro-innovation and pro-competition mindset, rather than pursuing the risk-averse approach of the previous administration." So I mean, it's on brand, right? Like, we had a whole preamble that had to trash the Biden administration, which did just
(07:04):
make me laugh. Like, can you just publish a document and get to the point, or do we have to see this play out again? But neither here nor there. But I'm looking here. So, a couple of things that I thought were really interesting, that stood out to me. First of all, big section, big bold letters, all caps: promoting rapid and responsible AI adoption.
(07:26):
It's almost like someone like the Ecosystems podcast wrote that heading. And you know, it goes on to talk about how chief AI officers are tasked... this is, chief AI officers within federal agencies are tasked with promoting agency-wide AI innovation and adoption for lower risk AI, mitigating risks for higher
(07:50):
impact AI, and advising on agency AI investments and spending. Which made me laugh. It was like, oh my God, they read our white paper about incremental and differential AI and how to balance your portfolio of risk.
William Dorrington (08:02):
That's amazing. Do you not find it interesting, though, that if you look through that... we had a bingo card of all the things you'd hope to see, and they've got the words there. The bit that I find missing, and, you know, Mark did call me out and say, well, I only got it about three minutes ago, but from the quick skim read I have, I
(08:25):
can't see anything that goes: right, here's the criteria for classifying high impact and what it means to you. Here's actually an ethical framework. Here's how you can actually classify and do the risk classification. Here's how you map, manage and measure risk. It's just like, we're going to do this, we're going to get someone who's actually responsible, but go off and figure it out.
Andrew Welch (08:38):
Unless... can we drop a link to Cloud Lighthouse and the center for tested AI in the chat? Almost like I know someone who has these things.
William Dorrington (08:45):
But the key thing there is, it's all the right words, but the context behind it... it's like a GPT wrote it, right?
Chris Huntingford (08:56):
You know, I've got three of these open right here. So I've got this one, I've got the 50-point plan by the UK, and I've got the newest one by the EU, the AI Continent Action Plan. So I've got three of them open, right. When you go into the UK 50-point plan, it brings it all down to things like infrastructure. It's really detailed, and actually even the 50 points
(09:18):
drill into layers and layers and layers of content, right. And that's the UK one, by the way. When you go into the new AI Continent Action Plan, just go and look at the Q&A. I'll stick the links in the chat. They break it down so beautifully: computing infrastructure, data, skills development. But they bring out the human, the human-esque part of it. Whereas when I look at this, I'm like, how many GPTs did you run this through to make
(09:39):
these words, right? And without being mean...
William Dorrington (09:41):
It's the meat that I'm missing. But look at the political narrative as well: encourages private sector-led AI innovation, promotes US-made AI. It's going back to that sort of charge again.
Andrew Welch (09:54):
Yeah, I also do think that there's a bit of... you know, some cultural context is necessary here, in that, just anecdotally speaking, Europeans
(10:14):
love detail in a way that Americans really hate, right? So I think part of that is simply, like, if you were to give this topic to American government writers and then give it to European government writers, the Europeans will produce ten times more words every time. And I go back to that book, The Culture Map, right, that we've
(10:35):
talked about many times on this show, about how Americans are light on details. And now this is like, just go do it. So there's a little bit of cultural context there.
William Dorrington (10:45):
And I cannot believe that you guys have just backed me into defending the Trump administration on something. That's the podcast, that's it. Even with that in mind, though, the low detail... I do respect that. And let's face it, we're much more risk averse in the EU and UK. Think of GDPR and how we got that in, and how, as soon as AI came... yeah, yeah.
Mark Smith (11:11):
I agree. Stronger focus on citizen protection.
William Dorrington (11:15):
Yeah, I was trying to be nice by saying conservative. But it's still that focus on... they want to drive innovation, but as long as it's, and I don't mean this as a political point, it's a curious point, as long as it's domestically driven AI innovation. You know... obviously, and let's face it, the US are smashing some of the, you know, the AI game.
(11:35):
They've got some of the best models, et cetera. But it's interesting, their focus is only on US-made, rather than going, let's see how we can innovate and collaborate more openly. I don't know that the US do that.
Mark Smith (11:46):
I think... everybody tells us that the US does, and this is why I love the book you recommended, Will, The Coming Wave by Mustafa Suleyman, because he clearly has a wake-up call in there at how advanced China is. It's a great book, right. It refers to the US had their Sputnik moment. You know, when Sputnik was launched, that kicked the space
(12:09):
race off globally. It was America going, oh my, the Russians are beating us. And he said China had their Sputnik moment when he was at Google and AlphaGo, for the second time, beat the reigning world champion in China. Yeah, and the whole Chinese government system, legal system, et cetera,
(12:30):
said that was their Sputnik moment. And I think they're potentially way beyond even what they're letting out in market in their advancements in the space.
Andrew Welch (12:41):
You know, I think one of the more recent, not the most recent, but one of the more recent really high-profile bits of this was when DeepSeek hit the market and everyone freaked out and saw what this thing could do with relatively less computing power
(13:05):
than the other models that are out in the market. So I think that it is becoming increasingly clear that the US, and the countries that joined with the US on this to kind of choke China's capacity to build artificial intelligence...
(13:27):
that mission has not gone well, right.
William Dorrington (13:30):
It's getting increasingly clear that China is very, very much in the game. Look at the benchmarking and look at the parameter sizes, and, you know, I absolutely agree. I think the US and China are still sort of at the forefront there. Yeah. There is a conversation to have as well, which goes back to that sort of, you know, Coming Wave type approach,
(13:50):
which is: with the political environment we've got going on at the moment, and now the race against AI... we know that AI, if it gets out of hand, and it does become a "should we do it" or "we have to do it because otherwise they're going to beat us to it"... yeah, that creates a huge risk. And with tensions at the moment, it's a very interesting place to be. Would you have believed, ten years ago, the world would be so much more fractured?
Mark Smith (14:12):
I feel that... like, you know, I always thought we were going to a global world view and everything, and I think what's happened this year is now nationalism and protecting my nation state has become so much more the focus. I listened to a press
(14:36):
release from the... the CEO, the... I don't know what he's called, the president or prime minister of Singapore, this week.
William Dorrington (14:43):
Let's go with CEO.
Mark Smith (14:51):
I mean, his speech is so sobering. He believes we're going to a much more fractured world than ever before with what's happening, and that, you know, these will be dark times that are looming. And I'm like, wow, I'm hearing this thread a lot more. Mo, the guy that wrote... he wrote a book on AI.
(15:15):
He was the CEO of Google's moonshot programs, and the first book I read from him was Solve for Happy. I've talked about this before. But he wrote a book on AI, and his latest podcast is talking about us going into a 10-year dark period of where AI is going.
(15:40):
And I'm always now in this position: I'm so excited about what AI is doing, and there are these people that are way more knowledgeable than me in the space starting to talk in cautionary tales type thing, fractured nation states. And what's coming through quite strongly is it's not just the risk of individual bad actors, it's
(16:02):
nation bad actors coming much more to protect their national interests.
Andrew Welch (16:07):
I think we could teach a semester, or rather an entire PhD program, on this topic. But let's separate the AI implications from what's happening kind of fundamentally under the
(16:28):
covers, right, at a more base level. And it is absolutely true that, across the world, the political mood... parties and candidates are sweeping to power, or at least coming within spitting distance
(16:48):
of power, that support increasingly nationalist, isolationist tendencies, and I think there's a lot of reasons for that. This is something that is very, very close to sort of my first interest, before I broadened my horizons into tech.
(17:11):
So, guys, stop me before we go too far. But yeah, even setting aside AI, this is a serious problem, and I think it's a problem that we could have seen coming. I think that folks in the technology world, or in the global business world, or kind of the globalized world, really did not do a good job at all, and this
(17:32):
is an understatement, of bringing their fellow citizens along for the ride. That is absolutely true. But yeah, I do think that when you then add into that the dangers that are inherent in... we talked about this on a recent episode... the dangers inherent to AI's
(17:56):
ability to process and to interpret and to understand an ungodly amount of data, in a way that no human being can imagine... yeah, I think the world is going to be very different for at least a few
(18:17):
years, and I don't think it's all good. In fact, much of it is not good.
Mark Smith (18:22):
You know, you mentioned its ability to process large amounts of data. We've heard a lot about agents and that they are going to do a whole bunch of stuff for us, and I hate the term at the moment... or the overuse. I don't hate the term, I hate the overuse of the phrase autonomous agents. I've never seen one.
Chris Huntingford (18:45):
It's because they don't exist. That's because it's Power Automate. That's all it is. It's Power Automate.
Mark Smith (18:50):
I think, as Donna put it, it's sparkling automation. What did she call it? How do you say champagne... the autonomous region of Champagne? She said if it's not from the autonomous region, it's just sparkling automation. Yes. No, it's dumb.
Chris Huntingford (19:10):
So first of all, I just want to tell you guys something wild, right? Just back on Andrew's point: you know China is really big on WeChat, right? Yeah, huge. Yes. Now you have a thing called pay-by-palm in China, so you don't carry your phone around, you carry no cards around with you, and you pay with your palm, right. Now, that's interesting, right, because that turns this really
(19:33):
weird, in the fact that if I am going to pay for something, I don't have to carry stuff around. But the concept of bad actors in that scenario makes it way more difficult, because now you have a thing attached to your body that people want, right. So that is AI.
Mark Smith (19:49):
I've always had that.
Chris Huntingford (19:52):
Oh, wow.
William Dorrington (19:54):
Mark's face lights up, and I knew it. He's just so happy.
Chris Huntingford (19:59):
He's just so happy. But isn't that scary? Anyway, so that's one thing.
Andrew Welch (20:04):
The other thing is... it's the bald part. For those who are listening to this without the video feed, I am burying my face, my head, in my hands.
Chris Huntingford (20:15):
I think it's the greatest thing. You've got to line him up, though. The other thing that's interesting... so this agents thing, I'm going to call bullshit, and I'm going to say it right now. I think that there are a lot of people that are like, oh, we're making agents in this technology and we're chaining them. I put out a post recently. I'm like, okay, show me.
(20:40):
I'm like, show me, show me how you're doing this. I want to see how all of you genius people are doing this. And I got maybe five responses, and none of them were accurate. And I'll tell you why. Because, number one, an agent requires orchestration and memory. Okay, now here's the thing. I'm not even going to start there. I'm going to start with autonomous agents. So we have a thing in Copilot Studio called an autonomous agent. Okay. Up until extremely recently, in fact the 2nd of
(21:03):
April, we only got agent flows, which is basically Power Automate in Copilot Studio. And then we discovered we had a thing called a trigger, where you can invoke an agent. Now, tell me this: how is an agent autonomous if you don't have a mechanism of firing it? Well, you do now, with the triggers. Okay, but it's been Power Automate for years. So I started digging into a bunch of other areas. Now, not
(21:26):
AutoGen, because I'm not a coding expert, right, so I'm happy to be kept honest here. But I'm working on a project right now where we are using Semantic Kernel. In Azure AI Foundry we're generating agents using Semantic Kernel. We are putting the correct descriptions in. We have found no way to orchestrate and plan them, because the planner functionality isn't there,
(21:47):
that we've found. So I'm happy to be told it is. And if somebody can show me, and I would love to be shown... if somebody can sit me down and physically show me there is a way to have five agents that you build, with deeply descriptive information about the agent and what it does, and you can physically fire off those agents based on an orchestration
(22:09):
engine and a planning engine, I want to see it. Because all I can see now is deterministic chaining between agents.
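The pattern Chris is objecting to can be sketched in a few lines. This is a minimal, illustrative example, not any framework's real API: all three "agents" are hypothetical stubs, and the point is only that the call order is hard-coded rather than decided by a planner.

```python
# Deterministic chaining: agent A always feeds agent B always feeds agent C.
# No orchestrator or planner decides which agent runs next, which is why
# this pattern isn't "autonomous" in the sense discussed above.
# All agent functions are hypothetical stand-ins.

def research_agent(task: str) -> str:
    """Stub agent: pretend to gather facts for the task."""
    return f"facts about {task}"

def writer_agent(facts: str) -> str:
    """Stub agent: pretend to draft text from the gathered facts."""
    return f"draft based on {facts}"

def reviewer_agent(draft: str) -> str:
    """Stub agent: pretend to review and approve the draft."""
    return f"approved: {draft}"

def deterministic_chain(task: str) -> str:
    # The wiring is fixed at write time. True orchestration would instead
    # let a planning engine pick agents from their descriptions at run time.
    return reviewer_agent(writer_agent(research_agent(task)))
```

Whatever descriptions you attach to these agents, nothing here reads them: the chain is the same every run, which is the distinction being drawn between chaining and a genuine orchestration and planning engine.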
Mark Smith (22:15):
Yeah.
Chris Huntingford (22:16):
That's all I can see. I cannot see anything that's intelligent.
Andrew Welch (22:38):
So, where I'm calling bullshit, right, is in general purpose AI, right. So Copilot, or ChatGPT, or a general purpose AI development tool, something that is meant to bring AI development to laypeople, right, like to folks who are not data scientists, who
(23:01):
are not engineers specializing in the field, right. I think that a lot of this right now can seem and feel very underwhelming, right. The real capacity and the real capability that I think is very impressive in the world of AI right now is in
(23:21):
very high-end purpose scenarios, purpose-built solutions, right. So, excuse me, I do think that, to some extent, the rush to get consumer-grade agents, autonomous agents, into market is, in a way, hurting the story.
(23:45):
Right, because coming to market, they're very underwhelming. They're very underwhelming compared to what I think people expect, and they are underselling, to the average observer, the capacity and the capability that is out there beyond the reach of the casual observer.
Chris Huntingford (24:08):
Dude, in breaking that down, what I'm seeing is extremely watered down. Yes. And in order to achieve true agentification, you have to write reams and reams and reams and reams of code, really. And I've seen it. I've seen the introductory course to agents. I've seen what it can do. They've done... Microsoft have done a good job of that.
(24:29):
But AI introductions, man, it's hardcore. Like, we got some great code snippets from it, by the way, and I recommend looking at it. But if you are inexperienced in writing code and you don't know how to build Azure... sorry, function apps, to drive action, you don't have an agent, sorry. You have, at best, a thing that does retrieval augmented
(24:49):
generation, and maybe at best, yeah.
Andrew Welch (24:52):
So, in defense of all of this, I will say that two years ago, or even 18 months ago, the idea of real consumer-grade RAG seemed very far off, right. Now, I think that we all looked at it and we said, okay,
(25:12):
it's not actually as far off as it seems, right. So, in the grand scope of IT time, the fact that we've gone from where we were 18 months ago on consumer-grade RAG to... especially the ability of laypeople, and folks again who
(25:33):
are not experts in this, to generate AI scenarios that are fairly decent...
Chris Huntingford (25:44):
Dude, it's not. It's a chatbot on data, mate. You can do this with search. This is not rocket science. This is applying search to a chatbot and whacking some data in there, right. The fact that it's generative is fine, but this is not rocket science. What I think is rocket science is where my post said... I had a picture of three agents and I said, how do I link them, how do I
(26:06):
make this truly autonomous, and how do I create this orchestration engine? And this is it. Yeah, yeah. Because not many people could tell me. There were three posts on there, two of which were Copilot Studio, and, by the way, Copilot Studio is only autonomous now to an extent. There is no orchestration. There is not. It is deterministic, folks. If people say it's non-deterministic, it's not.
(26:28):
It's deterministic.
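Chris's "applying search to a chatbot" description of RAG can be sketched very literally: retrieve the best-matching document, then hand it to a generator. This is a toy illustration under stated assumptions: the corpus, the keyword-overlap scorer, and the echoing `answer` stub are all hypothetical (a real system would use vector search and an LLM call).

```python
# Retrieval augmented generation in miniature: search step + generation step.
# Toy corpus and naive keyword-overlap retrieval, purely for illustration.

CORPUS = {
    "gym": "Resistance training mimics free weights up to 100 kg.",
    "policy": "The fact sheet tasks chief AI officers with risk advice.",
}

def retrieve(query: str) -> str:
    """Return the corpus document sharing the most words with the query."""
    def score(doc: str) -> int:
        # Count overlapping lowercase words; real retrievers use embeddings.
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return max(CORPUS.values(), key=score)

def answer(query: str) -> str:
    """Ground a reply in the retrieved context. The generation step is a
    stub; a production system would prompt an LLM with this context."""
    context = retrieve(query)
    return f"Based on: {context}"
```

The point of the sketch is how little machinery the basic pattern needs, which is exactly why the passage distinguishes it from multi-agent orchestration.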
Andrew Welch (26:29):
Well, this is the Downer Jam podcast. No, screw that. This is the Downer Jam episode.
Mark Smith (26:36):
I think we're going to get to a switching moment when, all of a sudden, we'll get that aha. I think what's happened is that, like anything, marketing needs to own phrases, needs to own words, et cetera, and so they go hard on this, even though the actual reality
(26:58):
of what they're saying doesn't exist. I've found it very interesting: in a podcast that he did in January this year, he talked about something that he feels Microsoft will solve this year, which is memory. AI, to get to its next level, needs to have memory. It needs to be able to go,
(27:18):
you know what? I remember this happening, I remember that happening, and therefore that means something different. And I'm not seeing any of the big LLMs in market, the commercial, off-the-shelf, over-the-counter LLMs... none of them are really nailing memory yet. But I think that when they...
(27:42):
Like pills, like pills, the paracetamol of the agentic race. Yeah, OTC, OTC. I used to work in the pharmaceutical game for a while, and OTC drugs you didn't need a prescription for: over-the-counter, yeah. So yeah, I think memory is going to solve a lot of that. But at the moment, if I've got to spend more time telling it to fire
(28:03):
triggers and do this... these are if-then statement-based automation.
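The memory gap Mark describes is often worked around today by keeping a rolling record of past turns and prepending it to each new prompt. Here is a minimal sketch of that idea; the class name, the fixed-size window, and the semicolon-joined recall format are all illustrative choices, not any vendor's implementation (real systems typically have an LLM summarize and compress the history instead).

```python
# Rolling conversation memory: remember recent events, forget old ones,
# and fold what's remembered into the next prompt. Illustrative only.

class ConversationMemory:
    def __init__(self, max_items: int = 3):
        self.events: list[str] = []
        self.max_items = max_items

    def remember(self, event: str) -> None:
        """Record an event; the oldest falls away once the window is full."""
        self.events.append(event)
        self.events = self.events[-self.max_items:]

    def build_prompt(self, question: str) -> str:
        """Prepend remembered context so the model can 'go: I remember
        this happening, I remember that happening'."""
        recalled = "; ".join(self.events)
        return f"Known so far: {recalled}\nQuestion: {question}"
```

The limitation is visible in the sketch itself: anything outside the window is simply gone, which is why "I remember that happening and therefore that means something different" is harder than it sounds.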
William Dorrington (28:10):
It's deterministic. If we look at this from a business point of view, with our Microsoft hat on, if we were on the board, we'd go: right, this is our first stepping stone into some intent that we have. So I always say it's not agents, it's agentic light. It's agents with extreme seat belts, that you can't even get to the point of agents just yet. You know, there is... I like that, extreme seat belts.
(28:32):
It's good, I'm going to use it. Yeah, but it's true, because there is... it's not truly autonomous. Okay, it still triggers, it still flows, but what it can do is look at intent, right. So it has an element of it, and you can see where the route is going. And, a bit like all Microsoft tools nowadays, they really polish it up... they used to polish it up back in the day and then get it out.
(28:54):
Now it's more, let's get it out and we'll improve it as we go. So, you know, you can see where they're trying to head to. But I absolutely agree with you all, which is surprising, isn't it? But yeah, it's not quite there yet, Andrew.
Andrew Welch (29:06):
Just a little bit off topic: I was laughing, Will, when you used the phrase extreme seatbelts, and it reminded me of one of my sort of favorite humorous political moments from another time, before Donald Trump had ever been elected president. There was a guy in the US, a politician who I respect and admire, a guy named
(29:27):
John Kasich, and he was running for president, and in a debate, right, he said something to the effect of: if I'm elected president, you should go out and buy yourself a seatbelt, because we're going to be moving so fast it's going to make your head spin, or something like that. And it left all of these commentators in the news the next day being like, where do you buy a seatbelt?
(29:52):
Like, where do you go if you just want the seatbelt? I'm sorry, so a little bit of a rat hole, but it made me giggle.
William Dorrington (29:59):
The reality is, though, right now, that if we did have truly autonomous, agentic AI, a lot of us would be struggling for adoption, and it's good that we have time to breathe and get responsible AI frameworks in place, and the technology maturity and the mindset maturity. The biggest skill...
Mark Smith (30:17):
I feel that... I haven't seen a course on it, and I'd love to find a course on it... that people are going to need to learn is the ability to truly delegate. Yes. And I don't think people are ready for what delegation means. And if you've never been in a managerial position, delegation
(30:38):
is not just "go do something". It's putting the parameters around what my expectation is of an outcome, what my timeframes are. There's a whole bunch of stuff that goes around good delegation, delegation that gets meaningful results. It's almost like in Scrum: what's the definition of done? When you come back to me and say you did what you said you
(31:00):
were going to do, does it meet this bar at a minimum, or did you go beyond, type thing. And I think there's going to be a need, when we go to this autonomous, which I think is definitely on the near horizon, it's really close... I think we're going to have to get really good, individually, at delegating, and knowing how to
(31:22):
put all the nuance around delegating.
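Mark's point about delegation as parameters plus a definition of done translates naturally into structure. This sketch is purely illustrative: the `Delegation` dataclass, its fields, and the substring-based done check are assumptions for the example, not part of any agent framework.

```python
# Delegation as data: an expected outcome, a timeframe, and explicit
# "definition of done" criteria to check the result against.
# Illustrative sketch only.

from dataclasses import dataclass, field

@dataclass
class Delegation:
    task: str
    expected_outcome: str
    deadline_hours: int
    done_criteria: list[str] = field(default_factory=list)

    def meets_definition_of_done(self, result: str) -> bool:
        # Done only if every stated criterion appears in the result.
        # A real check would be richer, but the principle is the same:
        # the bar is written down before the work is handed off.
        return all(c.lower() in result.lower() for c in self.done_criteria)
```

The design choice mirrors the Scrum analogy in the passage: whether the delegate is a person or an agent, "did you do what you said you'd do" is only answerable if the minimum bar was specified up front.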
Chris Huntingford (31:30):
So I want to tell you something, man. Maybe we should wrap with this. I was chatting to a very good friend at Microsoft, and she said to me the other day... because I was whinging about exactly this, right, because I do get mad and I do get angry. And the reason I get angry is because I have to spend hours and hours and hours... literally, nearly two weeks on this, trying to figure out where this invisible thing is. And she said, would you prefer that we were behind, or way ahead,
(31:51):
in our messaging? And I said, you know what I'd prefer? You were way ahead. And I agree with that, because I think she's right.
William Dorrington (31:56):
Love it.
Chris Huntingford (31:57):
It signals intent. Yes, it does. It does signal intent. So it gives us something to go for, and if I hadn't been in the position where I was so bummed out about trying to figure out how these things connected... I do have a message behind it right now, and I do know how they work, right. But I spent so much time researching, and actually I'm grateful to Microsoft for being ahead and having that ahead message, right. And I think it is important.
Mark Smith (32:25):
I think you're on point, and what I'm going to say is not an intentional flex, but it's going to flex. I'm 50% through writing a book on Copilot adoption for Microsoft Press, yes, and it's forced me to use Copilot much more than I ever have. My default tool has always been ChatGPT. I'm on the $200 a month plan. In fact, I just saw that Anthropic has brought out
(32:47):
overnight a $200 a month plan as well. I use Perplexity, I use Anthropic, I use... what's Elon's one? Yeah. But what I have found is most of these models are verbose in their responses, right. There is just so much fluff and, you know,
(33:13):
self-flagellation type stuff in their prose. What's that?
What's that?
Andrew Welch (33:19):
The ultimate mansplainer.
Mark Smith (33:21):
Yeah, yeah, mansplainer, yeah. What I have found is Copilot... and Copilot is really now becoming my favorite tool, because the more I use it inside my organization's data, it is actually getting smarter. Yeah, it is. And it is really concise, and it is not full of fluff. One
(33:42):
of my kind of secret hacks at the moment is that if I want to write something, I open up Teams, I set a meeting with myself, I put on transcription, I turn off the camera, and I will talk for 30 minutes to it. Like, I'll just riff all the ideas that are going on around whatever topic I'm focused on, and then I close it down.
(34:05):
I'll give it a couple of minutes, and I'll go back, and there's the transcription, but next to it is a summary of my thoughts. And then I go into prompting against it, and I'll go: listen, don't take out any of my storytelling, any of my voice, anything that's me, but can you structure my thoughts? Because they were really haphazard. And what comes out is
(34:29):
fucking amazing.
Andrew Welch (34:30):
I mean, let's just stop for a minute to observe that a 30-minute meeting in which you're the only one doing all the talking sounds like a typical meeting for Will.
Chris Huntingford (34:47):
Okay, you fools. Big love.
Andrew Welch (34:49):
I'm sorry. I'm sorry, I couldn't... All right, are we done? Yeah, we're done.
Mark Smith (34:52):
This is awesome, guys. I love it. Will, I'm so pleased to see you back online. It's, you know...
William Dorrington (34:58):
I've missed you guys, I have. This is beautiful. We missed your musk.
Mark Smith (35:01):
I can't wait. You know, May is coming at us like a freight train, and we get to hang out again.
Andrew Welch (35:06):
Yes, we do. Dynamics Minds.
Mark Smith (35:09):
I love it. See you guys soon. Ciao, ciao. Peace. Later, guys.
Bye guys.
Thanks for tuning into the Ecosystem Show. We hope you found today's discussion insightful and thought-provoking, and maybe you had a laugh or two. Remember, your feedback and challenges help us all grow, so don't hesitate to share your perspective. Stay connected with us for more innovative ideas and strategies
(35:32):
to enhance your software estate. Until next time, keep pushing the boundaries and creating value. See you on the next episode.