Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Isar Meitis (00:00):
Hello, and welcome to another episode of the
(00:03):
Leveraging AI Podcast, the podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and we're going to cover an incredibly important topic today, and that is how to build tools that deliver standardized results across the organization.
Now, one of the big issues that companies are facing as they
(00:26):
start implementing AI is that they're basically generating lots of local chaos, because there are a few people who are excited about AI who create local initiatives that they start using, which means many different people are doing many different things in many different ways. And the bigger the organization, the bigger the problem becomes, because as an organization, you need to have standardization.
(00:47):
You need to get consistent results; otherwise, you can't run your business, regardless of what you're doing. Whether you're writing project plans, generating reports, or writing proposals, you want them to always follow the standard. And you can actually use AI to increase standardization and get results that are more consistent
(01:08):
than even humans could get on their own before AI was ever created. So this is obviously music to the ears of any manager, whether middle or senior management: how can we use AI to create consistency across the organization, so there are standards that everybody is actually following, and not just sitting in a folder somewhere.
(01:30):
And to help us learn how to do this effectively, we have with us today Nate Amidon, who is an incredible person with a really unique history. He started his career as a C-17 pilot in the US Air Force. For those of you who don't know, the C-17 is one of the most impressive airplanes ever built. It is a gigantic and yet incredibly powerful platform.
(01:52):
And as an Air Force pilot myself, I can tell you that it's a big deal to know how to fly these things. Now, after that, he spent 10 years leading large projects at companies such as Boeing, so he has the right experience to know what large organizations actually need, how they work, what the pitfalls are, and how to
(02:13):
solve them.
And for the last seven or eight years or so, he's been running his own consulting company, where he helps enterprises develop effective teams and processes, implementing what he learned in the Air Force. He also hires people coming out of the military, which I highly appreciate, again, as a veteran myself. So that by itself says a lot about him as a person.
(02:36):
Now, what he's going to share with us is two automation tools that he has built for actual enterprise clients, tools that help deliver clarity and consistency across multiple aspects of the organization. And he's going to show you exactly how you can develop these on your own, so you can do this as well.
(02:59):
So this by itself is a very important topic that I'm sure you're excited about, but to make it even more exciting, he's going to show you how to do this in Microsoft 365.
Now, why do I find this exciting? Because we very rarely do Microsoft 365 Copilot things on the show. We've touched on Gemini and Claude and ChatGPT and open source models, but we very rarely do anything with Copilot for Microsoft 365, and I know many of you are Copilot users, because you work
(03:22):
in large enterprises, and about 70% of large enterprises use Copilot.
So there are two very good reasons why you should stick with us and listen to this episode. One is that we're going to show you how to build consistency across processes and projects in the organization. And the other is that we're going to show you how to do this with Microsoft Copilot. You can obviously take this to any other tool you want, but if you are in the Copilot universe, you get another bonus,
(03:45):
and this is why I'm really excited to welcome Nate to the show.
Nate, welcome to Leveraging AI.
Nate Amidon (03:50):
Ah, thanks, Isar. Happy to be here. You know, one thing you missed in the intro was that the C-17 is the best looking aircraft as well.
Isar Meitis (04:44):
Debatable. Debatable. Hence why I did not mention that, because I knew this was going to go somewhere, and the F-16 is pretty cool too, which may derail the podcast. But the C-17 is an incredible, incredible platform, I will definitely give you that.
(05:07):
Okay, so tell me, let's really dive right in and talk about the first example that you want to share with us. Maybe start with what it is and how some of your clients are using it, and then we can dive into actually how to build it.
Nate Amidon (05:18):
Right. One thing you said in your intro that I think is really important is that AI really can just create chaos. It can, in some ways, be a chaos generator. And the larger your organization, the more people you have that can be off creating their own things. And as these tools become more readily available and
(05:40):
user-friendly, pretty soon they're going to start sharing those out, right? And if you think about the person in your office who you definitely don't want creating the standard AI, that person could be the one that carries it out. So I think organizations need to be thinking a lot about, hey, how are we actually going to handle this? How are we going to build our own internal
(06:01):
governance process, to make sure that the AI is being used in a way that we want, that fits our culture, that fits the objective of our team, of our program, of our organization? And so I think leaning into these early is going to be a good idea.
Isar Meitis (06:20):
Agreed. Yeah. I think the grassroots aspect of AI is awesome, in that it allows more people to be innovative and do things for the organization, for sure, as long as you can control it. Once you lose that, then like you're saying, it's just going to create more chaos than you had before,
(06:41):
right? Which is definitely not the goal. The goal is to create more productivity, more efficiency, better results, but not more chaos, because we usually have enough chaos in organizations as it is right now.
Nate Amidon (06:52):
Yeah, absolutely. And it's really a balance, because you want to let people go off and do science projects. You want them to experiment, to come up with things that are valuable. It's a lot like hackathons in the software development realm; even those are valuable for software engineers, because you come up with ideas that could be implemented
(07:15):
and save the company huge amounts of money. But at the same time, you can't have a team spending six months building a science project that isn't validated. So it's a huge balance, and I think AI is going to look a lot like that. I think we need to do AI with a lot of the agile principles that we used in software development,
(07:35):
and we should treat our AI agents as software products, because that's what they are.
So what we've been doing with our clients, and let me just preface this: I'm not a software engineer. I don't have really strong technical chops. I mean, I get scared by Excel macros sometimes.
(07:57):
So I don't go super deep in the hands-on-keyboard realm, but we think a lot about optimizing process and driving alignment in an organization. For us, as Air Force pilots, it's really important to know what the mission is and who's doing what. And when you have multiple aircraft in the airspace, they all have to be coordinated and talking.
(08:18):
And that same principle applies to technology organizations, or really any organization that's in the complex domain. So we've always done automations, through tools like Azure DevOps and Jira, and through general processes. And what's really happening now, with Microsoft and Gemini and GPT all doing the same thing, is that they're
(08:39):
kind of democratizing some of that technical ability for people who aren't hands-on-keyboard engineers.
Isar Meitis (08:47):
Yeah.
Nate Amidon (08:47):
So a lot of the things I'll show you, you could have built before AI. AI is not really unlocking them; it's just making it a lot easier for people to do it themselves, without a ton of technical background.
Isar Meitis (08:59):
A hundred percent.
Nate Amidon (09:00):
Yeah.
So let me just give a quick overview of our philosophy of how we do this. I'll do it really fast, but I think it's really important to set the stage, because like I said, these aren't really technically impressive. It's more about how they're applied inside the process. So we have four key areas that we focus on.
(09:21):
The first one is that we should be building these AI solutions to make humans better, not replace humans. They should be enhancing human ability. That's important from a change management perspective, because a lot of times that fear can stop people from even adopting. The second: anytime we're going to build an automation, it has to be valuable, which is the biggest common sense thing ever.
(09:42):
We define valuable as: it saves time, or it increases quality. So you could create an automation where it takes longer for someone to do something, but it's more valuable, because the end product is a lot better, and that improves the systematic value.
So we need to think about big systems and value. And automations can't create bottlenecks.
(10:04):
If you can automate your task and do it really fast, but it causes 10 other people's tasks to take a lot longer, you haven't added any value, right? You've just moved your work to somebody else. So valuable is important. The third area: we think about this incrementally.
(10:25):
We don't always try to build the fully complete automated solution; we'll do things in an 80% stance. If you can automate parts of your task, you should do that. It's okay to roll out small pieces of functionality and automation, instead of trying to build one big thing. And then the final area is that we should always think about
(10:45):
sustainability. Especially in these larger organizations, if you build automations that don't adapt and change with the underlying business processes or changing market conditions, and no one is managing them, they're going to drift and become, best case, useless, and worst case, counterproductive.
Isar Meitis (11:04):
Yeah, great points. I think the interesting thing about what you're saying is that all of it makes perfect sense, and yet a lot of people and organizations do not follow these things. But yeah, I think human-centric and valuable are two really critical aspects. You want to give people superpowers;
(11:25):
then they will be, a, happier employees, and, b, more productive, which will drive the organization as a whole. And you want to make sure that what you're developing is not just a cool project, but actually provides value to the organization. And the interesting thing is, and I'll say my 2 cents and then I would love to hear yours: how do you measure value? Because value can be measured in several different ways. So when I work with clients, I measure value in four ways.
(11:47):
Three of them are very obvious. One is that it drives up the top line, so that's simple. The other is that it reduces cost, which improves the bottom line, which makes sense. The third is that it removes tedious tasks and frees people to do higher-value work. So that's still
(12:08):
simple and logical, but without a direct, immediate correlation to a dollar value. And the last one is an even further extension of that. I believe that having happy employees is one of the goals of an organization, because I believe the goal of life is to be happy, and so that's just an extension of that.
(12:28):
We spend so many hours at work, we should try to make the people around us happy. So whatever we can do to relieve employees of the things that make them unhappy is valuable, right? It may not be captured in a KPI or an OKR or whatever other acronym you want to put on it, but I find these things valuable. That's the way I do things, though;
(12:50):
I'm very curious to hear how the organizations you work with capture or define valuable work.
Nate Amidon (12:58):
Yeah. So we work primarily with software development and technology organizations, usually in mid-to-large enterprise, multiple-team type scenarios. And defining value, forget about AI for a second, defining value is one of the biggest hurdles, especially in these really large enterprises,
(13:18):
like Boeing, for example. There are so many moving pieces, and when you get down to the team level, the question of what is it that you actually do, and why does your team exist, is a hard question. The smaller your team, the simpler it can be. If you have a startup: hey, are we making more money or not making more money? But you can't correlate that to, say, a backend API team
(13:42):
in a super integrated system in a large enterprise. It becomes really hard. So defining value is a real problem. And what you brought up really made me think about back in the eighties, when manufacturing companies first got automated robots onto their assembly lines. There's a book out there about that called The Game, if you're
(14:03):
familiar with that. Or The Goal, sorry, not The Game, The Goal.
Isar Meitis (14:06):
Yeah. That's way before the eighties.
Nate Amidon (14:08):
Yeah, it's back there, right? Before we were born. And what's crazy is they had these same problems.
Isar Meitis (14:16):
Yeah.
Nate Amidon (14:16):
Which is, they're automating things, and they're like, hey, this one task is way faster, we're getting a huge ROI. But at the end they weren't at all, because they were just creating a
Isar Meitis (14:26):
bottleneck somewhere else. And then your ability to create additional capacity makes no difference.
Nate Amidon (14:29):
Yes. So, putting on your lean hat, the Lean Six Sigma stuff is going to become way more important, and the process of implementing AI is going to be more important than the actual AI capabilities themselves, especially in the larger enterprise.
Isar Meitis (14:46):
Agreed, a hundred percent. So let's dive in. What's this particular first automation?
Nate Amidon (14:50):
Okay, so let me just say there are two kinds of areas that I focus on, and these are really lightweight automations. I'm not going to wow your audience with my technical skills here. But there are really two ways I think some of these capabilities can help. I look at them right now as:
(15:10):
you can have agents that are informational, and agents that are process improvement agents. And so I'm going to show you one of each, and go through what they look like, what they are, and the use cases for them. So I'm going to start out with what I'm calling a ways-of-working agent.
(15:32):
All right.
The problem statement here is that there are a ton of different process steps in these bigger, larger enterprise organizations. There are so many regulations. The PMO might have its own structure, you have an agile
(15:53):
coach, or you have a director, and they all want certain things done a certain way. So following the process isn't hard, but knowing what the process is, is hard. So what we've done is build an agent to make it easier to know how we're supposed to do things, and also to get information about the process.
(16:13):
So let's set the stage there, and then let me share my screen and we can walk through it.
Isar Meitis (16:20):
Awesome. For those of you who are listening and not watching this on YouTube or something, we are going to explain everything that's on the screen, so you can follow along with us. You can also always go and watch this on YouTube; there's a link in the show notes that can take you straight to the YouTube channel. But if you are walking your dog, doing the dishes, jogging on a
(16:41):
treadmill, or driving, for sure, you can stick with us just listening, and we will tell you exactly what's on the screen.
Nate Amidon (16:49):
Perfect. I'll try to give progressive descriptions here as we go. Okay, so first, what we're talking about is Microsoft Copilot. If you have a Microsoft account, let's say at a larger enterprise organization, you probably have Copilot. You can get to it through your Teams instance; on the left hand panel, there's a Copilot tab. Or you can
(17:11):
get to it through your M365 account, which is those nine boxes at the top left.
Isar Meitis (17:17):
Yep.
Nate Amidon (17:17):
And when you select those, you can find the Copilot tab, which I already have open here, and it will pop into your M365 Copilot instance. Now, on having Copilot, I'll talk a little bit about licensing. Essentially, the gist is that you may not just
(17:38):
automatically have access to agents in an M365 Copilot instance. The way you can check is to look on the left hand side. If you see Agents, you have it. If you don't see Agents, you don't, and you should go talk to your system admin to request a license. They're not overly expensive;
(17:59):
they're maybe $20 a month, I think it's somewhere around there. It depends on the contract that your organization has with Microsoft.
Isar Meitis (18:07):
A question for you, as I'm not an expert in the Copilot universe: if you have Agents in the left tab, does that mean you can create them and use them? Or if you just want to use them, do you not even need that kind of license?
Nate Amidon (18:20):
You can. It's all or nothing, from my understanding. Things are pretty dynamic in Microsoft's billing world, but right now, if you want to
Isar Meitis (18:28):
use agents, even if somebody else built them, you still need the license? Got it. Okay.
Nate Amidon (18:33):
Yes.
And I know how big organizations are. Sometimes you have to create a full business case that takes more than $200 of effort to build, for your $200 license.
Isar Meitis (18:47):
Okay, yeah. A yearly cost, yeah. I'm with you.
Nate Amidon (18:50):
So if you have Agents here, there's basically a button on the left that says New agent. You can click on that, and it will help you create a new agent. Now, if you're familiar with GPTs in the ChatGPT ecosystem, the OpenAI ecosystem, they're very, very
(19:12):
similar to a Copilot agent. And I use "agent" loosely here, but that's what Microsoft is calling them, so they're agents. When you open up a new agent, it will give you two options. You can either describe what you want the agent to do, and it will build the agent for you, or you can hit the Configure button.
(19:34):
The Configure button lets you fill in your own instructions. Now, technique-wise, I like to just fill in my own instructions for the agent, and I do that with the help of another LLM.
Isar Meitis (19:46):
We are exactly the same. I've used the describe option in custom GPTs exactly zero times, and I've created dozens of them between my company and client companies. I teach people to use a regular conversation, in whatever tool they choose, to develop the instructions for
(20:06):
the custom GPT. So we're a hundred percent aligned. And guys, we did not coordinate this in advance. So two separate people who do this for a living are telling you this is the right thing to do. Maybe that's the right thing to do.
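As a minimal sketch of that shared workflow, here is one way to have a separate LLM draft the agent instructions, assuming an OpenAI-style chat completions API; the model name and the brief below are placeholders rather than a prescribed setup:

```python
# Sketch: use one LLM to draft the system instructions for a Copilot-style agent.
# Assumes the `openai` package and an OPENAI_API_KEY in the environment;
# the model name and prompt text are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

BRIEF = """Draft system instructions (under 8,000 characters) for an internal
'ways of working' agent. It must: answer only from the SharePoint process docs
it is grounded on, cite document name and section for every answer, refuse HR
policy questions with a canned referral, and tell users who owns the agent if
they believe an answer is wrong."""

draft = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model your org allows
    messages=[{"role": "user", "content": BRIEF}],
)
print(draft.choices[0].message.content)  # paste and edit into the Configure tab
```

From there, the typical loop is to test the agent, notice where it misbehaves, and feed those observations back into the drafting conversation.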
Nate Amidon (20:16):
Fair. Well, we're both type A former pilots, so that's probably why.
Isar Meitis (20:20):
Fair. Okay.
Nate Amidon (20:22):
So, to your point, I like to do it that way because then I can test the agent and update its instructions. Now, when you are creating a new agent, you have to give it a name. I like to give it something that sounds useful, because remember, I'm not building these for myself.
(20:44):
I'm building these for a team or an organization, and I'm using them to help the organization become more aligned. So I want it to be named something that makes sense.
Isar Meitis (20:54):
So beyond making sense, you're saying there's a marketing aspect to it, right? It needs to sound valuable.
Nate Amidon (21:01):
Right.
Right.
Isar Meitis (21:03):
Yeah.
Nate Amidon (21:03):
And then there's a description of your agent below the name. This is really informational; the agent doesn't use it. It's basically there so that when someone opens the agent up, they can see what it does. And then the instructions are where you make your money. Microsoft gives you about 8,000 characters, which, from my experience, is enough.
(21:23):
And remember when I said earlier that these need to be valuable, and also incremental? So when I'm building these, I'm scoping them to a specific business problem, and I want to keep them limited in scope. Now, that's my technique. I don't want one agent that does five things; I'd rather have five agents that each do one thing.
Isar Meitis (21:44):
I'll say two things about what you said, one about that last point, where I agree with you a hundred percent, for two different reasons. A, they're easier to build. B, they're a lot more consistent in their results. If you try to build one agent that does five things, you'll most likely get it to do roughly the five things, instead of actually doing each and every one of them perfectly. And you can always string them together with some
(22:07):
other automation tool that knows how to transfer the data from one step to the other, or you can do that manually. You get a lot more control and much better results. That's one thing. The second thing is about the 8,000 characters, and I know we're going to talk about it in a minute, but there's also the knowledge base, which is a bunch of files, and it allows you to dramatically extend the instructions. So if you want to give the agent examples of what good looks like, you
(22:31):
don't have to write the examples into the instructions. You can say, look at example one and example two in your knowledge base, and those don't count toward your 8,000 characters. In one sentence, you've added 20 pages of examples. So this is another way to cut the amount of text inside the instructions themselves, if for whatever reason you hit the 8,000 character limit.
Nate Amidon (22:50):
No, that's a great point. And I'll talk about the knowledge base in a second. One other thing I'll mention: the other pillar we talked about is sustainability. The same reason engineers went to microservices is the same reason you should have single-task agents whenever possible.
(23:11):
They're easier to update. You update one; you don't have to update a whole monolith of an agent. Okay, so you write your instructions, and then, at the bottom, you can add in what information you want to train the agent on. You can do this directly off of your SharePoint.
(23:31):
And one of the reasons I love this idea of building your own M365 agents is that it's all inside of the Microsoft firewall ecosystem. If you can send something in an email, you can build it into an agent. So I don't deal much with security concerns, because I'm just using capabilities inside the fence. You can add in different websites, URLs, and documentation that's in
(23:55):
SharePoint, and you can upload actual documents.
Isar Meitis (23:59):
Teams channels as well, right? They're now fully connected into Teams, so you can train it on a specific channel. Like you said, anything in the Microsoft universe can be a source of data for the agent, which is very powerful.
Nate Amidon (24:12):
Yes. And I'll say, Microsoft is rapidly prototyping and updating this, so this could look totally different tomorrow from what I say today. But generally, the capabilities allow you to upload, and there's a huge limit on files. If you're doing this in the Google Gemini space right
(24:35):
now, they're limited to 10 source documents, but in Microsoft it's something like 10,000, something ridiculous.
The other piece I want to mention, which I think is interesting here, is this idea of only using specified sources. There is a toggle underneath the knowledge base, and if you flip it over to say only use specified sources, you are telling the agent to use only the documentation
(24:58):
you loaded: no guessing, no going out to the internet. For the agent I'm going to show you, I think that's a valid option to select, because if I'm talking about agile processes and how we do business, I don't want every other agile coach out there giving me their slightly horrible idea of how to run sprint planning or whatever. So when you select this option, you've essentially
(25:21):
turned it into a chatbot over your own documentation.
Isar Meitis (25:24):
Yeah. And to explain this to people in broader terms, to be very specific: you limit the universe of the agent to a very specific data source, and only to that data source. So whether you're doing agile planning like Nate is talking about, or any kind of project management,
(25:45):
or an employee handbook, how do I apply for maternity leave, you want it to go to just these two HR documents, and that's it. And this is how you force the AI to do that, which is extremely powerful, because in most cases, unless it's a research agent, that's what you want. You want it to only go to the sources you give it.
(26:06):
And the combination of that, together with the fact that it's already in your Microsoft universe, you already have these documents, just makes it really, really easy to work with.
Nate Amidon (26:15):
Absolutely. And the use cases for that type of agent, which I'm calling an information agent, are really broad. You know, I was less excited about it at first, but when I started using the application with clients, it was like, oh, this is actually really powerful. The ability to handle HR processes; or, if you have a
(26:35):
huge regulatory environment, getting all of the regulations in there, so you don't have to spend hours hunting through them; or even, in a software development space, specific user manuals for your software product.
Isar Meitis (26:50):
I'll give you another, broader example. I have several clients in the manufacturing space, or in the product space, and they have a product catalog the size of the United States. And you want to know the part number for this or that SKU, from this or that year, for this or that product. To find that historically, yes, you can open 17 different
(27:11):
documents and run a search on each and every one of them until you find the correct thing. Or you can write one sentence here, and it will just give you the result.
Nate Amidon (27:19):
Yeah, no, absolutely. There is a huge use case for this. And I'll show one briefly here, just to wrap up how to create them. You can also create suggested prompts. I think this is important if you're sharing the agent across a broad audience of people in an organization, because it lets them know what the agent can be used for.
(27:40):
You just put in a title and enter a message. And by the way, these
Isar Meitis (27:43):
show up as buttons. People basically see a suggestion on the screen, so if they don't really know how to use the agent, it gives them ideas of what they can do with it. Your most common use cases can already be there, and people just click the button,
Nate Amidon (27:55):
right. And then, once you feel like you have it, you can test your agent while you're building it, on the right hand side. And when you're ready, you can create it. That's essentially how you build one of these.
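To keep the moving parts straight, here is the whole recipe collected as a plain Python dictionary. This is a hypothetical sketch that mirrors the builder options just described, not Microsoft's actual agent manifest schema, and the SharePoint URL is a placeholder:

```python
# Hypothetical sketch of the options discussed above, collected in one place.
# This is NOT Microsoft's agent manifest schema; it just mirrors the builder UI.
ways_of_working_agent = {
    "name": "Ways of Working Guide",          # should sound useful to the team
    "description": "Answers questions about our team's processes and cadence.",
    "instructions": open("instructions.txt").read(),  # <= ~8,000 characters
    "knowledge_sources": [
        # share the folder, not individual files, so new versions are picked up
        "https://contoso.sharepoint.com/sites/team/ProcessDocs/",  # placeholder
    ],
    "only_use_specified_sources": True,        # no web answers, no guessing
    "suggested_prompts": [                     # shown to users as buttons
        {"title": "Current sprint", "message": "What sprint are we in?"},
        {"title": "Release process", "message": "How do we run a release?"},
    ],
}
```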
Okay, so I'm going to jump in and show an example here.
(28:17):
The first one I call the ways-of-working guide. For context, this is a demo replica of one I use with a client. We built an entire program for how to do software development, all the way from how we take product requirements in, to how we do releases, how we do our ceremonies in the
(28:40):
agile world, how we do reporting, how we do metrics, all of these things. But people are always changing in and out of an organization, especially if you have, say, a hundred-person organization. So we need a way to keep everybody standard on the process, knowing what the process is, and we use this agent to help democratize that across the
(29:02):
organization.
So this agent is trained on some process data, process data that I built, or we built, and it's configured to the ways that we do things. So I can ask questions like, what sprint are we in,
(29:23):
right? And this agent, and I'll show you the backend after I do the demo here, will look through the documentation and give you what sprint we're on, and what holidays are coming up. It's just a quick way to find the
(29:44):
information you need.
Isar Meitis (29:48):
This one's actually interesting, so I have a question about it. We were talking about information on SharePoint, which is usually static data, meaning guides and instructions and stuff like that. And yet the sprint number is a dynamic parameter, right? For those of you who don't know sprints, because you don't come from the software world: software companies work in sprints.
(30:10):
These are usually one or two week segments, and you define the scope of work for the development team for that sprint. So it's a very short amount of time. It's not, you know, a six-month or two-year project; it has a very clearly defined scope, down to the person and what they need to do. So knowing which sprint you're on, or what's inside each
(30:30):
sprint, is something you know at the beginning of those two weeks, and not before that. So this is a dynamic question. Where does the data come from?
Nate Amidon (30:39):
Great point, because you can sort of blur the lines a little bit between dynamic and static, and it really comes down to what documentation you expose, and how you manage that documentation as an organization. One of the things I'm a big fan of in large enterprises is a standardized sprint cadence.
(30:59):
I want team A and team B to be speaking the same language. It's like the comm card in our Air Force world, where we all know what frequency we're on between different aircraft. So if someone on team A says, hey, when are you going to be done with this dependency, and the other person says, sprint 24,
(31:20):
I want team A to know what sprint 24 is. So we have a standardized sprint cadence with a lot of our clients. We'll load that sprint calendar ahead of time for the year, and we'll put in what holidays are going to be on there. And so that's part of the process documentation that the agent's exposed to. And if, for some reason, we had a big
(31:41):
issue and had to reshuffle the sprint calendar, we can do that in the documentation, and the agent will automatically update with the source documentation.
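A standardized cadence is also what turns "what sprint are we in" into pure date arithmetic, which is why a static calendar document can answer a seemingly dynamic question. A worked sketch, with the start date and sprint length invented for illustration:

```python
# Worked example: with a fixed cadence, "what sprint are we in" is static math.
# The start date and sprint length below are invented for illustration.
from datetime import date

SPRINT_1_START = date(2025, 1, 6)   # published once in the shared sprint calendar
SPRINT_LENGTH_DAYS = 14             # two-week sprints across all teams

def current_sprint(today: date | None = None) -> int:
    """Return the 1-based sprint number for `today`."""
    today = today or date.today()
    return (today - SPRINT_1_START).days // SPRINT_LENGTH_DAYS + 1

print(current_sprint(date(2025, 12, 1)))  # -> 24
```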
Isar Meitis (31:51):
Cool.
Nate Amidon (31:52):
Now, what this doesn't do is go inside other systems. With this one, I'm not connecting to other systems or backend APIs like Azure DevOps, so it won't tell me what is in the sprint.
Isar Meitis (32:03):
Got it.
Nate Amidon (32:04):
And so there's a balance; it's a spectrum of what you can do. But if you have a reporting mechanism, let's say an executive stakeholder report that is dynamic, say it's a PowerPoint or some sort of Word document that tracks things, then as long as that's a standard, one-source-of-truth document, you
(32:25):
could expose the agent to it, and then it could start to actually give you more dynamic data.
Isar Meitis (32:28):
Yeah. So, in theory, you could take the dynamic output, in your case the content of the sprint, but in other cases it could be the weekly team meeting summary, or the announcement from leadership, whatever it is, and replace the file in the relevant folder with the current data,
(32:50):
and then it will have access to the current data as well.
Nate Amidon (32:52):
Yeah, absolutely. And so I think we get what this does. You can do a lot of things with it, but let's look at the backend, because I think that's important to what we're talking about right now. When I built this, I gave it hardly any instructions, really, and I could do a much better job of beefing them up.
(33:14):
The secret for this agent isn't the instructions. The secret for this agent is how, and what, data is exposed to it.
All right? And so, as we're building these, we need to really be thinking about documentation and process the way software engineers think
(33:35):
about data. If you're in the engineering world, clean, standardized data is critical to a functioning software application. Everyone's heard trash in, trash out; that's the same thing here. Now, one of the things I view as a technique, and I've found
(33:56):
this to be helpful: if I can organize the underlying process documents and expose a folder, that means I can use everything inside that folder as the source data. Now, a lot of times you can upload individual documents as sources for an agent, but if you change one document, you
(34:19):
delete version one and add version two, the agent won't automatically pick that up unless you reload version two as a source document. So my solution is to upload a folder. Then, as process documents change from version one to version two, the agent automatically picks up the new information.
(34:39):
And that's part of the sustainability thing that we talked about.
Isar Meitis (34:41):
Yeah. So you share at the folder level, and not the document level, because that allows you to replace the documents without having to re-update the agent.
Nate Amidon (34:51):
Right. But the discipline of making sure we know what's in and out of that folder is pretty important now. Like I said, a lot of process integrity is important.
Isar Meitis (35:03):
Let's look at at least a little bit of the instructions, so people understand the general idea of what the agent knows how to do, or assumes it should do.
Nate Amidon (35:11):
Right. What I put in this agent was very simple. It basically says: you only use the SharePoint documentation; don't use any outside knowledge. Even though I've already clicked the button that doesn't let it do that,
(35:31):
I feel better about saying it. I also like it to cite source documentation. This is really important, especially if you have these huge regulations, where you have lots of different source data, because a lot of times you may not get the exact answer you want, but then you can click through and quickly find the document the answer is in. I tell it to just stay away from HR policies,
(35:52):
and don't guess, and to have a canned response that says, hey, we don't cover this, go talk to somebody else. And if the user thinks something is wrong, because a lot of times you'll be interacting with an agent and you'll be like, hey, your answer is wrong, I know your answer is wrong, I want the agent to point them to whoever owns it.
(36:17):
I just want to let them know, hey, reach out to this person and get it fixed.
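Spelled out, instructions along those lines might look something like the sketch below. This is an illustrative paraphrase of the themes Nate describes, not the actual text of his agent, and the owner contact is a placeholder:

```python
# Illustrative paraphrase of the instruction themes described above; not the
# actual agent text. The owner name is a placeholder.
WAYS_OF_WORKING_INSTRUCTIONS = """\
You answer questions about our team's ways of working.

Rules:
1. Answer ONLY from the SharePoint process documentation you are grounded on.
2. Do not use outside knowledge, even for agile topics you may know well.
3. Cite the source for every answer: document name and section number.
4. Do not answer HR policy questions. Respond: "This agent does not cover HR
   policy. Please contact the HR team directly."
5. If you cannot find the answer, say so plainly instead of guessing.
6. If a user believes an answer is wrong, ask them to contact the agent owner
   (<process owner name>, placeholder) so the source documents can be fixed.

Tone: concise, professional, written for engineers and product managers.
"""
```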
Isar Meitis (36:20):
Yeah.
Nate Amidon (36:21):
And then just some guidance around tone, formatting, and audience. And that's basically it.
Isar Meitis (36:28):
Perfect. I'll add my 2 cents on the citation part of it. When I teach people to engage with documents with AI, whether inside an automation like this, or just uploading the document to a regular chat, I have four bullet points saved in a prompt library segment that I use again
(36:49):
and again and again. The first one tells it to only use the information from the document. The second one tells it not to use any other source of information. I know that sounds redundant, but it just reinforces the first. The third bullet point tells it to say something specific when it doesn't find the information, such as "not available", or something like that. And the reason for that is that these tools are built to please, and
(37:13):
if you ask a question, you're going to get an answer. And if it doesn't find the information in the document, you're risking that it will make up an answer. If you tell it what to say when the answer is not there, such as "information not available in this document", put in the quotation marks whatever you want, it will tell you that in a much higher percentage of cases, so your chances of hallucinations are lower. And the fourth one is exactly what you said: I ask for very specific citations in a very specific
(37:35):
format, and it's always the same thing: document name, page number, section name, and exact quote. And the reason I do this is because, a, I can verify the information very, very quickly, and, b, if I want to find additional information around that area, I know exactly where to go. Now, as far as verifying the information: Copilot used to be
(37:55):
really bad at producing quotes until a few months ago. Like, it would literally make up quotes. But I've done this recently in Copilot, as of the last few weeks, and it's actually really, really good. So when it gives you an exact quote, you can copy that quote, go to the original document, hit Command-F, paste it into the search bar, and it will jump straight to that
(38:17):
segment in the document. And if it really is on page 73, in section 20.7 or whatever it's called, then you know it actually gave you the correct information. So, a, it gives you more context on what's in the actual document, but, b, it gives you another great way to check the information.
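That Command-F check is simple enough to automate. A minimal sketch, assuming you already have the document text and the agent's cited quote as strings, with deliberately naive whitespace normalization:

```python
# Sketch: programmatic version of the Command-F check described above.
# Assumes you already have the document text and the cited quote as strings.
def _norm(s: str) -> str:
    return " ".join(s.split()).lower()

def quote_is_verbatim(document_text: str, cited_quote: str) -> bool:
    """True if the cited quote appears verbatim, ignoring whitespace and case."""
    return _norm(cited_quote) in _norm(document_text)

doc = open("employee_handbook.txt").read()   # placeholder file name
quote = "Maternity leave requests must be submitted 30 days in advance."
print(quote_is_verbatim(doc, quote))         # False -> possible hallucination
```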
Nate Amidon (38:36):
Yeah. And that's very important when you're building these things: knowing your risk tolerance for wrong information. So, great. And then, like I said, I use only specified sources; I flipped that toggle. And then, from the documentation perspective, this one's pretty light. It's demo documentation.
(38:56):
But there are a few things I've figured out, and I'd actually love your take on this, Isar, because you've done a lot of these with other LLMs. I found that bulleted Word documents, essentially documentation set up the way government documents are naturally set up, with 3.1, 3.1.1, 3.1.1.1, 3.1.1.2,
(39:20):
are really beneficial for helping the agent categorize and find things. So the way I think about this is, I'm building documentation with fewer pretty pictures, fewer process flow diagrams, and more just straight bulleted step one, step two, step three.
(39:41):
And I'm thinking more about how organizations document their processes as an AI-first documentation strategy that humans can also read. So, for example, I uploaded a sprint calendar here; that's how you saw that answer. Now it's on me, or on whoever owns the process, to
(40:02):
update it for 2026. And beyond that, it's really just bullet statements that are easy to categorize.
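To show what that numbered, bullet-first style buys you, here is a tiny invented sample of such a document, plus a few lines of Python that split it on its own section numbers, roughly the clean, self-describing chunking a retrieval system gets from this format for free:

```python
# Tiny invented sample of an "AI-first" process doc, plus a naive splitter
# showing how the numbering gives retrieval clean, self-describing chunks.
import re

SAMPLE_DOC = """\
3.1 Sprint Planning
3.1.1 Planning occurs on the first Monday of each sprint.
3.1.2 The product owner presents ready canvases before estimation.
3.2 Releases
3.2.1 Releases ship on the last Thursday of each sprint.
"""

sections = re.findall(r"^(\d+(?:\.\d+)*)\s+(.*)$", SAMPLE_DOC, flags=re.M)
for number, text in sections:
    print(number, "->", text)
```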
Isar Meitis (40:14):
Yeah, I love that. I was actually about to ask you a question later on: what about all the visual aspects? When you document processes, flow charts are a big deal, or charts in general: org charts and stuff like that. How do you deal with those, when it comes to the AI's ability to, a, understand them, and they're very good at understanding them
(40:35):
at this point, but, more importantly, to show them to you? Because as far as I know, it doesn't know how to pull an image out of a document. Do you upload the documents separately? Like, if you want the AI to be able to show you the flow chart, do you upload the flow chart as a separate document, or have you not handled that use case?
Nate Amidon (40:52):
No, I have handled that use case. I had a client that had a well-known consulting firm come in and do a full transformation, and they came out with something like a 200-page PowerPoint, with all the pictures and arrows. You know, it was great, but nobody knew what was going on.
(41:12):
So I essentially built one of these just to be a "what's going on in the aftermath of this transformation" agent. And what I found beneficial was turning that PowerPoint into a process document. I would keep a lot of the pictures, but I would also take each picture into a separate LLM and say, hey, take this process,
(41:35):
from what you can tell, and turn it into a bulleted process statement. And it was probably 70% accurate. From my perspective, it's just hard for AI and LLMs to pull meaning from visuals. The model doesn't know the natural flow all the time.
(41:55):
So I think it's best to go through and convert them into a more documented hierarchy.
Isar Meitis (42:04):
So you convert them into text, and then you include both in the actual document. But that means the agent cannot show you the flow chart? It can describe the process.
Nate Amidon (42:15):
Yes.
Isar Meitis (42:16):
Okay.
Nate Amidon (42:16):
Now, if you needed the flow chart to be part of it, you could play with that in the instructions and say, anytime there's a process with an associated flow chart, describe the process and then try to render the image. I haven't played with that.
Isar Meitis (42:30):
Yeah. Okay, cool.
Nate Amidon (42:33):
So anyway, that is essentially the documentation. Now, one other thing I'll say: we talked a little bit about microservices, and how we should have one agent per specific task. I think when you're creating your documentation for your organization, you should do the same thing. All of these, let's
(42:54):
say I had four documents in this demo; they could all be one big how-we-work document, right? But I think by breaking them down, the LLMs are more effective, and, more importantly, it's more sustainable, because it's easier to update when you have a change. So the agent isn't
(43:15):
hard to build; actually knowing what your processes are, and having them documented in a sound way, that's the lift.
Isar Meitis (43:25):
I love it. Two things you said are really interesting to me, things I never thought of, and obviously you doing this with large enterprises brings immense value. One is to break down your documents, just like you're breaking down the agents. So if you have a document, don't make it a document about 50 different things; make 50 documents, each about one thing.
(43:45):
And naming conventions, I'm sure, play a big role in naming the documents, so the AI knows, oh, I should probably look at this document first to find this kind of information. So naming conventions would play a big role if you're breaking the documents down smaller. The other thing you said is about the internal structure of the documents: writing them
(44:06):
as if you're writing them for AI, making them as structured as possible, knowing that humans can also read them, versus the other way around, which is how all of our documents are built right now, for humans, where we hope or assume that the AI will make sense of them. And I think these two things are extremely important when it comes to preparing the data so that the agent works very well.
(44:28):
And you summed it up perfectly: if you do that, then you can be very lean on the instructions, and it's still going to work really well, because the AI will understand exactly what it needs to do and where the information is. And the other way around sometimes becomes, and I've had this happen multiple times, where you start getting into very specific, intricate instructions, and repeating
(44:50):
things three times so the agent will do it, and stuff like that. And I think if you focus on the data preparation, you actually, again, going by what you said, get better results with a lot less effort on the instructions.
Nate Amidon (45:04):
Absolutely. It's, you know, seven hours sharpening your axe for one hour of cutting. So, no, absolutely, I think those are all great points. And I think we're going to be thinking about AI-first documentation for a long time.
Isar Meitis (45:24):
So let's jump quickly into the other example, now that people understand how these are built.
Nate Amidon (45:28):
I think this other example is also interesting. This one is more what I would call a process agent. I know we're running short on time, so I'll do this fairly quickly. Essentially, in the product management world, if you're in a software development organization, you're building new features and functionality for a product, and the good idea train is full, meaning everyone has a great
(45:51):
idea for the next new feature. So if you're a product manager, and you're the person deciding what to build, you're going to get inundated with, hey, you should add this feature or that feature. So what we do with organizations is build a process for how those product managers go from an idea to ready: how do we go from your good idea, to figuring out what you're
(46:14):
actually asking for, to something that is consumable by a technical team to actually build? A lot of organizations don't deal with this process very well, but that's what we do. And we're a big fan of product canvases. A product canvas, if you're not familiar, is a one-page document that is an
(46:34):
alignment tool. It asks some very simple questions: who's asking for it? What's the problem we're trying to solve with this feature? What does good look like? What are the data sources? What's in scope and out of scope? What are the risks and dependencies? It's a way to force product teams to actually flesh out ideas before they get to the engineering team.
(46:54):
And from a value perspective, for me, this is the biggest ROI you can get in a software development program, because engineers will build something that's not valuable if you tell them to, and I think it's on the product team to really make the best use of the engineering team's time.
(47:15):
Okay, so that's the context.
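For readers who like to see structure, the canvas fields listed above map naturally onto a small record type. A sketch in Python, with field names paraphrased from the conversation rather than taken from any official template:

```python
# Sketch of the canvas fields described above as a record type. Field names are
# paraphrased from the conversation, not copied from any official template.
from dataclasses import dataclass, field

@dataclass
class ProductCanvas:
    requester: str                     # who is asking for it
    problem: str                       # what problem this feature solves
    definition_of_good: str            # what "good" looks like
    data_sources: list[str] = field(default_factory=list)
    in_scope: list[str] = field(default_factory=list)
    out_of_scope: list[str] = field(default_factory=list)
    risks: list[str] = field(default_factory=list)
    dependencies: list[str] = field(default_factory=list)
```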
Isar Meitis (47:16):
Before you dive in, just to broaden this for people who are not in the software world: this is the same for any project, any initiative that you run in a company, whether it's how we improve our services next year, what kind of products we want to manufacture, or how we improve our manufacturing process. Any initiative in the company is the same thing. You need a systematic process, a
(47:39):
lens through which you analyze requirements, needs, and visions, in order to translate them into something practical that you're actually going to do, in an effective way, in the following quarter, month, or year, whatever your planning horizon is.
Nate Amidon (47:52):
Yep. And this type of agent is really just there to help that process. And to broaden it even further, it can be any process. If you're in the PMO and you have to produce gate documentation, or you always have to send a report to your CEO in a certain way, there are these process steps that things have to go through,
(48:13):
and you can build agents that are trained on that process and can help you do it faster. That's essentially what we're doing. So for this one, what's cool is I can say, I want to build a canvas, and it will walk me through each one of the steps and each one of the questions. It's trained to know what's a good question and what's a bad question, and it will just be my guide.
(48:36):
Now, that's mildly valuable. What's more valuable is the ability to upload information into it and have it spit out the canvas information. So you can take a meeting transcript or an email; or, if you're in the space, a lot of times you'll get, like, a 30-page BRD-type business
(48:57):
requirements document that talks about all the things they could ever want in a new feature. So I'll just take an example here where I dropped in a simple call transcript, and the agent is trained to take that information and spit out the first draft of the product canvas.
(49:20):
Okay? So it can take a lot of data and information and organize it into the process step that you want, to help you determine what to do. So it goes through each one of these. I won't go into the details of each step, but you generally get the idea.
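The same transcript-in, canvas-out pattern is easy to sketch outside Copilot too. Assuming an OpenAI-style chat API (the model name, prompt, and file name below are placeholders), the core is one grounded extraction call:

```python
# Sketch of the transcript-to-canvas pattern with a generic chat API.
# Assumes the `openai` package and OPENAI_API_KEY; prompts are placeholders.
from openai import OpenAI

client = OpenAI()

CANVAS_SYSTEM_PROMPT = """You draft product canvases. From the transcript you
are given, fill in: requester, problem, what good looks like, data sources,
in scope, out of scope, risks, dependencies. Use only the transcript; write
'not discussed' for any field it does not cover."""

transcript = open("discovery_call_transcript.txt").read()  # placeholder file

draft = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": CANVAS_SYSTEM_PROMPT},
        {"role": "user", "content": transcript},
    ],
)
print(draft.choices[0].message.content)  # first draft for a human to review
```

Note the "not discussed" instruction, which plays the same anti-hallucination role as the canned "information not available" response discussed earlier.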
One thing I will say, though, is that a lot of times I like these on a one-page PowerPoint canvas, right?
(49:42):
A specific document. And these agents aren't great at building those, because you want them in your company brand, you want a certain font, you want them to all look the same. We tried to do it a lot, and we figured out how, but we'd have to do it either through Power Automate or through some other separate system.
(50:03):
You start to get into more advanced capabilities of agent building, and like I said, I try to stay out of that swim lane whenever I get close to it.
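For anyone who does want that last rendering step, one common route outside the agent is a small script, or a Power Automate flow, that writes the slide. Here is a minimal sketch using the python-pptx library, with the branding reduced to a title and a text box:

```python
# Minimal sketch: render canvas text onto a one-page .pptx with python-pptx.
# Real branded output would start from a corporate template file instead.
from pptx import Presentation
from pptx.util import Inches, Pt

prs = Presentation()                                # or Presentation("brand.pptx")
slide = prs.slides.add_slide(prs.slide_layouts[6])  # blank layout

title = slide.shapes.add_textbox(Inches(0.4), Inches(0.3), Inches(9), Inches(0.8))
title.text_frame.text = "Product Canvas: Example Feature"  # placeholder title

body = slide.shapes.add_textbox(Inches(0.4), Inches(1.2), Inches(9), Inches(5.5))
for line in ["Requester: ...", "Problem: ...", "In scope: ...", "Risks: ..."]:
    para = body.text_frame.add_paragraph()
    para.text = line
    para.font.size = Pt(14)

prs.save("product_canvas.pptx")
```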
But you also may not want to do that. This was kind of part of the bottlenecks thing that we talked about: if you built an agent that took information, automatically created the PowerPoint, and
(50:26):
saved it to SharePoint, then it would become really easy to just put every horrible idea that gets emailed to you into one of these canvases. Now you have a thousand canvases, and you've just moved the bottleneck upstream. Hopefully that makes sense.
Isar Meitis (50:43):
Yeah. Two things about this, and I think it will be interesting to dive in again and see what's under the hood. But my 2 cents on this is something that I do a lot, across several different things. My best example, which I use all the time: I don't write my own proposals. The proposals get written by a process exactly like this one.
(51:04):
I take call recordings and transcriptions, I use Fathom to transcribe all my calls, plus my email communication with the prospect or client, and I upload them and click go. I don't have to give it any instructions, and it spits out amazingly well-structured, well-written proposals, way better than I can write them myself, in about 60 seconds. Then I spend another 10 minutes verifying the
(51:26):
information and adding my little flair on top of it, which in many cases is just job security for myself, to feel like I'm doing something productive in the process, and it's ready to go. So this is a very similar kind of setup. Once you teach it how to read unstructured data, and you teach it how to structure that data into specific buckets, it
(51:48):
just does it amazingly well.
Nate Amidon (51:50):
Right. And that's super beneficial from a personal efficiency standpoint. Now, if you're a director in an organization, and you have five to ten product owners who are all determining what to build and what to prioritize, you can't be in every meeting. But you can take your vision of what you want in your
(52:11):
organization and embed it in the agent, because the agent can be in every meeting. So at the larger organizations, in the enterprise world, where you can scale this, there's a huge amount of value, not just from efficiency, but also from increased quality.
Isar Meitis (52:26):
Yeah.
Nate Amidon (52:27):
Yep.
So, to look over these instructions briefly: I did what we talked about earlier. I went to a different LLM and said, here's what I want to build, can you structure this, and can you structure that. And one thing you said that I like, which I'm going to steal: I put what's a good answer and what's a bad answer, for each one of these questions, in the
(52:48):
instructions. But I could easily just take that, put it into a document, and expose it that way to reduce the instructions. And I think everyone's going to continue to get better at building these. But essentially, I put in what quality looks like for each one of the questions. And then I also wanted to put in the flow, because I'm creating a
(53:10):
guide here. I'm creating something to hold your hand through the process. What does that customer journey look like, from when you start building a canvas to when you have a completed product at the end? So I'll use the instructions to say, okay, after this is done, then you want to check this out, and after that is done, you want to check this out.
(53:30):
And so thinking about this as a software product, as something that has a user, and giving it a user-focused design, is important when you're building these instructions. So I have it give check marks that are green if an answer is good, and a triangle if it's not. And then, you know, a lot of the other kind of
(53:50):
boilerplate, the don't-say-offensive-things type of stuff. And then, from a length perspective, I'm using close to 7,000 characters, but if I needed more, to your point, I could extract some of this into the source data. The final thing I'll mention before we wrap up here: I don't select "only use specified sources" on this one.
(54:12):
I want to guardrail it in the instructions, but also let it go out onto the internet and find innovative answers and solutions, because that's really what we want from AI.
Isar Meitis (54:23):
Awesome. Yeah, I think overall, Nate, this was a great journey through how to think about the process of creating these automations, and what you need to consider when you come to build them. We talked about many useful things. We talked about breaking them down into single tasks, and then building more of them for more tasks.
(54:44):
We talked about prepping the documentation with a similar concept in mind, making it easier for the product to consume. We talked about thinking about who the audience is, what they would want to use it for, and what value they're going to get from it, and building it that way. So lots of really, really important aspects to consider when you start creating automations, because like you said, creating the automation is easy.
(55:05):
With today's tools, you give them instructions in simple English and they will follow them. Creating tools that actually provide real, consistent value to a large organization becomes trickier, and you gave a lot of great best practices on how to do that. If people want to know more about you, work with you, follow you, et cetera, what are the best ways to do that?
Nate Amidon (55:26):
Yeah, I mean, I'm primarily on LinkedIn, so you can find me, Nate Amidon, on LinkedIn. There are not a lot of us out there. There's also my website, and you can reach out anytime if you want to talk more about this, because I enjoy it.
Isar Meitis (55:41):
Awesome. Thanks so much. This was really, really great. I appreciate you, and I appreciate the time you took to prepare for this and to share it with us. Really valuable stuff, I think, both in terms of thought process and in terms of process in general.
Nate Amidon (55:54):
Great. Thanks for having me on. I appreciate it.
Isar Meitis (55:57):
Bye everyone.