
July 29, 2025 • 53 mins

👉 Learn more about the AI Business Transformation Course starting August 11 — spots are limited - http://multiplai.ai/ai-course/

Are AI agents just overhyped chatbots — or the future of business operations?

With every platform from Salesforce to ChatGPT touting “agents,” it’s easy to get lost in the jargon. But building agents that actually work — ones that can execute tasks, coordinate like a real team, and drive real ROI — is a different story.

In this episode, we cut through the noise. You’ll discover how to build orchestrated AI agent systems that not only think and act, but also talk to each other like a high-performing business team.

Our guest, Jake George — founder of Agentic Brain — brings his rare blend of COO-level business ops insight and cutting-edge AI development to walk us through exactly how he designs and deploys AI agents that make a measurable impact.

In this session, you’ll discover:

  • The key differences between a traditional LLM and a true AI agent (and why most tools misuse the term)
  • What agent orchestration actually means — and why you need it to scale your AI strategy
  • How Jake builds multi-agent systems using Slack and N8N to automate real-world business workflows
  • The anatomy of a powerful agent prompt — from roles and rules to tool descriptions and strategic reasoning
  • Why “your AI is your intern” is the smartest way to think about performance and iteration
  • How to balance autonomy vs. structure when designing AI workflows
  • What business leaders must know before hiring an AI agency or building agents in-house

Jake George is the founder of Agentic Brain, a custom AI solutions agency focused on developing real-world AI agent systems for businesses. With a background as a COO, Jake blends strategic business thinking with hands-on AI expertise to create agent teams that act more like high-functioning departments than clunky bots.

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Isar Meitis (00:00):
Hello, and welcome to another episode of the

(00:03):
Leveraging AI Podcast, the podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career.
This is Isar Meitis, your host, and probably the biggest buzzword of AI in 2025 is AI agents.
It feels like agents are taking over everything in the AI conversation.

(00:24):
Multiple platforms are now allowing you to develop AI agents.
Salesforce is focusing now on Agentforce, and even ChatGPT just released its agent tool.
And so multiple platforms, multiple people, everybody's talking about AI agents, but the reality is that most people don't even understand what the hell's the difference between

(00:47):
agents and traditional large language models, how they work, how to differentiate them from other stuff that's happening.
And to be fair, it's not easy, because everybody calls everything agents, even when it isn't, because it's a big buzzword and everybody wants to say that their tool includes agents.
What are agents, and how can you build them? That's exactly what we're

(01:09):
going to focus on in this episode.
So we are going to demystify this whole concept for you.
We're gonna talk about what agents are and what they're not, and how you can actually build them.
But more importantly, how you can build several different agents that talk to one another, in a similar way to how a team works in your company.
So an orchestration of several agents that can achieve,

(01:32):
actually perform, business goals and do specific business tasks.
And do it consistently and effectively, which is what everybody wants.
And so if that's not exciting to you, I'm not sure what will be, but it's definitely really exciting to me.
Now, our guest today, in his early career, was a COO of a

(01:52):
company, which means he has a deep and solid understanding of business operations.
And in this past year he founded Agentic Brain, which is an AI solutions agency that's focusing on developing custom AI solutions for companies, a lot of it surrounding agents.
So he has this unique combination of understanding business processes, together with his personal technical skills,

(02:14):
together with the things he's developing for his clients, which makes him understand what is actually needed in a business, and makes him the perfect person to walk us through this process.
So I'm personally, truly excited and humbled to welcome Jake to the show.
Jake, welcome to Leveraging AI.

Jake George (03:16):
Hey Isar, thanks for having me.
And yeah, great, great intro there.
And I think you already touched on a lot of very important points, like understanding the difference between just typical LLMs and agents.
It's definitely a huge buzzword, and a lot of people will sort of use them interchangeably.
Like, to them, everything is an agent, and then when you actually look at it, it's really not.

(03:37):
And then there's also a lot of,you know, sort of like a
strategy and architecture andbuilding effective agents, which
is what a lot of people like.
Yes, anyone can go and, youknow, like.
Build an agent.
There's tons of like really coolworkflow builders and to be
honest, like just using standardlike the latest, you know, LLMs
behind the scenes, they do, theyare quite smart even on just on

(03:58):
a base level.
But really there's like a lot ofskill and strategy that goes
into like actually making themeffective and then actually
making them, like you said,consistently handle, uh, large
or complex tasks, which is, youknow, really what I wanna get
into today.
Because that's something that,when you, when you see people,
they build something for likeYouTube, it's like very simple,

(04:19):
easy to explain.
It's like a 15-minute build.
And I mean, that's great for people that are just getting started and learning at a base level.
But a lot of times people will kind of hype that up and be like, oh, you know, you can sell this for $10,000 to a company, I built it in 20 minutes.
And that's just completely, it's just not true.
Something you build in 10 minutes is typically not gonna be that effective.

(04:41):
It's a great starting point, but there's so much more that goes into it.
So I'm super excited to share.

Isar Meitis (04:47):
I'm really excited myself.
I'll say one thing: I literally just came back from lunch with a good friend of mine.
He's a technical person, a very knowledgeable person, and he knows obviously what I do.
And he said, well, you know, I think this whole thing is hyped.
I try different things, and yes, I can see the demo and I can see how it's cool, but when I actually try to make it do the work day to day consistently, it just doesn't

(05:10):
work.
And I'm like, you're right.
If you don't really know what you're doing, and you don't invest the time to learn this deeply and troubleshoot it and build it properly, or have somebody else do it for you, which is perfectly fine as well, then don't expect magic.
You can do magic demos with AI, but magic work just doesn't happen.

(05:33):
It's still highly efficient, but it's not gonna happen in five or 10 minutes.
It's gonna happen in two or three days, or sometimes a week, but then you're gonna have a solution that you can use every single day in your business, consistently, moving forward.
And so let's dive right in.
Let's start with: what the hell are agents, and what are the differences between an agent and a large language model?

Jake George (05:56):
Sure.
Yeah, great question.
In a nutshell, a very, very basic nutshell: an LLM is sort of a question-answer AI model.
Think back to the old days of ChatGPT, before it had tools.
Now, as you said, ChatGPT is becoming quite agentic.

(06:17):
They actually have agent mode.
By default it will search the web, run code, whatever.
So technically that is an agent, because an agent, at its most basic form, would be an LLM that can call tools.
A lot of times why people confuse them is that they have an AI tool.
And what it does is it's essentially a workflow that maybe helps you write, let's say, marketing copy or

(06:39):
something like that.
And so you write: hey, this is my company, I want an email that says this, this, and that.
You click run, and they say, oh, our agent writes it for you.
It's not an agent. You're just putting an LLM on a step in there that takes the input, mixes it with their system prompt, and then gives you some sort of hopefully desirable output.
But it's not actually thinking, planning between tool calls, reasoning; it doesn't have memory.

(07:02):
There are a lot of aspects, as I'll go into, that go into an agent.
But at its core, it's just an LLM that can use tools intelligently, and that's what makes it agentic.

Isar Meitis (07:13):
Awesome.
So I will try to summarize, because you touched on a few very important points that are the biggest differences.
One, it thinks on its own, right?
An agent is not a step-by-step process that you define; it can decide how to address a problem.
So it has its own thinking ability.
You give it a goal, and then it figures out the steps on how

(07:35):
to get there in the most effective way.
So this is one.
Two, it can use tools, so it can decide when to search, when to do deep research, when to write code, when to summarize things, when to do math.
It can decide on the different things that it needs to do, and you can give it access to different tools.
And tools can be software that you have in your company, which

(07:56):
leads to the third thing.
It can use memory: it can use its own memory to remember things through the process.
And it can use, I'll call it, third-party memory, meaning it can pull data from other sources.
It can pull data from your CRM, it can pull data from your ERP system, it can pull data from a spreadsheet, whatever it is that you want it to pull data from.
And then it can use its own memory and make sense of all of

(08:18):
that.
And the last thing that I will say is that a large language model is a flat, one-level thing, and an agent environment can be multi-layer, with an orchestrator and multiple levels of tools, which doesn't happen in a large language model.
So these are the key, main things that you touched on, one word here, one word there.
But really, let's dive in.

(08:38):
Let's start looking at the actual thing, and I think looking through an example, and then diving in on how it's built, is probably gonna be very helpful for everybody to understand.
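The agent Jake and Isar are describing boils down to a loop: the model decides, a tool runs, and the observation feeds back in. Here is a minimal sketch of that loop; `fake_llm` and both tools are hypothetical stand-ins (a real build would make an actual LLM API call that can return tool-call decisions).

```python
# Minimal sketch of the think -> act -> observe loop that makes an LLM
# "agentic". fake_llm and the tools are hypothetical stand-ins; a real
# build would call an actual model that can return tool-call decisions.

TOOLS = {
    "search_crm": lambda query: f"3 leads matching '{query}'",
    "summarize": lambda text: text[:40],
}

def fake_llm(goal, history):
    """Stand-in for a model call: picks the next tool or a final answer."""
    if not history:
        return {"tool": "search_crm", "args": "policy renewal"}
    return {"final": f"Done: {history[-1][1]}"}

def run_agent(goal, max_turns=5):
    history = []
    for _ in range(max_turns):           # loop until the model says it's done
        decision = fake_llm(goal, history)
        if "final" in decision:
            return decision["final"]
        result = TOOLS[decision["tool"]](decision["args"])
        history.append((decision["tool"], result))   # observation feeds back
    return "stopped after max_turns"

print(run_agent("Follow up with leads near renewal"))
```

A plain LLM call is a single pass through `fake_llm`; the loop, the tool table, and the accumulated history are what the episode means by "an LLM that can use tools intelligently."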

Jake George (08:49):
Absolutely.
Yeah.
Let me just pull it up and share my screen here.
Gimme one second.

Isar Meitis (08:53):
Now we are going to show you how to do this in a tool called n8n.
So that's the letter n, the number eight, and then the letter n again.
It's a tool that really took off completely in the last year and a half, but in its essence, its original essence, it is very similar to Zapier and Make: it's a process automation tool.
It's just open source.

(09:13):
And it's really flexible because of that.
But about, I don't know, a year ago, a little less, probably nine months, they gave the option to build agents within there, which made it extremely powerful, and hence drove the attractiveness of this tool to a lot of people.
I'll say one thing: it's a little more technical than learning how to use Make, but once you master that, the

(09:35):
options are literally endless.
So it's probably worth the steeper learning curve.

Jake George (09:40):
Absolutely.
And that's one of the reasons that we love it.
You can get very technical with it, and we've yet to find a problem that we haven't been able to use it, end to end, to solve.
So you can really get very deep with it.
Um, let me know if you can see my screen here.
I can.
So this, just to give you kind of the broad overview of

(10:03):
what we'll be looking at: we built this for an insurance client that we have, and they wanted, essentially they call it the CEO agent, but it's more of, let's say, an executive assistant agent, to handle a lot of backend tasks.
So we're not going to get into all of them, because like I was saying, that could make this six to eight hours.
But we will get into some of the basic ones here.

(10:25):
So at the top we have the level one manager agent.
This is kind of the architecture of how we will build agents behind the scenes.
To the client, I mean, they know it's a team of agents, but they really interact with one agent for the most part.
But that agent has these sub-agents that can go and perform tasks, and underneath the sub-agents we have tools and

(10:48):
(10:48):
workflows.
So that's really how we will start off.
And it's a good way to go about it, just sort of building from the ground up, rather than building everything from a high level to begin with if it doesn't need to be.
Because another large misconception is that everyone thinks that everything should be solved by agents.
And so they try to sometimes cut corners, and then they'll be

(11:09):
like, okay, here's the agent, here's the tool. Okay, now do all of my work for me every day.
And like you were saying about your friend, it just doesn't usually end up being quite effective.
A lot of things are better solved by workflows, and maybe workflows with LLM steps in them.
And there's nothing wrong with that.
And there are also some things that

(11:30):
it won't always consistently do the task correctly that you want it to, when you just slap an agent on it.
So we always start with the tools: let's build the tool and the workflow, and if it should be a workflow, we'll build a workflow to solve this part of what they want done.
And then we give those different workflows and tools to a subagent.
So then, for example, we find it best that each subagent focuses on one area of what they want done.
So this one, for example, manages their CRM.
And then we have one over here that is sort of the email manager agent.
So these are very specific areas that they're focusing on.
And under each of these, they have things like the email-writing agent; these are more like workflows.
But they'll have tools and workflows within here.

(12:13):
But sometimes the workflow will also have agentic parts.
So there are many different levels to it.
But essentially, we then let the CEO agent, or the manager agent, really focus on overseeing the task.
It knows what all the other agents do, and it doesn't really focus on actually executing actions itself.
It's more like: okay, which agents do I tell to do which parts, in which order?

(12:33):
And then based on what they return to it, it decides the next steps.
So that's, from a high level, how this whole system works.

Isar Meitis (12:42):
I'll pause you just for one second, because you touched on a few very important things.
One, for those of you not watching the screen: I'm gonna tell you what's on the screen at any given point.
This is just a static flowchart; this isn't the actual agent tool.
And so when you come to design these things, you gotta think about: okay, what does it need to do? What does it need to achieve? What tools does it need access to?
And you first of all design the architecture before you start creating the actual tool.

(13:02):
Number two is this really important difference that we touched on earlier, but now, kind of reading between the lines of what Jake was saying, it is important to understand.
A workflow is a step-by-step process that you define, versus an agent that has a level of autonomy, and there are pros and

(13:24):
cons in each and every one of these approaches.
A step-by-step process is awesome if it's exactly the same step-by-step process every single time.
And so whenever you need one of those things, you can build a process as a tool that the agent can use.
And then when it gets into that situation, it will follow the same exact thing every single time, which is important. When

(13:45):
you need flexibility, meaning you need it to think, you need it to be more creative, you need it to figure out what to do next, then an agent is more effective.
And learning how to mix and match these is what drives what we started with when I did the introduction: effective, consistent results, without which you can't

(14:05):
actually use it.
And so this is a great introduction.
So let's dive into the actual tool itself.
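One way to picture the split Isar just summarized, with all names hypothetical: deterministic workflows are plain functions that run the same way every time, and the manager "agent" only decides which sub-agent handles which request.

```python
# A toy sketch of the workflow-vs-agent split: fixed processes are plain
# functions ("workflows"); the manager only routes. All names are
# hypothetical, and a simple rule stands in for the LLM's routing decision.

def crm_renewal_workflow(days):
    """Deterministic workflow: same steps every run."""
    return f"pulled clients {days} days from renewal"

def email_followup_workflow(client):
    """Deterministic workflow: drafts a follow-up for one client."""
    return f"drafted follow-up email for {client}"

SUB_AGENTS = {
    "crm_manager": lambda task: crm_renewal_workflow(30),
    "email_manager": lambda task: email_followup_workflow(task["client"]),
}

def manager_agent(task):
    # In a real system an LLM would pick the route; a rule stands in here
    # so the delegation structure stays visible.
    route = "email_manager" if task["kind"] == "email" else "crm_manager"
    result = SUB_AGENTS[route](task)
    return {"routed_to": route, "result": result}

print(manager_agent({"kind": "email", "client": "Acme Insurance"}))
```

The manager never executes a workflow body itself; it only delegates and reads back results, mirroring the level-one manager agent in the flowchart.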

Jake George (14:11):
Yeah, absolutely.
That's a great recap there.
And that's the reason why sometimes you wanna use workflows.
'Cause if you think about it as a company, there are processes that you always want done the same way.
You don't want someone to guess every single time on how to do it.
But then there are times that you want those processes done in a more dynamic way.
You might want someone to write me an email.
You might want them to go check your CRM and see what leads

(14:33):
haven't been talked to in the past three months and have a policy renewal within a month.
And then go and pull all their past conversations and call recordings, and then write a context-based follow-up email to them.
A lot more complex.
But at its core, I think a lot of times people will use AI because they don't want to think, or they don't wanna decide, and they want the AI to figure it out for

(14:54):
them.
And that's just the wrong way to go about it.
You don't want AI making all the decisions; you leave it too broad, and typically you see those unfavorable outcomes.
So it's best to have some things that are hard coded, you might say, of just: this process, we always want it done the same way.
But you can call these different processes, done in the same way, in a dynamic way, to achieve different outcomes and goals.

(15:14):
So, um, perfect.
Yeah, great recap there.
Um, let me see.
Sorry, I can't see which screen I'm sharing here.
I gotta...

Isar Meitis (15:25):
I see a list of agents in n8n, or a list of workflows.
Okay.

Jake George (15:28):
So you are seeing n8n. Okay, cool.
So we'll switch automatically, sorry.
Now the screen won't go away.
There we go.
Okay, so let's start with the main manager agent here, and I can kind of break down how this works.
So, this is just the base level of the agent that calls the other agents.
And you can see it's a little bit more complex than

(15:51):
what you'd usually see on YouTube, which would usually just be this part and then a few tools.
And the reason for that is that, typically with any real-world deliverable, there's gonna be a lot of these sort of data-processing and cleaning-up steps, just making it function properly to get the input and output in there correctly.

(16:12):
So that's what a lot of these are.
For communications with agents, we do all of that through Slack.
It's a great platform to use it on.
Obviously Salesforce is using that for their Agentforce, and it just works very well.
There's a lot that you can do with it, and it makes for a great user experience.
And then also Notion, for any sort of dashboard or user-facing database

(16:34):
parts of the agent. It allows them to easily update things that they might wanna change, and see any output logs they might wanna see, so on and so forth.
So what we do is we start off with the user.
We just essentially message through Slack.
We just filter it out to make sure it's that user's message.
We get their user data.
And then for this one, because this agent will sometimes have

(16:56):
what we call long-running tasks that may take multiple minutes to accomplish, we have it send a response right away, just saying: hey, I got your message, let me process the task and get back with you.
And this is, again, something that you might not think about until you have someone using it.
But if the task might take three minutes, and people don't get a response back before that three

(17:17):
minutes, people are just gonna keep spamming it and sending all these requests, and then in three minutes they're like, oh goodness, and it does the task 18 times.
So we just do a little bit of, sort of, notifying them: hey, got your message.
Right here, what we're doing is we're looking for any files.
So you can see JPEGs, PNGs, and then just the regular text from the Slack message, CSVs.
(17:38):
So what we do with that is ifthe user sends any message,
because they will sometimes usethis for, um, adding like leads
to their cm.
So they might snap a picture ofa business card and then we just
have it, you know, get the imageused, OpenAI to sort of like
pull whatever the image is andthen turn that into a normal
text, which we can then pass tothe CEO agent.
and then this part we're kindof, it has a lot of different

(18:02):
inputs here.
so this is for, this one isspecifically for long running
tasks.
So what we're, and we're stillbuilding this as well, we've,
you know, been working on thisagent maybe like three or four
months for this client.
And they're, you know, lookinglong term with ai.
So like we're always addingthings onto it.
So.
Because of that.
Sometimes it'll have these, wecall, like I was saying, the
long running tasks.

(18:22):
So what it does is, it will output at its first turn and say: hey, this is gonna be a long-running task.
And it actually will then message the user back right here and just say: hey, this is gonna be a long task, I'll get back to you when it's done.
And then it re-triggers itself with the task, so it can message and then go and complete the task. And so the user can understand it, it will actually tell them the steps.

(18:43):
It'll be like: hey, this is a long task, here are the 10 steps that I have to do to complete this, this might take a few minutes. And it will trigger itself.
So that's what this is right here.
It's essentially so the agent can re-trigger itself.
This one right here is from our reminder workflow.
This is super important, and pretty much every client wants this.
It's another thing a lot of people don't consider. It's

(19:04):
like, AI agents are not alive, perceiving time like you and I.
It doesn't just sit here all day waiting for something to do.
It activates when it's triggered.
So if you just go to an LLM and say, hey, remind me in two days to do this sort of thing, it will say okay, but it will not do that.
It'll just say, yeah, sure, I'll do that.
But it doesn't have any perception of time after that.
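One common fix for that missing sense of time, along the lines of the reminder workflow described here, is to let the agent write reminders to a store that an external scheduler checks. All names below are illustrative; in n8n the periodic tick would be a schedule trigger node rather than Python.

```python
# Toy sketch of giving an agent a sense of time: it stores reminders as a
# tool call, and an external scheduler tick re-triggers it when one is due.
# All names are hypothetical; in n8n the tick would be a schedule trigger.
import time

reminders = []  # each entry: (due_timestamp, task_description)

def set_reminder(delay_seconds, task):
    """Called by the agent as a tool: 'remind me later to do X'."""
    reminders.append((time.time() + delay_seconds, task))

def scheduler_tick(trigger_agent):
    """Runs periodically; fires any due reminders back into the agent."""
    now = time.time()
    for _, task in [r for r in reminders if r[0] <= now]:
        trigger_agent(f"You set a reminder to: {task}. It's time - do it now.")
    reminders[:] = [r for r in reminders if r[0] > now]

fired = []
set_reminder(0, "follow up on the policy renewal")   # due immediately, for demo
scheduler_tick(fired.append)
print(fired)
```

The LLM never "waits"; it just records intent, and the scheduler turns that record back into a trigger at the right moment.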

(19:25):
It doesn't know when two days have passed.
So what we created, and this is one that we put in almost every agent for a client, because they will pretty much always want it, is it can actually set reminders for itself.
So it says when it should be due, and what the actual task is.
So when it becomes due, it will actually go through this workflow and trigger itself and say: hey, you set a reminder for

(19:46):
this time to do this action. Go ahead and do it now; it's the right time to do it.
Super important there.
This one comes from their Zoho.
Because it's insurance, they wanna know when clients are coming up on 30, 60, and 90 days from their policy renewal.
Because that's really when the sales cycle will kind of restart for that client.
They wanna go find them better deals, and they wanna keep in

(20:09):
touch with that client to ensure that they will renew with them.
So it's super important.
Zoho has its own internal triggers and timers and workflows and stuff like that, as most CRMs do.
However, their workflows I really don't like, and they're hard to use. But we can have it trigger there.
So then, essentially, that triggers the agent as well.
So these are sort of external triggers that aren't
(20:29):
direct messages from the user,but they are important for the
agent to receive and then takeaction upon those.
Then for the outputs, this isanother thing is, you know, when
you're using ai, it pretty,like, if you send it a message,
it messages you back.
Um, and we don't always wantthat, especially if it's
something where it's like abackground task.
We don't want it to necessarilysend a message.

(20:51):
Sometimes we just want it to take an action.
Like, if it's 30 days from a policy renewal, we want it to get all that client's data and write them an email saying, hey, we're looking for better policy options for you. We want the email agent to message us, like: hey, here's the email I'm going to send.
So we give it the option to actually continue with no

(21:11):
response here, if it's a sort of background task.
That's another thing: some agents will only act when you ask them to do something, and some can also operate just in the background as well. But we don't want it to message the client on every background operation it does, because it just gets annoying and spammy, and they'll see the action is completed.

(21:31):
They don't need to see it and be told.
And then right here, so we already went over the long task, so it could say: hey, this is a long task.
And then what we do here is we have it either send or update messages.
This is another cool thing that you can do with Slack, and it makes for a cleaner user experience.
So it will say: hey, I'm right here, I'm getting started, I got your message.

(21:51):
Let me get started on the task.
And then over here it takes that same message and updates it to just say: hey, here's the completion of the task, or here's the results that I got back.

Isar Meitis (22:02):
I wanna pause you just for one second.
'Cause there are really three main key components, and we touched on two of them, even though it was a very long, detailed one.
One of them is inputs: the agent gets inputs, right?
And in this particular case, you touched on a lot of inputs.
Some of them are our inputs, which in this case are coming in through Slack: I want you to do 1, 2, 3, 4.

(22:23):
Some of them are inputs from different systems, such as the CRM; some of them are inputs from the output of the agent.
And that sounds a little meta, but that's the way it is.
Like, the agent did something, and it's like, oh, this triggers something else, so go and put this as another input to the agent.
So these are the inputs. The outputs could be anything you

(22:46):
want.
The output could be a message back to the user.
It could be taking action in a system, could be updating another agent, could be many different things.
So the output could be either a task, or a message, or an update of a different platform.
And in the middle, which is the part that we're gonna get to right now, is the actual agent itself: what does the agent do to take the data from the

(23:09):
various inputs that it gets and turn it into the outputs that we need?
And so let's dive into, I assume, I'm not trying to lead you here, I assume that's the next step.

Jake George (23:22):
Yep.
Yeah, you're spot on.
Great recap there.
There are a lot of different ways that they can be input to, and output.
And also, one of the kind of challenges is making them think like a human: having stuff like memory, knowing when to do stuff, setting tasks for themselves. Because as humans, that's just normal, you know?
It's like, you don't really think about, oh, I'm setting a task

(23:44):
for myself to do on Tuesday.
You just think: on Tuesday I have to take my dog to the groomer, or whatever.
So now we can get into this, and you can see this is quite a long and detailed prompt, so I won't necessarily read every single word of it.
I can just explain it more at a conceptual level.
But this is another very important thing that people quite often overlook.
Typically, you'll see someone's prompt is like this long, okay?
(24:07):
(24:07):
And it's like two sentences, just a very broad, vague description of kind of what they want it to do.
And so there are some really important parts that you should have in your prompt.
And this is actually all backed up by research from Anthropic, OpenAI, and Google. This is just a good way to prompt an agent.
And it makes sense, because you wanna write things out

(24:28):
clearly and break them up into different sections.
So how we do this is we write things in markdown.
We use different delimiters so it can understand the different parts of the prompt, because you don't want to just mash everything together.
Usually we try to avoid sections like these, but I will put it in role and objectives, because this is just a high-level overview of what it's doing. But it's best to use

(24:50):
bulleted lists, and, let's see if we have any step-by-step... yeah, numbered processes as well.
So these are all very important.
So what we start off with is role and objectives.
This is kind of a high-level overview:
This is what you typically should be doing.
This is the type of actions that you perform.
This is why you're doing them.
This is the outcome that we want.

(25:11):
So, a very high-level overview.
Usually people stop at this and think that that's enough, and it just is not. AI can do a million things, but it only does exactly what it's told.
It tries to do exactly what you say.
It's like if you're talking to someone and they take everything you say literally, and you're like, oh, that's so annoying. Come on, use your skills and infer stuff.

(25:31):
AI, it's getting better at that, for sure.
But it will still try to do what you say kind of literally.
So you have to describe things, you have to be exact, and that's one of the things that we see people are not very good with, especially when we start with clients.
They'll be like: oh, well, I just kinda, you know, I want it to write me marketing emails.
You're like, okay, but what's your process?
Who do they go to?
What should be the content?

(25:51):
Where do you get the content?
Who writes these?
What's your strategy to write them?
There are a million little details that go into it, but a lot of times people are not so good at breaking down, and being specific about, what they actually want.
So from role and objectives, we now have a subsection under here about the core capabilities.
And this is sort of how it does the task
And this is sort of like, thisis how it does the task

(26:12):
delegation.
It leads the team of otheragents.
Uh, one of them is the CRMmanager.
One of them handles emails.
Um, it can also do crossplatform research to read emails
and CRM data to make decisions,uh, goes into the decision
decision making.
And then user alignment as welljust to kind of say like, Hey,
before you do like irreversibleactions or like add or update

(26:33):
leads or something, check withthe user always.
This is another thing that isvery important to add.
We never just let an agent looseinto a company on the first go.
We always add human in the loopand then a testing period.
And we try to make it astransparent as possible, which
is why we love using Slack.
'cause if they're already inSlack and they say.
See that the agent is like, Hey,I'm gonna do this.

(26:54):
Hey, I noticed this, so therefore I'm going to do this.
And they become more comfortable, to the point where, after a couple months of that, typically, they'll be like, okay, put at least these parts on autopilot.
Like, I know it's gonna do those; it always does 'em right.
You know, I have confidence in it.
But that's very important: to let clients know what is actually going on.
And yeah, just let them have their input on it as well.

(27:16):
'Cause it won't always get everything right on the first go.

Isar Meitis (27:19):
I wanna pause you just for one second, because this is a very important point.
Mm-hmm.
When you're developing these things, the expectation should be: it is not going to work properly the first time, or the second time, or the fifth time.
On the 20th time, it's gonna work okay 90% of the time, which for some of the tasks is fine and for some of the tasks is unacceptable.
It just depends what the task is.

(27:41):
And the way to solve this is by testing it in live environments, meaning letting it run on actual data and then seeing what it actually does.
Now, the way to make sure, like Jake said, that it's not creating havoc and trashing your best clients is to put a stop point that lets

(28:01):
a human check what they're about to do.
And you could do this in Slack.
I'm doing this as well with tasks.
So you can open a task in Notion or in Jira, Asana, ClickUp, Monday, whatever it is that you're using for task management, and until the user closes the task or confirms whatever it is that needs to be done, the agent will not move forward, and

(28:24):
that's your way to increase your level of control and understanding of what the agent actually does. Then you go and fix it.
You do it again; you go fix it; you do it again, like Jake said, until you get to the point of, okay, I don't need to check this thing.
It's correct every single time.
You're just wasting my time.
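The human-in-the-loop gate being described, where the agent opens a task and cannot proceed until a human closes it, can be sketched in a few lines. An in-memory dict stands in for Notion/Jira/Slack here, and all function names are illustrative:

```python
# Minimal sketch of a human-in-the-loop approval gate: the agent records what
# it wants to do as a "task" and may only execute it once a human approves.
# The tasks dict is a stand-in for a real task-management tool.
import itertools

_task_ids = itertools.count(1)
tasks = {}  # task_id -> {"action": str, "approved": bool}

def propose(action: str) -> int:
    """Agent side: open a task describing the intended action."""
    task_id = next(_task_ids)
    tasks[task_id] = {"action": action, "approved": False}
    return task_id

def approve(task_id: int) -> None:
    """Human side: close the task, unblocking the agent."""
    tasks[task_id]["approved"] = True

def execute(task_id: int) -> str:
    """Agent may only act once the human has signed off."""
    task = tasks[task_id]
    if not task["approved"]:
        return "blocked: waiting for human approval"
    return f"done: {task['action']}"

tid = propose("update lead 'Acme Corp' stage to Closed-Won")
print(execute(tid))   # blocked until a human approves
approve(tid)
print(execute(tid))
```

After the testing period Jake describes, the same gate can be relaxed per action type, so only the actions that still make mistakes keep requiring approval.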
The other thing that doing it in Slack or in your task management platform does is that the agent is just another

(28:46):
person on the team, right?
It tells you what it's doing and updates you regularly.
And what I tell people all the time is that building an agent, or in general using these models, is like having the best intern on the planet.
It is going to do exactly what you tell them.
However, if you don't give them all the information and you don't explain to them exactly how to do the thing you

(29:07):
want them to do, don't expect them to do the task properly, because they're still an intern.
And so this is the approach, right?
You break it down step by step and give them all the information, just like we would give a person doing the task when they don't know anything about the company, or you, or the process, or the project, or the tools, or whatever. Give them the time and the resources that you would've given a person.

(29:29):
They will do the task amazingly well, and they take feedback amazingly well.
And so, let's continue. Sorry, I broke you up, but this was a very important point.
So, if you can scroll up: we started with a general, kind of like, task setup.
That was what you called role and objectives.
The second thing was your core capabilities.

(29:49):
So these are the things that youcan do with a bullet point that
describes each and every one ofthe things.
What is the third component?

Jake George (29:56):
So then the third component, for this one, is goal.
And I want to get into this: we don't always start with, oh, you must have a goal, you must have a role and objective, and so on and so forth.
Every prompt is built differently, and it should extend based on how complex the task is, and it should be a very iterative process.

(30:17):
So sometimes I'll put goal in with role and objective.
Sometimes that doesn't need to be separated out. But when we're doing something like the manager agent, it has the most extensive prompt, as you can imagine; then the subagents are a little bit less, and then the agents within workflows or tools are even less, and more specific.
So these are not hard-set things, like, you must have this or the prompt will not work.

(30:39):
You really build the prompt around the task, doing it in sort of an iterative testing phase.
And this is what a lot of people get wrong.
They use AI for it.
And it's not that AI couldn't write a good prompt, but there's one specific reason why AI is typically not as good at writing prompts as humans: it doesn't go and test them.
It just says, hey, here's a whole bunch of fluff, here's like 18 paragraphs going over everything that it could

(31:02):
do.
And when you do something like that and you paste it in, you don't really know what's working and what's not working, and you also don't know how to change it, because you don't know at what point it started to go off track.
So it's really best to start with just the very basic things: give it a one-sentence role when you start, and then a one-sentence goal.
And then rules I always just add as it does things wrong.

(31:23):
Like, that's the reason I add rules, unless I know this is something that AI always gets wrong, or it's some rule-of-thumb best thing to add in there.
I just add these as it makes mistakes. I write all of my prompts by hand, like, I write each word in them.
Sometimes when I'm done with it, it might be a little messy, so I might send it to AI and just say, hey, format this better for me.

(31:44):
But the most I ever put in from AI is maybe one line.
Like, let's say I try to explain something that's kind of a simple concept, but when I write it out, it comes out this long, because, you know, when people write, they're just writing as they think.
And I go, can you make this more concise and clear?
And then if I read it back and I'm like, that's good, I might paste it in.
But what I never do is say, write this entire section.

(32:05):
'Cause it's just guessing on what it should do, what might go wrong, what the rules should be.
So I highly recommend that you don't use AI to prompt. Unless, you know, you're an absolute master at it and you've built your own agent to write prompts just how you'd write them, and so on and so forth.
Sure, I'm sure that there are people that are really that good, but I write all of mine by hand, and I recommend people do

(32:26):
it too.
It's a good thinking exercise and it's a good learning exercise.
And if you always use AI to do it, you just won't get better at it.
And that's why a lot of people struggle at these sorts of things: they never actually learn.
They're just, oh, AI, do it for me.
They don't know what's good about it or bad about it.
Anyways, the next step is rules.
And I break these down into subsections.
So again, rules: a bulleted list.

(32:48):
And a lot of times these are, like I said, written because it messes up.
And then I say, okay, you know, do this, because it screwed up on that.
So that's the reasoning for rules.
I break them down into subsections based on, like, you know, the workflow, communication, emails, how to use tools.
I think there's one for planning and reflection, and a reasoning strategy as well.
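The rules section described here, a bulleted list broken into subsections and grown iteratively as the agent makes mistakes, can be sketched like this. The subsection names follow the episode; the individual rules are invented examples:

```python
# Sketch of a rules section kept as subsection -> bullet list, with a helper
# to add a rule each time the agent gets something wrong. Rule wording here
# is illustrative, not from the actual prompt shown in the episode.
rules = {
    "Workflow": [
        "Always look up the client in the CRM before sending any email.",
    ],
    "Communication": [
        "Confirm with the user before any irreversible action.",
    ],
    "Tool usage": [
        "Refer to tools by their exact registered names.",
    ],
    "Planning and reflection": [
        "Plan before each tool call and think between tool steps.",
    ],
}

def add_rule(section: str, rule: str) -> None:
    """Add a rule when the agent makes a mistake, per the iterative process."""
    rules.setdefault(section, []).append(rule)

def render_rules() -> str:
    """Render the subsections as the bulleted list that goes in the prompt."""
    lines = ["# Rules"]
    for section, items in rules.items():
        lines.append(f"## {section}")
        lines.extend(f"- {item}" for item in items)
    return "\n".join(lines)

add_rule("Communication", "Never email the same lead twice in one day.")
print(render_rules())
```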

(33:09):
So I guess this is under workflow processes.
But this is another thing.
So then we have exact processes. 'Cause the cool thing, we were kind of talking about it before: when you have hard-set workflows, sometimes you want things always done a certain way.
When you use an agent, it's almost like the AI builds its own workflow.
'Cause it has all these tools, and depending on which order it uses them in, every time you run it, it can build its own workflow.

(33:31):
That's what's really cool and useful about it, and makes it more similar to a human.
However, there are some times when you wanna tell it, always do these things in this order.
Like, don't go and send an email before looking up data about the client, because then you'll have no context on them.
You would think that's obvious.
It's obvious to a human.
It's not obvious to an AI unless you specifically tell it exactly

(33:51):
what you want it to do.
Also, explaining reasoning strategy.
Things like: plan extensively before each function or tool call; take a moment to outline the steps.
And then: always think between tool steps.
'Cause another thing a lot of people don't think about is that it will just run tool after tool and not really think about the returned results, trying to do all the thinking at the end.

(34:12):
You can actually change this by telling it: think between the tool steps, and then make the next decision.
These are actually outlined in OpenAI's paper about the 4.1 models, because it will follow prompts a little more literally than the previous models, which would try to infer more. You can actually get a 20% better result when you explain the reasoning steps and use this

(34:35):
part here.
And then there's another part at the end here, just telling it: you are an agent; keep going until the user's query is completely resolved before ending your turn and yielding back to the user.
And then a reasoning step.
And then there's also one other one, which I don't think was necessary in here, about reviewing data; or it may actually be in tool usage.

(34:55):
But using those, you'll just naturally get about a 20% better result, just by reminding it of these sorts of things.
And it's really interesting, 'cause a lot of people wouldn't think deep enough to realize, I need to explain a reasoning and thinking process to get it to think like me and do things like me. But it really significantly improves the result.
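The reminders being referenced, persistence, planning, and thinking between tool calls, can be sketched as strings appended to a system prompt. The wording below paraphrases what's quoted in the conversation; it is not the exact text from OpenAI's GPT-4.1 guidance:

```python
# Paraphrased sketches of the three agentic reminders discussed above.
# These are illustrative rewordings, not verbatim text from OpenAI.
PERSISTENCE = (
    "You are an agent. Keep going until the user's query is completely "
    "resolved before ending your turn and yielding back to the user."
)
PLANNING = (
    "Plan extensively before each tool call, and reflect on the result of "
    "each previous tool call before deciding the next step."
)
THINK_BETWEEN_STEPS = (
    "Think between tool steps rather than running tool after tool and "
    "saving all reasoning for the end."
)

def with_agentic_reminders(system_prompt: str) -> str:
    """Append the reminders at the end, where models attend most."""
    return "\n\n".join([system_prompt, PERSISTENCE, PLANNING, THINK_BETWEEN_STEPS])

print(with_agentic_reminders("You are the CRM manager agent."))
```

Putting them at the end rather than the middle also lines up with the later point in the episode about models paying the most attention to the top and bottom of a prompt.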

(35:16):
Next we have steps.
So this is still under workflow processes: steps for researching a person, CRM and email.
So again, we want it to always look in the CRM, and we want it to search for recent emails.
And then we want it to review these.
And then, another thing that it does in the background is look for any missing data on that person.
And if it can find that missing data, like their email or

(35:39):
their address, or their LinkedIn or something, if it can find it in an email from them, it will actually go in autonomously and add that to the CRM.
Then it compiles a summary.
So this is also important.
I'm sure that you've heard of it, though maybe not a lot of other people have, depending on how much they build agents: the sort of context engineering of, you can't

(36:00):
just, there's usually two sides.
People give it a one-sentence instruction and it's not very clear, or they just dump everything they can possibly imagine into it.
And neither of those is a great strategy.
If you overload its context window... yes, it might have a million-token context window.
That doesn't mean it will process all of that properly.

(36:21):
AI tends to skip over a lot of parts.
So you want to only give it the really necessary data.
So, especially in subagents, we kind of have it summarize the data and then pass it onwards.
It helps preserve its context window.
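The summarize-then-pass pattern just described can be sketched like this; the `summarize` function here is a trivial stand-in for an LLM summarization call, and the record fields are invented:

```python
# Sketch of context preservation in a subagent handoff: instead of forwarding
# raw CRM/email records to the manager agent, compress them first. A simple
# field-picking function stands in for a real LLM summarization call.
def summarize(records: list, limit: int = 3) -> str:
    """Stand-in for an LLM call: keep only the key fields, cap the count."""
    lines = [f"- {r['name']}: {r['status']}" for r in records[:limit]]
    if len(records) > limit:
        lines.append(f"...and {len(records) - limit} more")
    return "\n".join(lines)

raw = [
    {"name": "Acme Corp", "status": "proposal sent", "notes": "..." * 500},
    {"name": "Globex", "status": "no reply in 14 days", "notes": "..." * 500},
]

handoff = summarize(raw)  # what the manager agent actually receives
print(len(handoff), "chars instead of", sum(len(str(r)) for r in raw))
```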
Next we go into the tool section.
This is very important to explain.
We give in here a broad overview of what they do, and

(36:41):
then it's best, for structure, to explain more in here. Which, I don't know if this one... yeah, so this one is pretty good.
So this has a more detailed explanation of how to use the tool, what the tool can do, and sometimes when to use it, depending on the tool.
Well, I wanna pause just

Isar Meitis (36:57):
for one second.
For the people who are not watching the screen.
Sure. Yep.
For each of the agents, in every tool you use, it doesn't matter whether they're using n8n or something else to build the agent, the agent has a section of the tools it has access to.
You're literally connecting a line and giving it access to different tools.
A tool could be a CRM; a tool could be something you build.

(37:19):
So, a process that you build.
It's something that the agent can use to achieve the goal that it was given.
And so what Jake is showing right now: on the high-level agent, in the actual prompt of the agent, when it's describing to the agent what it can and cannot do, it's giving it a short description of the tool.

(37:39):
Inside the tool itself, there's a much longer description of the tool, which makes perfect sense, right?
Because, think about it, just like Jake mentioned: think about the CEO.
The CEO doesn't necessarily need to know exactly how to do each function in the company.
It needs to know the function exists; it needs to know what

(38:00):
the function does.
It doesn't need to know how to do the thing.
But the guy that creates the reports from the ERP for the inventory manager needs to know very, very well how to do this one very specific thing, and that's where you're gonna define that tool in much more detail.
In this particular case, this could be a Zoho CRM data query,

(38:23):
which is what we're seeing right now.
So that part has to be very, very detailed.
But on the high-level agent, the CEO agent, the orchestrator agent, you just tell it that it has that functionality.
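The two-level description being recapped, a one-liner in the orchestrator's prompt and the full instructions inside the tool, can be sketched as a small registry. The tool name and its descriptions are invented for illustration:

```python
# Sketch of two-level tool descriptions: the orchestrator sees only a brief
# summary of each tool, while the detailed usage text lives with the tool.
# "zoho_crm_query" and both description strings are illustrative.
TOOLS = {
    "zoho_crm_query": {
        "summary": "Look up leads, deals, and contacts in the Zoho CRM.",
        "details": (
            "Query the Zoho CRM. Provide a module (Leads, Deals, Contacts), "
            "a search field, and a value. Returns matching records as JSON. "
            "Never modify records with this tool; it is read-only."
        ),
    },
}

def orchestrator_tool_list() -> str:
    """What the CEO/orchestrator agent sees: tool name plus one-liner only."""
    return "\n".join(f"- {name}: {t['summary']}" for name, t in TOOLS.items())

def tool_instructions(name: str) -> str:
    """What the tool-level agent is prompted with: the full detail."""
    return TOOLS[name]["details"]

print(orchestrator_tool_list())
```

Keeping the key `"zoho_crm_query"` identical in both places also reflects Jake's point, made right after this, about naming tools consistently between the prompt and the tool itself.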

Jake George (38:36):
Exactly.
Yeah, you summed it up very well.
And a way to think about it on a human level: imagine you're an intern, you just joined the company.
They give you this long list: here's all the things that you do, here are the processes; they have it all documented.
You're not gonna, every time you go to do a task, read the entire thing and remember each step. You're gonna look for that section: oh, I gotta update the CRM, let me find that

(38:59):
section.
Okay, I need to use this tool.
Let me go read how to use the tool.
And that's it, in a nutshell.
AI doesn't think exactly like humans, but kind of: it pays attention to different parts depending on what it's doing.
So that's why the structure of the prompt is extremely important; it will pay attention to different parts of it depending on what it's doing.
So you wanna break it down granularly, and yeah, you

(39:19):
hit the nail right on the head: you give it a very brief explanation of how the tool works in the prompt.
And it's also important to keep the naming the same.
So don't call this one, like, Zoho tool, and then the other one is, like, CRM managing workflow or something like that.
I always name them in a consistent way and then

(39:40):
reflect the name exactly within the prompt so it knows it; you know, be very specific and exact when you're speaking to an AI.
So: a very high-level overview here; when you go into the tool, it makes more sense to give all the details there, because it doesn't necessarily need all the steps it uses every time.
If it's doing an email-managing task, it doesn't need to know all the steps that the Zoho managing agent does in

(40:02):
here.
Lastly, we have vector stores.
So this is if you have external databases, vector stores, anything like that that it might need to reference; we just tell it, this is what's contained in here.
And a really cool thing that we've done, well, we do for a lot of managing agents, but this one as well, is that it actually has a shared memory, where all the agents will output to a

(40:24):
database in Supabase, and they'll kind of write down: here's what the input was, sometimes the reasoning steps, and then the output.
So this makes it a whole unified team, because if they each have their own separate memory, they won't always know what was actually done.
They just know user input, output.
That's it.
And a lot of times that's not enough context, especially in

(40:45):
multi-agent systems.
So by giving them a shared memory, it's like having a group chat.
Imagine you're trying to coordinate something with your team at work: if it's all one-on-one conversations, no one knows what's going on.
If you do it all in the group chat, everyone kind of stays on board.
And if they don't know what's going on, they just go and check.
So we do that here.
You can also use this as more of a

(41:07):
structured query database rather than, like, RAG.
Which, you know, depends on what the use case is, but there are different ways to query this.
But that is essentially the gist behind why we do that.
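The shared "group chat" memory described here, every agent logging input, reasoning, and output to one table all agents can read, can be sketched with a list standing in for the Supabase/Postgres table. The column names and example rows are illustrative:

```python
# Sketch of a shared agent memory: each agent appends a row describing what
# it was asked, why it acted, and what it did, and any agent can read the
# recent history before acting. A list stands in for a real database table.
shared_memory = []

def log_action(agent: str, task_input: str, reasoning: str, output: str) -> None:
    """Append one row to the shared log, like an insert into a shared table."""
    shared_memory.append({
        "agent": agent,
        "input": task_input,
        "reasoning": reasoning,
        "output": output,
    })

def recent_activity(limit: int = 5) -> str:
    """What any agent reads before acting, so the team stays in sync."""
    rows = shared_memory[-limit:]
    return "\n".join(f"[{r['agent']}] {r['input']} -> {r['output']}" for r in rows)

log_action("crm_manager", "look up Acme Corp", "lead exists, stage is stale",
           "updated stage to 'proposal sent'")
log_action("email_agent", "draft follow-up to Acme Corp",
           "saw the CRM update in shared memory", "draft ready for approval")
print(recent_activity())
```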

Isar Meitis (41:18):
I'm gonna pause then at the end just for one second.
For the people who get diarrhea when they hear the words database, or Supabase, or vector store: you don't have to worry about it.
It sounds very technical, but the reality is the tools today became really, really simple.
Like, literally, if you go to Supabase, which is one of the most commonly used tools in these worlds, it's a

(41:38):
database tool, and you can set up a new database and give it a name, and that's more or less everything you need to know.
And from that moment on, the agent, or in this particular case all the agents, can call that database, which is just storage.
It's a container of information where they can all save information to, and you tell them what information to save.
So: when you take an action, announce it to the database;

(42:02):
when you post something to the CRM, put it in the database; like, you tell them what to save.
And it's called a vector store because these AI databases are built on vector information.
So basically it's like a length and a direction, but again, you don't need to know all of that.
You just need to know there's a container that saves information

(42:23):
and data, that all the agents and all the tools have access to, and that's their way to collaborate and have unified knowledge of what's going on across the different processes.

Jake George (42:37):
Absolutely.
And, in a nutshell, there are two ways that databases are commonly used for agents. One is just the chat memory, so like maybe the past 10 things that were said.
And then the other one would be the vector database.
And usually, when people say, I trained an AI to do something, what they mean is they uploaded all this information to

(42:57):
a vector database, and it's just like having a user manual for something: you can go in there and look up any data when you want to.
So there's chat memory, and then: here's all the data that you can look up if you need it.
And that's how those two typically operate, and why they're a little bit different.
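The two roles being contrasted, a rolling chat memory versus a reference store you query on demand, can be sketched like this. A simple keyword match stands in for real vector similarity search, and the "manual" entries are invented:

```python
# Sketch of the two database roles: a rolling chat memory holding the last N
# messages, versus a "user manual" store the agent looks things up in when
# needed. Keyword matching is a stand-in for real vector similarity search.
from collections import deque

chat_memory = deque(maxlen=10)  # "the past 10 things that were said"
manual = {                       # the reference store, queried on demand
    "refund policy": "Refunds are issued within 30 days of purchase.",
    "escalation": "Escalate to a human after two failed resolutions.",
}

def remember(message: str) -> None:
    """Rolling memory: old messages fall off automatically."""
    chat_memory.append(message)

def look_up(query: str) -> str:
    """Stand-in for vector search: return entries whose key appears in the query."""
    hits = [text for key, text in manual.items() if key in query.lower()]
    return "\n".join(hits) or "no match"

remember("user: what's your refund policy?")
print(look_up("Tell me about the refund policy"))
```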
Then, at the end here, this is super effective and

(43:18):
important.
The AI models will pay attention to the things at the top and bottom the most, and I believe the bottom a little bit more than the top.
So when you have a very long prompt, where it might kind of skip over things, it's really good to give it a recap.
And again, I do this iteratively, for the things that it might forget or is not doing well, where you're like, it's in the

(43:40):
prompt, it's just not doing it.
Just restate it at the end, because the more times something appears in the prompt, the more it will adhere to the instruction.
And also, if it's at the end, it pays more attention to it, and that just makes it that much more effective.
For this one, it follows the instructions very well, largely because they're, you know, well written, well structured, and everything.
The thing I put at the end is from the OpenAI paper about,
(44:02):
you know, how it's an agent, and keep going until the user's task is complete.
The AI models just understand what agents and agentic frameworks are.
And so then it just thinks more like an agent, rather than, you know, a you-send-me-a-question, I-send-you-an-answer sort of thing.
It just makes it perform better.
So yeah, that is how this main agent works here.

(44:23):
I can definitely go into maybe a subagent now, or I could even show how the outputs look, in a user-experience sort of way, in Slack.

Isar Meitis (44:32):
So, we're short on time.
I want us to do two things very quickly.
Sure.
One, let's open just one of the subagents, just so people see what it looks like.
And basically, if you think about it, a subagent is kind of like a tool for the main agent, and then you can go as many layers deep as you want; or a process could be a tool for an agent, and so on.

(44:53):
So you can see now that Jake opened a subagent; it has a lot less stuff.
So let's just go over the components that this is connected to, so people understand.
And then maybe we'll look at one run and see how the output looks.
And I think that's gonna be awesome.

Jake George (45:10):
Sure, yeah.
For this one, it is much simpler, and one of the reasons for that is because its input is just a natural language string coming directly from the manager agent.
We don't have to worry about processing the data from Slack, having a reply, so on and so forth.
So, for what this one does, we actually use some different MCP servers here.
These ones are more like workflow ones that we had to
(45:30):
build different steps into.
These ones are more like direct actions: getting deals, leads, many deals, many leads, accounts, and contacts from their CRM.
And then it also has the Postgres chat memory here, which is the database, like I was saying, where it's just the chat memory; it has, you know, the last however many messages that happened, which helps keep it on track a lot more.

(45:50):
And then, as you can see, the prompt is structured very much like the other one.
We have more step-by-step processes in here, explaining all the different tools, and so on and so forth, and how to do different processes in here.
But yeah, this one is mainly focused, because it doesn't need to know all the instructions for everything.
It's really just focused on: here's how you complete the
(46:13):
queries that will come to you.
Here are the tools that you have available; here's how you would use them.
Also, using the think tool is quite useful as well.
It allows it to have sort of a scratch pad where it can just put down thoughts and reason through them.
And it just helps it think better through longer or more complex tasks that might take many different steps.

(46:33):
So that is... I'll add one

Isar Meitis (46:34):
small thing.
First of all, great recap of what the subagents do.
You touched on MCP servers. MCP servers are the ultimate AI connectors, and they're basically built either by the companies themselves or by third parties, and they expose the functionality of a tool.
In this particular case, Zoho, but this could have been

(46:55):
Salesforce, ERPs, email platforms, knowledge graphs, social media networks; whatever has an API can connect to it.
It exposes it to an AI in a very, very simple way.
So instead of having to develop against APIs for weeks with a team of 10 people, you write four to 10 lines of code and you have

(47:18):
access to everything that the tool has to offer.
So when Jake says, I connected MCP servers to provide me information or take actions in Zoho, what he means is that they've connected specific functionality from that tool into this agent. Which, again, sounds really, really complicated, but it's a lot easier than you think, because

(47:39):
that's what MCP does.
It makes it very easily accessible.
And then, in simple English, you can tell the agent, hey, I need you to go and do this.
And it knows how to go and grab that function from the MCP, and then it knows how to call, in this particular case, Zoho, and say, hey, I need to know all the new leads that came in in the last 24 hours.
And then it just works, without you having to figure out what's

(48:01):
actually happening in the backend.
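The core idea just described, exposing named functions plus machine-readable descriptions so an agent can discover and call them, can be sketched in plain Python. This mimics the Model Context Protocol's tool-listing concept but is NOT the real MCP SDK; the function name and stub data are invented:

```python
# Conceptual, stdlib-only sketch of what an MCP server provides: a registry
# of tools with descriptions the agent can discover, and a dispatch path for
# calling them. This is an illustration of the idea, not the actual MCP SDK.
def get_new_leads(hours: int = 24) -> list:
    """Return leads created in the last `hours` hours (stubbed data)."""
    return ["Acme Corp", "Globex"]

TOOL_REGISTRY = {
    "get_new_leads": {
        "fn": get_new_leads,
        "description": "List CRM leads created in the last N hours.",
    },
}

def list_tools() -> list:
    """What the agent 'discovers': tool names plus descriptions."""
    return [f"{name}: {t['description']}" for name, t in TOOL_REGISTRY.items()]

def call_tool(name: str, **kwargs):
    """Dispatch an agent's tool call to the underlying function."""
    return TOOL_REGISTRY[name]["fn"](**kwargs)

print(list_tools())
print(call_tool("get_new_leads", hours=24))
```

A real MCP server adds a standard wire protocol on top of this, which is what lets any MCP-aware agent discover and call the tools without custom integration code.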

Jake George (48:03):
Yeah.
It makes these things very quick and easy to develop, too.
You simply go, you make your MCP server, you throw all the tools in it, and yeah, it makes it very easy to spin these things up very quickly.
One more thing: I can show sort of a demonstration.
Let's see how this

Isar Meitis (48:22):
looks on the Slack channel.
Yeah.
So people understand what the output looks like after all this detailed work.
Let

Jake George (48:30):
me go into here.
Okay, Slack.
Okay, do you see Slack now?
Yes.
So, what we have here, this is another one that we're working on, but this is kind of like a manager agent here that has a whole bunch of different subagents.

(48:52):
Joe named it Jarvis for this one.
But these are sort of the functions of how the different agents would interact with the user.
For example, he says, hey, can you look for information on himself?
So it's going to go ahead and look through the CRM.
It says, hey, here's the CRM summary; I also looked for all the emails from him as well, and then it asks the user what they would like it to do next.

(49:14):
So sometimes we use structured output.
Another really cool thing that you can do is use Slack blocks, which I'll give a visual demonstration of in a second.
A lot of times, some agents use unstructured text like this, because it can be so dynamic.
But others are focused on a very specific task; this one, for example.
So this one is the email-managing agent, which is one of the subagents.

(49:34):
So again, like I was saying, sometimes if it's just, hey, I know that I should write an email to this person, we don't need Jarvis to tell us and then the email agent to tell us; that's just duplicate.
So Jarvis would choose: hey, the email agent has it, I don't need to respond.
Then the email agent would send this.
So it's just, hey, this is the email that came in, and then here's the new one that I'm going to send based on that.

(49:56):
So then we have it referencing past emails, so it knows exactly how to write those and knows what's going on.
And then, to make it very easy, and sometimes to keep it better on track, especially for clients, because in your mind you might know exactly all the 8,000 things that this agent does, but a client does not.
So sometimes it just makes it easier to say, these are your four options, and just give them some buttons, because

(50:19):
they either want to send it, draft it, have it rewritten, or just decline it in general.
So it's very useful to just give them those direct options.
Yeah, you can do this using Slack blocks.
When

Isar Meitis (50:30):
you do this, you build, and again, now we're getting way too technical, but you build a Slack tool that has these functions, which then connect in the backend to the functions in the AI agents.
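The approval buttons described, an email draft followed by a row of options, can be sketched as a Slack Block Kit payload. The JSON shape follows Slack's Block Kit (a `section` block plus an `actions` block with `button` elements); the `action_id` values and button labels here are illustrative:

```python
# Sketch of the Slack blocks message: the drafted email in a section block,
# followed by an actions block carrying the four option buttons. The
# action_id strings are invented; a backend handler would route on them.
import json

def email_approval_blocks(draft: str) -> dict:
    buttons = [("Send", "send_email"), ("Draft", "draft_email"),
               ("Rewrite", "rewrite_email"), ("Decline", "decline_email")]
    return {
        "blocks": [
            {"type": "section",
             "text": {"type": "mrkdwn", "text": f"*Draft reply:*\n{draft}"}},
            {"type": "actions",
             "elements": [
                 {"type": "button",
                  "text": {"type": "plain_text", "text": label},
                  "action_id": action_id}
                 for label, action_id in buttons
             ]},
        ]
    }

payload = email_approval_blocks("Hi Jane, thanks for reaching out...")
print(json.dumps(payload, indent=2))
```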
So that's how these two universes interact.
Jake, this was fantastic, very detailed. I wanna do a very quick recap, and then I want

(50:53):
to let people know where they can find you.
So: agents are awesome, especially when you combine them with structured processes and the right data sources and the right tools, and that's the way to actually make them work.
And what Jake showed you, literally openly, which is incredible because it's work he actually does for his clients,
(51:13):
is how to think about it.
So: how to create the architecture, how to build the layers of the different things, what kind of tools you need to think about, how to connect them, and what the actual prompt structures are for a master agent and then the sub, smaller, tool-level or specific-operations-level agents.

(51:36):
And then we showed you a very quick example of how that can work while connecting it into Slack as the way to communicate, as, if you want, the front end, the user interface for these agents.
So the users don't need to know any of this.
The end user just sees a Slack channel, and they talk to it just like it's a person, and they can communicate back and forth.

(51:56):
So again, incredible, incredible work.
This is extremely helpful for people.
By the way, I don't expect anybody, after listening to this podcast, to be an agent ninja, but it gives you a very good idea of what it takes to actually develop an agent.
And then you can call somebody like Jake to actually build it for you, but you will have a much easier time explaining
(52:19):
what you actually need, which is a big, important part.
So, Jake, if people wanna learn more about your work, work with you, follow you, see your content, what are the best ways to do that?

Jake George (52:30):
Yeah, so you can definitely connect with me on my LinkedIn.
Hopefully you can just drop a link in the description, or you can just search for me by name; and then also at my website, which is just agenticbrain.com, which is spelled right here.

Isar Meitis (52:43):
Awesome.
Really, really great.
I really appreciate you, appreciate what you're doing, appreciate the content you're sharing, and keep on doing the hard work.
And for everybody listening, if you wanna learn more about agents, we have several other episodes on several other platforms that are not n8n, so they can broaden your horizons, but they all follow the same exact concepts that Jake beautifully laid out in front of us.

(53:05):
So thanks again, Jake.
Uh, I'll see you around.
Thanks everybody for listening.

Jake George (53:09):
Thanks for having me.