
July 11, 2024 35 mins

In this episode, Mark and Shashank dive into the intriguing world of AI agents. The duo unpacks the fundamental concepts of what AI agents are and how they function within workflows. They explore why these agents are garnering attention as the next big thing in the AI landscape. The episode also offers insights into practical applications and potential advancements that AI agents could enable. Join them as they discuss the upcoming workshop on building AI agents and provide a sneak peek into what participants can expect at the event in Palo Alto on July 25th. Whether you're a new listener or a returning fan, this episode promises to enhance your understanding of AI agents and their role in automating complex tasks.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Hello everybody and welcome to another episode of the podcast.

(00:05):
So for those that don't know us, or those coming back, just real quick: I'm Mark, and
with my wonderful co-host Shashank, we talk about Gen AI in general.
So we run a meetup in Silicon Valley and it's a weekly meetup and we meet a lot of cool

(00:27):
people and we talk about a lot of interesting concepts.
So that's why we made this podcast.
We wanted to bring those interesting conversations that we've had to the world.
Hence you.
So, whether you're a new listener or coming back, we hope that you're
able to get some value out of our conversations.

(00:50):
So today we were thinking about talking about agents.
What is an agent?
What's an agent workflow?
Why are they kind of a new hot topic?
And then maybe, what are some of the things that agents will enable?
Now, the reason we are talking about this today is, shameless plug:

(01:13):
We have an event on the 25th of July this month.
So that's, what is that, about two weeks from today.
We're going to have an event in Palo Alto.
You can find the event details on our meetup page.
We'll put it in the description, but it'll be meetup.com/gen-AI.

(01:35):
You'll be able to see it there.
It's going to be a workshop all about building agents.
So hopefully we'll see you guys in person for that one.
But regardless, let's get right into it.
So Shashank, do you want to start us off and maybe describe what an agent is?
Sure, yeah, I can take a stab at it.

(01:57):
So we started seeing agents actually as early as two years ago when ChatGPT first came out.
And people started trying to think of what are the limitations of this tool and how do we
augment it.
So some of the obvious things that an LLM can't do is search the internet or at least back

(02:20):
then the LLM couldn't search the internet.
Now they have these functions or plugins, whatever you want to call them, that augment its capability,
give it tools and the ability to do other things outside of all the knowledge that is contained
within the LLM's neural network.
So using tools in a structured format to do repetitive things is usually something that

(02:51):
can be automated by an agent.
So let's take a use case.
Maybe you want to summarize the top 10 articles in the New York Times every day rather than going
through them manually.
So you can build an agent with this specific use case in mind, which has a bunch of
repetitive workflows:
you scrape the web, get a list of articles, you go through each article, summarize it,

(03:17):
and return the summaries in a coherent manner.
So an agent is something that can run multiple queries with an LLM by itself, or call other
functions outside of the LLM,
do a bunch of processing, do a bunch of self-correction, and then return an answer after

(03:40):
doing multiple steps.
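
To make that shape concrete, here is a minimal sketch of the kind of loop Shashank is describing. The fetch_headlines, fetch_article, and call_llm helpers are hypothetical stand-ins for whatever scraper and LLM client you actually use; the point is just the structure: gather inputs with tools, run the LLM over each item, and assemble a final answer.

```python
# Minimal sketch of a single-pass "summarize the top articles" agent.
# fetch_headlines, fetch_article, and call_llm are hypothetical stand-ins
# for whatever scraper and LLM client you actually use.

def fetch_headlines(limit: int = 10) -> list[dict]:
    """Return [{'title': ..., 'url': ...}, ...] from your news source (stub)."""
    raise NotImplementedError("plug in your scraper or news API here")

def fetch_article(url: str) -> str:
    """Return the article body text for a URL (stub)."""
    raise NotImplementedError("plug in requests/BeautifulSoup or similar here")

def call_llm(prompt: str) -> str:
    """Send a prompt to whichever LLM you use and return its text reply (stub)."""
    raise NotImplementedError("plug in your LLM client here")

def summarize_top_articles(limit: int = 10) -> str:
    summaries = []
    for item in fetch_headlines(limit):
        body = fetch_article(item["url"])
        summary = call_llm(
            f"Summarize this article in 3 sentences:\n\nTitle: {item['title']}\n\n{body}"
        )
        summaries.append(f"- {item['title']}: {summary}")
    # One final LLM pass to stitch the per-article summaries into a coherent digest.
    return call_llm("Combine these into a short daily digest:\n" + "\n".join(summaries))
```
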
Yeah.
So one thing to note is it's not just one task. So Shashank gave the example of summarizing
the New York Times, where it would be a one-step thing where the agent would go and
pull the articles from the New York Times and summarize them, and then just do that once.
But you can do it where there are maybe multiple steps.

(04:01):
So for example, maybe you wanted to make an agent, if we're taking this example a little
bit further, to give maybe a commentary on, or maybe fact check, each of the articles
in the New York Times.
So maybe the agent would go through each of the articles, it would go article
by article, and for article one, it would go and maybe look at it, summarize it,

(04:24):
and then maybe Google search on each of the key arguments made, and then try to
see if it could find some supporting data for that, and then maybe aggregate a list
of, I don't know how you'd say it, a list of sources, or articles, or whatnot,
to validate the coherency, or the truthfulness, of a particular article.
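
A rough sketch of that multi-step, per-article version might look like the following; call_llm and web_search are again hypothetical placeholders rather than any specific library's API.

```python
# Sketch of a per-article fact-checking pass: extract claims, search for
# supporting sources, and ask the LLM for a verdict. call_llm and web_search
# are hypothetical stubs, not a specific library's API.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("your LLM client here")

def web_search(query: str, k: int = 3) -> list[str]:
    """Return the top-k result snippets for a query (stub)."""
    raise NotImplementedError("your search tool here")

def fact_check_article(title: str, body: str) -> dict:
    summary = call_llm(f"Summarize this article:\n{body}")
    raw = call_llm(f"List the key factual claims in this article, one per line:\n{body}")
    claims = [c for c in raw.splitlines() if c.strip()]

    findings = []
    for claim in claims:
        snippets = web_search(claim)
        verdict = call_llm(
            "Does the evidence below support this claim? Answer SUPPORTED, "
            "CONTRADICTED, or UNCLEAR, with one sentence of reasoning.\n\n"
            f"Claim: {claim}\nEvidence:\n" + "\n".join(snippets)
        )
        findings.append({"claim": claim, "sources": snippets, "verdict": verdict})

    return {"title": title, "summary": summary, "findings": findings}
```
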

(04:52):
So it could be multiple steps, and that's kind of a contrived example,
but you could go and build a lot of more complicated workflows.
So in addition to fact-checking articles, you could use it maybe for investment
research, you could do things like product research.
I think research is a pretty good fit.

(05:14):
Yeah, you could do Google searches, maybe call a calculator, call APIs, like
maybe a flight API, you could get access to calendars, maybe movie showtimes,
maybe you could have it ask a survey question to get human feedback.

(05:38):
There's like a lot of things that you could potentially do with this and try to combine
them all together with LLMs.
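
One common way to wire up that kind of tool use is to keep a registry of plain functions and let the model pick one by name. This is only an illustrative sketch (the tools and the call_llm helper are hypothetical), not the function-calling API of any particular provider.

```python
import json

# Hypothetical tool implementations; swap in real APIs (flights, calendar, etc.).
def calculator(expression: str) -> str:
    return str(eval(expression, {"__builtins__": {}}))  # toy example only

def movie_showtimes(city: str) -> str:
    return f"(stub) showtimes for {city}"

TOOLS = {
    "calculator": calculator,
    "movie_showtimes": movie_showtimes,
}

def call_llm(prompt: str) -> str:
    raise NotImplementedError("your LLM client here")

def run_with_tools(question: str) -> str:
    # Ask the model to either pick a tool (as JSON) or answer directly.
    decision = call_llm(
        "You can call one of these tools: "
        + ", ".join(TOOLS)
        + '. Reply with JSON like {"tool": "calculator", "arg": "2+2"} '
        + 'or {"tool": null, "answer": "..."}.\n\nQuestion: ' + question
    )
    parsed = json.loads(decision)
    if parsed.get("tool"):
        result = TOOLS[parsed["tool"]](parsed["arg"])
        return call_llm(f"Question: {question}\nTool result: {result}\nFinal answer:")
    return parsed["answer"]
```
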
Yeah, so apart from tool use, it can just borrow a lot of software engineering principles.
It can be asynchronous, it can do some LLM processing, wait maybe for some relevant
information like you mentioned, or for some user input, or wait for a specific time of day

(06:03):
and then run a little bit of computation again.
It can also get around some of the other limitations of LLMs, like the context window.
So if you're thinking about a lot of data, a lot of maybe websites, a lot of PDFs, your
LLM is going to run out of context at some point, even though these LLMs are getting really

(06:25):
big today.
But another way to get around this within an agent would be to store some kind of memory with
key information that the agent can keep referencing over time, and use other types of information
augmentation, like retrieval-augmented generation: build a RAG pipeline, and then the agent can pull relevant

(06:47):
information out of it.
So a lot of other tools can be added on top of these LLMs, and the agent can leverage these
things to do higher-level thinking.
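
As a rough illustration of that memory idea, here is a tiny retrieval store an agent could write key facts into and query later. The embed function is a hypothetical placeholder for whatever embedding model you use; a real system would typically use a proper vector database.

```python
import math

def embed(text: str) -> list[float]:
    """Hypothetical embedding function; replace with a real embedding model."""
    raise NotImplementedError("your embedding model here")

class AgentMemory:
    """Minimal in-memory vector store: remember facts, retrieve the most similar."""

    def __init__(self):
        self.items: list[tuple[str, list[float]]] = []

    def remember(self, fact: str) -> None:
        self.items.append((fact, embed(fact)))

    def recall(self, query: str, k: int = 3) -> list[str]:
        q = embed(query)

        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
            return dot / (norm or 1.0)

        ranked = sorted(self.items, key=lambda item: cosine(q, item[1]), reverse=True)
        return [fact for fact, _ in ranked[:k]]
```
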
Yeah, yeah, that's right, that's right.
So I think we've kind of described agents at a little bit of a high level,
but let's kind of guide the listener and think hypothetically, let's just

(07:11):
do like a thought experiment.
How would we build an agent, and let's think about what the possibilities
of them are.
So one thing that I was sort of thinking about is having an agent for product
research, right?
Like how would we go about building something like that?
So I was thinking about it for a little bit, right?

(07:36):
And with product research, let's say I'm a company, right?
And I want to figure out what is a good product that I want to be able to sell,
right?
So for a good product to sell, there might be certain things that make one product better
to sell than other products.
So let's say you're looking at the total universe of products and you want to see,
you know, what may be good.

(07:57):
So one thing you might be looking for is maybe Google Trends,
like maybe somebody's talking about a particular product, or a particular
type of product, right?
So let's say it's summertime and it's frisbees, and everybody wants to go
buy a frisbee.
So what the agent could do is, there could be a singular

(08:22):
agent that goes and just looks at, let's say, products that exist, and then maybe it could
just go and look at the Google Trends for each product and see if the trend
is good and a lot of people are talking about frisbees. Or maybe another agent could go

(08:44):
look at the weather forecast, or maybe the type of season, and
then say, hey, the season is summer, more people are outside, and it seems like
searches for frisbees go up.
That might be a really good product that could be useful for the summer, something

(09:06):
like that.
Yeah, that's a good idea.
What other things could you add on to this agent?
So you have some market or demand discovery, you come up with a few hunches for what kind
of products would make sense in that time and location.

(09:32):
Would the LLM be deciding that?
Would you be pulling information from somewhere else?
So I think that you could ask the LLM to decide it, right?
So you could have an LLM where you say, hey, given all these factors, let's say,
for example, given that a lot of people are talking about frisbees in the summer, a lot of people

(09:56):
are maybe giving positive-sentiment reviews to frisbees on amazon.com.
Maybe a lot of frisbee videos have recently gone
viral on YouTube and they're also trending.

(10:19):
Like, these three things, we feed all of that into the LLM and say,
hey, LLM, what do you think?
Do you think this is a good product to sell?
And maybe the LLM would say, yeah, that seems good.
But maybe you'd give it the alternate example: okay, nobody's talking about horse

(10:42):
and buggy.
It's a really slow way to get around.
Plus, it's snowing outside, so the horse gets stuck in the snow.
Maybe that's not a good product, so the LLM would say, no, I don't think it's a good product.
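
A sketch of what feeding those signals into the LLM might look like, with a hypothetical call_llm helper and made-up signal values:

```python
# Sketch: collect demand signals for a candidate product and ask the LLM
# for a sell / don't-sell judgment. call_llm is a hypothetical stub, and the
# signal values below are made up for illustration.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("your LLM client here")

def evaluate_product(product: str, signals: dict) -> str:
    prompt = (
        f"We are considering selling: {product}\n"
        f"Google Trends interest (0-100): {signals['trend_score']}\n"
        f"Average review sentiment (-1 to 1): {signals['review_sentiment']}\n"
        f"Related viral videos this month: {signals['viral_videos']}\n"
        f"Season: {signals['season']}\n\n"
        "Based only on these signals, is this a good product to sell right now? "
        "Answer YES or NO, then give two sentences of reasoning."
    )
    return call_llm(prompt)

# Example (illustrative values only):
# evaluate_product("frisbee", {"trend_score": 82, "review_sentiment": 0.7,
#                              "viral_videos": 12, "season": "summer"})
```
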
Yeah, I think it would definitely weed out the two extremes.
It would definitely say, okay, no horse and buggy in 2024, not a trending topic, unless it

(11:08):
becomes hip again with some viral Instagram meme.
But it can also detect the obviously correct answers: hot summer, sell sunscreen, frisbees, bathing
suits, etc.
It could maybe take it a step further.
Since it's an agent, it could maybe put out some Google ads to validate, okay, is this

(11:30):
a truly popular product, especially with all the other competitors out there and check
the click through rate, analyze that information, and then give you some kind of a market research
analysis too.
But I feel like these LLMs in general, which are powering these agents, are somewhat limited

(11:56):
in nuance.
So do you think it would be able to detect some unique opportunities that don't exist yet?
I feel like that's often a challenge, because these LLMs are very easy to prime with
existing information.
So if it sees a lot of keywords for frisbees, for example, it'll be like, okay, frisbees are

(12:22):
the only thing that I'm going to think about for the next 10 projects.
Yeah, that's a good point.
I think that the LLM is probably really good at finding what's popular, but it may not be
able to figure out what that next big thing is that hasn't been invented yet.
I think I agree with you there, because I think that for certain things, like if the LLM had never

(12:49):
been trained on it, I'm not seeing that it would necessarily be able to come up with the,
yeah, next iPhone, for example, right?
Because the iPhone, when it was first created, was so much different from everything
else.
Or you could say, sure, there were MP3 players in the past, there were phones in the

(13:10):
past, but for it to combine those together, I don't know.
I'm not seeing that it would be able to come up with that.
Yeah, I don't feel like it can either.
I feel like it will probably get stuck in the same cycle of priming itself over and over
again with one keyword that it starts with.

(13:30):
Yeah, yeah.
So I think, maybe, but if you're a drop shipper who's just reselling items,
or you're selling something that is already an established business model, I think this
would probably work pretty well.
Actually, it kind of reminds me, so I tried building an agent workflow a couple weeks ago

(13:51):
where I was playing around with some agent tools that allow you to build agents
on top of, actually, I was using Ollama.
So Ollama is a tool that allows you to run LLMs locally.
It's an awesome tool.
Basically, because it was running through my OpenAI credits way too fast when I was calling

(14:13):
it.
I decided to run it locally and I wanted to build a stock researcher.
So I wanted to say like, hey, like, is this stock like a good buy?
And just tell me everything you know about it.
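
For reference, a minimal version of that local setup might look something like this, assuming the ollama Python client is installed, ollama serve is running, and a model such as llama3 has been pulled; the model name and prompt are just examples.

```python
# Minimal sketch: ask a locally running model (via Ollama) about a stock.
# Assumes the Ollama server is running and a model like "llama3" has been
# pulled locally; adjust the model name to whatever you actually have.
import ollama

def research_stock(ticker: str) -> str:
    response = ollama.chat(
        model="llama3",
        messages=[{
            "role": "user",
            "content": f"Give a brief overview of {ticker} as a potential investment, "
                       "including key risks. Keep it under 200 words.",
        }],
    )
    return response["message"]["content"]

if __name__ == "__main__":
    print(research_stock("TSLA"))
```
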
And unfortunately, the agent that I built kind of got stuck a couple of times.
So what was happening is I said, like, hey, why don't you do some research on, let's say

(14:34):
Tesla?
And then it would start doing some research on Tesla.
It would do some cursory stuff, saying, oh, it looks like the price is going
up, but the price is going down.
But then it started going off down all these rabbit holes.
It started going off looking, like, oh, Elon Musk is the CEO of Tesla.
And then it started reading a bunch of articles about Elon Musk, and then it started reading all

(14:58):
these articles about batteries.
And then it started reading all these articles about Tesla cars and the problems of the cars.
It just wouldn't stop.
It just kept going and going and going.
And then I think my computer eventually ran out of memory.
So maybe with a bit more compute power, it'd be okay.
But yeah, it just got stuck and it kept going.
So I was thinking, as an aside, in computer science, there's the concept of depth-first

(15:26):
versus breadth-first search, and also recursive problems.
So, briefly, when we call it depth, we're talking about, it's like a graph structure.
I don't know the best way to describe this, but basically it's kind of like sampling all
of the little topics a little bit.

(15:46):
So, for example, if I was a person, and I was, what do you call it, a person who was into
a lot of things.
So maybe, oh, I'm into cooking.
I'm also into sports, and I'm into programming, and I'm into, I don't know, woodworking, and
I'm into, you know, I have a bunch of hobbies and I just know a little bit about them.

(16:08):
I'm not really that good at them, but, you know, I'm just a little bit okay at each of these
things.
But then there's the, so that's depth first.
Oh, sorry, that's breadth first, my bad, my bad.
You have a breadth of interests and hobbies.
So you're skimming a little bit off the top of each of these different things.

(16:31):
Whereas depth first, as the name indicates, you go really deep into one
particular area. You're obsessed with climbing, let's say, and then you're just watching climbing
videos, you're going to the climbing gym, working on your little dead hangs, building
finger strength.
Right, exactly.
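
For anyone who wants the textbook version of that distinction, here is a small sketch of breadth-first versus depth-first traversal over a made-up graph of related research topics.

```python
from collections import deque

# Toy "related topics" graph, made up for illustration.
TOPICS = {
    "Tesla": ["Elon Musk", "EV batteries", "stock price"],
    "Elon Musk": ["SpaceX", "Twitter/X"],
    "EV batteries": ["lithium supply"],
    "stock price": [], "SpaceX": [], "Twitter/X": [], "lithium supply": [],
}

def bfs(start: str) -> list[str]:
    """Breadth-first: skim every neighboring topic a little before going deeper."""
    seen, order, queue = {start}, [], deque([start])
    while queue:
        topic = queue.popleft()
        order.append(topic)
        for nxt in TOPICS.get(topic, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return order

def dfs(start: str, seen=None) -> list[str]:
    """Depth-first: follow one thread (e.g. Elon Musk) all the way down first."""
    seen = seen if seen is not None else set()
    if start in seen:
        return []
    seen.add(start)
    order = [start]
    for nxt in TOPICS.get(start, []):
        order.extend(dfs(nxt, seen))
    return order

print(bfs("Tesla"))  # Tesla, Elon Musk, EV batteries, stock price, SpaceX, ...
print(dfs("Tesla"))  # Tesla, Elon Musk, SpaceX, Twitter/X, EV batteries, ...
```
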
So I feel like the problem is, when I was running these agents, it was having the

(16:55):
problem with both.
So it was doing too much breadth, and also too much depth.
And it would never actually tell me if Tesla was a good buy.
It just kept on going.
And I think that's kind of the funny thing about the problem with agents: it's kind of hard
to tell it, to know when to stop.
I think at the beginning, when you mentioned summarizing New York Times articles, that's something

(17:18):
that was pretty clear.
It's like, okay, here's a clear stopping case.
But if you're doing product research, you can just continue to do research.
It can just keep on learning more about frisbees and just keep going.
It's hard to know when to stop.
I guess that's when you need to really spend some effort on building this orchestrator that
manages the individual pieces within the agent.

(17:40):
So maybe something that would keep tabs on how deep, or how many times, it's thinking about
a specific product or a specific stock to research.
And if it's doing it too many times, be like, okay, this is a good enough stopping point.
Let's move on to something else.
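
One simple way to implement that kind of keep-tabs-and-cut-it-off logic is a per-topic budget that the orchestrator checks before every step. This is only a sketch, with a hypothetical research_step function standing in for the actual agent work.

```python
from collections import Counter

def research_step(topic: str) -> list[str]:
    """Hypothetical worker: do one unit of research and return follow-up topics."""
    raise NotImplementedError("your agent step here")

def orchestrate(start_topic: str, max_visits_per_topic: int = 3, max_total_steps: int = 20):
    visits = Counter()
    queue = [start_topic]
    steps = 0
    while queue and steps < max_total_steps:
        topic = queue.pop(0)
        if visits[topic] >= max_visits_per_topic:
            continue  # good enough stopping point for this topic; move on
        visits[topic] += 1
        steps += 1
        for follow_up in research_step(topic):
            queue.append(follow_up)
    return visits  # what was researched, and how many times
```
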
Yeah, yeah, that's true.
So, but sometimes I feel like none of those things are actually going to give you the answer

(18:04):
that you're looking for.
Because it's just like, if I say I want to look at Tesla, then it starts
going off on Elon Musk.
Then, you know, it just keeps going down that rabbit hole forever.
And then you have to tell it to kind of come back up and sort of stop.
Even though all that information on Elon Musk never actually told you if Tesla
is a good buy.

(18:25):
Even though it's sort of related.
It's tangentially related.
Exactly.
So I guess, if you were a human being with some common sense and some basic understanding
of market research or finance, you would gather relevant information until you started
getting diminishing returns, maybe, and then decide, okay, I know enough about this topic.

(18:47):
Let's move on to something else.
Yeah, that's right.
But then, at the end of the day, sometimes you would just chalk that up as a
risk.
Right?
Because, maybe, there's a key man risk, I think, at Tesla, where
if Elon Musk got hit by a bus tomorrow, Tesla might tank because, you know, he is kind

(19:11):
of like the face of Tesla.
At least from the outside looking in, it seems like he makes the majority of
the key decisions there.
Right?
So maybe a human investor would be like, all right,
it's a risk that Elon Musk has so much power, and then maybe the LLM would
try to factor that in somehow, but that's kind of a hard thing to program for.

(19:34):
Yeah, so I feel like if you're building an agent, you would want to list out specifically
what the agent should be doing.
So you mentioned, what is it, something in finance, the risk of an individual
leading the company.
So what if that person dies, God forbid, or something happens, they get bored, they leave. You'd try

(19:58):
to codify this list of indicators for the company and have the agent go through each
one of these indicators and see if this company, you know, mitigates this risk or is in danger
on this front.
Yeah, yeah, that's right.
So I think the problem is, when you're making the agent, like I mentioned key man

(20:22):
risk, and you could probably code in, hey, key man risk, and a bunch of
other risks.
I think we're trying to get away from that, from the regular, traditional software engineering
principles.
Right.
And have the agent be truly intelligent, come up with these requirements by itself.
Right.
Exactly.
And that's the hard part, figuring out that determination.
Because another thing that I noticed when I was working on the same agent that I was

(20:46):
trying to build is that we would occasionally get stuck.
So basically, you know, you would have the LLM try to figure out the next prompt for
the LLM again.
So it kind of recursively calls itself.
But then sometimes I would find that we'd get into these sort of local

(21:09):
loops, where it would say
a thing, and then it would go and say another thing, and then come back to the
same thing that it said.
So, I mean, it didn't do exactly this, but an example of a loop would be like: oh,
we think that Tesla has key man risk, and then it would say, okay, the

(21:35):
key man risk is Elon Musk, because Elon Musk is in charge of Tesla.
And because he's in charge of Tesla, it has key man risk, and then it would be like, oh,
okay, what's the key man risk, and then it would be like, oh, the key man risk is Elon
Musk is in charge of Tesla, and then it would just keep on looping between
key man risk and Tesla and Elon Musk, and then key man risk, and then Tesla.
So it would just keep on recursing and going back and forth and just get stuck,

(22:00):
and it's kind of hard to get unstuck from that.
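
A blunt but effective guard against that kind of loop is to record each intermediate thought and bail out, or force a change of direction, when the agent starts repeating itself. A sketch, with a hypothetical next_thought function standing in for the recursive LLM call:

```python
def next_thought(previous: str) -> str:
    """Hypothetical: ask the LLM for its next reasoning step given the last one."""
    raise NotImplementedError("your LLM call here")

def run_with_loop_guard(seed: str, max_steps: int = 30) -> list[str]:
    seen = set()
    trace = [seed]
    for _ in range(max_steps):
        thought = next_thought(trace[-1])
        key = thought.strip().lower()   # crude normalization of the thought text
        if key in seen:                 # we've been here before: break the cycle
            trace.append("(loop detected, stopping)")
            break
        seen.add(key)
        trace.append(thought)
    return trace
```
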
Yeah, I feel like again, going back to having an orchestrator manage each of these individual
I guess prompt engineered LLM personas, that really is important to make sure that the

(22:23):
agent is making progress in the right direction.
An alternative that comes to mind is to try to just interrupt it yourself, be the orchestrator
to manage all these little individual characters to try to make enough progress and then stop
them when you need to.
Yeah, that's true.

(22:45):
So do you want to go into maybe a little bit of detail on what an orchestrator is and how that
would be used?
Sure.
Yeah.
Actually, I can't think of an example.
Let's stick with the same finance exploration.
Yeah, well, I would think that, to give an example of an orchestrator, it would be like a product

(23:05):
manager, to take a software development approach, right?
Or even just a manager, right?
So a manager, maybe they're working with a bunch of different employees,
and the employees do a bunch of tasks, and oftentimes
they're working on different things all in parallel, not together, and then sometimes

(23:28):
an employee will go off and do their own thing.
So sometimes the manager will need to say, hey, look, hey Timmy, stop
it.
You've done enough looking at Elon Musk.
Just come back, we've got enough, we'll just chalk it up as a risk and then move on,
go continue your research, right?
And I think that we would have to have an agent that acts as sort of the manager

(23:54):
to make sure that all the other agents underneath don't go out of control.
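
In code, that manager role can be as simple as an orchestrator that hands tasks to worker agents, reviews their output, and decides whether to let them continue or cut them off. Everything here, the WorkerAgent interface and the review prompt, is a hypothetical sketch rather than a specific framework's API.

```python
# Sketch of a manager/orchestrator over worker agents. The worker's `work`
# method and the `call_llm` reviewer are hypothetical placeholders.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("your LLM client here")

class WorkerAgent:
    def __init__(self, role: str):
        self.role = role

    def work(self, task: str) -> str:
        return call_llm(f"You are a {self.role}. Complete this task concisely:\n{task}")

class Orchestrator:
    def __init__(self, workers: dict[str, WorkerAgent], max_rounds: int = 3):
        self.workers = workers
        self.max_rounds = max_rounds

    def run(self, task: str) -> dict[str, str]:
        results = {}
        for role, worker in self.workers.items():
            output = worker.work(task)
            for _ in range(self.max_rounds):
                verdict = call_llm(
                    f"Task: {task}\nRole: {role}\nOutput: {output}\n"
                    "Is this on-topic and sufficient? Answer DONE, or give one correction."
                )
                if verdict.strip().upper().startswith("DONE"):
                    break  # like telling Timmy to stop researching Elon Musk and move on
                output = worker.work(f"{task}\nApply this correction: {verdict}")
            results[role] = output
        return results

# Example wiring: Orchestrator({"analyst": WorkerAgent("market analyst"),
#                               "researcher": WorkerAgent("risk researcher")})
```
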
Yeah, that's a good way to think about it.
So you built an agent that did something similar a while back.
It was like a startup, a tech startup agent that took some high level problem to build an
app, a game, et cetera, and then just boom, built it with a bunch of different, I guess, focused

(24:19):
LLMs.
One was a product manager, one was an engineer, one was a QA I think.
So I don't remember if you had an orchestrator, but that would be a perfect use case.
I did somewhat.
I had a product manager that would help.
So that is your orchestrator?
Yeah, I didn't call it the orchestrator, I didn't know what the orchestrator was at that
time.
But yeah, so you guys remember, I think it was Devin, which is the thing that would do

(24:45):
coding.
I made that probably six months before Devin was launched.
So, you know, where's my billion dollars?
But anyways, yeah, Devin's a cool thing.
It was a thing where, I think, it used agents to be able to program.
And then didn't Microsoft buy them, I think?

(25:05):
Did they?
I had no idea.
I thought so.
I thought I remembered hearing something about Microsoft buying Devin.
I don't know.
Shashank will do some research on that.
I don't know about them buying Devin.
But anyways, basically Devin was a cool thing where you could give it a prompt and

(25:25):
then it would be able to just write code, so I could say, hey, Devin, make me
a Tetris game.
Hey, Devin, make me a website that sells my products, and it would just build it.
Yeah, so in my mind, an orchestrator would be like the role of the CEO maybe, who is one

(25:46):
step above even the product manager.
Because let's say we start out with a high-level description: I want a cool new game that is
really trendy and relevant today.
So maybe the business analyst would look at the top-performing apps or games on the app
stores and see what kind of themes they have, and send that information to the product manager,

(26:12):
who comes up with a bunch of requirements.
And if the product manager was the orchestrator, maybe they would just keep coming up with
requirements and keep refining these different requirements, go in depth until they take over
the whole operation.
As opposed to someone keeping things in check, who doesn't have any other function other than just

(26:38):
to keep tabs on all the individual personas that are doing different tasks.
You know, that's a really good point.
Yeah, I feel like the CEO is like a better example of an orchestrator.
So with all this being said, how far away are we, or if you had to speculate,
how far away are we from just creating an entire company just off of AI

(27:03):
agents?
And what do you think we're missing?
Yeah, that is a good question.
I think Andrew Ng was really bullish on agents a couple months ago, because we were
like, okay, LLMs are kind of plateauing.
They have similar limitations as they did maybe a few years ago.

(27:23):
Some of the tools and functions, plugins, etc.
that augment their capabilities have increased.
But still they're not truly intelligent.
They get stuck.
They cannot accomplish higher level tasks.
So a lot of people started looking into agents.
And I think the first few versions of agents that were written up by a couple software engineers

(27:46):
and pushed to GitHub were called AGIs.
It was like, BabyAGI, or, yeah, I was looking at that source code.
Yeah, yeah.
So we've been bullish about this for a while.
I think it's possible.
I just feel like the agentic, you know, blueprint, the framework to construct something

(28:08):
that accounts for all these limitations, making sure it doesn't get stuck, making sure it doesn't
go depth first into a rabbit hole, you know, having checks and balances on each of those
different personas that are doing work.
If we solve some of those problems, I feel like we could get AGI with the next version

(28:30):
of ChatGPT that comes out.
Maybe something multi-modal that can understand and reason in different domains, video,
image, audio, text.
And if you put that into an agent, I feel like it would be pretty close.
Whoa.
So like a good agent.
You just said like AGI by the next version of ChatGPT and a solid agentic framework.

(29:00):
I mean, wow.
All right, guys, you heard it here first.
I don't think that's trivial, though.
I feel like that would take a lot of information theory and, you know, some kind of game theory
to make sure all these different personas interact well together.
Some kind of economics or something to understand how different entities make progress towards

(29:24):
an end goal, all of that stuff.
So 2025, AGI, it's happening.
No set dates, but my intuition is that a stronger multi-modal model with better agentic frameworks
would get us a long way.
Yeah.
You know, I think multi-modal is like a really big key.

(29:47):
So I went to a hackathon, I think it was two weeks ago, where they were talking about
the ARC Prize.
So have you heard of this ARC Prize?
Is that from the Arc browser?
So it's made by like a couple people.
There's one guy who started Zapier.
He was at the hackathon and he gave a little presentation about it.

(30:07):
And then there's another AI researcher, whose name I can't remember.
So they came up with this ARC benchmark.
So basically what the prize is: it is a set of tasks that show puzzles that are

(30:31):
arguably simple for humans.
Now honestly, I don't think they're that simple.
So some of them I thought were kind of hard.
Like maybe I'm just not that good at these puzzles, but to me they were a little bit
hard.
But arguably it's something that a human could solve without knowing any detail.
So oftentimes it would be, like, stuff that you'd see on an IQ test.

(30:54):
So it would maybe show you a certain kind of, I don't know how to describe it.
It would have some sort of N-by-N grid, maybe five by five, so
it would look kind of like Sudoku, sort of.
And maybe the Sudoku puzzle would be colored.
And then it would be like: Sudoku puzzle A is colored, you know, with this set of patterns.

(31:20):
And then Sudoku puzzle B is colored with this.
And then, how do you get from A to B?
And they would give maybe two or three examples.
And then you'd be like, okay, now here is an unsolved one.
How do you solve it?
So basically it's kind of like complete the pattern, but in a multimodal
way.
And none of the LLMs right now are able to solve it more than, I think,

(31:47):
the number one approach to solving it was like around 50%.
And that cost a lot of money.
And it also was a super brute force way of doing it.
So apparently the way it was done is they made, like, 10,000 different programs

(32:09):
for each of the patterns.
It just, like, brute forced it.
And it worked pretty well.
I think they got like 60%.
So they argue that if they're able to solve these types of problems that would either
be enough of an agentic flow, maybe not agentic, but like that would solve enough problems

(32:31):
in order to reach AGI. Not sure, but that's what they argue.
And so I guess what I'm trying to say is, I think maybe something like that is
what we're missing.
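
To give a flavor of the brute-force approach being described, here is a toy sketch: represent grids as 2D lists, enumerate a small library of candidate transformations, and keep the ones that map every training input to its output. The transformations and the example task are made up and far simpler than real ARC tasks.

```python
# Toy sketch of program search over grid transformations, in the spirit of
# brute-forcing ARC-style tasks. Grids are lists of lists of ints (colors).

def rotate90(grid):
    return [list(row) for row in zip(*grid[::-1])]

def flip_horizontal(grid):
    return [row[::-1] for row in grid]

def swap_colors(grid):
    return [[1 if cell == 2 else 2 if cell == 1 else cell for cell in row] for row in grid]

CANDIDATE_PROGRAMS = {
    "rotate90": rotate90,
    "flip_horizontal": flip_horizontal,
    "swap_colors": swap_colors,
}

def search_programs(train_pairs):
    """Return the names of candidate programs consistent with every training pair."""
    return [
        name for name, fn in CANDIDATE_PROGRAMS.items()
        if all(fn(inp) == out for inp, out in train_pairs)
    ]

# Made-up example task: the hidden rule is "flip the grid horizontally".
train_pairs = [
    ([[1, 0], [2, 0]], [[0, 1], [0, 2]]),
    ([[0, 2], [1, 1]], [[2, 0], [1, 1]]),
]
print(search_programs(train_pairs))  # ['flip_horizontal']
```
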
This feels very different though.
It feels like a computer science problem, maybe computer vision, or something where you could code
it into an N-by-N array.

(32:54):
So for context, this ARC Prize, it shows a grid of shapes, and there's an input, there's
an output.
You see some incomplete shapes in the input, and the output would be to complete them with
a certain shade of colors.
And as a human being, I can kind of intuit it pretty quickly.

(33:20):
You have, like, three dots in the shape of an L, and you put the fourth dot and make
it a square.
So I was like, okay fine, you take other shapes and fill in this pattern.
And I think this is, yeah, this seems very computer-vision-like, or some kind of algorithms

(33:42):
task as opposed to AGI.
It doesn't contain any textual information, it's not reading anything, it's not reasoning
about information in text.
It's very logic puzzle based.
Yeah, I guess that's true.

(34:04):
But I guess, if you think about it, reading text isn't, I don't think that's
required for AGI.
So even if a human was illiterate, they would still maybe be smarter than the
LLMs that we have right now.

(34:24):
Maybe they could talk, or maybe just solve problems that you wouldn't have been
able to solve. Like, animals can't, well, they don't use speech in the traditional
sense, right?
Like dogs, for example, they don't talk.
They bark, they kind of talk in tones, sort of, but they don't necessarily talk

(34:48):
in the same way we do.
But I would argue that dogs are pretty smart, from interacting with them.
So I don't know.
I guess that's not really a fully formed thought on that, but yeah, now we're getting
into what is intelligence.
Yeah, exactly.
But anyways, we're kind of out of time, so we don't have time to talk about that.
But anyways, maybe we'll talk about it in the next one.

(35:11):
We'll see.
So maybe the takeaway from this would be: agents are cool, but they have a lot of limitations.
I would say if you're trying to build an agent, it only makes sense if you have a very repetitive
workflow that you can refine
and codify into specific instructions, exactly what needs to be done.

(35:34):
The more specific, the better performing that agent will be.
Yeah.
I think that summarizes it pretty well.
So anyways, until next time, bye everyone.