
February 4, 2025 38 mins


The conversation centers on the transformative role of AI agents in shaping the tech landscape. Through expert insights and practical examples, the episode explores AI agents' functionalities, their implications for the workforce, and the ethical considerations that accompany their development.

• Introduction of AI agents and their significance 
• Evolution of AI technology from simple models to complex agents 
• Practical applications and examples of AI agents in various fields 
• Mechanics of building and utilizing AI agents 
• Considerations regarding workforce changes and AI's augmentative potential 
• Discussion surrounding the ethical implications and risks associated with AI 
• Encouragement for listeners to engage with AI agent development and experimentation


How to connect with John Capobianco:

https://bsky.app/profile/automateyournetwork.ca

https://www.linkedin.com/in/john-capobianco-644a1515/

Purchase Chris and Tim's new book on AWS Cloud Networking: https://www.amazon.com/Certified-Advanced-Networking-Certification-certification/dp/1835080839/

Check out the Fortnightly Cloud Networking News
https://docs.google.com/document/d/1fkBWCGwXDUX9OfZ9_MvSVup8tJJzJeqrauaE6VPT2b0/

Visit our website and subscribe: https://www.cables2clouds.com/
Follow us on BlueSky: https://bsky.app/profile/cables2clouds.com
Follow us on YouTube: https://www.youtube.com/@cables2clouds/
Follow us on TikTok: https://www.tiktok.com/@cables2clouds
Merch Store: https://store.cables2clouds.com/
Join the Discord Study group: https://artofneteng.com/iaatj


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Tim (00:14):
Hello and welcome back to another episode of the Cables to Clouds podcast.
As usual, I am your co-host, Tim McConaughey, at carpe-dmvpn on Blue Sky, and with me, as always, is my amazing, wonderful co-host.
I ran out of words for you already, Chris.

Chris (00:34):
That was a very happy-go-lucky intro.

Tim (00:36):
Welcome. Hey, somebody told me yesterday I had a face made for radio, so I figure I need the voice to match.
But yeah, so Chris Miles is with me, at BGP Main on Blue Sky as well, and with us we have a returning guest and a friend of the podcast, John Capobianco.
I don't remember, John, are you on Blue Sky?

John (00:59):
I feel like you are, right? I am, and I love that you can register with your own DNS, if you've got a domain. I love that.
Yeah, I'm not sure if it's just my name, at automateyournetwork, on Blue Sky, but that's where you can find me now. I've been hanging out there a lot more, yeah.

Tim (01:15):
Yeah, I like the vibe there more than the other place. So yeah, anyway, let's get right into it.
We brought John back because he has been doing some pretty cool stuff, and honestly, given the listener base we have, given the circles we're all in, some of you may have already seen this, but we

(01:35):
thought it was really important to bring John on to talk about AI agents, and specifically AI agents in the context of: what is an AI agent? Where does it fit inside the insanely fast-growing AI technology stack, if you will? Most people are still talking to

(01:55):
ChatGPT or Claude or whatever, and so the idea of an agent is a little strange. People don't know where it fits and how they should use it.
So, yeah, John, I won't stand on ceremony, man. Let's kick it. Let's get right into it.

John (02:09):
Yeah, let's get into it.
So I know there's been a lot of hype. Even myself, I've had predictions for 2025, and AI agents was my number one. I think it's important for people to understand how we got here and maybe what an agent is. I know there's lots of different definitions. Here's one thing to keep in mind: I think agents are like AI

(02:29):
2.0. And if we roll that back, AI 1.0, I would say, is making an API call to a large language model. 1.5 would be retrieval-augmented generation and the other retrieval-augmented approaches that you covered very awesomely, Tim. I watched your session. I thought it was great.

Tim (02:49):
Thanks, sid, I appreciate that.

John (02:55):
No, like, I really liked the library and the magazine comparison. And now, because of the advancements in the models mainly, we can do agentic, or AI agents.
So I found this wonderful paper. This is probably the most succinct definition, and this isn't like a Google search result. This is from Google. Google put a paper out that's just called Agents, by authors from Google.

(03:17):
And they say it's the combination of reasoning, logic and access to external information, all connected to a generative AI model, that invokes the concept of an agent. I think that's probably the most succinct definition that I've found. So, much like RAG, we can do external calling. Based on

(03:39):
my work and my approach and what I've figured out over the past few weeks: you decorate tools. Here's an example. You might make a little calculator function in Python, you know, X plus Y, or a multiplication function. You literally decorate that as a tool. You say 'this is a tool', and then in your prompt to the AI, you give it specific instructions and make it aware of the tools

(04:03):
that you're providing it. So you say: here's the prompt, and by the way, you have access to this little calculator tool, because AI isn't great at math. It doesn't claim to be good at math. So then, subsequently, when other people ask questions, if the AI agent detects 'oh, there's some math involved in this prompt', it goes: oh, check it out,

(04:23):
I have this little calculator tool that I can go ahead and invoke to do the math. Or I have a little weather app, and the weather app might have an agent that makes external calls to a weather system API. And it's the combination of the reasoning and the action. So the approach that I found is literally called the ReAct agent model, and there's actually a paper on

(04:48):
this. It's Klu.ai, and I found this paper from their website, the ReAct agent model, and it kind of works like this. Here's an example from my code. So within my prompt I say: to use a tool, follow this format. Thought: do I need to use a tool?

(05:09):
Yes. Action: the action to take, should be one of these tools. Action Input: the input to the action. Observation: the result of the action. And then the Final Answer. And you literally put that in your prompt that you send to the LLM, and it will follow those instructions.
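To make that concrete, here is a minimal sketch of the pattern John describes, using LangChain's ReAct helpers. This is not John's actual code: the model name, the multiply tool, and the exact prompt wording are illustrative assumptions.

```python
# Hedged sketch of a ReAct agent with one decorated calculator tool.
# Assumes: pip install langchain langchain-openai, and OPENAI_API_KEY set.
from langchain.agents import AgentExecutor, create_react_agent
from langchain_core.prompts import PromptTemplate
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def multiply(expression: str) -> str:
    """Multiply two integers passed as 'x,y', e.g. '6,7'."""
    x, y = expression.split(",")
    return str(int(x) * int(y))

# The Thought / Action / Action Input / Observation format John quotes.
template = """Answer the question as best you can. You have access to these tools:

{tools}

Use this format:

Question: the input question
Thought: do I need to use a tool?
Action: the action to take, one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (Thought/Action/Action Input/Observation can repeat)
Thought: I now know the final answer
Final Answer: the answer to the original question

Now begin!

Question: {input}
Thought: {agent_scratchpad}"""

llm = ChatOpenAI(model="gpt-4o")  # illustrative; any tool-capable model works
agent = create_react_agent(llm, [multiply], PromptTemplate.from_template(template))
executor = AgentExecutor(agent=agent, tools=[multiply], verbose=True)
print(executor.invoke({"input": "What is 6 times 7?"})["output"])
```

Note the 'Now begin!' at the bottom of the prompt, the trick John mentions later in the conversation.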

Tim (05:26):
Interesting. So, to back this up a little bit, because there's a lot to unpack: at the high level, what I'm hearing is that agents are almost like a Docker container, or like a Pythonic class or function. It's a wrapped-up prompt, you know, that

(05:47):
includes the tooling necessary to carry out the task. Is that accurate?
Yeah, exactly.

John (05:54):
So, for example, you know, like CRUD activities with a REST API, right? So you're going to make four tools: a create tool, a read tool, an update tool and a delete tool. You're going to package those up in the prompt and say: by the way, you have these CRUD tools that you can use to invoke against ISE or DNAC or whatever API of choice

(06:14):
it is, right? And there's minimal coding. There's no deterministic code. It's not like an Ansible playbook, or even a Python script where you're specifically laying out the logic with if-else type statements. You're saying: look, if you need to read something from the API, here's the tool, here's the URL and the credentials and whatever, right? Now, it helps if you give a few examples.

(06:36):
So, after that prompt that I just explained to you, I typically have a few examples, like: Do I need to use a tool? Yes. What's the tool called? Get data from NetBox. What's the action input? There is no input, because this is a get activity. Things like that, right?
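A hedged sketch of that four-tool CRUD set, using NetBox-style paths since that's the API John keeps coming back to. The endpoint and token are placeholders, not anything from the episode, and error handling is omitted for brevity.

```python
# Hedged sketch: four CRUD tools over a REST API. Each tool is just a
# decorated Python function; the agent's prompt lists them by name.
import requests
from langchain_core.tools import tool

BASE = "https://demo.netbox.dev/api/dcim/devices/"  # illustrative endpoint
HEADERS = {"Authorization": "Token YOUR_TOKEN",      # placeholder token
           "Content-Type": "application/json"}

@tool
def create_device(payload: str) -> str:
    """Create a device. Input: a JSON string of device fields."""
    return requests.post(BASE, headers=HEADERS, data=payload).text

@tool
def read_devices(query: str) -> str:
    """Read devices. Input: a URL query like 'name=router1'; may be empty."""
    return requests.get(f"{BASE}?{query}", headers=HEADERS).text

@tool
def update_device(args: str) -> str:
    """Update a device. Input: '<id> <JSON patch>'."""
    device_id, patch = args.split(" ", 1)
    return requests.patch(f"{BASE}{device_id}/", headers=HEADERS, data=patch).text

@tool
def delete_device(device_id: str) -> str:
    """Delete a device by numeric ID."""
    return str(requests.delete(f"{BASE}{device_id}/", headers=HEADERS).status_code)
```

The model, not your code, decides which of the four a given question needs, which is exactly the "no deterministic code" point.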

Chris (06:52):
Yeah, that was going to be my question. Obviously, giving an AI agent access to specific tooling sounds relatively powerful, but I was curious how much of that instruction you need to give to the agent. Right, Tim and I, or you, we know how to use an API.

(07:13):
We could comb through and say, oh, this is relatively what I'm trying to do. But how much training do you have to give the agent in order for it to be able to use the tool appropriately?

John (07:21):
Oh, not much at all. So some of my scripts, like, let's say, a NetBox agent: it might be 300 lines of Python, the entire agent, as a Python file. You're going to have your tools, you're going to have your prompt (most of the code is actually English, in a prompt), and then you're going to have, you know, an agent executor in LangChain or LlamaIndex or whatever framework you're using.

(07:41):
You're going to use their agentic approach to invoke this agent. So I use LangChain as my agent framework, that's what I've written the code in, with a Streamlit front end, a natural-language front end. So what's neat is, in the logs, you know, when you start to get your hands on this stuff, you can see the AI says 'Thought', and then it tells

(08:04):
you its thought: oh, I see this. I think I need to go to the NetBox API to answer this guy's question. And then: I need a tool to do that. Oh, there's a read tool. Turn on the read tool. Oh, here's the JSON that got sent back; I bet it has the answer. It's like watching a child verbalize their early thoughts.
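If you want to watch that Thought/Action/Observation narration yourself, LangChain will print it for you. A tiny sketch, assuming an agent and tools already built as in the earlier sketch; the question is illustrative.

```python
# Hedged sketch: surfacing the agent's reasoning trace in the logs.
from langchain.agents import AgentExecutor
from langchain.globals import set_debug

executor = AgentExecutor(agent=agent, tools=tools, verbose=True)  # prints each step
# set_debug(True)  # optional: even noisier, shows raw LLM requests/responses

result = executor.invoke({"input": "Which devices are in NetBox?"})
print(result["output"])
```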

Tim (08:24):
Yeah, right. Actually, just today, this is weird, I was working with Andrew Brown. You know, he's doing the generative AI boot camp, and I'm helping him with some of the Japanese stuff and whatnot. We were building an app, and he was showing me, you know, an IDE tool. Was it Wildfire or

(08:44):
something like that? I think that's what it's called. Anyway, the point is, he was building code and he was having the tool help, but what was interesting is that the calls were doing exactly that. You could actually follow the reasoning that the model was using. It was pretty interesting to see. It's almost like debugging, if you will: see how the LLM is reasoning out the thing you've told it to do,

(09:06):
so you can adjust the prompt, or adjust the code, or whatever, right?

John (09:10):
Yeah, it's really neat to see it. And sometimes you'll go, oh, it called the wrong tool here, and then you realize that it's your instruction set. Do you know what I mean? It's doing what it's told, and it's your own logic and the way you word things. And it's funny, things are emerging. Like, if you put 'Now begin' at the very bottom of your prompt, the last thing you say, it apparently helps these

(09:33):
agents work even better. Apparently some people are offering monetary rewards at the end of the agent code, saying: for every question you get right, I'm going to give you twenty-five thousand dollars.

Tim (09:45):
For some reason it seems to improve the performance of these reasoning models, right? Once they take over the world, they're going to come looking for you to pay up.

John (09:52):
Man. Well, I was thinking about that as I was doing this.
So I started with a fun agent. I tried to write one for the Pokemon API, and then it sort of became almost a repeatable formula. I figured I cracked the code with Pokemon, now let's try NetBox, because it's REST APIs. So then, after NetBox, it was like: these little isolated

(10:14):
agents aren't good enough. What I want is a beehive or an ant colony. So then I said: well, what if I make an agent for... let's say I'll spin up a CML environment. The modeling labs are now free, right? So I'm going to put in two routers, two switches, connect them, some VLANs, some IP addresses, whatever, put all that

(10:35):
information into NetBox, in the DevNet NetBox cloud instance, and then see if I could have the agents work together. And I'm calling this 'infrastructure as agents'. It's almost like infrastructure as code, where you've got a source of truth and a YAML definition or whatever per piece of infrastructure. Well now we can have our F5 agent and our core agent and

(10:57):
our firewall agent and this agent, and they can do this reasoning and action together, right? Oh, the firewall agent has a tool to update an access control list, or whatever, right?

Tim (11:07):
Okay, and now these agents are essentially built, or coached, or prompted in such a way that they know how to interact with the specific object

John (11:18):
That we're... okay, yeah, exactly. So the router agent: instead of CRUD code, I have a configuration tool that is a pyATS configure. I have a pyATS parse, or learn, or whatever, right? So those are the tools that I use with routers and switches, Cisco routers and switches. And then the agent says: oh, they want to know what their

(11:39):
default route is. I'm going to use the 'run show command' tool. Here's the command, 'show ip interface brief', as the action input. And literally I see the LLM say: here's what I need to do. And then I see my testbed loaded, and then I see it connect to the router, and then I see it configure the interface, right? So, honestly, I've seen this work, guys.

(12:01):
So I have router one, router two, ten interfaces, four VLANs, a bunch of IPs in NetBox. I say to the LLM: can you please configure all of the IP addresses and descriptions on router one and two? And then could you please configure the VLANs and interfaces on switch one, switch two? The agent goes to NetBox a bunch of times and gets all the data

(12:21):
it needs, connects to router one, connects to router two, connects to router three, connects to router four or whatever, and literally builds this whole topology. Like, CDP neighbors come up and a running ping starts to work, just from the natural language prompt. It's impressive. And that's me alone, hacking away at this, with ChatGPT helping me write the code, and just struggling and fighting

(12:44):
with it. But when it worked, I was really like: wow. You know, I don't want to cost people jobs, right? I'm doing this because it's experimentation and it's bleeding edge and the tools are here and the models have evolved. But I'm very frightened by some of this work, you know? Like, this could be Frankenstein's monster kind of thing, right?
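For flavor, a hedged sketch of what a pyATS-backed pair of tools can look like. The testbed file and device name are placeholders, and this is a sketch of the idea, not John's repository code.

```python
# Hedged sketch: pyATS actions wrapped as agent tools.
# Assumes a pyATS testbed.yaml describing the CML lab (path illustrative).
from langchain_core.tools import tool
from pyats.topology import loader

testbed = loader.load("testbed.yaml")
device = testbed.devices["router1"]   # illustrative device name
device.connect(log_stdout=False)      # connect once, reuse across tool calls

@tool
def run_show_command(command: str) -> str:
    """Run a show command (e.g. 'show ip interface brief') and return raw output."""
    return device.execute(command)

@tool
def configure_device(config: str) -> str:
    """Push newline-separated configuration lines to the device."""
    return device.configure(config)
```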

(13:05):
Oh yeah, and we were talking about this. Chris, you might want to look into this: if you just Google ServiceNow AI agents, they literally have a calendar countdown to the 29th of January, when they're launching their agents, and there are things saying these agents can do network reviews, verify network stability, analyze use cases. Agentforce, so Salesforce, they

(13:28):
actually have something called Agentforce. Oracle just launched their agents. Postman, of all things, Postman is launching an AI agent builder. We've all used Postman to do our API work.

Tim (13:40):
Oh, a builder, that one makes sense.
Yeah, so now we can use Postman.

John (13:44):
Yeah, they're going to give you a toolkit to build agents within the Postman ecosystem.

Tim (13:49):
That one makes sense, because you can use Postman to build API calls and all of that. And then why not just take all of that work you just did and shove it into an agent for that purpose?

John (14:00):
Yeah, no, that makes perfect sense. So, I don't know how far away the tidal wave is, right, but we're on the beach watching this thing come at us right now. I just mentioned four big, massive companies, just rattling them off the top of my head. So what can we do about it? How do we harness this power? How do we use it as individual contributors?

(14:21):
How do we use it to improve our daily lives at work? There's a lot to consider here, guys.

Chris (14:27):
Yeah, yeah, very interesting. I'm curious, from your perspective, and this ties to your previous points about AI versions 1.0 and 1.5: I feel like there's been a lot of people basically building front ends and wrappers around a single LLM, probably ChatGPT, right? Basically just a front end that they interact with, that

(14:51):
gets the information directly from that LLM. So with this orchestration of agents, and I'm assuming they're probably not all going to live in one place, right, they're going to be kind of scattered throughout the ecosystem in some way: how much effort needs to go into building a front end for that? Because, at the end of the day, it's how we're going to interact with it

(15:11):
that shows the value, right? So how much effort goes into that piece of it?

John (15:16):
So, let's just go with the previous example. We have four different infrastructure agents and a NetBox agent. Yeah, I had to write a, let's call it main agent or parent agent, that really acts like a router or a shepherd. So when the initial prompt comes in from the user, that

(15:38):
main agent will say: OK, I need to call the NetBox agent first, and then I need to call the router agent. And it sort of does that routing and orchestration. That's even smaller than the other agents, because there isn't a lot going on there. You're just kind of saying: hey, main agent, here's all of the sub-agents you have access to; can you orchestrate communications between the user and the backend? In terms of the

(16:01):
front end, like an interface, I love Streamlit. If anyone out there has ever struggled with Django or Apache or IIS or any of those web sort of things: I've got this awesome code and I want to put a web interface on it, and that becomes a bigger problem than your original code for some people, right? I've built Flask apps for that purpose.

(16:22):
Yeah, so Streamlit, to me, has really democratized that experience. It's a Python import, and you literally say st.header and it makes a header on the page, st.text_input for an input box, and then 'streamlit run' on your Python script, and that will bring up the Streamlit app listening on 8501. So I think that's going to

(16:45):
help people get their proofs of concept out rapidly. Or, you know, layers of abstraction, let's call it.
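A minimal sketch of that Streamlit layer, assuming your agent code lives in a module of your own. The import, the page text, and the module name are placeholders, not John's actual front end.

```python
# Hedged sketch: a Streamlit front end over an agent. Save as app.py and
# run `streamlit run app.py`; it serves on port 8501 by default.
import streamlit as st

from my_agents import agent_executor  # hypothetical module holding your AgentExecutor

st.header("Chat with your network")          # st.header, as John mentions
question = st.text_input("Ask a question")   # natural-language input box

if question:
    # Hand the prompt to the agent and render the answer on the page.
    result = agent_executor.invoke({"input": question})
    st.write(result["output"])
```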

Tim (16:51):
Yeah, that makes sense. What do you think about agents that use multiple LLMs? Or do you just try to keep one LLM per agent? That's probably a little bit easier to go with. But you know what I mean, like, task-oriented: some LLMs are going to be better. You know, even Claude or ChatGPT

(17:12):
have different versions that are better at different tasks, right?

John (17:14):
There's a lot to be done there, yeah. So there are specific coding LLMs, they're coder-specific for generating Python code; Cohere is really good for developers and stuff. Yeah, the other thing, I'm glad you brought up models, Tim, it just about slipped my mind: you're going to start to notice '-R' on models now. Yeah, I think I already got one. I think that connotes reasoning,

(17:38):
that it's a reasoning-capable model, or that it can do tool calling. So, Command: I've been playing with Cohere's Command R7B. Really nice model, really small, 7 billion parameters. It can do tool calling. It can do agents. The other one that just came out yesterday was DeepSeek R1, the real DeepSeek.

(17:58):
So I got fooled. Someone put a fake DeepSeek out that was just Llama 3.2 reskinned, and I sort of took the bait. I didn't really do much research, I just started using this model. But the real DeepSeek just released their actual model on Ollama.
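To try local tool calling along these lines, here is one hedged sketch via LangChain's Ollama integration. The model name and the stub tool are illustrative; it assumes Ollama is running locally and a tool-calling-capable model has been pulled.

```python
# Hedged sketch: tool calling against a local model served by Ollama.
# Assumes: pip install langchain-ollama, and `ollama pull llama3.1`
# (or another tool-calling-capable model).
from langchain_core.tools import tool
from langchain_ollama import ChatOllama

@tool
def get_weather(city: str) -> str:
    """Return the weather for a city (stubbed placeholder data)."""
    return f"Sunny and 22C in {city}"

llm = ChatOllama(model="llama3.1").bind_tools([get_weather])

# The model decides whether the prompt needs the tool; inspect tool_calls
# on the response to see what it chose.
msg = llm.invoke("What's the weather in Ottawa?")
print(msg.tool_calls)
```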

Tim (18:15):
No, actually, very quickly, I'm curious about this. So if people start poisoning, like, the Ollama index or whatever, are they... I mean, are we assuming that basically anything I put into that poisoned model is going somewhere? Or, what's the value in poisoning a model like that? Or not really poisoning, but kind of a replacement, a bait and switch, if you will?

John (18:35):
Yeah, I think it was just to get clicks and to get downloads of their specific model. Someone reached out, they watched my video and said to me privately: the model you're using is actually not DeepSeek's official model; someone has re-skinned Llama 3.2, for whatever reason. What's the value? I have no idea.

Tim (18:55):
That's what I mean. We're already talking about AI... I don't want to change the subject, but we're talking about AI security now. It's becoming a big thing, especially, of course, the prompt harvesting and whatnot, being able to pull stuff out you're not supposed to pull out. We know about that. But could we get to the point where we have these poisoned models?
have these poisoned models?

John (19:14):
Right, supply chain issues with the models.

Tim (19:15):
Right, exactly.

John (19:18):
So anyway, no, there's a lot going on. And I'm wondering how long, for example, Chinese models will be available to us in the West, right? I mean, and should we use them? I don't even know, right? Yeah, apparently it only cost them five million dollars to make, and it competes at ChatGPT o1 levels.

Tim (19:38):
That's what they said, but have we seen it? Like, I don't know. But I've also seen things.

John (19:40):
Like, you know, if you ask it about Tiananmen Square, it goes: I have no idea about this event.

Tim (19:45):
Yeah. I don't know what that is. Is that in a video game, or...

Chris (19:50):
Something. Your social credit score goes down.

Tim (19:53):
Yeah, just for asking the question. No, there's so much more to that, right? Without wanting to unpack it too much, because I want to stay on the agents, but there's so much more to the security, to the integrity. Like you said, supply chain: it's like a supply chain attack, right? If somebody gives you a poisoned LLM, to what end? Could that LLM be somehow taking the data, the

(20:14):
things you're putting into it, the code you're copying and pasting into it? Is it doing something with that? You know?

John (20:20):
Well, I'm glad to see, in particular, the Cohere model, but Llama 3.1 does tool calling as well, and Ollama offered tool calling support in June or July of 2024. So we're not limited to the cloud providers, or to having to pay, if you want to get involved and start writing your own agents. There's a mishmash of free, open source tools and stuff to get

(20:44):
going at home. For a while there, you needed ChatGPT-4; no other model could really do tool calling. You know, where do we go from here? I don't know. It feels like the early internet. We're going to have agents, right? One organization might have all of their agents. Let's say academia, let's say something that's non-competitive,

(21:04):
right? UCLA is going to have the UCLA agents, and maybe they could talk to the Harvard agents, and maybe we start to get swarms of agents, like DARPANET-type stuff, exactly. So I see it growing and connecting like the World Wide Web did, with even greater potential, right?

Tim (21:21):
Yeah, no, absolutely. I hadn't thought about that. Brave new world. But you're right, it does feel a little bit like the early internet, right? The fundamentals of getting things talking to each other that previously had no way to share data. Of course, in our case, I don't even know if it's a good thing, right? So I guess, ultimately,

(21:41):
it's going to happen regardless, so we need to understand it. So one thing that's been a little bit murky to me, a little hand-wavy, is this idea that you could just... So, say that one of our listeners wants to build an agent, right? It still feels a little hand-wavy, like: oh well, you just call a tool and create an agent, make a prompt. You know what I mean?

(22:02):
Could you get a little bit deeper on: okay, so how does someone actually build an agent?

John (22:10):
Yeah, so I would start with something low-hanging: one tool, a read activity against an API. You're going to need, I would say, LangChain, so a little bit of Python experience, and they have an agent executor class that you can call and build, and it will have things like the max iterations. So agents will self-correct, or know that they're wrong, or that

(22:32):
they got the wrong information, and literally try again. I've seen agents just iterate, iterate, iterate. So you have to put a max iterations on, so if it goes off the rails it doesn't just infinitely loop. Anyway, that's a minor detail. But you're going to literally use @tool. So if anyone's done a decorator in Python, or decorated a

(22:53):
function: yeah, you decorate a tool. Pokemon: a Pokemon API read tool, the URL of the API you want, and just a simple requests.get, right? So you write a little function that would normally work on its own, as standalone Python, to do a request against the Pokemon API

(23:13):
. And you put that in the prompt? No, it's a separate standalone tool; you could actually run that function and it would work as a standalone function.
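Here is a hedged sketch of that one-tool starting point against the public PokeAPI. The function runs fine on its own, and the decorator is all it takes to expose it to an agent; the fields returned are just illustrative.

```python
# Hedged sketch: a standalone read function, decorated as a tool.
import requests
from langchain_core.tools import tool

@tool
def get_pokemon(name: str) -> str:
    """Fetch basic data for a Pokemon by name from the public PokeAPI."""
    url = f"https://pokeapi.co/api/v2/pokemon/{name.lower()}"
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    data = resp.json()
    return f"{data['name']}: height={data['height']}, weight={data['weight']}"

# Still works standalone, exactly as John says (LangChain tools expose .invoke):
print(get_pokemon.invoke("pikachu"))
```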

Tim (23:22):
Okay. And then the rest of it, honestly, is a prompt template?

John (23:27):
Now, that is very important, and I urge you to, like, steal my code, honestly. Go look at some of my templates, because they took a long time to figure out. But it's that whole Thought, Action, Action Input, Observation sort of thing in your template. Honestly, do this with ChatGPT helping you write the agent.

(23:49):
If you explain to ChatGPT: I'm using LangChain, I'd like to make an agent that talks to an API, here's what I've got started with, can you help me build this tool? I promise you, you'll have a working agent in an hour, two hours.

Chris (24:05):
Interesting. Okay, so it seems wrong.

John (24:07):
I mean, getting started with AI... So if we think about the complexity of RAG, right, vector stores, embeddings, like, that's not an easy thing to do, a retrieval-augmented generation chain, so many pieces. You know, there are a lot of moving parts there. This, you don't have any of that complexity. It really is quite simple, and it's two to three hundred lines of

(24:30):
code from end to end, and most of that's going to be your prompt template, in natural language. Anyway, I did do the video about the Pokemon one, if you want to get started. If you don't want to start with infrastructure, if that's a bit much, 'I don't want to talk to routers, I just want a general hello world': take the Pokemon one, or take the NetBox one. I like the NetBox one because it'll work out of the can.

(24:51):
You could clone my repository, bring up the Docker Compose, get a key from the public NetBox demo, and it would just work. You can literally just start chatting with NetBox. I wish there was more out there, to be honest with you. Like, I wish I could just say: go to howtobuildanagent.ai and follow the instructions, right? But I do think it's important to kind of cut through some of

(25:15):
the hype. I don't think this is going to be, like, a Great Depression wave of layoffs because agents are here all of a sudden, right? It's an augmentation. It's like giving everyone on the accounting floor the first digital calculator, or a cell phone or a BlackBerry or whatever. I think it's going to augment us, and you might say:

(25:36):
wow, look at all this time I have to focus on what's important, because these agents can do all this mundane heavy lifting, right? Yeah, yeah.

Chris (25:44):
I think, obviously, similar things were said about
automation as well, right, andso I mean, this is sounds like
just the next step in thatdirection.
I guess I should say I'mcurious to get your take on
something that we discussed onthe show just a couple of weeks
ago, um, about a quote fromJensen Wong which, uh, we found
pretty comical but, um, yeah, sohe's.

(26:04):
He was obviously he's uh, youknow, he's a showman, right, so
he's going to say some some uh,you know, uh, nuclear hot takes,
but he was like said this thingabout it just becoming the HR
of AI agents.
Right, you're just going to bemanaging, you know, fleets of
agents and things like that.
I mean, how much of a realitydo you see that?

John (26:26):
Well, from what I've seen so far, I think he's closer to being correct than incorrect. Like, I think that we're going to become custodians, or human representatives, of the agents that we've built, or the agents that we support. Hey, that weird agent for NetBox is suddenly swearing in

(26:47):
every third answer; we'd better tune that prompt a little bit, it's getting a little cheeky with people. Things like that, right? Are the agents playing well together? John's agents work with Tim's agents, but now we've got some human-resource-type conflicts between these agents; they're not working well together, right? So, I mean, I think you're right, it's a showy, flashy thing to say, but it does give you pause, a reason to pause and

(27:10):
think. I mean, he's been correct about a lot of things so far, all right? He's clearly a brilliant man. And, you know, obviously it's all to help him sell graphics cards.

Tim (27:22):
Of course, of course.

John (27:23):
Right, right. Yeah, but maybe, I don't know. I think our roles have shifted with automation. And I'm glad you brought up automation; I've got a bone to pick with the automation community. I agree with what you said: this is an evolution. This is the next step of what we've been doing all the way back to bash scripts and Tcl scripts and other things like that.

(27:43):
It really is just time marching forward. But for some reason, because it's too gimmicky, or because, I don't know... I think, because a lot of us are Gen Xers, there's a bit of counterculture in us, and we sort of want to reject the popular thing. There's a bit of counterculture happening right now with AI, where we want to rage against the machine a little bit.

Tim (28:05):
I think there's definitely some truth in that. There's a gestalt in us. You know, not including Chris, because he's not a Gen Xer (I'm a millennial); his problem is he can't afford a house and he can't retire.

(28:27):
We have different problems than that. No, I think that's true, but I also think that there's a little bit of, what's the word I'm looking for, there's just some vinegar in the mix, if you will. So many people are so tired of hearing about how the next thing is going to be the thing. Especially the people that bought whole-hog into network automation: those people, they were ready, right? And they were like:

(28:48):
yeah, this is going to be it. If you don't learn this, you're going to be out of a job. And it just didn't materialize. It didn't happen.

John (29:00):
I know, but at the same time, I can remember a serious conversation with a PBX engineer.

Tim (29:03):
Yeah.

John (29:04):
In 2002. And me explaining to him: listen, we're getting this new system called Voice over IP, and we're gutting all of this; that PBX is gone. 'Yeah, the PBX has been in here longer than the mainframe.' And I said, well, I mean, there's two ways to look at that, right? Right, exactly. I mean, it all has to do with the shift of the lowest common denominator, right?

Chris (29:26):
Eventually, things move forward. Yeah, I mean, people now, it's like: oh, at the end of the day, there's always going to be somebody that needs to, you know, configure the VLAN on a port, blah blah; somebody has to plug it in. So yeah, eventually things do shift. To Tim's point, you're probably not changing who needs to plug shit in. Although, actually, I don't

(29:46):
know, there's plenty of robots and shit that might be doing that soon. Elon's already on it, yeah, right. But it's one thing that I struggle with, in that particular capacity: just thinking about, you know, this thing about being the HR of AI agents, as kind of obtuse as that might

(30:07):
sound. Like, the fucking observability and the blast radius in that sounds so volatile to me. It sounds so unpredictable. I don't know how you put guardrails on these things where people feel safe. Not safe, like, personally, but for their infrastructure, right? That sounds like: if I can't predict it, there's no way I'm going to put that in

(30:29):
there, right?

Tim (30:29):
Yeah. What if an agent... you know, oh shit, my prompt. Or they change the model, the model retrains, and all of a sudden it feels differently about a certain percentage of, you know... It's basically a big predictive-text machine. Well, if the model gets updated and the numbers have shifted, God only knows what's going to happen the next time I run my agent. And for people where every

(30:50):
minute of downtime is a million dollars, there's an element of risk there where you wonder if the appetite is there. And I'm not using that as a shield, like saying: ha ha, therefore they'll never use agents. That's not it at all. Right, but there has to be a reconciliation, a risk-reward. Networking's probably going to be last again, right?

(31:14):
Yeah, right, networking's probably going to be last in this. Well, it touches everything, right? Like, if the network goes down, everything blows up.

John (31:18):
Right, it's the very last line of defense, if you will. Yeah, if that one agent with a one-to-one conversation with a single customer glitches or has a problem, that's negligible risk. If the agent that's pushing routes out gets it all wrong... yeah, right. And, you know, I'm a big Star Trek fan, right? I think

(31:41):
that we're going to get there, maybe not tomorrow or in the next weeks or whatever, but I feel like, like I said, that tidal wave's coming. And eventually, the keyboard might even start to go away, right? We're just going to start talking to these things sooner than later. I often pick up my mouse and just go 'computer', you

(32:01):
know, Scotty style. Yeah, yeah.

Tim (32:05):
No, I don't think you're wrong, right? And I don't know when that's going to be. But the thing is, I don't know if it's going to be more efficient. I think there's some efficiency there. And over all of this, the agentic stuff, the more efficient we're getting, and the better AI is getting, there's

(32:27):
still a huge question mark, and it's the cost. Right now it's being fueled by an absolute gigantic wave of, you know, VC capital and money, and you see the Project Stargate stuff with $500 billion, 'we're going to change the world'. But

(32:49):
the point is, you know, a ChatGPT subscription is like 20 bucks a month or something, and we know OpenAI is just shoveling money into a fire, because it costs so much more for a person to use it. Apparently even the $200-a-month tier is losing money, yeah.

John (33:06):
And on that $500 billion, I don't know if you saw Microsoft's response, because they were asking about that. You know, Elon said there's not enough money, that SoftBank doesn't have it, and he goes: I've got my $90 billion. So, yeah, pretty remarkable things. You know, what are we all going to do, right?

(33:27):
Like, are we going to enter a post-scarcity society? Are we going to need to work? Is any of this going to happen to benefit humankind, other than funneling more money up to the top?

Tim (33:39):
A few humankinds, probably. Right, yeah, very few. But yeah, no, I couldn't agree more. Some would even say 1%. Maybe even less than 1%, maybe two or three people. But yeah, I mean, what about OpenAI's thing, you

(34:01):
know, 'for the benefit of humanity' and all of that, while running as a... Now they're saying they're going to do a public benefit corporation, which we talked about on the news last week. Hopefully that gives them the shield, the legal shield that you shouldn't even need, the legal shield to do right. But the question is, will Sam Altman actually, you know, when push comes to shove, when there's a billion, billion, billion

(34:22):
dollars in his pocket, will he do right? There's so much there. Anyway, yeah.

John (34:25):
I want to be hopeful. You know, I'm a little concerned, and it's a good discussion that we've been having. A few people have brought up the point about the gap between, let's call them, mid-to-senior network engineers...

Tim (34:39):
Oh yeah, the rungs of the ladder, the bottom of the ladder. We've been talking about this on the show too. Yeah.

John (34:44):
If the senior network engineers are proficient enough and start writing agents to help them, as opposed to juniors helping them, you know, where does that leave that drift?

Tim (34:54):
There's a gap when those people retire, and then what, is the agent going to take over? There's no pipeline to expertise. Yeah, this is something we've been talking about for a good long time, and I agree. And what is the answer? I don't know.

John (35:06):
I don't know either. And I think maybe it's a little short-sighted, but I'm sure certain MBA-type people go: well, agents don't need to sleep. Agents don't need vacation. Agents don't have children. Agents don't have pets. Agents don't need to go to the dentist. You know that someone's doing this calculus.

Tim (35:31):
Oh yeah, right.

Chris (35:33):
One hundred percent, you know, for sure. No question in anybody's mind about the budget line items being figured and all of that, right? They're the actuaries of it, pretty much, right. But yeah.

John (35:42):
Well, guys, I've left you a lot here to unpack and to think about. So maybe once we have a little more traction, a couple more weeks go by, we'll come back and we'll revisit.

Tim (35:52):
Yeah, I'll be honest, I kind of want to try to build an agent and just see, like: okay, how does that work? What does that look like? And this is because I've been working with Andrew a lot on the GenAI stuff; I've seen so much behind the curtain now that it's kind of interesting. I'm not going to be an AI engineer or anything like that, but I am curious: what really goes into doing this?

(36:13):
Like, how hard is this going to be?

John (36:15):
It's interesting. But yeah, well, reach out if you need any templates or anything, Tim. Just let me know, I'll be more than happy to help you get going.

Tim (36:21):
I will reach out. I need everything.

John (36:24):
All right, and that goes for anyone listening, honestly: just ping me, drop me a LinkedIn message, drop me a Bluesky message. I'm trying to really help people pick up on this, and it's exciting. It's fun, too. It's really fun. Sometimes you sit there and go: there's no way this worked, right? And you read the logic, and, you know, it lays it all out.

(36:45):
I think you're going to really enjoy it once you start building them.

Chris (36:47):
Cool, cool. Basically, the lesson is: get in now, before the bottom rungs of the ladder are gone. Yeah, be the one at the top of the ladder, not the one reaching for the rungs that aren't there. That's right. All right, well, thanks for coming, John.

Tim (37:04):
It's always wonderful to have you on. It's been a great discussion. We'll definitely have to have you back again. And for everyone else who's listening, I hope you found this helpful, interesting, entertaining, hopefully in some small way. Please subscribe to us on your favorite podcatcher, if you're not already. Watch our YouTube. You know, do all the normal things we give you as a

(37:26):
call to action, that I can't remember right now because I'm tired and need to go to bed. All right, we'll see you next time.

Chris (37:35):
Hi everyone, it's Chris, and this has been the Cables2Clouds podcast. Thanks for tuning in today. If you enjoyed our show, please subscribe to us in your favorite podcatcher, as well as subscribe and turn on notifications for our YouTube channel to be notified of all our new episodes. Follow us on socials at Cables2Clouds. You can also visit our website for all of the show notes at

(37:56):
cables2clouds.com. Thanks again for listening, and see you next time.