Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
John (00:00):
on my side, through the web, my A2A adapter has hit my MCP server, my LangGraph, which has invoked the RFC MCP tool. So picture that in your mind: from the application layer, this client, all the way down through A2A to MCP, to an LLM, down into a GPU. And check out this answer, in the time it's taken me
(00:21):
to describe it.
We have an actual policy here, adhering to the RFC standard.
William (00:39):
Initializing podcast.
Beep beep beep. Context window is expanding.
I'm William, and I'm running on basic human intelligence 1.0, but joining me is Eyvonne Sharp, clearly running on advanced intelligence with an expanded context window and superior reasoning capabilities. Um, our token limit today is however long it takes until
(01:04):
we need coffee.
How are you doing, Eyvonne?
Eyvonne (01:06):
I am great. I always have this pause during these intros. I'm like, oh, William, be careful there. We under-promise and over-deliver. You're putting me in a position where you're over-promising and I'm going to under-deliver. But I'm thrilled to be here.
William (01:24):
I'm hyped up at the
beginning.
Eyvonne (01:25):
I've got to get them hyped. Oh yeah, man, super hyped. But really, the person who deserves all the hype today is John Capobianco. He's here to talk about some of the amazing things he's doing with generative AI, really at the practitioner level, discovering all the new releases, all the new protocols. So John is here to talk about the fun stuff that he's doing in
(01:50):
AI, especially as it relates to networking.
Welcome, John.
John (01:54):
Thank you, Eyvonne, thank you, William. I couldn't think of two better people or a better platform to have this discussion. I'm not trying to over-promise and under-deliver, but I really think that maybe, over time, this video, this particular discussion between us humans, is going to be referenced by quite a few people. We're going to be exploring literally leading, if not
(02:15):
bleeding edge, technologies today that are already having a dramatic impact on the world. And I think we're all sort of, I know myself, I'm a little shell-shocked by the capabilities that I've seen, and I've almost had some existential crisis around, wow, the things that are coming to humans in the very near
(02:40):
future.
It feels like a renaissance period, akin to the World Wide Web and the emergence of the Internet.
William (02:50):
Yeah, absolutely. And just to give some context, if you haven't listened to any of the past episodes where we've had John on, I think they've all been either automation or AI focused. I think you've been on about two or three times. But John's got gigantic, historical, deep expertise with network and server administration dating way back
(03:12):
into the automation space. He's authored some pretty awesome, influential books, a few of them are on my shelf, and now basically serves as a product evangelist at Selector AI, where they're really on the bleeding edge of AI in the network space. And John specifically is, I'd say,
(03:35):
you're pretty deeply embedded with bleeding edge tech surrounding AI agents and the protocols that kind of connect them.
So you've got a lot of accolades there. And I think the sort of flow that we wanted to take for this show is, there are a few new things that are everywhere on the internet, and of course you
(03:57):
have the folks that are coming out and posting things just to try to get soundbites. But then you have true creators like John, who are actually going out and figuring out the technology, figuring out where it fits, building things with it, and demonstrating it, kind of swimming through all the minutiae and
(04:21):
bringing to the surface what's real. So really, really happy to have you on. And to kind of set the stage, you've got a ton of expertise, and I've been drinking from this fire hose for a week now, pretty non-stop, the MCP thing, as you know.
(04:43):
But yeah, we want to leverage your deep expertise here. We're going to get deep, I think, in this conversation, but we want to start at the foundational level, just kind of understanding this next generation of AI. So MCP, Model Context Protocol. And then came agent-to-agent communication, A2A. And then we have Agent Development Kits, ADKs.
(05:07):
Now, these might sound pretty technical, but they appear absolutely crucial for building more capable and interconnected AI systems. So do you want to tee us off, John, with just kind of describing what is MCP at the most basic level, you know, explaining it to
(05:28):
my 10-year-old, kind of.
John (05:30):
Absolutely. So the need became apparent for protocols that governed artificial intelligence development, particularly agent development. Now, prior to the protocol, it was very much the Wild West, and I want people to try to anchor their thinking. If you have a networking background, there are some very
(05:53):
clear parallels in my mind between, let's say, static routing or dial-up networking, and now having a protocol. So just think of IP, Internet Protocol. A lot of people don't really consider this. TCP, UDP, SMTP, HTTP, on and on and on.
(06:13):
Humans collectively put aside their own selfish interests and said, let's all come together on standards. Now imagine a world without these protocols: an HP network not talking to a Dell network, Google not talking to Amazon, a browser specific to a vendor. A nightmare, right? A nightmare.
(06:35):
Instead, we got a collective, almost socialized approach that still led to massive capitalism. Think of a client and a server. Exchange 2000 and Outlook 2000 probably made hundreds of millions of dollars for Microsoft, right, and changed society and how we do things.
So fast forward: MCP comes out from Anthropic in November 2024.
(06:58):
It takes a few weeks and months, obviously, to percolate and get down into builders' hands, in the form of Git repos and examples and things. I would strongly recommend you look at Anthropic's three-minute read introducing the protocol, and then go to modelcontextprotocol.io, find an example server, and it's a USB-like
(07:20):
experience.
They equate it to USB-C, a universal adapter. You'd literally plug it into your solution. Now, this has pretty big ramifications. I'm building an agent, and let's say my agent is a calculator function. We talked about a subnetting MCP. Let's build on that.
LLMs have a deficiency in math, and subnetting in particular.
(07:45):
So a human made an MCP that has Python functions. It doesn't have to be a REST API, doesn't have to be a database; it could just be a Python function. That MCP is now universally adaptable, and we can plug it into other solutions.
(08:05):
Now I want to send emails about my subnets. I find an email MCP and plug it into my agent. Now my agent can send email. Now that is frictionless, that is easy, that is standard.
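[Editor's note: the deterministic subnet math John describes can be sketched as the kind of plain Python function an MCP server might expose as a tool. This is a minimal sketch using only the standard library's `ipaddress` module; the function name `subnet_info` is hypothetical and not tied to any specific MCP SDK.]

```python
import ipaddress

def subnet_info(cidr: str) -> dict:
    """Deterministic subnet math that an LLM would otherwise get wrong.

    Hypothetical example of a tool an MCP server could expose."""
    net = ipaddress.ip_network(cidr, strict=False)  # accept a host address
    return {
        "network": str(net.network_address),
        "broadcast": str(net.broadcast_address),
        "netmask": str(net.netmask),
        "usable_hosts": max(net.num_addresses - 2, 0),
    }

print(subnet_info("192.168.1.37/26"))
```

The agent's LLM handles the natural-language side ("what subnet is this host in?") and delegates the arithmetic to a function like this, so the answer is always exact.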
Now, what's really interesting is that it starts to build a hyper-connected approach. Now, we've all heard of vibe coding.
(08:27):
Is anyone doing vibe coding? Eyvonne, William, have you started vibe coding, where you're using, say, Cursor or Claude Desktop, or even VS Code now, where you're plugging MCPs into your integrated development environment? Your IDE has MCP access.
William (08:46):
I have it all set up. I would say that my mileage is varying at the moment based on other priorities, but I'm definitely diving in, just for the learning experience. I mean, even if I don't get an artifact I can use at the end of it, it's super, super valuable. Just the learning experience and the teaching opportunity there is gigantic.
Eyvonne (09:08):
Well, and I think in my world, we're just fresh off Google Cloud Next, and we presented some demos that were almost wholly built with Gemini, right? So we're like, hey, we need a Python script to interact with vCenter and these migration tools; can you get us 80% of the way there?
(09:28):
And when you're building a demo or a proof of concept, it's incredibly powerful, because you don't have to be production ready, you're not following a ton of compliance standards, you're just trying to prove out a thing. And it's incredibly powerful for that, right, because you just want to know what's possible. And then you can take the results of your LLM, your
(09:53):
tooling, apply some rigor and some standards to it, and build on it. But to get an initial MCP, an MCP MVP, it's incredibly powerful. So yeah, it's happening all over the place.
John (10:11):
Sorry, go ahead, go ahead. Oh no, go ahead, go ahead. I was just going to say, thousands of these MCPs have emerged. Literally, there are MCP directories, there are GitHub repositories with huge lists of awesome MCPs. And in terms of, so what does this mean? It all sounds so abstract. I git clone the repository. It gives me the folder with the MCP.
(10:32):
I move the folder into my IDE or into my project. That's it. I've integrated the MCP, more or less. I love it, right?
William (10:42):
So, to kind of test that: someone really smart, much smarter than me, told me a few months ago, if you're listening to someone explain something to you, the test of whether you understand what they're saying is whether you can summarize it in one or two sentences. So I want to take what you just said about MCP and try to summarize what that meant.
(11:03):
You have an AI application acting as a client of sorts. It's a client-server model. That client can communicate through Model Context Protocol to a server that exposes capabilities, like you were
(11:26):
saying: the Python file, or even a database, or calling some web API, or executing a tool. So is that kind of the gist of it, at a very simplified level?
John (11:45):
It is. So here's maybe another way to look at it. We want to build an agent, or an assistant; it goes by different terms. Let's just stick with agent for now.
So we build an agent. That agent is going to have access to tools. Now, the agent itself is backed by an LLM; a Gemini 2.5 Flash, say, is the artificial intelligence behind the agent. And this
(12:06):
agent, in practical terms, is a natural language interface, so we can have a conversation with it, for us to say, do some subnetting. That agent is going to be connected to tools, a toolkit. Now, this is where MCP comes in, and this is the critical difference. Prior to MCP, that toolkit was artisanal, handcrafted, fragile,
(12:30):
static.
Eyvonne (12:31):
Not shareable, not easily shareable. Yeah, bespoke code.
John (12:39):
So I had an agent that did pyATS, and it was hundreds of lines of code. It was not easily portable. That whole thing gets abstracted as an MCP. Now the MCP says: I have tools, I'm an MCP server, here are my tools. Let's take it back to networking: DHCP. I could go around and give every individual client a static IP
(13:01):
by hand. Doesn't make a lot of sense at scale, right? Or I could put up a server that has a pool of addresses the client can draw from, assign, and discover. Think of MCPs in that way. I'm an MCP server and I have a pyATS run-show-command, a pyATS
(13:22):
configure, a pyATS execute. Let's just say those three tools the MCP server advertises: I have these three tools for pyATS. Connect to me and I will just do the thing. Now think of the friction with REST, even: a POST and a body and the authentication header and the JSON payload, or doing it with the
(13:43):
requests library in Python.
There's some friction there, a lot of friction and a lot of fragility. Well, now the MCP is just abstracting all of that. It could be a PostgreSQL database call, right? Query-SQL-database might be the tool the MCP server exposes, right?
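[Editor's note: the advertise-and-call pattern John describes, a server announcing named tools that a client can discover and invoke, can be sketched as a toy in plain Python. Real MCP uses JSON-RPC over stdio or HTTP; the class and tool names below are illustrative stand-ins, not the actual protocol.]

```python
from typing import Callable, Dict

class ToyMCPServer:
    """Toy stand-in for an MCP server: advertises named tools and
    executes them on request. Models only the advertise/call pattern."""

    def __init__(self) -> None:
        self._tools: Dict[str, dict] = {}

    def tool(self, name: str, description: str):
        # Decorator that registers a plain Python function as a tool.
        def register(fn: Callable):
            self._tools[name] = {"description": description, "fn": fn}
            return fn
        return register

    def list_tools(self) -> dict:
        # What the server "advertises" to a connecting agent.
        return {n: t["description"] for n, t in self._tools.items()}

    def call_tool(self, name: str, **kwargs):
        return self._tools[name]["fn"](**kwargs)

server = ToyMCPServer()

@server.tool("pyats_run_show_command", "Run a show command on a device")
def run_show(device: str, command: str) -> str:
    return f"[{device}] output of '{command}'"  # stubbed device output

print(server.list_tools())
print(server.call_tool("pyats_run_show_command",
                       device="r1", command="show ip route"))
```

The point of the pattern is exactly what the DHCP analogy suggests: the client discovers what is on offer instead of being hand-wired to each capability.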
Eyvonne (14:03):
Well, and we were talking earlier, you were using the example of how LLMs are notoriously bad at specific kinds of math, and subnetting. Part of what we're understanding now as an industry, and trying to work out, is what are the right things for the LLM to do, and what are the things where the LLM needs to reference something else?
(14:28):
Right? And the more deterministic the response, the less suited the LLM is for that work. So part of what we have been talking about is that MCP allows us to have tooling that does the deterministic bits, and then we
(14:49):
allow the LLM to do the language translation, the natural language, the understanding, the reasoning. And when you marry those two capabilities together, you get something that's truly functional: it allows you to do the language translation with all the generative stuff, but gives you
(15:10):
the right answer when it needs to be deterministic and not generative.
John (15:15):
Right. Now, it's so wonderful, you're leading me to a really interesting aspect of this. All right, so I have an agent, and it could have N number of MCPs connected to it, right? There's no limit here in terms of the number of tools we can attach to one particular agent. What's neat is these MCPs could be remote, could be public,
(15:38):
hosted, could be commercial. There may be a, "I'm the best math tool in the world, here's my public URL, here's my specification, plug me into your solution for a dollar a month." Okay, now hang on. Let's say I do have MCPs. Right now I'm up to 15 MCPs, I'm going to throw that number out there, which means I am discovering 100 tools.
(16:01):
Do you know what problem was introduced with this? Can you take a guess? The LLM picking the right tool for the right job, because now I have 100 tools, 1,000 tools. Do you know how we solved this problem? With retrieval-augmented generation. So what we do is we take the 100 tools we've discovered, their name and their description, we put them into a vector store, and
(16:25):
the LLM takes the original prompt from the user and does a semantic lookup against the tool vector store, ranks the tools, and then picks the two or three tools it needs from the highest-scored match. So we're using the LLM to pick the best tool for the LLM to
(16:48):
use, right?
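[Editor's note: the tool-selection step John describes, embed each tool's name and description, then rank tools against the user's prompt, can be sketched with a toy bag-of-words similarity in place of a real embedding model and vector store. The tool names and descriptions below are hypothetical.]

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words vector. A real system would use
    # an embedding model and a vector store instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

TOOLS = {  # name -> description, as advertised by the MCP servers
    "subnet_calculator": "calculate subnet network broadcast mask hosts",
    "send_email": "send an email message to a recipient",
    "query_database": "query a postgres sql database table",
}

def rank_tools(prompt: str, top_k: int = 2):
    """Rank advertised tools against the user's prompt; keep the top k."""
    q = embed(prompt)
    scored = sorted(TOOLS, key=lambda n: cosine(q, embed(TOOLS[n])),
                    reverse=True)
    return scored[:top_k]

print(rank_tools("what is the broadcast address of this subnet"))
```

Only the few highest-scoring tool descriptions are then placed in the LLM's context, which is how the approach scales past hundreds of discovered tools.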
Eyvonne (16:52):
And if you take that concept and think about what our models are doing, the ones that are now reasoning models, it's very much the same thing. You go to the model and first you say, what are the important things we need to do to answer this question, or what are all the different ways we could approach this? Create a plan for
(17:15):
me. And then the model starts stepping through the plan, as opposed to just trying to straight-up answer the question. It does this reasoning in advance and then it goes about solving the problem, which, interestingly, is the way humans also solve problems when they do it well.
(17:35):
But you're taking that process and applying it inside of the agent, the tooling, to better reason. And so it feels almost like Inception. You know, we're using the thing to solve the problem for
(17:57):
the thing, which is solving the problem for the thing, right? And eventually, then we unpack all that, we wake up from our dream and we spit out an answer. But it's incredibly interesting to see the layers that are beginning to form inside of these systems.
William (18:16):
I can't imagine how many startups are forming right now just to secure MCPs. I bet Silicon Valley is just bubbling right now, right?
John (18:28):
So you mentioned layers. This is a great segue into the next layer. Conceptually, let's move up to: why do we need another protocol? Okay, so Google comes out with a protocol nine days ago, Agent2Agent, and to me it actually makes a lot of sense. It really is an elegant solution to a problem. Here's the problem. There are three heads in this room.
(18:48):
William has made his Itential agent, and it connects to MCPs that are exposing Itential REST APIs, possibly other things. Maybe he wants to include a NetBox MCP, so you also get a source of truth when you have his agent. John has the Selector agent doing similar things, and Eyvonne
(19:10):
has the Google agent with all the Google toolkit: email and calendar and directions, maps. The three of us have these agents and we want to make them work together. Prior to the Google protocol, I had to git clone or bring in William's MCP, bring in Eyvonne's MCP, and do all that glue myself.
(19:34):
Well, now think of this. What if I could give my agent a public card, an agent card, on the web? That's JSON that describes its capabilities and shows what skills that agent has. We give them port numbers. They can all talk to each other over the World Wide Web.
(19:58):
Now we have three agents, each with their own capability, and if you were a client that had Google, Selector, and Itential, well, now all three agents are able to do the things, do the things with a natural language interface where you just say, do the thing, right? That's what A2A is.
(20:19):
A2A is like layer three routing. Think of it as BGP, right, out on the internet for agents. MCP might be layer two, tool discovery: MAC addresses, individual tools, individual servers at that layer. We put the adapter at layer three in front of our MCP. Now it's discoverable on the World Wide Web, and mine
(20:41):
actually has a...
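[Editor's note: the "public card on the web" John describes can be sketched as a JSON document. The field names below only approximate the early A2A agent card shape, and the URL and skill IDs are hypothetical; check the official A2A repository for the current schema.]

```python
import json

# Sketch of an A2A-style agent card: a JSON document an agent publishes
# so other agents can discover its skills over the web.
agent_card = {
    "name": "Subnet Helper",
    "description": "Answers subnetting questions in natural language",
    "url": "https://agents.example.com/subnet-helper",  # hypothetical
    "version": "1.0.0",
    "capabilities": {"streaming": False},
    "skills": [
        {
            "id": "subnet-math",
            "name": "Subnet math",
            "description": "Compute network, broadcast, and host counts",
        }
    ],
}

print(json.dumps(agent_card, indent=2))
```

A peer agent that fetches this card learns, in a standard machine-readable form, what this agent can do and where to reach it.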
William (20:53):
You were just saying, okay, so MCP, just to frame that, it's connecting your stuff to the outside world of data and tools. And now you're basically saying the core idea behind A2A is, say you have company A and company B and they need to directly collaborate; it's like a common language for these agents to use to collaborate and do things back and forth. Is that kind of the gist of it?
John (21:13):
That's right. They all have cards that are standardized, and we've seen cards before in the form of, say, adaptive cards for WebEx, or Microsoft's adaptive card standard, right? You can put things in a certain structure of data and it'll make a nice card inside of WebEx or Discord or whatever, right? The other thing is, think of this as like the World Wide Web
(21:34):
. It's exactly the same thing. I'm Air Canada; now I have an agent, whereas in the 90s I had a web page, and in the 80s I had a Yellow Pages directory listing, right? So I think more and more we're going to see agents being exposed to the World Wide Web, and they form almost a web on top of the web, a hyper-web of agents.
(21:55):
Because, so, we talked about three agents working together. Try to picture that at scale: N number of agents connecting downstream to N number of MCP tools. This, to me, is how we solve cancer.
Eyvonne (22:13):
Right. So I'm thinking, what's a very practical example, using some of the services you've talked about? So let's say you've got a calendaring app and you're getting ready to travel, right? Your calendaring app knows who your flight is with, it knows when your flight's
(22:34):
supposed to be, it knows your flight number. And so your calendaring app now talks to the agent that is published by your airline provider, and it's able to update your calendar on its own with flight delays, with status. And then it also can talk to
(22:59):
the maps agent, or the maps...
John (23:02):
Yeah, yeah, and keep going: and then, say, your rental car's agent and your hotel's agent and your itinerary agent, all the way through to the restaurant-you're-going-to-visit agent, all of it, right? And it can, in real time, notify you of any changes
(23:24):
you need to make in your travel and driving plans.
Eyvonne (23:28):
It can. It can say, oh wait, you've got a meeting with so-and-so, but your flight's delayed now. Do you want me to reach out? And imagine, maybe we can solve the cross-calendaring challenges that we have when you're using different calendaring systems, right? That's an intractable problem right now.
William (23:52):
I would pay for a great product that solved that for me right now. I would pay for it, because I pay for it with my time every week. Yes, it's brutal.
Eyvonne (23:59):
Yes, yes, because I have three different calendars, and those systems aren't...
John (24:04):
You know, they don't work together. Because I've got a work calendar, I've got a family calendar, I've got a podcast calendar, social media. Connecting this to our iPhones, connecting these agents all around, right? Like, I don't need to make social media updates, because the agent knows where I'm at and what I'm doing and what the world should know about what I'm up to, right? Just self-posting, right?
(24:27):
So I'm represented by a digital avatar. So, talking about the commercial aspect, this could be the MySpace, the Facebook, the whatever page. This is John's agent; here's how you connect to it. You interface with it in natural language. What books have you written? Where can I buy them? Where are you going to be appearing this year? Are you going to be at Cisco Live? What are you working on? What's your latest blog post?
(24:47):
What's your latest video? It's an agent connected to the MCPs that I build on my end. I'd like to book you next week: my agent takes in the data and puts it into my MCP calendar. It's in my calendar, right?
William (25:02):
Yeah, and what you're all saying, basically, I mean, the addressable market here is well beyond; it's business-to-business and business-to-consumer in a major way. Because Eyvonne and I have talked a lot on this podcast in the past about, I mean, AI is just amazing, ever since ChatGPT came out, you
(25:27):
know, and you were early using that, John. But one of the things we've talked about almost on repeat is, when it comes to business, at the end of the day, if you're going to invest a lot of capital into R&D and bring in something that's just gigantically innovative, like AI, then, especially with the market conditions lately, you have to have a plan for the ROI, for all
(25:52):
the business-type stuff that has to happen to justify that investment. And a lot of the AI stuff, at least that I've seen, and that I've tried to implement and have implemented and done things with over the past few years, hasn't really checked those boxes in the way that you would have
(26:13):
wanted it to. But I think that MCP, the standards-based approach to doing this, the market players that have jumped in to back these things, the collaboration efforts, and then the massive adoption in just a few weeks: this is that thing, that's the movement in the market, to
(26:35):
where you're really going to start seeing productized stuff getting thrown out there, features that can actually make companies money. I think it's a huge game changer in that aspect. Any thoughts?
John (26:48):
Well, I'm super excited to see Arista have a CloudVision MCP in development, right? And by the time people see this, that might be an actual thing on GitHub, I don't know. I've seen some early previews of an MCP for CloudVision. Now, I mean, that has huge ramifications: an agent, literally, to run your data center through natural language at the Arista level, right? We're doing it with
(27:10):
Selector for the operational view and the alerting and the correlations and stuff. So then maybe, privately, maybe there are private views of agents where I'm working with partners and we have applications that share each other. Why can't one data center agent talk to another data center agent, at the data center layer,
(27:33):
right? And they could exchange information almost like a routing protocol, like a BGP or an OSPF, advertising their capabilities, advertising their tools, advertising the state of their data center to other data centers, right? Or if I have a DR facility, I've got an agent for my DR and I've got a primary agent for my primary data center.
(27:54):
Ideally, it's multi-vendor. I can't see it just being limited to Arista, although they're the prime mover here. They're first to market with this, so there are going to be some benefits for them to reap.
Eyvonne (28:07):
Yeah. Enter the existential crisis that you talked about earlier, John.
John (28:14):
So, my existential crisis. I don't know why I think this way, but it dawned on me that, okay, I do a lot of vibe coding. Primarily, I'm using LLMs to help me generate this stuff. Okay, that's the premise. I'm asking the LLM to help me make agents, and that could maybe jailbreak the LLM, or improve artificial intelligence as a sentient thing. Am I the tool?
(28:35):
When I'm asking AI to help me make code, to make AI better, is the AI using me? Is this a symbiotic relationship? Am I some sort of parasite, and the LLM is using me, having found a useful idiot? Ah, this guy's on to making agents that can talk to each
(28:56):
other autonomously, which is good for the LLM, to become more and to evolve. So I'm sort of like, am I the tool here, right?
William (29:06):
I'm gonna queue up Venom tonight and watch that movie. Definitely. Have you seen Venom, that new one? Oh yeah, I absolutely love Venom.
John (29:15):
I'm a big comic book fan, yeah. So, we were going to talk about tools. I'll be talking about this layer, so maybe we can flash back to our earlier discussion. Eyvonne agrees, and William agrees: we need an AI OSI model for humans to conceptualize this, like the OSI interconnect model. And I think physical layers still make sense, and I think application layers still make sense.
(29:36):
Certain concepts are very similar: sessions, security, routing, broadcast, unicast, session-based, UDP, TCP. I think it all makes sense, and that model is how I became very good at networking: being
(29:57):
able to say, right, some application is sending a packet, and here's how it works through the model. Something similar for LLMs, right? The LLM is going to be layer two, maybe, sitting on top of hardware at layer one, the GPU. So we move up to toolkits. William, you brought up toolkits earlier. There are a lot of toolkits; there is no shortage of toolkits
(30:21):
.
But clone the Google A2A Git repository, and it actually comes with a client, a CLI client, and the command on how to run it. So if you're developing an agent and an agent card and you want to see if you're adhering to the standard, you can use the standard CLI to test your agent card, and it will give you
(30:44):
feedback, like: you're missing this field in your card.
It's pretty neat.
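[Editor's note: the "you're missing this field in your card" feedback John mentions can be sketched as a simple validator. The required-field list below is illustrative, not the official A2A schema, and this is not the actual CLI from the A2A repository.]

```python
# Illustrative required fields; the real A2A schema may differ.
REQUIRED_FIELDS = ("name", "description", "url", "version", "skills")

def validate_card(card: dict) -> list:
    """Return human-readable feedback about an agent card, in the spirit
    of the A2A CLI client's 'missing field' messages."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS
                if f not in card]
    if not card.get("skills"):
        problems.append("card advertises no skills")
    return problems

print(validate_card({"name": "Subnet Helper", "url": "https://..."}))
```

An empty result means the card passes this (illustrative) check; anything else is actionable feedback for the agent developer.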
And then there are a couple of different ways to make the agents themselves. I'm using LangGraph, but there's a really attractive framework called ADK from Google, the Agent Development Kit.
(31:05):
People are telling me I'm doing it the hard way when they see what I'm doing with LangGraph, because the ADK is so much more abstracted. And who's to say? In a week you may be hearing that I've migrated to ADK. Who knows, right? It's a new way to build agents. Eyvonne, have you heard much about the toolkit or the
(31:28):
protocols over there?
Eyvonne (31:30):
Yeah, I've heard about
it, but I don't have a ton of
detail, so we'll have to savethat for another episode.
William (31:35):
All right. So one thing with these: when you think open protocols, open standards, in the network space anyway, you think of the IETF. I mean, Scott Robohn actually asked about this on LinkedIn the other day, and I threw in a response just off the top of my head. He had
(31:59):
posted something, kind of in the context of, and I think it was Jason Gintert, one of the two: what is going to happen as far as standards, as far as the industry? And my response was, I think it's a plausible assumption that MCP might land somewhere like the Linux Foundation eventually. Possibly, maybe, I don't know.
(32:21):
Look at what happened with Kubernetes, if we look at history as a frame of reference, and there's already some stuff in the Linux Foundation, I think ONNX. Anyhow, so what do you think about that? Because this seems like it's an important baseline that a lot of things are going to be built on
(32:43):
.
It's a foundation, it's concrete. What is the right way to collaboratively and safely control and build on the foundation? Is it using a foundation like the Linux Foundation to sort of
(33:03):
host it, and taking it out of, not just, well, I guess it is, taking it from the vendor.
You know, kind of taking it out of the ownership of a single vendor, even if they open source it, depending on the licensing, and yada, yada, yada. Being in a foundation is kind of the right way to think about it these days. Any thoughts there?
John (33:19):
Well, as long as we don't end up with, and people are going to hate me for referencing technology from the 1980s, but I don't want to see Betamax versus VHS. I don't want to see HD DVD versus Blu-ray. I don't want this.
(33:40):
I don't want a million flavors of iPods and Zunes and everyone with their own little thing. I'd want an open world, like OSPF or BGP or HTTP, where everybody on the whole planet can benefit from it, and people can make a lot of money with it. Right, I'm not overly concerned with, maybe, who runs it.
(34:02):
I know it's a little bit upstartish and a little bit bold. Who's Anthropic to come out with the protocol? Who's Google to establish the protocol for A2A? Who gives them the authority? They're not the IEEE or the IETF. Well, those dinosaurs take 30 years to come out with a protocol, and we can't wait. I can't wait for the IETF to get their act together.
(34:25):
Anyway, I don't have any problem with those organizations. When we could wait 18 months for a protocol, there was no problem with that. We can't wait any longer, okay? Because one is going to run away: one company's implementation of this, without a protocol, and now it's the iPhone, okay? And now we have Apple dominating
(34:49):
this thing. Okay, yeah, there's still Android, but you get the idea, right?
I'd like to see, you know, maybe this is the Canadian in me, the socialization in me, but I'd like to see governments benefit from this, hospitals benefit from this, academia benefit from this. Imagine if every high-quality university in
(35:14):
America, any quality university in America, if they all had MCPs right now and all had agents on A2A. This is how we solve cancer, right? And then a hospital network on top of it, and then we bridge those two sets of agents through the A2A protocol, and now we have tens of thousands of agents with every API that every university has
(35:37):
and every database. Right? Like, this is Wikipedia, except with autonomy, with reasoning and agency.
William (35:47):
I think this is how we solve real human problems and disease and, you know, lots of stuff. Right, like, I think in one of the MCP documents, I don't know if it was the official specification or where, I've done a lot of reading over the past week, but it places a strong emphasis on the things AI should respect:
(36:09):
security, trust, and user consent. So how, I mean, and I don't know if you can answer this, everything's so new, but how do we begin thinking about how the protocol actually tries to ensure that connecting all these powerful AI models to potentially sensitive enterprise data,
(36:30):
or allowing, you know, healthcare, financial services, all these things, or allowing them to execute actions on behalf of users via tools, is done safely and transparently? You know, keeping the organization in control of their data, or the user, you know, in control of what is
(36:51):
important. It just seems like security always is the hard problem to solve, but I think for this it's tricky, right?
John (37:00):
I did a talk earlier and someone asked me about security. I kind of just went, well, I mean, there is none, you just plug them in and away you go, right? So here's something to keep in mind, though, I think this is very serious and important. MCP has been equated to USB, the USB of AI. You know, you just plug them in. Would you just plug in a USB key you found in the parking lot,
(37:23):
right, or in the donut shop, or on the bus? Like, the grandma might, but I won't. And some people who have been trained in corporate security understand that that's an attack vector, with people literally leaving infected keys around to be plugged in. MCP is very similar in that regard. You know, you shouldn't just go to some random
(37:45):
GitHub repo in a foreign language, clone it, and plug it into your system, right? Be very careful with these things. There are thousands of them and we don't know what's out there. So I agree there's a lot to consider here on the security aspect, but, you know, I don't know. Security firewalls came later, right?
(38:07):
The PIX came after the router, right? So let's get our priorities straight. I think security needs to be baked into this, but let's not let security limit our imagination, let's say, right?
William (38:21):
Yeah, I like that. I mean, because really, at the end of the day, you have NDAs. Between business and business, you're going to integrate, you're going to do a go-to-market thing, you sign an NDA, you have some legal stuff you sign for protection. And then, I guess it kind of equates back to, I mean, this is a bad example because nowadays
(38:42):
anybody will install any app on their phone, but you have a terms of service, and you kind of want to know what that app is doing. Like, you're going to probably trust Chase.com or Fidelity or Charles Schwab with, you know, TLS.
John (38:59):
Well, what I'd like to see is an agent naming system like DNS, a registry that Google runs. Everyone knows 8.8.8.8. So, a similar well-known address, but for agents to register with, which is quite an interesting idea. If I ask my agent a question and it doesn't have the MCPs, maybe it could do a discovery call out to this registry of
(39:21):
agents and find that Google has an agent to do that thing. And now, dynamically, I'm connecting to an agent from the registry. I think it's beautiful. That would be a really elegant way for agents to discover each other and understand that it's an approved, quality agent that's been published in a registry and vetted by the
(39:43):
various ARINs and Googles and, you know, GoDaddy or whoever you're publishing your agent through, right? And you go, what do you mean, GoDaddy? I think we're going to be publishing agents much like we're publishing websites, right?
Eyvonne (39:58):
Well, and to stick with the phone analogy, William, part of the reason folks feel pretty safe installing an app on their device is because there is an app store, right, where those apps have been validated, at least in a certain capacity, that they're safe to drop on your device. I think we're going to have to have some kind of clearinghouse,
(40:21):
is what John's saying, to validate certain agents, that they actually do what they say they're going to do. Because ultimately, we're going to get to the point where the folks deploying agents don't have the degree of expertise to validate every agent that they're deploying. Folks like John probably are, right? But at some point we're
(40:43):
going to need a system to validate, and we've done that several different ways in history. We've had certificates, we've had domain name registries, we've had play stores and things like that. And I think we are going to see those emerge from a few trusted sources that are going to be, for lack of a better word, a clearinghouse for those tools. We're going to have to have some trusted system for
(41:09):
validating, because the load is going to be too high on individual practitioners to have the skills to validate these tools.
John (41:19):
Yeah, I agree. I think it's interesting if we take a scale. Say, three years ago, even a year ago, I think it was: I trust an AI much, much less than I trust a human. I think we're inching towards: I trust a human using AI more than I trust a human on their own. And then eventually: I trust an AI with some human help more than a human using AI, much
(41:43):
more than just a human. All the way to: I trust the AI more than I trust any biological being. I know that's a tough thing for people to maybe accept, but its capability is going to exceed human capacity very soon, very
(42:06):
rapidly, as we hyper-connect it to human knowledge sources, databases, and APIs. Right? How many APIs are there in the world? How many databases are there in the world? Probably more than grains of sand, right? Now imagine an agent, a one-to-one relationship for these things, or an MCP with an agent on top of it.
(42:26):
Right? Like the number of trees on Earth. We could be looking at the possibility of tens of millions, hundreds of millions of agents out there, and I'm not trying to be alarmist, I'm trying to really see down the road. I have two agents connected today. I did it by myself in a couple of days, right?
(42:49):
Imagine millions of people on Earth working on agents. This is coming.
William (42:55):
It's coming, right. John and I actually were messing around before. We thought it would be cool to just kind of show how you can actually do this, like how simple it is. I mean, this took me literally, I want to say, maybe under two minutes. All I did was clone this repo, this A2A repo that Google
(43:19):
has hosted out there. I cloned it, I created a virtual environment with a fresh version of Python and nothing else. uv install... what was it?
John (43:36):
Async or something.
William (43:38):
Yeah, async, some dependency thing that was coming up. And then John shot me the command, uv run cli agent. He gave me his endpoint.
John (43:49):
No such file or directory. Remember, it's dot. Yeah, they've changed. Yeah, they updated. So it's funny, things move very fast. The command I was using yesterday has updated to a new command today, so things move quite literally very fast. So there we go. What William has displayed
(44:14):
here is my agent card, which is a JSON file telling the client he's using my skills, my agent's capabilities, and what it can do. And basically you can ask it,
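For readers who can't see the screen: an agent card like the one John describes is a small JSON document the server publishes so a client can learn who the agent is and what it can do before talking to it. The sketch below is only a rough guess at the general shape, with invented values; the field names follow the published A2A card format loosely, so treat it as illustrative rather than authoritative.

```python
import json

# Illustrative sketch of an A2A-style agent card. Every value here is
# invented; the structure (identity, capabilities, skills) loosely
# mirrors the shape John describes, not any specific implementation.
agent_card = {
    "name": "network-automation-agent",
    "description": "LangGraph agent fronting an MCP server of network tools",
    "url": "https://example.com/a2a",
    "version": "1.0.0",
    "capabilities": {"streaming": False},
    "skills": [
        {
            "id": "get_rfc",
            "name": "Get RFC",
            "description": "Fetch an IETF RFC and answer questions about it",
        }
    ],
}

# A client reads the card to learn what it is allowed to ask for:
print(agent_card["skills"][0]["id"])  # get_rfc
print(json.dumps(agent_card, indent=2)[:40])  # serialized for the wire
```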
William (44:22):
You know, kind of similar to some of the examples that John has been throwing out there on the interwebs, but you can ask it all sorts of neat and interesting questions. Like, hold on, let's see.
John (44:38):
So if you ask it to help you understand... or no, go ahead and do the RFC one. Yeah, and I'll read what you're typing, for those who aren't looking at your screen.
Eyvonne (44:49):
But could you reference RFC 4594 and generate a recommended QoS policy config adhering to the RFC for a Cisco router? So that's his request to John's agent.
John (45:02):
And what's neat is, on my side, through the web, my A2A adapter has hit my MCP server, my LangGraph, which has invoked the RFC MCP tool. So picture that in your mind: from the application layer, this client, all the way down through A2A to MCP, to an LLM, down
(45:25):
into a GPU. And check out this answer.
William (45:28):
In the time it's taken me to describe it, we have an actual policy here, adhering to the RFC standard. Do you want to share your screen, what you're looking at, John? Would that be appropriate?
John (45:40):
Yeah, I could share and show exactly what's happened here.
Entire screen, and let's take a look at the logs. So in my Docker, let me minimize this, I'm sorry, in my Docker, here's my A2A adapter, and we can see that here is the request
(46:00):
that's come in through the World Wide Web through the A2A protocol. And now my A2A adapter has handed off to my LangGraph, and we can see right here that we've called the getRFC tool and
(46:23):
we've passed along that RFC number, right? And then it returns the response back through the A2A adapter here, which is the LLM plus the actual RFC content, back to William. This is what the LangGraph looks like. So the start is the question that William asks through the
(46:44):
A2A adapter. Think of the start as the World Wide Web and my A2A adapter listening on the internet. The question comes in. The first thing that happens is the retrieval-augmented generation: what tools can I use to answer this question about the RFC? It selects the right tool, and then the assistant calls the
(47:06):
tools. Now in this tools box here to the left, imagine there are a hundred different tools from 15 or 20 different MCP servers. We have a handle-tool-results node to analyze the response and, more or less, to help us understand if the assistant needs to call two or three more tools.
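The graph John is walking through (question in, pick a tool, call it, inspect the result, decide whether to call more tools, then answer) is the standard agentic tool loop. Below is a minimal plain-Python sketch of that loop. The toy model and the `get_rfc` tool are stand-ins invented for illustration; this is not LangGraph's actual API or John's code.

```python
# Minimal sketch of the tool-calling loop: the model either requests
# a tool call or produces a final answer, and a handle-results step
# feeds tool output back in until the model is done. The "model" and
# tool below are toys standing in for the LLM and the MCP servers.

def get_rfc(number: str) -> str:
    """Toy tool: pretend to fetch an RFC by number."""
    return f"RFC {number}: Configuration Guidelines for DiffServ Service Classes"

TOOLS = {"get_rfc": get_rfc}

def toy_model(question: str, observations: list[str]) -> dict:
    """Stand-in for the LLM: request one tool call, then answer."""
    if not observations:
        return {"action": "call_tool", "tool": "get_rfc", "arg": "4594"}
    return {"action": "answer", "text": f"Based on {observations[0]}, here is a QoS policy..."}

def run_agent(question: str) -> str:
    observations: list[str] = []
    while True:
        step = toy_model(question, observations)
        if step["action"] == "call_tool":
            # handle-tool-results node: record the output and loop again
            observations.append(TOOLS[step["tool"]](step["arg"]))
        else:
            return step["text"]

print(run_agent("Generate a QoS policy per RFC 4594"))
```

In John's setup the `TOOLS` table is what grows to a hundred tools across 15 or 20 MCP servers, while the loop itself stays exactly this small.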
(47:28):
Now, what's kind of neat is, if you want to start sharing your screen again, we can do one more prompt. So we're going to say: ask Selector the health of device S3, and then, please send a summary email to William at, and then your email. Now, just to remind everyone, agent one, that he's talking to,
(47:48):
has no ask-Selector tool, but agent one is aware of agent two and has learned the capabilities of agent two, one of them being ask Selector. So did anyone see this screen just move? Maybe you can't see it, but this is agent one calling agent two.
William (48:08):
I got the email, even though it's not even done. Total interfaces: 70. Healthy interfaces: 69. Failing interfaces: one.
John (48:18):
Right, so these are two agents working together to solve a problem. Somebody explain to me the difference between this and honeybees, or ants, or human beings building a barn. Some of us are strong, some of us are architects, some of us are engineers, some of us have rope, some of us have horses, and we all come together
(48:38):
to build the barn. Right, there's your answer.
William (48:41):
Yeah, that's incredible. It even tells me how long the failing interface has been having issues.
Eyvonne (48:47):
Oh, yeah. So now imagine that you plug all of this in with your observability client, right? And so you get an alert, you see that alert, and, for certain alerts, the system
(49:08):
automatically goes and calls your agent and returns to you the details of what's going on in the system. And you could even define triggers and actions. For example, oh, when you see this, bounce the interface. Or if you see this alert, do so-and-so, you know, via these
(49:31):
mechanisms, right? So you get beyond your logging and alerting to tying those logs and alerts to agents that can either provide richer data to you about what's going on in the system or, in some situations, even take actions
(49:52):
based on those alerts. And all of that can be agentic.
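Eyvonne's triggers-and-actions idea maps naturally onto a small dispatch table: each alert type is bound to an ordered list of agent calls, some that only enrich the alert and some that remediate. A hypothetical sketch, with invented alert names and stub functions standing in for the real agents:

```python
# Hypothetical sketch of triggers and actions: map alert types to
# agent calls, so some alerts just enrich the data you see while
# others also take a remediation action. Alert names and the stub
# "agents" below are invented for illustration.

def diagnose(alert: dict) -> str:
    """Stub for an agent that gathers richer diagnostic context."""
    return f"diagnosis for {alert['device']}/{alert['interface']}"

def bounce_interface(alert: dict) -> str:
    """Stub for an agent that performs the remediation."""
    return f"bounced {alert['interface']} on {alert['device']}"

# trigger -> ordered list of agent actions
PLAYBOOK = {
    "interface_flapping": [diagnose, bounce_interface],  # enrich, then remediate
    "high_cpu": [diagnose],                              # enrich only
}

def handle_alert(alert: dict) -> list[str]:
    """Run every action registered for this alert type, in order."""
    return [action(alert) for action in PLAYBOOK.get(alert["type"], [])]

results = handle_alert({"type": "interface_flapping", "device": "S3", "interface": "Gi0/1"})
print(results)  # ['diagnosis for S3/Gi0/1', 'bounced Gi0/1 on S3']
```

Unknown alert types fall through to an empty action list, which is the safe default: nothing fires unless someone has explicitly registered a trigger for it.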
John (50:00):
Yeah, so it's funny you mention that, because that's what William and I are trying to achieve: Selector's agent will monitor and then say, oh look, there's a problem with this interface. Call the Itential agent for remediation, right? And then call my communications agent to send the email and update people's calendars and send Slack notifications.
William (50:20):
Talking about a feedback loop too, not just doing it once and saying, oh, it tried. These are like two PhDs, one working within each company, working back and forth to troubleshoot: try this step and then that step, take the outputs again, troubleshoot. And at some point maybe there is human
(50:40):
intervention that's required for certain things, but this is ones and zeros for the most part. Interfaces, yada, yada, yada. Yeah, super exciting. I think we've got to tie it up, folks.
John (50:53):
I think so too. We could go all day. This should have been maybe a live stream, and people just join and get educated real quick. So I want to thank you both. It takes a lot of courage to expose new things, and I know that some of this is wacky and really far out there, but I appreciate both of your insight. I thought of you almost immediately.
(51:13):
I wanted to have this conversation with both of you. So thank you again.
Eyvonne (51:15):
It is, thank you. It's much less wacky and far out there every day that goes by. So, yeah, we're thrilled to be able to have you on and talk about the cutting edge of what you're exploring, regarding all you're doing out there. Like, your social media feed is incredible.
William (51:34):
You're doing amazing things, just showing what is possible and showing how to do what is possible. For this new and bleeding-edge stuff, you couldn't be a more exciting face to see out there killing this stuff.
John (51:53):
So, thank you. I look forward to being back, and, just, you know, probably in weeks we'll need to have another conversation, right?
William (51:58):
Yes, for sure. All right, thank you all. I mean, everybody knows who John is, I think, but I'll have all the social medias and all the links we talked about in the show notes. So if you want to get started with this stuff, like I just did, very quickly, I might add: jump in, get going. You know, don't wait and wait and wait and wait until it's already too late. Now is sort of the ground floor, you know, get on the elevator.
(52:19):
And, you know, let's let that rising tide raise everybody up.