Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Jerod (00:04):
Welcome to Practical AI, the podcast that makes artificial intelligence practical, productive, and accessible to all. If you like this show, you will love The Changelog. It's news on Mondays, deep technical interviews on Wednesdays, and on Fridays, an awesome talk show for your weekend enjoyment. Find us by searching for The Changelog
(00:24):
wherever you get your podcasts. Thanks to our partners at fly.io. Launch your AI apps in five minutes or less. Learn how at fly.io.
Daniel (00:44):
Welcome to a fully connected episode of the Practical AI podcast. In these episodes, where it's just Chris and me without a guest, we try to keep you up to date with a lot of different things that are happening in the AI industry, and maybe share some tips and tricks or resources that will help you
(01:04):
level up your machine learning and AI game. I'm Daniel Whitenack. I am CEO of Prediction Guard, and I'm joined as always by my cohost, Chris Benson, who is a principal AI research engineer at Lockheed Martin. How are you doing, Chris?
Chris (01:20):
I'm doing really good
today, Daniel. We got some
interesting stuff to talk about.
Daniel (01:23):
Yeah. Yeah. You know, it's been, as always, an interesting season, you know, new model releases, new tooling, new frameworks. Of course, it does seem like 2025 is set to be the year of agentic AI, and that's what a lot of people are talking
(01:45):
about.
Chris (01:45):
Indeed.
Daniel (01:46):
And, you know, of course, it keeps coming up for us. Is agentic AI impacting your world in any way, shape, or form?
Chris (01:56):
Without giving away any of the stuff I'm not allowed to talk about, yes, it is most...
Daniel (02:04):
Okay, okay. So yeah, I would say in a lot of ways, I see a bit of a pattern developing with our customers, where it's kind of like they've done the RAG thing, so like a knowledge-based chatbot. They've done maybe a structured data interaction,
(02:28):
maybe like text-to-SQL, something like that. Maybe they've created an automation like, hey, I'm gonna drop this file in here, and these few things will happen, some of which are driven by an LLM, then something will pop out the other end, or I'll email this person. So they kind of start developing those
(02:50):
individual assistants. And then I see them start to kind of have this light bulb moment of the layer on top of those individual assistants or tools, which I think we could generally call that agentic layer, which is now saying, well, I can interact with unstructured data. I can interact with structured data. I
(03:11):
can interact with these automations. Maybe I can interact with other systems via API. How do I start tying those things together in interesting workflows and in various ways? That's sort of what I'm seeing. I don't know if you've seen that pattern as well.
Chris (03:30):
I have, and I just wanna point out that you and I have been talking about all of these things coming together for a while. Before "agentic" came out, we weren't using the term, because that's just the term that ended up sticking. But we kind of went through a lot of the generative thing, and we were kind of saying, okay, the next step is for a lot of these
(03:51):
different architectures to tie back in. And we're definitely seeing that now. And it has a name.
Daniel (03:57):
Yeah, and it has a name
and actually it has a protocol.
Chris (04:01):
It has a protocol, yeah,
nice segue.
Daniel (04:04):
Or anyway, it has a developing protocol, which is definitely something I think we wanna dig into today. Now that we don't have a guest, we can take a step back a minute and just dig in a little bit to Model Context Protocol. So where did you first see this pop up, Chris?
Chris (04:26):
Well, when Anthropic did the blog post, you started seeing it all over the place pretty quickly. So it was only hours after the blog post, and I'm sure it was the same for you. And then all of the follow-up posts and articles have come out about it and everything. But yeah, it
(04:46):
made a splash.
Daniel (04:48):
Yeah. So, if I'm looking at the announcement date right, this was technically last year, November 25 of 2024. Anthropic released an announcement introducing Model Context Protocol. And of course
(05:08):
they wrote a blog post about it. It came from Anthropic, but then also linked to an open project, Model Context Protocol, which is just modelcontextprotocol at GitHub. There's a website, modelcontextprotocol.io, which talks about the specification and all of that. There is this
(05:30):
origin with Anthropic, but I think from the beginning, Anthropic hoped that this would be a widely adopted protocol.
Maybe we should talk about the need for this first. We've talked a little bit about tool calling on the show before,
(05:51):
Chris. I don't know if you remember some of those discussions, but there's very often this cringe moment, and I think we even talked about this, of people talking about an AI model renting you a car or something like that and interacting with kayak.com. Well, the AI model does not do this. Something else happens
(06:15):
under the hood of that, which I think has for some time been called tool calling or function calling. Essentially, the steps would be: you would give the LLM context for maybe the schema of the kayak.com API. You would
(06:36):
then ask a query and have the LLM generate the appropriate, maybe JSON, body to call the kayak.com API. And then you would just use regular, good old-fashioned code, Python requests or what have you, to actually execute the API call to the
(06:56):
external tool and get a response. Does that flow sound right? Am I misspeaking on anything?
Chris (07:05):
No, that's my understanding. There's a fair amount of custom code that people would put in there to glue those different aspects together. And it varied widely among organizations.
Daniel (07:19):
Yeah, yeah. So that's kind of the, I guess, stress point or the need that came about: everyone saw, okay, maybe it's better if we plug AI models into external tools rather than having the AI model do everything. There are
(07:41):
certain things that AI models don't do very well. When I say AI models, I'm kind of defaulting to GenAI models, and let's just zoom in on language models, large language models. They're not going to be your best friend in terms of doing a time series forecast, for example, maybe with a lot of
(08:01):
data. But there's tools that do that. So what if I just leverage those tools via API, and I could ask my agent, quote unquote, to give me a time series forecast of X. There's a tool under the hood that interacts with the database, pulls the data. Then there's a tool that makes the time series forecast, and boom,
(08:22):
you tie these two together. It looks like your AI system is doing the forecast, but really you're calling these external tools under the hood. The thing is, everybody has different tools. Everybody has different databases. Everybody has different APIs. Every product has different APIs. Everybody has different function
(08:43):
code.
And so it's similar, I think, to the early days of the web, when everybody was sort of posting things and creating their own directories of content on the web, their own formats of things. There was no protocol necessarily that everyone followed in terms of their usage of the internet or the web. But
(09:08):
then there were protocols that were developed, right? Like HTTP. And now it's common practice. Like, I have a web browser, right? And I can go to any website, and I expect certain things to be served back to me from the web server that gives me the website. And those things should be in specific formats for my browser to interpret them
(09:31):
and for me to get the information. And so there's a protocol or a standard in place for each of those web servers. So when I visit Netflix or whatever, it's sending back similarly structured or formatted or configured data to me as if I go to Amazon.com and I'm searching for products.
(09:55):
This was not the case until recently with these tool calls. Everybody was on their own to integrate tools into their AI models using whatever custom code, whatever custom prompts, whatever custom stuff. Which means, Chris, if you have created a tool calling agent, and I now want to use some
(10:20):
of the tools that you've developed, and I have my own tool calling agent, I may have to modify my agent to use your tools, or you may have to modify your tools to be compatible with my agentic framework. And that's kind of the situation that we've been in for some time, which I guess is painful, reasonably painful.
Chris (10:44):
It is. We've talked about this general idea a number of times across a number of episodes. And in my mind, it kinda comes back to that notion of the AI field maturing over time. And we've talked a lot about the fact that AI is not a standalone thing. It's in a software ecosystem of various
(11:06):
capabilities, many of which become standardized. And I think this is yet another step of this field that we have been in, which is, you know, racing forward, maturing naturally in the way that it needs to go. You know, now we have a protocol that operates as that standardized glue that can be adopted, and everyone knows what
(11:29):
to expect. Just a great analogy with HTTP, as you pointed out: you have a standard format, standard serialization, and you can plug right into it. So, yeah, this was good news from my standpoint.
Daniel (11:42):
Yeah. And I think I had a ton of questions about this, because it certainly impacts the world that I'm working in directly. Indeed. And it brings up all sorts of questions. So, you know, questions like, well, how do you build an MCP server? Who's creating MCP servers? What kind of model can interact with an MCP server? What, you know, payloads go back and forth? And
(12:06):
so it may be worth just kind of digging into a couple of those things. So first off, it may be good to dig in a little bit to how an MCP system operates. And then that may make sense of some of the other things that we talk about, in terms of models that can interact with MCP
(12:28):
servers. So there's a series of great posts, which we'll link in the show notes. I encourage all of you to take a look at those posts. We'll of course post the main blog post from Anthropic, but also the protocol website, and a few of these blog posts as well. So one of the ones that I found that I really liked was from Philipp Schmid, "Model Context Protocol (MCP): An
(12:51):
Overview." And this one is really useful. And I think it's useful because it helps you form a mental model for the various components and how they interact in one of these systems. So you might be in your car listening to this. I'll kind of talk through these main components, because you might not be looking at the article. But in a system that's using MCP, there are
(13:16):
hosts, there are clients, and then there are servers.
So the host would be the end user application. So let's say this is my code editor, and my code editor under the hood somehow is going to use MCP stuff. I'm going to be coding, I'm going to be vibe coding, right? And ask
(13:38):
for various things, and it's
Chris (13:39):
going to
Daniel (13:39):
interact in the IDE somehow and cause things to happen. So that's the host, the sort of end user application.
There's a client, which lives inside the host application, which is an MCP client, meaning this client might be a library,
(14:02):
for example, that knows how to do MCP things. An analogy would be: in Python, I can import the requests package, which knows how to execute HTTP calls back and forth to web servers. Think about the client, maybe an MCP client, as a similar thing that
(14:23):
lives within your application and knows how to do this back and forth with MCP servers instead of web servers. And then the servers are those external programs or tools or resources that you reach out to from the client. And those expose, like I say, tools, resources, and prompts over this
(14:47):
sort of standardized API. So it's a client-server architecture. The client lives within the host, which is the end user application. And then there's the server, which you could think of again as this MCP server. Now, we should talk a little bit about what the MCP server does,
(15:09):
but you could think about the client as invoking tools, making requests for resources, making requests for prompts or prompt formats or templates. The MCP server is exposing those tools, resources, and prompts. Does that make sense, Chris?
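A rough sketch of that client-server exchange: MCP is JSON-RPC based, and `tools/list` and `tools/call` are real method names from the spec, but the toy in-process dispatcher and the weather tool below are illustrative stand-ins for a real server and transport.

```python
import json

# Toy MCP-style server: one registered tool, and a dispatcher that
# answers JSON-RPC requests. A real server would also handle
# initialization, resources, and prompts, and speak over stdio or
# server-sent events rather than a function call.
TOOLS = {
    "get_weather": {
        "description": "Get current weather for a city",
        "inputSchema": {"type": "object",
                        "properties": {"city": {"type": "string"}}},
    },
}

def handle_request(raw: str) -> str:
    req = json.loads(raw)
    if req["method"] == "tools/list":
        result = {"tools": [{"name": n, **meta} for n, meta in TOOLS.items()]}
    elif req["method"] == "tools/call":
        args = req["params"]["arguments"]
        # Hypothetical tool implementation; a real server would call
        # out to an actual weather API here.
        result = {"content": [{"type": "text",
                               "text": f"72F and sunny in {args['city']}"}]}
    else:
        result = {}
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

# The client (living inside the host app) discovers the tools,
# then invokes one:
listing = json.loads(handle_request(
    json.dumps({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})))
call = json.loads(handle_request(json.dumps({
    "jsonrpc": "2.0", "id": 2, "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "Chicago"}}})))
```

The point is the shape of the conversation: the client asks what is available, then invokes by name with structured arguments, and every server answers in the same envelope.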
Chris (15:28):
It does. I mean, it's kind of a new form of middleware in that sense. And I realize that term may not resonate with everybody that's listening, but that's kind of a classical term for the notion of connecting different aspects of services and systems together in a way that tries to
(15:50):
simplify and standardize. And so, you know, it's a different way of putting it, but yeah.
Daniel (15:56):
Yeah, yeah, I think that's a great way to put it. Even Philipp in his blog post has this kind of MCP-in-the-middle as this mediator. So I think that's a good analogy. And we mentioned tools, resources, and prompts. So an MCP server, within the specification of the protocol, can expose tools, resources, or
(16:23):
prompts. So the tools are maybe things like we already talked about with the Kayak API, or calling into a database. They are functions that can perform certain actions, like calling a weather API to get the current weather, or, like I mentioned, you know, booking cars. Or there's MCP servers that will help you perform GitHub actions right on your code base. So
(16:47):
these are the tools or the functions that are exposed. So that's thing one that an MCP server can expose. Thing two would be resources, which you could think of as datasets or data sources that an LLM can access. So, things that you want to expose either as configuration
(17:09):
context or datasets to the application. And then the third would be prompts. And these would be kind of predefined templates that the agent can use to operate in an optimal way. So let's say that the tools in your MCP server are related to
(17:30):
kind of question-and-answer and knowledge discovery. You might have some preconfigured question-and-answer prompts that the LLM could use that you know would be optimized for a certain scope of work or something. You could think about it like that. So these are the tools, resources, and prompts that are exposed in the MCP server. Does
(17:54):
that make sense?
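The three kinds of things a server can expose might be sketched as a toy registry. Real MCP SDKs use a similar decorator style but differ in details; the tool, resource, and prompt implementations here are all hypothetical examples.

```python
# Toy registry showing the three things an MCP server can expose:
# tools (actions), resources (data), and prompts (templates).
tools, resources, prompts = {}, {}, {}

def tool(fn):
    """Register a function that performs an action."""
    tools[fn.__name__] = fn
    return fn

def resource(uri):
    """Register a data source the LLM can read, keyed by URI."""
    def wrap(fn):
        resources[uri] = fn
        return fn
    return wrap

def prompt(fn):
    """Register a predefined prompt template."""
    prompts[fn.__name__] = fn
    return fn

@tool
def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # hypothetical implementation

@resource("schema://sales_db")
def sales_schema() -> str:
    return "orders(id, customer, total, created_at)"

@prompt
def qa_prompt(question: str) -> str:
    return f"Answer concisely using the tools provided.\nQ: {question}"
```

The split Chris mentions next maps onto this directly: tools are invoked by the model, resources are surfaced by the application, and prompts are picked by the user.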
Chris (17:54):
It does. I know in that particular article you pointed out, one of the things that I had really keyed in on, that helped me kind of grok it immediately, was that tools are model-controlled, resources are application-controlled, and prompts are user-controlled. That was easy enough for me to wrap my mind around quickly. So yeah, that's a great explanation from you there.
Daniel (18:16):
Yeah, yeah, definitely.
So then there's kind of a couple
things that are possible in theinteraction between model client
or MCP client and MCP server.One of the things is an
application needs to understandhow to connect and initialize a
(18:39):
connection with an MCP serverand sort of open that
connection. That can happen overstandard input output, meaning
your server might be runninglocally or, you know, as part of
an application, or it may berunning remotely and you could
interact via server sent eventsback and forth to the to the
(19:02):
server. But then you also needto execute a kind of discovery
process.
I was thinking back to the goodold microservices days, Chris,
which you may fondly or notfondly A
Chris (19:17):
bit of both, depending on
what I was doing.
Daniel (19:19):
This made me think of microservices discovery things, where it's like, hey, what services are in my big microservices environment? How do I discover where those are at, and what domain I connect to them on, and those sorts of things? What are they? So this was a whole topic, I guess it maybe still is a
(19:40):
whole topic. But there's this discovery type of mechanism between the MCP server and the MCP client, where you can actually expose kind of a list of tools or a list of prompts or a list of resources. And those are discoverable to the AI application, so it knows what it can do. Can I book a car?
(20:05):
No, I can't, because that's not exposed as part of the MCP server. But maybe I can do GitHub-related stuff, or maybe I can do, you know, database-related stuff, or whatever that is. Alright, Chris. So we've talked a little bit about MCP clients and MCP servers. There's certainly much more that is available to talk about and dig into in the protocol itself.
(20:32):
And, you know, we've scratched a little bit of the surface here. We're not gonna go through the whole protocol on the podcast. Maybe that's a relief to our listeners, but there is a whole protocol there. I think it would be good, though, to talk about two additional things which immediately popped into my mind when I saw Anthropic releasing MCP and talking about
(20:55):
it. Number one, how do I create an MCP server, or where do I get access to MCP servers to tie into my own AI system? And then secondly, well, what if I don't use Anthropic models? Can I use MCP? Those were two immediate questions from my end. I don't
(21:20):
know, Chris. There's various GitHub repos popping up, and also examples of various MCP servers. Have you seen any that are interesting to you?
Chris (21:37):
The one that's most interesting to me, because when I'm not focused on, you know, AI with Python specifically, I'm very focused on edge with Rust, and there's an official Rust SDK for the Model Context Protocol. So that's naturally where I gravitated.
Daniel (21:54):
Yeah. Yeah. And there's Python implementations. I think there's many programming language implementations. There's also sort of example servers that are kind of prebuilt. I've seen various ones, for like Blender, which is a 3D kind of modeling and animation type of thing.
Chris (22:16):
Which is open source.
Daniel (22:17):
Yeah. Exactly. And then Ableton Live, which is a platform that is like a music production platform. There's ones for GitHub, which I already mentioned, and Unity, the game development engine. There's ones that can control your browser, integration with Zapier, all sorts of things. So people have already created
(22:41):
many, many of these MCP servers. And again, when you're creating this MCP server, basically you just have to create something you could think of like a web server that has various routes on it, but these are specific routes that expose specific sorts of things, these tools, resources, and prompts,
(23:05):
over a certain protocol. And there is communication back and forth, for example of JSON over server-sent events. But again, there's a specific protocol that's followed.
Now, you can look through all of the specific details of the protocol if you want to, say, create a Model Context Protocol
(23:27):
server for your tool. And I actually wanted to do this. So we have an internal tool that we use for doing text-to-SQL. It's used very frequently. I often call it the kind of dashboard killer app. It's like, everyone's created tons of dashboards that no one uses in their life. And wouldn't it be better if you could just
(23:50):
connect to your database and ask natural language questions? So we have a whole API around this, and you can add your database schema information and all sorts of fun things and do natural language query. And so there's, I don't know, six or seven different endpoints in this kind of simple little API that does structured data interaction. So I'm like, okay, cool.
(24:12):
We've written that with FastAPI, which is awesome. So it's a web server. It has certain endpoints that allow you to do the SQL generation, or allow you to modify database information, or allow you to do various elements of this operation. And we utilize that as a tool internally via tool calling. So
(24:34):
I thought, well, what would it take for me to convert that into an MCP server that I could plug into an agent? Well, you could kind of do that more or less from scratch, just following the protocol, but people have already started coming up with some really great tooling. So there's a thing
(24:55):
called FastAPI-MCP. If you just search for that, this is a Python framework that essentially works with FastAPI and basically converts your FastAPI web server into an MCP server. It works. From my experience, I just added
(25:16):
a few lines of code to my FastAPI endpoint, wrapped my FastAPI application in this framework, and then ran the application, which is, again, this FastAPI application. That was immediately discoverable as an MCP server, meaning that, if I had an AI system that could interact with MCP servers (and we'll talk
(25:39):
about that bit here in a second), my service, the text-to-SQL system that we use, would now be available to that agent to use as a potential tool that's plugged into a database that we
(26:00):
would connect it to. Does that make sense?
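The core idea behind that conversion can be sketched without the library: derive an MCP-style tool description from an existing endpoint function's signature, so each endpoint becomes a discoverable tool. The real FastAPI-MCP framework wraps a whole FastAPI app and does much more; the `generate_sql` endpoint and the helper below are hypothetical illustrations.

```python
import inspect

def generate_sql(question: str, schema: str) -> str:
    """Generate SQL for a natural-language question."""
    return f"-- SQL for: {question}"  # stand-in for the real endpoint

def endpoint_to_tool(fn) -> dict:
    """Turn an endpoint function into an MCP-style tool description,
    using the function name, docstring, and parameter names.
    (Simplified: real tooling would derive proper JSON Schema types.)"""
    sig = inspect.signature(fn)
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "inputSchema": {
            "type": "object",
            "properties": {name: {"type": "string"}
                           for name in sig.parameters},
            "required": list(sig.parameters),
        },
    }

tool_spec = endpoint_to_tool(generate_sql)
```

Because the web framework already knows each endpoint's name, parameters, and documentation, generating the tool listing is mostly mechanical, which is why wrapping an existing app takes only a few lines.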
Chris (26:03):
It does. That was a good
explanation.
Daniel (26:04):
Yeah. So I'm sure, with this Rust client that you mentioned, I imagine a similar thing is possible there, with a bunch of convenience functions and that sort of thing. I don't know Rust quite as well, but I imagine that's the case.
Chris (26:22):
It is. I love the fact that MCP is rapidly gaining so much language support off the bat. I think you've heard me say this before: one of my pet peeves is kind of the Python-only nature of a lot of AI. At least it starts there. And I think I've said in previous episodes, it's
(26:45):
a maturity thing when you can get to where you're supporting lots of different approaches, to accommodate the diversity that real life tends to throw at us. That's good. And MCP has shot up that curve very, very quickly. So yeah, in the world that I'm in, kind of combining MCP as a protocol that works at the edge as well as in
(27:07):
the data center is a big deal for me.
Daniel (27:10):
Yeah, and it does actually work also kind of single node. I mean, we've talked about client-server, right? But you can run an MCP, quote, server in this sort of embedded way that is discoverable in a desktop application or in a single node application. So I guess what I mean is, if you're using MCP, and
(27:34):
this is security- and authentication-related, it doesn't mean that you need to connect over the public internet (That's right.) to an MCP server. And it doesn't mean that all of that is unauthenticated, or that you can't apply security of any type. What it does mean is, for example, in the example that I gave, I've now
(27:58):
converted our text-to-SQL engine into an MCP server. I can plug a database connection into that, connect to a database. But depending on how I set up the connection to the database, there could be potentially problematic vulnerabilities there. And if I don't have any authentication on my MCP server
(28:22):
and I put that on the public internet, anyone could use it. So there's two levels of kind of security or authentication or threat protection that are relevant here. One is the actual connection-level authentication to the MCP server. And the other is, well, I can still create a tool that's vulnerable, right?
(28:44):
Or has more agency than it should. Yeah.
Chris (28:47):
I think one of the things I love about that callout from you is that you can be operating on that one physical device and tying various systems together. And just like if you take it outside the AI world and you talk about protocols that we commonly use, you mentioned HTTP earlier, protobufs are
(29:09):
really common, and things like that. You may be using all of those other ones that we've been using for years on one device. It doesn't mean that they're by definition many services in many different remote places. It can all be collected there. And it still brings value, because you still have that standardization, and the various vendors, whether they be
(29:29):
commercial or open source, can provide interfaces to that to make it easier. So it becomes a much more pluggable, and yet not tightly integrated, architecture, which is a good thing. And I think MCP really gives us that same capability now in this space. And so, like I said, it really is pushing us up
(29:50):
the maturity level, from "we're all writing custom glue code" to now, hey, I'm gonna standardize on MCP and away we go.
Daniel (30:01):
Yeah. And similarly, I think people can carry over some of their intuitions from working with web servers into this world. Like, you wouldn't necessarily just download some code from GitHub and expect there to be no vulnerabilities in it when you run that server, you know, locally. Same goes
(30:22):
with MCP, right? You would definitely want to know what you're running, you know, what's included, where you're running it, how authentication is set up, etcetera, etcetera. Similarly, if you're connecting to someone else's MCP server (like, Chris, you're running one and I wanna connect to it), depending on the use case that I'm working with, I may very
(30:43):
much want to know: what data does your MCP server have access to? How are you logging, caching, storing information, etcetera, etcetera? You know, is it multitenant? Is it single tenant? Etcetera, etcetera. So you can bring some of those intuitions that you have from working in the world that we all work in, which involves a lot of, you know, client-server interactions,
(31:06):
and bring that into this world. Okay, Chris. We've talked about MCP in general. We've talked about creating MCP servers, or the development of them.
There's one kind of glaring thing here, which is: Anthropic released or announced this Model Context Protocol, and certainly
(31:28):
others have picked up on it. And you see OpenAI also now supporting MCP, where before they had kind of their own version of tool calling in the API. So there's a more general question here, which is, well, I'm using Llama 3.1 or DeepSeek. Can I use
(31:50):
Model Context Protocol? And more generally, as models proliferate, which they are, and people really think about being model agnostic, meaning they're building systems where they wanna switch models in and out, do I have to use Anthropic or now OpenAI to use MCP?
(32:12):
So the answer to this question, at least as far as what we've discovered in our own work, is, as of now, sort of yes and no. But in the future, definitely there will be flexibility for many things. So what I mean by that is: Anthropic has a kind of head
start in a sense, in the sameway that OpenAI has released
certain things like variousagent protocols or tool calling
or stuff. It was something theyreleased, something they had
been working towards and theyhad maybe an advantage
initially. Anthropic obviouslyhas been working towards this.
(32:54):
Their models, their desktopapplication, etcetera, supports
it well. Others are playing alittle bit of catch up and that
would include open models. Ifyou think about something like a
LAMA 3.1 or Quinn 2.5 orwhatever model you're using,
those open models, there'snothing preventing them from
(33:17):
generating model contextprotocol aligned things. Agreed.
But they haven't necessarilybeen trained as part of their
training dataset to generatethose things.
Meaning, you can have an open model that generates what you need for Model Context Protocol interactions, but you're
(33:38):
probably going to have to load the prompt of that open model with many, many examples of Model Context Protocol, and information about it, for it to be able to generate that. Which is totally fine. You can do that. We've done that internally, and I've talked to others who have, and there's blog posts about it, etcetera. In that sense, that's why I say yes and
(33:59):
no. There's nothing preventing you from doing this right now with open models, or models other than Anthropic's.
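That prompt-loading workaround might look something like the sketch below, where in-context examples teach an open model the expected output shape. The format details here are illustrative, not the exact MCP wire format, and the tool names are hypothetical.

```python
# In-context examples that show an open model, which was never
# trained on MCP formats, what a well-formed tool call looks like.
# (Illustrative shapes, not the exact MCP specification.)
MCP_EXAMPLES = """\
When you need a tool, reply with ONLY a JSON object like:
{"method": "tools/call", "params": {"name": "...", "arguments": {...}}}

Example:
User: What's the weather in Oslo?
Assistant: {"method": "tools/call", "params": {"name": "get_weather",
            "arguments": {"city": "Oslo"}}}
"""

def build_prompt(available_tools: list[str], user_query: str) -> str:
    """Load the context window with MCP-shaped examples plus the
    discovered tool list, then append the user's query."""
    return (
        MCP_EXAMPLES
        + "Available tools: " + ", ".join(available_tools) + "\n"
        + "User: " + user_query + "\nAssistant:"
    )

prompt = build_prompt(["get_weather", "generate_sql"], "Forecast sales for Q3")
```

In practice you would include many more examples than one, and validate the model's output against the expected JSON shape before forwarding it to a server.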
You might just have to kind of load that context window with many, many examples that are MCP-related and aligned, for you to generate consistent output for MCP servers. But what will
(34:20):
happen is similar to what happened with tool calling, so if you remember, you know, when tool calling was released. The progression, I kind of see it this way, and there've been a lot of cases of this.
People found out that models generally can follow instructions. And so at a certain point, people developed
(34:43):
prompt formats like Alpaca, ChatML, etcetera, that had a generalized form of instruction following. And those generally got more standardized, and now many training sets (well, not all training sets, but many training sets for kind of the main families of models, like Llama and others) include
(35:06):
instruction-following examples. Then people started doing tool calling. And then people started developing tool-calling-specific examples to include in the datasets that they're using for models, including tool calling formats, which are in Hermes and other
(35:26):
datasets now. And so now many models do have tool calling examples in their training datasets. Now we're going to have the exact same progression with MCP. People can do MCP right now with open models, if they perform in a certain way. It will become more efficient, though, as MCP examples are included in training
(35:47):
datasets for open and other closed models moving forward. So it's kind of a now-and-not-yet situation.
Chris (35:56):
Yeah, I agree. I mean, at the end of the day, different organizations will go both ways. Some are just going to say, let's adopt MCP outright. Others, like the OpenAIs, that tier of providers, some of them will open source their own approaches to try to compete.
(36:19):
And the marketplace will, you know, people will try it out. And based on, you know, things like providing examples that make it easy, there'll be a certain amount of all these things competing, and probably something will kinda shake out as more popular than the others in the long run, because this is what we see over and over in software. And there'll also be a point where, for any that are genuine contenders,
(36:42):
you'll have servers that support both MCP and all those top contenders, with examples of each, until it becomes clear kind of where the world is going to go. So I think, yeah, I think Anthropic was smart to do this, and they got a leg up. And they put out a high-quality protocol with a lot of great
(37:02):
examples and SDKs right off the bat. And that was a smart thing to do, to try to kind of win the marketplace very early in the game.
So it'll be interesting to see how that plays out. But I think the key point that I'm trying to make, and that you're making clearly, is that the world has changed in that way, in a small way, in terms of, you know, everyone's gonna now have to
(37:23):
level up into having this kind of AI-specific middleware that ties the model into all the resourcing and tooling and prompting that it needs. So I'm very happy to see it come into place, and we'll see some shakeout in the months to come.
Daniel (37:40):
Yeah. Yeah. Well, I definitely am interested to see how things develop. There are certainly toolkits of all kinds that are developing, and maybe I can share a couple of those. And, Chris, you could share the Rust one, and I think you had another Rust resource that you wanted to share. But the ones that I was really using, from the
(38:03):
Python world, if people want to explore those and look at those a little bit more: if you're a FastAPI user, then I would definitely recommend you look at fastapi-mcp.
Chris (38:16):
Yep.
Daniel (38:17):
That's the framework
that I use. You can I imported,
or inserted three lines of codeinto FastAPI app and was up and
running? Now you may wanna kindamodify a few more things than
that eventually, but that willget you up and running. The the
other thing that was helpful forme is there is actually, an MCP
(38:40):
inspector application. So one ofthe things like, for example, in
FastAPI I like is you can spinup your application and you
immediately have APIdocumentation that's in Swagger
format.
You can go and look at that.Well, the MCP inspector can help
you check if you're connect toyour MCP server, validate which
(39:02):
tools are listed, executeexample interactions, see what's
successful, see what's returnedfrom the MCP server, all of
those sorts of things. So veryuseful little tool that is
actually also linked in thefastapi MCP documentation as
well. And, Chris, you you hadmentioned a a Rust client. I'm
(39:23):
sure there's a lot of other onesthat are out there.
I am intrigued kind ofgenerally, you you you've been
exploring this Rust world quitea bit. Would love to hear any
resources that you've beenexploring there. People might be
interested.
Chris (39:38):
Yeah, there's one that I'll mention. It's separate from MCP, but it's one that I think is very interesting for inference at the edge in particular. It's hosted at Hugging Face. It's called Candle, as in, I think, like a candlestick. And you can find it.
And it advertises itself as a minimal ML framework for
(39:59):
Rust, but it's really caught my attention, because I'm often, you know, advocating for edge contexts and edge, you know, use cases, where we're getting AI out of the data center strictly and doing interesting things out there in the world that may be agentic, may be physical as we go forward. Candle is an
(40:21):
interesting thing. And if we're lucky, we might have an episode at some point in the future where we can kinda dive into that in some detail. But if edge and high-performance, minimalist things are interesting to you in this context, go check out Candle at Hugging Face.
Daniel (40:39):
Yeah. Yeah. Encourage people to do that. All the crustaceans out there, isn't that the
Chris (40:47):
Rustaceans. Rustaceans. That's right. It's a crustacean theme though, you're right.
Daniel (40:53):
Definitely. Okay, cool. We'll definitely check that out. As I mentioned, we'll share some links in our show notes to all of the blog posts we've been talking about, the MCP protocol, and the Python and Rust tooling. So check that out. Try it out. Start making and creating your own MCP
(41:15):
servers, and let us know on, you know, LinkedIn or X or wherever, what cool MCP stuff you start building. And we'll see you next time. Great talking, Chris.
Chris (41:26):
Good talking to you. See
you next time, Daniel.
Jerod (41:35):
Alright. That is our show for this week. If you haven't checked out our Changelog newsletter, head to changelog.com/news. There you'll find 29 reasons. Yes, 29 reasons why you should subscribe. I'll tell you reason number 17: you might actually start looking forward to Mondays.
Chris (41:56):
Sounds like somebody's
got a case of the Mondays.
Jerod (41:59):
28 more reasons are waiting for you at changelog.com/news. Thanks again to our partners at fly.io, to Breakmaster Cylinder for the beats, and to you for listening. That is all for now, but we'll talk to you again next time.