Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:07):
Warren.
Speaker 2 (00:08):
I'm pretty excited for today's episode. You know why that is?
Because I have a lot of questions about this topic.
Like, I've heard the phrase MCP so much,
and I just have many questions.
Speaker 3 (00:22):
I mean, at some point you're going to have heard
it too much. And with that, maybe I'll drop a
little fact for the episode. There was a little research
done not too long ago about the adverse impacts of
mentioning artificial intelligence in product and service names, and they
found that it actually significantly decreases consumer trust. Really, I
(00:42):
think that tracks.
Speaker 2 (00:43):
Yeah, it seems reasonable.
Speaker 3 (00:46):
Like, you go to Starbucks and you get some coffee,
and they're like, Now with AI included! Are you, like,
are you gonna be happy about that?
Speaker 2 (00:53):
Absolutely! Can I get my double vanilla latte with AI
on a blockchain?
Speaker 3 (01:00):
It'll come with MCP on the side.
Speaker 2 (01:02):
Okay.
Speaker 1 (01:03):
Cool.
Speaker 2 (01:04):
So speaking of which, Gil, you're here to talk to
us about MCP.
Speaker 4 (01:08):
Yeah, I'm very excited too, and I am one of
those people who's actually heard MCP way too many times,
so happy to do that.
Speaker 2 (01:14):
All right. Before we jump into that, give me a
little bit about your background.
Speaker 4 (01:21):
Absolutely. So, as you mentioned, I'm Gil. I am the
co-founder and CTO of Merge, and Merge is a
platform that offers unified APIs to help companies offer integrations
with a ton of different products in any specific vertical,
from ticketing, CRM, file storage, and so on, and a
lot more coming in the AI space, MCP, all of
(01:41):
that as we'll talk about today.
Speaker 1 (01:44):
But before that, I
Speaker 4 (01:45):
went to college in New York and then ended up
going straight into tech. So I worked at LinkedIn out
in San Francisco, and then worked at a couple smaller startups,
which ultimately led me to this problem of integrations, seeing
how it's just bogging down the space, so I decided to
Speaker 1 (02:00):
start Merge to tackle that problem. And it's been great.
Speaker 4 (02:02):
It's been about five years now, four years since coming
out of Stealth. We've gone from zero to one hundred and
ten employees. We have almost fifteen thousand free and paying
customers all around the world. We have our three offices
in San Francisco, New York, and Berlin.
Speaker 2 (02:15):
Right on, right on. So, with four years, that's
a pretty solid record for a startup. You're feeling pretty
confident about this?
Speaker 1 (02:26):
Yeah.
Speaker 4 (02:26):
Absolutely. I think we're seeing that this is a
bigger problem than we had ever envisioned. Everyone needs integrations.
The problem's only getting worse now with AI: you have
these models that have essentially ingested the full public corpus
of Internet data, and all that's left is private data.
Speaker 1 (02:41):
And that's what Merge does.
Speaker 4 (02:42):
We specifically help companies get access to their customers' data.
So we're excited about the problem now, but also where
it's going, both in the traditional API and integration building
space as well as all of the upcoming AI and
MCP driven integrations.
Speaker 2 (02:58):
So, like, is the concept there that instead of having to
go and figure out the API docs for the fifteen
different services I need to integrate with, I just connect
with Merge, and you're like the relay to those services
for me, and I just have to talk to one service?
Speaker 1 (03:16):
That's exactly right.
Speaker 4 (03:17):
So an example here: Brex and Ramp, which are corporate cards.
They're customers of Merge, and they use us to power
a few use cases. But notably, they want to automatically
onboard employees of companies that use their credit cards, automatically
mail them a credit card to their home address based
on their title, maybe give them twenty dollars a day
for lunch if they're an engineer, for example, and then
(03:39):
they want to terminate those cards when a person leaves the company.
And manually managing all of that is impossible, and so
instead they want to integrate with the HR systems to
be able to pull in all this data. But some
of their customers use BambooHR, Gusto, Namely, Workday,
SAP, and they'd have to build all of those integrations.
So instead they integrate once with us, and we integrate
and sort of normalize all that data to one format that
Speaker 1 (04:01):
They integrate with.
Speaker 4 (04:02):
And then again, we do that for HR, but we
also do that for a lot of other platforms like ticketing, CRM,
file storage, and so on.
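The normalization Gil describes might look like this sketch; the provider payload fields and the unified shape here are hypothetical for illustration, not Merge's actual schema:

```python
# Hypothetical sketch of a unified-API normalization layer: each HR
# provider returns employee data in its own shape, and every payload
# is mapped into one common format the client integrates against once.

def normalize_bamboohr(payload):
    return {"name": payload["displayName"], "title": payload["jobTitle"]}

def normalize_gusto(payload):
    return {"name": payload["full_name"], "title": payload["role"]}

NORMALIZERS = {"bamboohr": normalize_bamboohr, "gusto": normalize_gusto}

def unified_employee(provider, payload):
    """Return the provider-specific payload in the one unified format."""
    return NORMALIZERS[provider](payload)
```

The client then only ever sees the unified shape, regardless of which HR system the customer happens to use.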
Speaker 2 (04:08):
Right on, that's cool, that's cool. So let's jump into MCP.
Give me, like, the layman's version of
what MCP is.
Speaker 4 (04:19):
Yeah, so there's a lot of ways to think about MCP,
but I think the important note here is that it's
actually a really simple concept. It's a standard that was
reached, similar to other protocols and standards of the Internet
that ultimately aren't so complicated but solve a need, which
was that there was a major lack of standardization. And
so when we think about the history of APIs, building
(04:41):
an integration required you to go to API documentation and
explicitly say, I'm going to take data from this point
and I'm going to move it to this point. There are a
lot of complications there, but overall that's what you're doing. Now,
in the agentic era, you want to expose those API
calls essentially to an agent so that you can actually
give it arms, as opposed to them being these sort
of, you know, things that say, Hey, here's how you
(05:03):
solve this: go log into your Salesforce account, click this button,
click that. Instead, you let the agents actually take those actions
for you. And so MCP is essentially a way to
make it so that those API calls, or actions, or
tool calls as you would say, are available and
exposed to the agents in a way that they can
easily understand, so they can then formulate a workflow
(05:24):
knowing what tools via MCP are available to them and
then make those calls.
Speaker 3 (05:29):
What would stop an agent from being able to integrate
with the existing API docs, or one of the standards
like the OpenAPI Specification, or whatever AWS is using with
Smithy? Just consume that and be able to generate the
appropriate calls into the APIs?
Speaker 1 (05:45):
So there's a
Speaker 4 (05:46):
Few things that are that are differentiators, but overall, I
will say, at a high level, I'm with you. I
think it's it's not even a hot take anymore to
say that MCP doesn't actually do all that much. You
have API documentation that's built for humans to build integrations.
Speaker 1 (05:59):
You have OpenAPI specs, which
Speaker 4 (06:00):
Were kind of the next evolution of hey, some static
script can understand my API and build docs or SDKs
out of it. And then finally you have MCP, which
is just another type of rapper that allows agents to interact.
I think the difference is MCP is not stateless like
documentation or like you know, like an open API spect
but instead it's an actual running server that stores credentials,
(06:22):
that manages sessions and can be stateful. So it's it's
not ultimately adding a ton more, but it does unlock
a few additional abilities that are necessary for an agent
to take actions.
Speaker 3 (06:34):
That's, I feel, a little scary. Like, I've
designed the perfect service that is stateless in every way;
you know, I've thought a lot about what the
endpoints should be. And in order for an agent to
work with it, we're saying you need to actually forget
everything that you've built so far, make a different doc
that is readable, and also start storing state and do
a lot of extra stuff that you specifically didn't want
(06:55):
in your service to begin with.
Speaker 4 (06:58):
Yeah, it's true. Though when you think about, like,
a statically coded app, let's say that you want to
update all records, all ticketing system tickets that have Gil
in the title: you have to pull in all those tickets,
then you have to iterate through them and modify them,
and then you need to write them back. So there's
this notion that state needs to be maintained between calls.
Speaker 1 (07:17):
And an agent can do that.
Speaker 4 (07:19):
But you can also write tools on the server that actually,
you know, manage a lot of that. So what you're
doing is you're deciding how much complexity you want to
expose to the agent and how much you want to
wrap behind a statically coded tool that the agent can
then call.
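That trade-off, how much looping the agent does versus what gets wrapped behind one statically coded tool, can be sketched like this; the in-memory ticket store and field names are made up for illustration:

```python
# Sketch: instead of exposing fetch_ticket / update_ticket and making
# the agent hold state between calls, one tool wraps the whole loop.

TICKETS = [
    {"id": 1, "title": "Gil: fix login", "status": "open"},
    {"id": 2, "title": "Update docs", "status": "open"},
]

def close_tickets_matching(keyword):
    """Statically coded tool: pull, filter, and write back in one call."""
    closed = []
    for ticket in TICKETS:
        if keyword.lower() in ticket["title"].lower():
            ticket["status"] = "closed"
            closed.append(ticket["id"])
    return closed  # the agent only ever sees the final result
```

Exposing this as a single tool means the agent never has to carry the intermediate ticket list in its context at all.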
Speaker 3 (07:33):
So normally, if your API was fetch ticket and then
update ticket as two independent API calls, and that's all
you offered, you were like, we don't offer bulk support,
and the client would be responsible for actually pulling each
one of those and then updating them individually. And if
we're saying that you're building a proxy in between there,
(07:56):
it's because the API that you're offering to
end users wasn't valuable enough; it wasn't actually solving
the needs that they would frequently have. I don't think
bulk actions are very uncommon, so I think in that
way this is a little interesting, saying, like, well,
now you need to actually start thinking about what
value your service is actually offering, because if people
want to do these bulk actions, you may think about
wanting to provide that, and this is where you could
(08:16):
actually provide this logic in an agent-based system.
Speaker 4 (08:21):
I mean, absolutely, and I'd even say this about MCP servers:
they're not unlocking anything that new. They are fully limited
by the capabilities of the underlying API. Notably, this is
MCP for API interactions; you can also use MCP to wrap
SQL calls to a local database, that sort of thing.
But yeah, one of the big problems you see
in the integration space, and have seen for years, is
(08:42):
that you often have to pull full data sets. And
the reason you have to pull full data sets is
that you don't have good search endpoints in these APIs.
So I would say most inefficient integrations
are built by bad APIs, not necessarily bad consumption patterns
or anything else.
Speaker 3 (08:57):
Oh, I thought you were going to say bad engineers.
Speaker 4 (09:01):
I mean, yeah, hopefully that one's not going to be
true. The engineering skill level won't matter as much anymore. We're
working on a new product right now, and I can
tell you from using AI to build it, the skill
gap is closing really, really fast.
Speaker 1 (09:14):
It is so good now. Interesting.
Speaker 2 (09:16):
So you think vibe coding is leveling up?
Speaker 1 (09:22):
Oh yeah, Oh yeah, it's scary to say.
Speaker 4 (09:24):
Look, I'm an engineer who's been coding for
almost, actually over, twenty years now.
Speaker 1 (09:28):
I love it.
Speaker 4 (09:29):
I'm so passionate about building, and it's scary to see
it just writing code that you would have written. But
I also think it's really leveling me up too. We
vibe coded: me, a product manager, and a software
engineer are building this new product. Our product manager, who
has never written a line of code in his life,
built our entire front end for it. We imported that
into GitHub, then we took over with Windsurf and started,
(09:50):
you know, actually doing a little bit more
guided AI coding on top of it.
Speaker 1 (09:56):
But it worked and it was great, and then we
just connected that straight to the back end right on.
Speaker 2 (10:00):
Yeah, I think that's been my experience. Like it's really
easy to get started with AI, but then I think
after like those initial few steps, I think it's still
important to have the technical skills. And I treat AI,
and the way I try to get people to treat
AI is think of it as if you had your
(10:22):
own intern or your own junior engineer, you know, like,
don't treat the AI as a principal architect. Treat it
as an intern, and give it very small scoped tasks
that you can check up on, because it's it does
sometimes get things wrong and just like your intern would,
so you've got to give it a task that you
(10:45):
can follow up and make sure that it's continuing to
build towards the same end goal that you are. And
I think that's where a lot of projects get off
the rails: they just give AI, like, this vague
task without the guardrails to keep it from wandering off
and hallucinating.
Speaker 3 (11:02):
I want to probe you on that one, definitely, because
I see a lot of companies are not hiring interns
and have no idea what to do with interns,
and yet they're going out and hiring LLMs out there
in the world to interact with, and I don't have
the faith that they are capable of understanding how to
(11:22):
provide that additional context. So are you optimistic about where
the quality of software in the world is going?
Speaker 2 (11:29):
I think so, yeah, because like on a large enough
time frame, like this is going to work.
Speaker 1 (11:35):
Itself out, you know.
Speaker 3 (11:37):
Yeah, the death of the universe is right around the corner.
Speaker 2 (11:39):
Yeah, absolutely. Like, in the big scale of things, it
doesn't really matter. But no, specifically to your question
about people, like engineers, not having the skills to guide
an intern: I would agree with that. I would include
myself in that bucket; it's something I've had
to learn and improve on. And I think that's where,
(12:05):
I think, that's the skill gap that AI doesn't cover.
So AI can write the technical code, but as an engineer,
your value add in that equation is maintaining the big
picture and breaking that down into concrete, isolated tasks that
can be distributed to junior engineers, interns, AIs, senior engineers,
(12:31):
whatever is on your team.
Speaker 1 (12:34):
I totally agree with that.
Speaker 4 (12:34):
And I think we see this a lot. We see
a fear of adopting AI from a lot of engineers,
and it's not necessarily a fear of, you know, oh,
it's going to take my job. I think it's
a fear of it building bad software, or of it
changing the way that someone is used to building in general.
And I think the way that I explain this to
my team is: yeah, you're frustrated because it
generated bad code. Number one, that's time for introspection.
Speaker 1 (12:57):
What did I do wrong? How do I prompt better? Yeah?
Speaker 4 (13:00):
But also, you know, I think that a lot of
people see this as, Okay, it spat out really
terrible code, I'm going to waste so much time cleaning
it up. But what they're not thinking about is: okay,
what if that's eight hours building it by hand? Or I
can spend thirty minutes prompting it to create something,
and then thirty minutes to an hour cleaning up the code,
and then I've spent, you know, a fourth of the
time. Just maybe not how I'm used to building.
Speaker 3 (13:23):
Yeah.
Speaker 2 (13:23):
One of the big takeaways I've had from working with
AI has been applying that to the rest of my life.
Like I give AI a bad prompt, it writes bad code,
and I spend a bunch of time cleaning it up
and then thinking about what should I have said differently?
And then I started looking at like other conversations in
(13:44):
my life, and when I'm talking to humans now, I
use that same process, and I realize, like, a lot
of the pain and suffering I've had in my life
is because I gave another human a bad prompt.
Speaker 3 (13:55):
Yeah, wow, you have relationships built on lies, right? It's
like, I need to go back to the first conversation
I ever had with this person and maybe change what
I said to them, because that set me
up for success or failure.
Speaker 1 (14:12):
Right.
Speaker 2 (14:13):
Yeah, It's like this whole chain of events that happened
in my life could have been corrected had I given
that person the right prompt to begin with.
Speaker 4 (14:24):
You very well may have just changed my dating life forever.
It wasn't an expected outcome of this call, but here we are.
Speaker 3 (14:30):
It just becomes, like, you just sic one agentic agent
on some dating service, and it will talk to another
LLM out there, and then they'll decide collectively whether or
not to start your relationship.
Speaker 4 (14:43):
Yeah, I mean, once Tinder releases their
official MCP server.
Speaker 1 (14:46):
We're going. I'm in.
Speaker 3 (14:48):
I mean, I assume people are doing this. They're just
scraping the app and, you know, uploading data anyway. Or
it's a human in the loop still, like it's
still telling you what to type, you're just
doing the typing. Like, I don't know if automating
it is going to... maybe that's what we're missing.
We're missing the automation so people can get on more
dates faster.
Speaker 1 (15:07):
I've been seeing the TikToks, because I guess
the dating apps are going after people who are automating,
Speaker 4 (15:12):
of some people who set up an actual phone in
a room with a little rubber hand on
Speaker 1 (15:15):
a screwdriver, or on a drill, that's just swiping.
Speaker 3 (15:20):
I mean, it's a missed market for them.
Like, if all of your users are doing something, we
know from some great books out there, like
Platform Revolution, that rather than trying to
prevent that behavior and punish people for it, you should
realize that is where the value is being added. And I
think the swiping left or right is the action which
makes people feel invested in the action they're taking, so
(15:41):
that they're more likely to continue. So, you know, give
them that capability, but give them what they want, which
is maybe a multi-select option or, you know,
something like that. And to take it to the next level,
rather than banning those people, let them have that functionality,
like, once a day, and then, you know, upcharge
for the, you know, next one hundred matches.
Speaker 1 (16:01):
Yeah, the AI matchmaker, Yeah right.
Speaker 2 (16:03):
Exactly. Yeah, right, just be open with it. Like, say, Okay,
I see what you're up to here, and
I'm gonna make it easy for you. You just gotta
upgrade your service level here, sign up for the next
plan, and I got you, dude.
Speaker 1 (16:17):
Mm hm cool.
Speaker 2 (16:18):
So back to MCP. It sounds like it's
very agent focused. Is that an accurate statement?
Speaker 1 (16:27):
Yeah?
Speaker 4 (16:27):
So people still ask us this a lot, you know:
when do we want to use a traditional API
versus MCP? And I think that the way to think
about it is that a lot of what we've built is
still the best possible
Speaker 1 (16:38):
UI for something.
Speaker 4 (16:39):
So if I want to cancel an order, is it
easier for me to go to Amazon, go to my
orders, and click cancel? Or to open up a chatbot
and say, Hi, I'd like to cancel the order for this, and it's like, Okay,
are you referring to this order? and I'm like, Yes.
It's just not the best interface for canceling,
and so for that there's a button. And to have
a button then prompt an agent and say, Hey, the
customer would like to cancel this order, and it then has
(17:00):
to decide on a tool, and then it has to
make sure it's calling the right tool and make that call.
One it's just inefficient and really slow. But two, you're
going from a world of a very deterministic action I
click a button that runs this code that cancels this
order to we think that the agent should be able
to figure out the right tool to call, but it
might not get it right every time.
Speaker 1 (17:20):
And again we're spending a lot of money on having
the agent decide what to actually do.
Speaker 4 (17:25):
And so I think in a lot of cases, classic
APIs are still going to be the really valuable ones.
MCP is for agentic interfaces; it's for bots, it's for communications.
It could be a customer-facing bot, it could be an
internal one. Sorry, by bot I'm referring to an agent:
it could be a customer-facing agent, it could be an internal agent,
(17:45):
or it could be some form of non-exposed agent
that's actually just taking actions.
Speaker 3 (17:50):
I really like this take. I think it's really interesting
and I want to repeat this. Basically, if the thing
that could bring you value in your business or your
API is allowing increased volume or speed for execution, this
could be the right thing to do. But if you
don't want people to cancel your orders, don't make it
easier for them to do that.
Speaker 4 (18:09):
So that's not where I was going.
Speaker 3 (18:13):
But that's why that's what I no, but.
Speaker 1 (18:15):
I think I will.
Speaker 3 (18:17):
It's interesting that you chose Amazon, though. Like, you know,
if you pick something and it has a really great
user experience, then it doesn't necessarily make sense to automate that.
But as we know, out there, like, every company
sucks at UX. So, you know, you think about that:
do you actually want to just throw money at the problem and run
a really expensive service somewhere to manage state, letting
(18:39):
agents manage the user experience?
Speaker 1 (18:42):
Yeah, yeah, I.
Speaker 4 (18:43):
Mean ultimately they're going to that specific example they're going
to do both anyway, But in general, the idea is
like a static action that's button driven, classic API integration
AGENTIC MCP.
Speaker 2 (18:54):
When you're building out an MCP server, how do you debug that?
Like in this example, you know where you're talking about.
It's trying to figure out the best course of action
to do what it thinks the user wanted to do.
And then sometimes it's wrong, like what kind of feedback
do you get or what kind of metrics are you
collecting to track that and improve on that?
Speaker 3 (19:14):
It's just a proxy, right or is there something else
magical happening in there?
Speaker 4 (19:19):
Yeah. So how MCP works is, effectively, you create an
MCP server and that has a function on it called
get tools, and it returns back to the agent that's
calling it, or the MCP client, but effectively the agent:
Hey, here are the tools available to you:
create a ticket in Asana, modify a ticket, change the
status of a ticket. And effectively, on the MCP server,
(19:40):
you're right, those are just wrapping API calls. And so
when the agent calls get tools, it then actually uses
its LLM abilities to decide which tools should be called,
based on the human-readable English description in that MCP
server of what each tool does. So yes, it is
statically coded. The place where there is sort of that
(20:01):
decision making is on the agent itself, about how
it's going to chain all those tool calls together.
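A toy sketch of that get tools / call tool flow, without the real MCP SDK or its JSON-RPC transport; the tool names and descriptions are invented:

```python
# Toy version of an MCP server's tool listing and dispatch. A real
# server would speak JSON-RPC over stdio or HTTP; this only shows the
# shape: list_tools advertises names plus English descriptions, the
# agent's LLM picks one, and dispatch is just statically coded wrappers.

TOOLS = {
    "create_ticket": {
        "description": "Create a ticket in the tracker with the given title.",
        "handler": lambda title: {"id": 1, "title": title},
    },
    "change_status": {
        "description": "Change the status of a ticket by id.",
        "handler": lambda id, status: {"id": id, "status": status},
    },
}

def list_tools():
    """What get tools returns: names and human-readable descriptions."""
    return [{"name": n, "description": t["description"]} for n, t in TOOLS.items()]

def call_tool(name, **kwargs):
    """Deterministic dispatch: each tool body just wraps an API call."""
    return TOOLS[name]["handler"](**kwargs)
```

Everything non-deterministic happens on the agent side, in choosing which of these names to call and in what order.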
Speaker 2 (20:07):
Gotcha. And are you bringing your own LLM into the
agent, or using, like, publicly available ones?
Speaker 4 (20:15):
So this is totally up to whoever's building it, you know.
It's compatible with all agents. The idea is, you know,
your agent can make a tool call (most agents can),
so your agent can make a get tools call, and
then it doesn't matter what model it is; it then
determines what tools to call.
Speaker 1 (20:31):
And you see this.
Speaker 4 (20:32):
also with things like even Windsurf and Cursor for coding.
You can see that when you ask it to do something,
it kind of formulates a workflow using the tools available
to it, like edit file, read contents of file, that
Speaker 1 (20:45):
Sort of thing.
Speaker 3 (20:46):
I think I finally figured it out. It's
that there is just too much functionality in some of
these products and services, and someone thinks that their opinionated
version of the world is the most optimal and
effective one, and they've coded that into an MCP server.
So they look at GitHub and say,
you know, I only do five actions: I only, like,
copy code, make commits, push up pull requests; you know,
(21:09):
then, you know, once the pull request is there, I approve
it and maybe deploy it. And if that's my whole
version of the world, then having access to
all of GitHub's or GitLab's or whatever's API is
totally unnecessary. You don't need all those things. So let
me make a personalized MCP server that just understands how
to integrate in this one way, and then I'll expose
that for interacting with the other tools
Speaker 1 (21:30):
That I have. Oh yeah, yeah, I think that's right.
Speaker 4 (21:32):
And you can also extend it. And so
a lot of times people will, you know, notice: hey,
it's really important to me to build an agent
where someone can ask it, you know,
What pull request had the most reviews on it? And
GitHub doesn't necessarily have a way to do that via
their API. And so you could write a tool that's
called, you know, get pull request with most reviews, and that
(21:54):
might make many API requests and do all that and
then just return the result at the end. So
while it is stringing together the API calls, you're effectively
building a tool chain within each tool as well.
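That kind of composite tool, where one call from the agent fans out into many underlying API requests, might look like this sketch; the paginated endpoint and fields are stand-ins, not GitHub's real API:

```python
# Sketch: a single tool that pages through an underlying list endpoint
# and returns only the final answer, so the agent never sees (or pays
# tokens for) the intermediate calls.

FAKE_PAGES = [
    [{"pr": 101, "reviews": 2}, {"pr": 102, "reviews": 9}],
    [{"pr": 103, "reviews": 5}],
]

def fetch_page(n):
    """Stand-in for one paginated API request."""
    return FAKE_PAGES[n] if n < len(FAKE_PAGES) else []

def pr_with_most_reviews():
    """Tool body: many API calls in, one small result out."""
    best, page = None, 0
    while True:
        items = fetch_page(page)
        if not items:
            break
        for item in items:
            if best is None or item["reviews"] > best["reviews"]:
                best = item
        page += 1
    return best
```

From the agent's point of view this is one tool call; the fan-out and the state between pages live entirely inside the statically coded tool.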
Speaker 3 (22:06):
Micro services.
Speaker 2 (22:08):
Yeah, yeah, you could say that. I don't know, but
it's not micro services, it's mAIcro services, because we've gotta
put AI in the middle of the word now that it's a new
tool. Yeah. So you mentioned that these are not stateless.
So what does the infrastructure look like?
Speaker 1 (22:31):
So it depends on how you end up building it.
Speaker 4 (22:32):
I mean, some people will back it with
just, you know, a cache; you'll use in-memory; you can
connect it to a database. You run into
some risks though, right, of, you know, sort of
Speaker 1 (22:43):
cross-user data sharing.
Speaker 4 (22:45):
There's a whole new suite of potential
security issues with MCP now that come up, and
it's something that we are hard at work on,
not just, you know, for us, but for customers as well,
as we build out our new product. But we do
have to also think about it for ourselves in everything
we build.
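One way to guard against the cross-user data sharing just mentioned is to key every piece of stored state by the authenticated user; this sketch uses a plain dict as a stand-in for whatever cache or database actually backs the server:

```python
# Sketch: per-user session store for a stateful MCP server. Every read
# and write is scoped to the authenticated user id, so a bug in one
# tool can't hand back another customer's credentials.

class SessionStore:
    def __init__(self):
        self._data = {}  # {user_id: {key: value}}

    def put(self, user_id, key, value):
        self._data.setdefault(user_id, {})[key] = value

    def get(self, user_id, key):
        # Lookups never cross the user boundary: a missing key raises
        # instead of falling back to some other user's data.
        return self._data[user_id][key]

store = SessionStore()
store.put("alice", "salesforce_token", "tok-a")
store.put("bob", "salesforce_token", "tok-b")
```

The point of the design is that there is no code path that can return Bob's token to Alice's session, even by accident.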
Speaker 1 (23:01):
Now.
Speaker 3 (23:02):
The thing that really comes to mind a lot here
is think about what, given the state of the world,
what is the optimal way to allow agents to interact
with things? And if you think about where the costs
are it's having the agent do anything, So minimize the
agent workload as much as possible, So limiting the input tokens.
That means that don't force it to read the whole
knowledge base and the whole API spec. You want to
(23:22):
have it collapse into something very opinionated, very specific, with
only the keywords that are necessary, like don't even use
human readable descriptions there, right, like you know, focus just
on those most important keywords. And same goes with the
output stuff. So every single additional output token is also
going to charge you. So you want to capture as
much of the value of the action that you want
to take in this intermediary MCP server.
Speaker 1 (23:46):
Yeah, yeah, I think that's right.
Speaker 4 (23:48):
It's both a security thing as well as it adds
capabilities and makes things more efficient.
Speaker 1 (23:52):
Yeah.
Speaker 3 (23:52):
So yeah, the security thing: I think that's going to
keep coming up until the end of time.
I mean, especially with the automation, right? We're using
these LLM tools to assist us in making additional services,
additional value provided for our products and businesses, and skipping
the part where we heavily interrogate what comes out of it.
And this is an area where we're going to have
(24:13):
a lot of extra usage going through here. Whereas the
underlying API has, you know, years, maybe decades,
built up into how to make the API secure: how we
cache data effectively, don't allow cross-contamination, et cetera.
And now we're adding a layer on top where someone
is just throwing things together as fast as possible to expose
the data in a way that allows other agents to interact
(24:37):
with it in the way they see as most optimal.
Speaker 1 (24:39):
Yeah, and there's a lot of new risks with it.
Speaker 4 (24:42):
And you know, one good example here is API
token passing, in this idea of, like, okay, well, if
the key lives on the MCP server, maybe it can
never be exposed.
Speaker 1 (24:50):
So you go and ask the agent, like, What is
the API token?
Speaker 4 (24:53):
It's going to say, I don't have access to that,
or, I've been programmed not to tell you.
But let's say that this is an integration where it's
connecting to, say, Salesforce, where everyone has a different subdomain,
and part of your setup is, what is your subdomain?
And you find a way to just pass in an
entirely different URL and basically override what it's doing, and
have it make an API call, and that URL is
a private server that you're hosting, and you're getting that
(25:15):
API key sent right to you.
So we're seeing a big new class of attacks.
Speaker 4 (25:19):
That's just one example, and there's going to be a
whole new set of rules, a whole new set of,
like, linting that's going to have to catch this, but
also intermediary security services for MCP specifically. And yeah, again,
that's somewhere we're going.
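One mitigation for the URL-override attack just described is to accept only a bare subdomain label from the user and pin the rest of the host, so the credential can never be sent to an attacker's server; the host suffix and pattern here are illustrative, not a complete defense:

```python
import re

# Sketch: validate a user-supplied, Salesforce-style subdomain before
# building the request URL, so a value like "evil.example.com" can't
# redirect the API call (and the API key) to an attacker's server.

# One DNS label: letters, digits, hyphens; no dots, slashes, or "@".
SUBDOMAIN = re.compile(r"^[a-z0-9](?:[a-z0-9-]{0,61}[a-z0-9])?$")

def api_base_url(subdomain):
    label = subdomain.lower()
    if not SUBDOMAIN.fullmatch(label):
        raise ValueError("invalid subdomain")
    # The host suffix is pinned; the credential only ever goes here.
    return f"https://{label}.my-crm.example.com/api"
```

The key design choice is never interpolating user input into a full URL: the input can only select a label under a host you control.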
Speaker 2 (25:33):
Yeah, so there's a whole layer of impersonation in here
because you've got the MCP service that responds to requests
from multiple people, but each one of those individuals probably
has different permissions, and so the MCP service has to
impersonate the request based on what credentials that person has,
(25:55):
like, to use your example earlier of canceling Ramp cards
for employees. It has to determine, you know, does this
person have the ability to cancel cards at all, and
if so, which cards.
Speaker 3 (26:08):
Yeah, I mean, I think realistically you're talking a little
bit about the confused deputy problem here, where you're passing
off the request to a privileged agent which is going
to have requests from multiple different users and customers and
it needs to do effective authentication and authorization. You can't
just pass all the data along and expect the underlying
API to do the right thing, and the individual agent
(26:29):
might have its own identity for the actions that it's taking.
And, like, we've already solved these problems in the world.
It's just that, I think, fundamentally, the people that are
most closely working on the software and the controls inside
MCP servers haven't thought about these things as deeply.
Like, my whole domain is just appsec.
(26:49):
Like, I only think about authentication and authorization all day long;
that's pretty much my only job. And there's still way
more in security than that than I can get to. And
I'm giving a talk literally in a couple of weeks,
like, What the heck is auth?, just explaining to
people what these concepts are, and I'm getting a lot
of feedback like, Oh, MCP's a thing, how do
we do this? It's even more important now. And I'm
(27:10):
like, that's really surprising.
Speaker 4 (27:11):
Yeah, it begs the question: does auth even
belong in the MCP spec itself, or is everyone
going to keep just building it on their own? Because
right now there's not much in there about it,
how to actually implement it, how to actually handle it,
other than saying, you know, you probably should keep auth
on the MCP server itself.
Speaker 3 (27:29):
I think, yeah, I mean, it's a really good point.
I think the biggest problem is that you're not just
building this proxy layer that statelessly passes along requests. And
this has been a huge problem with proxies to begin with.
And now people are taking it further and saying, Oh,
we know what we want to do, we've already figured
it out. We don't want to explode the number of
input tokens or pass the context back to the caller
(27:50):
so they have to iteratively call the endpoints; we want
to handle all that in scope here. Which means, as
you pointed out, managing state, and that becomes the risk:
really doing that effectively and correctly so you don't
end up with this crossing. And there are tons of
security vulnerabilities, CVEs, that get published every single month about
how one customer was able to access a different customer's
data publicly on the Internet. And, like, those things all
(28:12):
happen without MCP servers, and now people aren't even thinking about,
Oh, how do we actually even handle auth here? So yeah,
we're definitely going to get into a problem very quickly.
Speaker 4 (28:23):
Yeah, and the problem, you know, one of the things
we're seeing as we build out this new product is
specific requirements from people around credential sharing among people who
talk to the agents, but varying who shares which credentials
for different services.
Speaker 1 (28:36):
So, you know, you have an.
Speaker 4 (28:37):
Agent that has access to both a ticketing system and Salesforce,
and everyone who talks to it should be able to
use the shared ticketing credentials, but only the sales team
should be able to actually communicate with Salesforce, and then
only super admins should be able to use the HR
you know, credentials. So we're seeing this a lot, and
it's going to be interesting to see how people approach it.
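The credential-scoping pattern Gil describes can be sketched as a simple policy lookup the agent consults before it touches any downstream service. Everything here is illustrative: the tool names, roles, and inline credential store are hypothetical stand-ins, not any real product's API; real credentials would live in a secrets manager.

```python
# Hypothetical sketch of per-tool credential scoping for an agent that
# fronts several services. Tool names, roles, and credentials are all
# illustrative placeholders.

# Which role a caller needs before the agent may use each credential set.
TOOL_POLICY = {
    "ticketing": {"required_role": None},       # shared: anyone may use it
    "salesforce": {"required_role": "sales"},   # sales team only
    "hr": {"required_role": "super_admin"},     # super admins only
}

# Stand-in for a secrets manager; these would never sit in code.
CREDENTIALS = {
    "ticketing": "tkt-shared-token",
    "salesforce": "sf-team-token",
    "hr": "hr-admin-token",
}

def resolve_credentials(tool: str, caller_roles: set) -> str:
    """Return the credential for `tool` only if the caller is allowed."""
    policy = TOOL_POLICY.get(tool)
    if policy is None:
        raise KeyError(f"unknown tool: {tool}")
    required = policy["required_role"]
    if required is not None and required not in caller_roles:
        raise PermissionError(f"caller lacks role {required!r} for {tool}")
    return CREDENTIALS[tool]
```

The point of the sketch is that the authorization decision happens in the agent, per caller and per tool, rather than trusting whatever the downstream API does with a shared token.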
Speaker 3 (28:55):
That's like our whole product right there. Like, that little
aspect you described is the whole thing that we do
for internal resources. Because, I mean, it's really unfortunate
that we live in a world of credential sharing, but yeah,
it still happens, because companies charge for, you know,
SSO and first-class auth options.
Speaker 1 (29:15):
Yeah.
Speaker 2 (29:16):
So, Warren, are you at Authress creating an auth
front end for MCP services?
Speaker 3 (29:23):
I don't even know how to answer that question. You know,
it's like, what's the difference between auth for MCP and not?
I mean, there's literally no difference. Like, whatever we offer,
it's the same. There really is no difference
between what we offer. Like, I was joking to my
CEO, who almost said that she was going to have
a mental breakdown when I told her that we could
start offering auth for AI, because realistically, there is no difference,
(29:46):
like, we offer other auth. It doesn't matter.
The difference is marketing. Marketing.
Speaker 4 (29:56):
You're one little wrapper file that makes it interact nicely
with a lot of the AI libraries.
Speaker 1 (30:00):
Maybe that it's.
Speaker 3 (30:01):
Actually, not even that. It's literally just what shows up
on the marketing page. That's really what's important,
because, you know, you can put whatever you want
in the knowledge base and the documentation, and people
will pick it up and run with it, you know, focusing
on the problem and the vocabulary that they're using to
solve it. Right, you know, if it's not your company,
but, like I say, one of your customers is like, oh,
how do we keep the credential ownership separate while
(30:22):
using some sort of agent? You know, now we have
to match on all those terms to hit SEO so
that it shows up in search results in one of the
search providers, or through one of the LLMs six months
from now. So it has to match on that and
then also produce, you know, relevant code with variable names
that look appropriate. So yeah, I mean, we
don't have any plans to throw up an MCP server,
(30:43):
but if one of our customers was like, you know,
we have this use case and we're willing to pay
you some money for it, yeah, I mean, done.
Speaker 2 (30:54):
What are some of the scaling issues you see with MCP services?
Speaker 1 (30:59):
Yeah, so we've seen a few. I think
Speaker 4 (31:02):
one is, like, managing, I hate to say this again,
but managing auth, and I think it's because
companies just aren't really thinking about that at scale,
like, groups and differentiation of who has access.
Speaker 1 (31:12):
So that's one of them.
Speaker 4 (31:13):
I think another one is that people are using AI
to generate a lot of MCP servers right now, and
I think that at scale it brings on, again, more
security risks, but also scaling risks, right? You're not necessarily
hitting the endpoints very efficiently. We actually see
another big problem, which is that MCP servers, and MCP in
general, love to do linear scans of APIs.
Speaker 1 (31:33):
So if you have an MCP server that.
Speaker 4 (31:35):
lets you, you know, fetch tickets, and then you can
pass a page number, and you say to your LLM, like,
get me all tickets that have Gil in the title,
it is going to do a linear scan of that API,
and that will crash your MCP server. It's going
to kill the agent. It's going to cost you a ton.
So that's another one we've seen.
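The linear-scan failure mode described here can be made concrete with a small sketch. The fake paginated endpoint below is a hypothetical stand-in for a real ticketing API; the point is that a "filter by title" tool built on a page-number endpoint has no choice but to fetch every page and filter client-side, burning one API call per page.

```python
# Illustrative sketch (all names hypothetical): why "find tickets with Gil
# in the title" over a page-number API turns into a full linear scan.

# Fake dataset standing in for a remote ticketing system.
TICKETS = [{"id": i, "title": f"Ticket {i}"} for i in range(10_000)]
TICKETS[9_500]["title"] = "Escalation from Gil"

def list_tickets(page: int, page_size: int = 100) -> list:
    """Fake paginated endpoint, like GET /tickets?page=N."""
    start = page * page_size
    return TICKETS[start:start + page_size]

def find_by_title_scan(needle: str):
    """What a naive MCP tool does: page through everything and filter
    locally. Returns the matches and the number of API calls burned."""
    matches, page, calls = [], 0, 0
    while True:
        batch = list_tickets(page)
        calls += 1
        if not batch:
            break  # ran off the end of the data
        matches += [t for t in batch if needle in t["title"]]
        page += 1
    return matches, calls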
Speaker 3 (31:55):
I mean, because, like, as an experienced engineer, it's like, oh,
I'm actually paging through this for, like, ten minutes.
What is going on? Is it stuck? You know, is something
else going on? And you may look at your code
and be like, oh, there's actually a lot of items here,
or each one of these calls is really expensive,
or I'm getting throttled, and you'd go and investigate that.
But if you're not paying attention and you don't even
really understand what's going on with that MCP server or
(32:15):
any product, and you get something automatically generated, it's not
paying attention to that. It doesn't care. It doesn't care
how long it runs, right?
Speaker 4 (32:23):
That's a good point, and it knows how to exponentially
back off, so it will take its time and really
pull everything.
Speaker 2 (32:31):
Yeah, I can see a scenario where this is like
after rollout and people get used to it and it's
fully adopted. There's like a layer of knowledge abstraction where
someone troubleshooting it or interacting with it doesn't even know
to go look at the API end points that it's
(32:51):
calling to find out that you've been getting hit with
overage charges for the last twelve hours because the service
is just banging away on it.
Speaker 3 (33:00):
I think there's another huge complexity there. Like, some
of those routes may be paid and other ones may
be free. Or, instead of using the search endpoint,
it's using some sort of native pagination. Or, you know,
if the MCP server is built by the company
who's running the API, they know what
their internal complexity is and should be focusing on that,
whereas someone else may not understand, or really even care,
(33:22):
where the limitations are for using that server.
And, like, if you come and talk to us, we'll say, oh yeah,
you know, for the thing you're doing, you want to
use endpoints A and B. This will be the fastest,
the cheapest option for you, the most cached option, et cetera.
And if you're just dynamically generating something, it may look
like different endpoints could be appropriate, and if you're not
paying attention, that's what'll get used, and that could be
(33:43):
the worst scenario for both the caller and also
the service provider in the end.
Speaker 4 (33:49):
Yeah, and ultimately that's why we pretty firmly believe that
you still need to sync full sets of data
for a lot of types of integrations. You know,
sometimes it's the lack of a search endpoint that
causes that. And if you have no search endpoint and
you're building your business, right, your product team doesn't
want to hear "no, we can't sync the full data
set." They want to hear, like, here's how we're
going to solve this problem. I mean, you have to
(34:11):
do that, and so MCP becomes a little useless there.
Speaker 2 (34:14):
So with this being a stateful service, is there
a concept that exists of saying, hey, go grab this
API data, and if you need it again for anything else,
this payload is valid for the next six hours,
or for the next two days, or whatever, and then
it just caches that and reuses it?
Speaker 4 (34:36):
Yeah, I mean, ultimately, your tool implementation can do anything.
And so, you know, if you have a tool that's
like, you know, sync this data, and then it caches it
for six hours, it can under the hood be hitting
some internal cache, whether that's like memcached, Redis, or
a database, and then it knows who's talking
to the server, so it knows who to look it
up for.
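What Gil describes, a stateful tool that serves cached data per caller with a TTL, can be sketched in a few lines. The in-memory dict below stands in for memcached or Redis, and the fetch function and six-hour TTL are illustrative assumptions, not a prescribed design.

```python
# Minimal sketch of a read-through cache keyed by (caller, resource) with
# a TTL, standing in for memcached/Redis inside a stateful MCP tool.
import time

class PerCallerCache:
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # (caller_id, key) -> (expires_at, value)

    def get_or_fetch(self, caller_id: str, key: str, fetch):
        """Return this caller's cached value, or fetch and cache it."""
        entry = self._store.get((caller_id, key))
        now = time.monotonic()
        if entry is not None and entry[0] > now:
            return entry[1]          # fresh cache hit
        value = fetch()              # hit the upstream API only on miss
        self._store[(caller_id, key)] = (now + self.ttl, value)
        return value

# Count upstream hits to show the cache actually absorbs repeat calls.
calls = []
def fetch_tickets():
    calls.append(1)
    return ["ticket-1", "ticket-2"]

cache = PerCallerCache(ttl_seconds=6 * 3600)
a = cache.get_or_fetch("alice", "tickets", fetch_tickets)
b = cache.get_or_fetch("alice", "tickets", fetch_tickets)  # cached
c = cache.get_or_fetch("bob", "tickets", fetch_tickets)    # separate caller
```

Keying on the caller is what keeps one customer's cached data from leaking to another, which is exactly the cross-tenant risk discussed earlier in the episode.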
Speaker 1 (34:55):
Yeah.
Speaker 3 (34:55):
I mean, there are a lot of different ways to
handle caches, right? I think there are, like, two hard problems
in computer science. So, I mean, you can
do read-through caches, write-through caches, you know, however
you're managing it. You're basically saying that the
API as it's written isn't effective for the actions you
want to take, and if one of those is, like,
very bulk-data related, you have no choice but to
(35:17):
handle all of the interactions and then also get syncs
back from the source, like constantly polling it to get
any updates there are. I imagine over time we're going
to see primary providers of that data offer better strategies
for interacting with them, because third parties that spin up
these MCP servers aren't benefiting anyone directly. I mean,
(35:38):
the value is there, clearly, if the customers say, hey,
you know, your API endpoints aren't giving us what
we want and we need something better. So there's a
drive for it. But I think over time, realizing what
those tradeoffs are, it will have to shift to whoever
owns the data fundamentally.
Speaker 4 (35:55):
Yeah, the question is whether, you know, especially among the enterprise,
you're going to see companies wanting to release MCP servers.
I think that, you know, the world's always gone
back and forth on whether APIs should be fully public, what data should
Speaker 1 (36:07):
be exposed, and whatnot.
Speaker 4 (36:07):
And, you know, one of the things we're seeing among
enterprises is fear around MCP, the idea both of unintended
actions being taken, but also of data extraction.
Speaker 3 (36:16):
Yeah. Well, I mean, that's an interesting point, because there's
the "I live in la-la land, and here's my
perspective of how reality works, and this is how
we make money" approach, and the mature, grown-up approach of
"our users are actually doing this thing, and we should
figure out how to encourage them to do the right
thing so they don't accidentally do the wrong thing."
And the companies that realize that will be the ones that
(36:36):
capture the value in the long term, and the ones
that don't will just end up failing, because people will
stop using their services, because they will no longer want
to use the interface that's being provided to them.
Speaker 1 (36:45):
Yeah, I think.
Speaker 4 (36:46):
I think a notable example of that is Salesforce having
a very open ecosystem, and because of that, other platforms
built on top of them. And now, when you use
Salesforce and you have it integrated everywhere, you can't churn,
and that data is the core system of record for all
the services your go-to-market stack uses. I'm not
going to name names here, but there are a lot of
enterprise players that are very locked down about their data,
(37:07):
and actually mid-market players as well, and somehow they've
maintained a solid user base. I wouldn't say they are
the fastest growing companies anymore, though.
Speaker 3 (37:16):
I hope this causes a turnaround here, because historically the
products that have the most data, which end up being
things like CRMs and data platforms, have had the worst
APIs. Yeah, I'm sorry, I had to come out and
say that, but we're all.
Speaker 2 (37:32):
thinking it. So yeah.
Speaker 3 (37:35):
Yeah, sorry to all our previous guests from, like, the
last four or five weeks who are on
the data side, just from experience here. But if they're
the ones with the worst APIs, and now we all
care about that data and fundamentally want
to get at it, and those APIs don't exist, the
ones that survive will be the companies that actually invest
(37:55):
in better APIs for accessing their data.
Speaker 1 (37:58):
I agree, I completely agree.
Speaker 3 (38:00):
So, just put it on the blockchain, I think,
is what Will's thinking.
Speaker 1 (38:03):
Absolutely very efficient.
Speaker 3 (38:06):
Yep, well it's public, you know, problem solved.
Speaker 2 (38:09):
Yeah, just share the database. We're just sharing, just share, damn it.
So now this has scaled in complexity quite a bit
just in the last thirty-ish minutes we've been talking
about it, because we started saying, okay, you have this
agent and it just makes it easier to talk to
(38:30):
APIs for you. But now we're talking about, you know,
auth issues and scaling and, like, security of the data.
Like, so for someone who's thinking, man, I saw this
MCP term, I should go build one of these things,
what's your recommendation for the top things they need
(38:52):
to be thinking about?
Speaker 4 (38:54):
I mean, I think number one right now, it's really
good marketing. It's the topic, and so-and-so is releasing one.
Speaker 1 (39:00):
It's out there.
Speaker 4 (39:01):
I think there are things to consider, like:
your MCP server is only as good as the underlying tools,
or, you know, in other terms, the underlying API that
it's talking to. I think you need to think through
real-world use cases, and you need to test it.
Speaker 1 (39:15):
And actually one of the things.
Speaker 4 (39:15):
we talked about earlier was how to evaluate this, and
I think it's, you know, similar to other AI
evaluation methods, where you're feeding it sort of
mock prompts and data, and you're using evaluators to decide, like,
is the output eighty percent of the time close enough
to what I need it to be, that sort of thing.
Is the right tool getting called based on certain prompts?
(39:36):
So I think making sure that you build
good tools, making sure they're well documented, and making sure
that you actually test them is important.
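The evaluation loop Gil outlines, mock prompts plus a check on whether the right tool gets called, can be sketched as a tiny harness. Here `route` is a deliberately trivial keyword router standing in for an actual LLM's tool choice, and the prompts, tool names, and eighty-percent bar are illustrative assumptions.

```python
# Hedged sketch of tool-selection evaluation: feed mock prompts through
# whatever routes prompts to tools, and score how often the expected tool
# is chosen. All names here are hypothetical.

CASES = [
    ("create an invoice for ACME", "create_invoice"),
    ("create a spend report", "create_invoice"),  # alternate wording
    ("get me all open tickets", "list_tickets"),
    ("show tickets assigned to Gil", "list_tickets"),
    ("delete the Q3 invoice", "delete_invoice"),
]

def route(prompt: str) -> str:
    """Toy stand-in for the model's tool choice."""
    p = prompt.lower()
    if "delete" in p:
        return "delete_invoice"
    if "invoice" in p or "spend report" in p:
        return "create_invoice"
    if "ticket" in p:
        return "list_tickets"
    return "unknown"

def tool_selection_accuracy(cases) -> float:
    """Fraction of mock prompts that selected the expected tool."""
    hits = sum(route(prompt) == expected for prompt, expected in cases)
    return hits / len(cases)
```

In a real setup, `route` would call the model with the MCP server's tool descriptions, and you would gate releases on the accuracy staying above whatever threshold you picked.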
Speaker 3 (39:42):
Yeah, it sounds like, you know, if I rephrase that,
you've got to make sure that you understand what your
users are actually asking for, you know, because there's, like,
an infinite number of things that can be done with
an API. So if you don't know what that is,
you don't know what functionality to actually throw in your
MCP server to begin with. You can't just have
it spun up and have it work. It
needs to actually do something, right?
Speaker 1 (40:04):
That's exactly right.
Speaker 4 (40:05):
And, you know, your customers are very likely
to come in and say, create an invoice. But whenever
they do that, the way they word it is, like,
create a, I don't know, create a spend report. So whatever,
you know that you want to put that in
the documentation for the MCP server: when someone asks for
creating a spend report, this is the tool to call.
Speaker 3 (40:22):
I think you want to like add a line item
to the accounts payable list.
Speaker 1 (40:27):
You know, Yeah, there you go.
Speaker 3 (40:29):
That's that's what's important. Getting that language right, okay, and
words are hard.
Speaker 1 (40:34):
Words are hard.
Speaker 3 (40:38):
I mean, you could, in your MCP server,
receive the request, pass it to another LLM to
get it translated into, you know, what makes sense, and
then actually execute. But I highly recommend against that.
Speaker 1 (40:48):
Yeah, yeah, you know.
Speaker 3 (40:51):
It's going to multiply your security issues by quite
a large amount.
Speaker 1 (40:56):
That is so true.
Speaker 4 (40:57):
Yeah, we definitely believe in that separation: we give
you the tools.
Speaker 1 (41:02):
You do what you want to do with it.
Speaker 2 (41:05):
Here's the knife, cut off whatever you'd like. We're not
responsible for the medical bill.
Speaker 1 (41:10):
Yeah.
Speaker 2 (41:11):
Have you seen scenarios of MCP services, instead of talking
to APIs, talking to other MCP services?
Speaker 1 (41:20):
Oh that's interesting.
Speaker 4 (41:22):
So, I mean, I guess realistically you could have an
MCP service ask another service for tools, or
call other tools. I'm not sure there's a great use case.
Maybe, I would have to think through use cases there.
So I haven't seen any specific instances.
Speaker 1 (41:38):
You know.
Speaker 4 (41:38):
What I've really seen is MCP servers, you know,
they can return static data, they can
query databases directly, or call APIs. So I definitely
have seen where you have tools that are formulating SQL
queries, and so you could, you know, have an
agent that queries a database using English language,
(41:59):
and that English language causes another, you know, tool call
to another MCP server that can translate it to SQL, for example.
And I guess that's one example, but I think
you'll probably see more, like, A2A-level communications,
as opposed to, like, agent to MCP to MCP.
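The "tool that formulates SQL" pattern mentioned here is worth a small sketch: rather than letting a model splice free text into SQL, the tool can accept a narrow, structured request and emit a parameterized query. The table, columns, and data below are hypothetical, and the in-memory SQLite database just makes the sketch self-contained.

```python
# Illustrative sketch: a tool that builds parameterized SQL from a
# structured filter request, instead of trusting model-generated SQL.
import sqlite3

ALLOWED_COLUMNS = {"title", "status"}  # whitelist what filters may touch

def build_ticket_query(filters: dict):
    """Turn e.g. {'status': 'open'} into a safe parameterized query."""
    clauses, params = [], []
    for column, value in filters.items():
        if column not in ALLOWED_COLUMNS:
            raise ValueError(f"filter not allowed: {column}")
        clauses.append(f"{column} = ?")   # placeholder, never the value
        params.append(value)
    where = f" WHERE {' AND '.join(clauses)}" if clauses else ""
    return f"SELECT id, title FROM tickets{where}", params

# Tiny in-memory database standing in for the real system of record.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tickets (id INTEGER, title TEXT, status TEXT)")
conn.executemany(
    "INSERT INTO tickets VALUES (?, ?, ?)",
    [(1, "Fix login", "open"), (2, "Update docs", "closed")],
)
sql, params = build_ticket_query({"status": "open"})
rows = conn.execute(sql, params).fetchall()
```

The whitelist plus placeholders is what keeps a hallucinated or injected column name from turning the English-to-SQL hop into a security hole.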
Speaker 3 (42:15):
Yeah, I mean, I guess there would have to be, fundamentally,
there could be a case, right? You call an API
today and it does something with an LLM, and
in the future you call an API and that can
call a different API, and instead of doing that, it
could use natural language there, although natural language is terrible.
Like, if one software developer writes code to call a different API,
use the first-class API notions that are available there,
(42:38):
you know, you always want that code, and that
would be software development there. And the other thing is,
like, the cost. Like, you want to pass that
back to the caller as fast as possible and push
the costs to them. Otherwise you're running two agents
at the same time for whatever determinations need
to be made. Plus there's also this, I'm gonna
say security again, like delegation, like who owns that request.
You probably don't want to build your service in a
way which allows users to interact with a third party
solution because that means they have to give you the
credentials to do that, and then you're managing your customer's
credentials to other third-party systems. And if you're doing that,
maybe you should, like, talk to Gil and see if
this is the use case there, you know, because
that really sounds like Merge. Yeah, so, you know, pass
(43:22):
the data back to the caller and let them deal
with the complexity and the cost of calling out to
that second system.
Speaker 4 (43:28):
I thought you were about to say, if you're doing that,
you're wrong, and I was like, well, that's our whole business.
Speaker 1 (43:32):
So I.
Speaker 3 (43:35):
Try very hard not to say things in episode that
contradict what the guest is saying.
Speaker 1 (43:41):
So I'll keep an eye on your Twitter later.
Speaker 3 (43:45):
Oh well, I deleted that a long time ago.
Speaker 1 (43:50):
All right, I'm safe then.
Speaker 2 (43:52):
Yeah. Yeah, we have an unwritten rule not to insult
our guests till after the recording's over.
Speaker 1 (43:59):
Man, all right, mutual. A two-way street, then.
Speaker 3 (44:04):
I like that you brought that up. Yeah. I mean, you
have plenty of opportunities here, and I'm sure most of
the audience are just waiting for those punches, Gil. So,
you know, if it's myself or Will, you know,
feel free to, you know, come at us with
full force. No, no worries. You did mention
A2A, and I do want to ask about
this, because you meant agent-to-agent, I think, and
(44:25):
not A2A as the protocol that GCP released
to do MCP, right?
Speaker 4 (44:32):
So, well, so the GCP A2A is a true
agent-to-agent protocol that is sort
of compatible with MCP.
Speaker 1 (44:39):
It works well, and it's it's a good concept.
Speaker 4 (44:42):
It's just one of the many protocols that are
coming out for agent-to-agent communications right now.
Speaker 1 (44:47):
It's just newer.
Speaker 4 (44:48):
It's a newer concept entirely, but I think
it's similar to MCP in that it's solving a problem
that everyone was just solving in a million different ways.
Speaker 1 (44:54):
Right, with MCP, people knew agents
Speaker 4 (44:56):
wanted to have remote tool calling as an option,
and MCP was a protocol that formed to solve that,
and A2A is similar. It's, you know, agents need
to talk to each other; what's the most
simple way to make that happen?
Speaker 3 (45:08):
Well, if you're talking about that, I have
to ask, have you seen the one where someone devised
the idea of, oh, it's so expensive to send requests
using Bluetooth or USB or Wi-Fi, let's just send
an audio signal over the air from one device to
another, and have, like, you know, your phone make
a sound and your computer pick it up
on its, you know, speaker or microphone and listen to it?
Speaker 1 (45:31):
Yeah.
Speaker 4 (45:31):
I think that's how, like, people in the old days
would do things on iPhone, when Apple hadn't exposed
all of the SDKs or the APIs yet.
Speaker 3 (45:39):
I like how you put that, because, yes, I mean,
this was how you did it in the old days,
before we invented technology that did it correctly. And now
people are like, well, I have an agent over here and
an agent over here, how do I get them to communicate?
Speaker 2 (45:52):
They're a lot like a fax.
Speaker 1 (45:55):
I was just like.
Speaker 3 (45:57):
Dial-up, go... there it goes, you know. And then DSL,
you know, making those things work over broadband. Yep.
Speaker 4 (46:05):
The stripe reader that was plugged into the headphone jack,
it did the same thing, converted your credit card
number to a microphone signal.
Speaker 1 (46:13):
Yeah, I think so.
Speaker 4 (46:14):
Yeah, don't quote me on that, but I'm like nearly
positive that that's how it worked.
Speaker 3 (46:21):
I have this sneaking suspicion that most of the audience
will have no idea what you're talking about anyway, so
don't worry about it.
Speaker 1 (46:28):
I mean, am I old now? Is that it? Yeah?
Speaker 3 (46:33):
I think we're all technically old, so yeah, yeah, it's fair.
Speaker 2 (46:37):
That is true. Though, Like I've been doing this for
three decades, and I had to go around and ask
if I was the oldest person at the company, because
I was like pretty confident that I was, and it
turns out I wasn't. There's one guy who's a couple
of years older than me, but it's close enough in
terms of the ages of the rest of the people
(46:58):
in the company where we're practically the same age.
Speaker 4 (47:01):
Yeah, yeah, we have a pretty young
company too, but as we've grown, we've gotten some
more maturity, and honestly, it's much better. It's much
better to have just a more mature company.
Speaker 2 (47:13):
I think that's an interesting dynamic to balance, you know,
because you have like the a lot of the enthusiasm
and the excitement that comes from people who are earlier
in their career, but there's also like a really nice
balance when you have people who are senior
(47:35):
in their career to provide like perspective on those ideas,
you know, and sometimes you get that dynamic where it
feels like the young people are just trying to do
something and the old people are just trying to say no.
But if you get that really great combination there, you
can get this dynamic where the young people are coming
up with new ideas and the older people can say like, yeah,
(48:00):
we used to do that in the nineties and here's
why we changed, and then you kind of iterate on
that and evolve into something completely different.
Speaker 3 (48:10):
I think you should hire him because of all his
great ideas.
Speaker 4 (48:16):
I mean, I completely agree. It's, you know,
really sort of a balance that's important.
Speaker 1 (48:21):
But also bringing just years of.
Speaker 4 (48:22):
expertise on, you know, I'm thinking go-to-market
in this case. But, you know, you have something, you can
sell it at the beginning, but years of experience
help you really sell it, really bring it to market,
really scale it. So I think about it like, kind of,
you know, earlier talent is helping you get something off
the ground, and then later-career talent is really
helping you push it and scale it.
Speaker 3 (48:41):
I feel like we're losing, I mean, even with years
of experience, I feel like we're losing the original
concept of later-career talent, because I think it came
from having managed systems that were alive for a very
long time and understanding the nuances. They're like, oh yeah,
in our data center, we've had a mainframe there
for fifty years that's been running,
(49:03):
you know, whatever it was running, from, I assume, IBM,
and going and going, and these are the
weird things that we saw. And now, over fifty years, like, no,
that's like twenty-five different people who were integrating with that, right?
It's not one person who had seen everything there was
to see with that one piece of technology, that one
service or one product. And I don't know if that's
(49:25):
ever coming back, which means I feel like we're losing
as a society.
Speaker 1 (49:27):
This.
Speaker 3 (49:29):
It's a critically useful piece of information, as far as
how we actually deal with these systems, or those experiences
being able to guide us in the right direction
going forward.
Speaker 4 (49:38):
Yeah, but I guess the question is whether that's important,
because you now have this sort of abstraction of knowledge,
where you have an AWS or a GCP managed service
that, you know.
Speaker 1 (49:46):
I mean, I think it depends on the example.
Speaker 4 (49:47):
But you keep getting, you keep abstracting above and then
people can focus on like a different level of skill.
Speaker 3 (49:53):
I think it's something different. Like, if we see that
the majority of people in the world are spending their
effort focusing on problems that become more
and more short-term, then we are losing those situations
where people have experience working with long-term functionality. And
I think this is coming to the cloud providers and
hyperscalers out there, unfortunately, and we'll see that if everyone
(50:17):
outside of them doesn't have long-term experience, then the
only people for those companies to hire are people
without long-term experience. And look at what the market cares about:
I see lots of little data centers and cloud
providers pop up to say, oh, we're better than AWS
and GCP and whomever because we do this thing, and
it's like, well, you're better because you wrote a hack
(50:37):
that got it done in six months, compared to a
company that's been around and has had that particular service for
twenty years. And it's not as good, but it still
maybe solves your particular use case. And I think this
is similar to a hardened API versus an MCP server written
on top of it. You know, we wrote something quick
and dirty to get it done. It's a hack, and
now it becomes ingrained in what we're utilizing, and
(51:00):
so there's a lot of risk associated with it. And
I think this is a failure of
the human race, to, you know, be so narrow-sighted there,
like, really focusing on the short term.
Having a long-term focus is very
difficult for people.
Speaker 2 (51:16):
I think true story.
Speaker 3 (51:21):
I like, I think this will
be one of my future picks, but there's a great
science fiction television show called, well, actually written after a
book called The Expanse, and one of the characters, Avasarala,
who, I don't need to go into it, she has
this really great quote, that the failure of humanity is
"too little, too late." And I think it really does
(51:42):
go to the fact that humans do really wait too
long to see a problem and attempt to fix it,
without even putting in enough effort to
actually solve it. So, you know, I'm with her. I'm
very pessimistic on the topic. But yeah, I just.
Speaker 4 (51:58):
Think of the word AI and I'm like, the world
as we know it is over and I just can't
think about the problems.
Speaker 1 (52:03):
But I know I'm with you.
Speaker 3 (52:04):
Well, I think, you know, there's something there, and
I am always the first to bash anything related to
AI, because it's not AI as I understand it. It's
these probabilistic engines that are just returning a statistical result
that has no intelligence behind it whatsoever. I mean, there
was intelligence to build the system, but not actually contained in
the computational matrix of the... anyway. Yeah, so, lack of
(52:28):
intelligence.
Speaker 1 (52:31):
That's fine. I think most human brains are similar.
Speaker 3 (52:33):
So yeah, well, there's a huge debate there.
I mean, if an LLM is what we created, humanity,
after downloading all of the information and compiling it, then
it should be an average. So I'd say that it
must be that LLMs are better than fifty percent of
the people who contributed to them, and worse
(52:55):
than the other fifty percent. And if you're okay with
an average output, the average code that's being
generated or workflow being executed, then an LLM may
be an improvement there, an average amount of knowledge.
Speaker 4 (53:05):
Yeah, but it is also all the
knowledge that everyone knew throughout their entire lifetimes, versus
not all of it being useful at this moment in time.
Speaker 3 (53:13):
Yeah, there are things there, right? I'm pretty sure it was, like,
and I'm going to get this wrong, Aristotle,
I mean, people recognize the name, Aristotle believed that the
Sun went around the Earth. So, you know, that's sort
of an interesting thing. So, yes, there's a lot of
history of information, and there's a lot that's wrong.
Speaker 2 (53:34):
Yes, sure, yes. We need to feed LLMs with the
redlined version of our human collective consciousness.
Speaker 3 (53:43):
Well, that's it, you know. And the problem is that
the information that's being fed in to train LLMs today
isn't being sanitized as much as data engineering was doing
in the past. You know, they're realizing that there's
too much information to do this, so they're coming up
with tricks and strategies to try to filter stuff out.
But often you're filtering on the sources and the
attributes and the functionality rather than the accuracy, which is very,
(54:06):
very difficult. And now, for sure, if you find something wrong,
you can go back to the training data and start
eliminating things, get it to solve certain problems. But, I mean,
mathematics or equation solving today is something that, because of how
the LLMs are designed, is almost never getting better.
Speaker 4 (54:20):
For instance, yeah, the cool thing there is, you know,
if it can successfully call a calculator tool via MCP,
then there you go, or even a local tool. Then
at least it can formulate that and let a
calculator statically
Speaker 3 (54:31):
do it. MCP for Wolfram Alpha, that's what I'm hearing.
Speaker 4 (54:34):
I'm sure someone has. Oh man, that was the most advanced
AI they had when I was in college.
Speaker 2 (54:39):
So yeah, right on, this feels like a good point
to move into picks. What do you think?
Speaker 3 (54:46):
I think we should definitely do it. All right, yeah,
so I have something non-technical this week. I'm just
going to jump in and say mine, because I know,
you know, if I don't, Will is just gonna be
like, Warren, you know what time it is, time for
my pick, because I always go first. So my pick
is the book slash television show The Magicians by
Lev Grossman. I don't know, something something science fiction, fantasy.
(55:12):
There is a question: is fantasy different from science fiction?
And I think that The Magicians gets into that quite
a bit, that it's hard to distinguish which is
which. I really like the book and the show. They're
good in different ways. Unfortunately, it's not like
they recreated it well; it's sort of a different storyline,
and different stuff happens. Characters that are good in the
book are bad in the show, and vice versa.
(55:33):
So if you've only read or watched one, I highly
recommend the other.
Speaker 2 (55:36):
Right on. Gil, you gave us a sneak preview of
your pick before we started recording, so I'm excited
to hear what you got.
Speaker 1 (55:44):
Yeah.
Speaker 4 (55:44):
So I'm a big watch fan, and a topic that
I've been rereading about recently that I'm really into is
the historical notion, it's called, of constant escapement. It's a
problem in watchmaking, and clockmaking, and all forms of horology, sorry.
And the reason I bring this up is because, with
AI technology just changing so fast, what I love about
watches is that it's technology where there's no electronics, and nothing really
(56:06):
changes except maybe the material science gets better. And so with
constant escapement, it's this idea that, you know, in watches
or in clocks, you wind something up. When you have
a wound-up spring, as it unwinds, the power that
it gives off decreases, right? It's more powerful when it's
really tightly wound. And so if you picture what that
does on a clock, you're gonna have really fast movement
of the hands, and then it'll start to slow down,
(56:28):
and that doesn't work. And so the huge problem always
has been constant escapement: how do you get constant force
to be emitted from that spring? And that's why
you see this idea of, you know, a balance wheel
going back and forth. It helps slowly release tension
from that spring. Or on a clock you have a
pendulum that swings back and forth. You need to have
ways to, like, let gravity help you release that.
(56:50):
So I think it's a super interesting topic if you're
into tech and you want to read about something that's
just not changing so fast. It's just a cool
historical thing. Look up constant escapement and read about all
the different ways it's been solved over time.
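An editor's aside for readers: the physics behind this can be written down with generic textbook relations for torsion springs and oscillators (nothing specific to any particular movement):

```latex
% A mainspring behaves roughly like a torsion spring (Hooke's law), so its
% torque falls off as it unwinds:
\tau(\theta) = \kappa\,\theta
\qquad \text{(largest at full wind, smallest near run-down)}

% Without an escapement, hand speed would track \tau and decay as the
% spring unwinds. The escapement instead locks the gear train and lets it
% advance one tooth per swing of a resonator whose period is nearly
% independent of the drive torque:
T_{\text{balance}} = 2\pi\sqrt{I/\kappa_{\text{hairspring}}},
\qquad
T_{\text{pendulum}} = 2\pi\sqrt{L/g}
```

So the rate is set by the oscillator's period rather than by how tightly the mainspring happens to be wound, which is exactly the "constant force" problem described here.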
Speaker 3 (57:02):
It sounds like a conservation of angular momentum problem.
Speaker 1 (57:06):
Yes, it's a similar idea.
Speaker 3 (57:10):
I don't know if you can tell, but I took math
and physics as my undergrad; that was who I was
before I went into the philosophy of software engineering.
Speaker 1 (57:19):
It's such an interesting problem. I really love that.
Speaker 4 (57:22):
Again, like, it's not to say that AI
and technology isn't great, but it's just a fun, different
way of thinking about things.
Speaker 3 (57:29):
Is there, like, a particular watch or clock type
or something that you much prefer over others? Like, do
you have a giant grandfather clock sitting in your home?
Speaker 4 (57:38):
So I actually really like this clock from
JLC, Jaeger-LeCoultre. It's called the Atmos clock,
and you've probably seen them around, but it sits in
your room, and it also can run forever, no electricity,
and it's not on your wrist, so it's not
Speaker 1 (57:52):
Wound or anything.
Speaker 4 (57:53):
It has a big bulb in it that's filled with
a gas, and even one degree of temperature variation in
the room makes it expand, and that is
somehow winding up a mechanism when it expands and contracts.
Speaker 1 (58:04):
So I think that was really cool.
Speaker 2 (58:06):
Oh wow, that's pretty wild. That's super cool.
Speaker 3 (58:10):
I always get concerned when there's no, like, battery or electricity,
that if it's not taking energy, it's,
you know, giving off radiation. And
Speaker 4 (58:21):
a lot of watches do give off radiation, from the
tritium tubes in the, you know, the lume.
Speaker 3 (58:26):
So interesting. Besides, like, glowing in the dark, or
is it a different purpose?
Speaker 1 (58:33):
Yeah, no, just for glowing in the dark.
Speaker 4 (58:35):
You know, a lot of... these days you
don't see a lot of tritium tubes, but you still
see them. I think it's not at a
level that's considered dangerous, like a lot of the lead
paint that you see on, like, old plates and stuff,
but still, you know, I personally wouldn't wear one.
Speaker 2 (58:48):
Yeah, it's an alpha emitter, and alpha
radiation can be stopped by the outer dead layers
of skin, so its ability to impact you or do
anything is super, super low risk.
Speaker 3 (59:07):
Yeah, let's talk about radiation. I mean, alpha, beta,
that's fine, but usually when we talk about
bad radiation, it's gamma or something stronger, which is,
you know, multiple particles in size, not just a single hydrogen
atom or electron.
Speaker 1 (59:23):
Yeah, so no nuclear reactors on your wrist.
Speaker 3 (59:26):
Well, I'm worried about wearing a piece of technology that
has a 5G antenna in it. So, you know, that's,
I'd have to say, a conspiracy theory, but that
is my own personal fear.
Speaker 4 (59:37):
I mean, I get that: like, we don't know,
so why take the risk? I mean, I know a
lot of people, like my sister and her husband, they
put their phones on airplane mode next
to their beds every night, or they keep their phone
across the room.
Speaker 3 (59:48):
I think outside... like, I think it's actually the heat
that's worse than the radio. Yeah, or in your pocket.
I think there's been a bunch of research done, so, like,
keeping it outside your bed is for sure better there. Well,
you seem like you know about radiation and watches.
Speaker 1 (01:00:07):
A little bit.
Speaker 2 (01:00:08):
You know, I'm a former nuclear engineer in the Navy,
so I studied a bit about radiation.
Speaker 3 (01:00:15):
What's his, what's his favorite kind of reactor? Is it,
like, the thorium, the yellowcake uranium, light water, right?
It's got to be... That was one of the
interesting things, because when I went through nuclear power school,
it was just after the Chernobyl incident, and so,
(01:00:38):
like, the US was parading around the fact that we
use all water-based coolant, or water-cooled reactors. And
the interesting thing about that is, as the water
gets heated up, the density... and this is, we're going
back, like... we're going back to the late eighties
(01:00:59):
for me to remember this, so I'm likely going to botch
most of these facts. But as the water heated up,
the atoms got closer together, so even though the radioactivity
of the nuclear reactor was increasing, the increased density of
the water molecules effectively shunted that increase in radiation.
(01:01:20):
So it was almost impossible for a water-cooled reactor
to overheat the way that Chernobyl did, because Chernobyl used
liquid sodium as their coolant. And then Three Mile Island
and Fukushima...
Speaker 2 (01:01:35):
Yeah, and Three Mile Island was great, because they got
the first alert telling them that there was a problem, assumed
that it was a faulty sensor, and then they got
the second alert, which was downstream from that one, and
they were like, damn, we got two sensors that failed today.
And then it finally started spewing out into the river,
(01:01:58):
and they're.
Speaker 3 (01:01:59):
Like, oh, oh, oh.
Speaker 2 (01:02:01):
Shit, we're going to fill out some serious paperwork on
this one.
Speaker 4 (01:02:06):
Oh man, much worse than software, you know, ignoring an
alert that pops up, right?
Speaker 3 (01:02:11):
You just got to put the MCP server in front
of the nuclear reactor, and, problem solved, right? You
Speaker 1 (01:02:16):
Know, solves everything.
Speaker 3 (01:02:21):
I'm surprised. I also... I didn't
go as far as you did, but I definitely
got the book, and I'm pretty sure that in my
nuclear physics book there was no mention of MCP servers
anywhere in there, and it's clearly an oversight. Clearly an oversight.
So, what's your pick?
Speaker 2 (01:02:39):
So my pick, this is a repeat pick for me,
because I'm still pissed at you over this, Warren. I'm
working my way through the Dungeon Crawler Carl series, because
every book is so great, and I've pieced together at
this point that there's, I think, seven books in
this series, right?
Speaker 1 (01:02:59):
Starting.
Speaker 3 (01:03:00):
I don't know why you're mad at me, because it
was Matt Lee that brought up Dungeon Crawler Carl,
and I haven't read it yet, so you can be
mad at me all you want, but I don't know
what I don't know.
Speaker 2 (01:03:10):
Then I flipped that conversation in my head, because I
remember it as you bringing it up. But okay. I
haven't read it.
Speaker 3 (01:03:18):
It's now on my list, and, you know, it's
always great when there are multiple books in a series,
so, you know, every single time you bring this up,
I'm like, okay, a good reminder that this is going
to have to be the next thing that I read.
Speaker 2 (01:03:29):
The disappointing thing is I'm five books into it now,
and there's seven books in the series. But I'm starting
to piece together the picture that the story's not going
to be complete by the time I finish book seven,
and then I'm going to be stuck waiting for him
to write the rest of the damn books so that
I get closure on this whole story.
Speaker 3 (01:03:50):
Aren't you taking solace in the fact that you'll probably...
you know, given your years of experience, you may
forget what happened and then go back and reread the
books and, you know, relive that greatness all over again
when the next one comes out?
Speaker 2 (01:04:02):
Late career talent, yeah, yeah. Or just be dead by
the time it happens, and it's not my problem anymore
either way. Cool. All right, Gil, thank you
so much, man. This has been super insightful. I appreciate
your insights and you taking the time to join us today.
Speaker 1 (01:04:21):
Awesome. Thank you so much for having me. Really enjoyed
the conversation. This was great.
Speaker 2 (01:04:25):
Yeah, Warren, thank you as always for joining me here
and carrying the conversation whenever I space out. And all
our listeners, thank you guys for listening. Be sure to
hit us up if there's anything you want to see
elaborated on: comments, thoughts, feedback, smart-ass jabs.
Speaker 1 (01:04:44):
It's all good.
Speaker 2 (01:04:45):
Bring it on and we'll see everyone next week.