Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
Hello everybody, welcome to another exciting episode of JavaScript Jabber. I am Steve Edwards, the host with the face for radio, as you can see, and the voice for being a mime, but I'm still your host. With me today on the panel, we have Mister Dan Shapiro, coming live from Tel Aviv, Israel. How you doing, Dan?
Speaker 2 (00:22):
Well, you know, it's hot here. Surprise, surprise.
Speaker 1 (00:25):
Yeah, we're going back and forth here. It was pretty warm the last couple of weeks, and now it's a little cool and cloudy, and it's going to be pretty mellow, so pretty nice as we're getting to the end of August. I'm going to let Dan introduce our special returning guests. So go for it, Dan.
Speaker 2 (00:39):
Yeah. So I'm really excited about today's guests. We have two of them, which means it's double the pleasure, and they are both a living testament to the fact that you don't need to work at Google or at Meta or at some other FAANG in order to make a huge impact on web development. So we have Ryan Carniato
(01:01):
and we have Tanner Linsley. Hi guys, want to introduce yourselves, in case somebody somehow doesn't know who you are?
Speaker 1 (01:08):
Oh sorry, I hit the applause button. Okay, I've got to get that crowd to control themselves. Lots of fun.
Speaker 4 (01:15):
Yeah, I'm Ryan Carniato, probably most known for creating SolidJS and my work on Signals — "CEO of Signals." And, Tanner?
Speaker 3 (01:25):
I'm the TanStack guy.
Speaker 2 (01:26):
And for those who don't happen to know, TanStack is also where React Query came from.
Speaker 3 (01:32):
Now it's TanStack Query, right. React Query — even I still call it that, which is fine.
Speaker 2 (01:38):
And also lots of other great tools. You've recently introduced
a couple.
Speaker 3 (01:43):
Yeah, we have just a couple. We're working on a new framework inspired by SolidStart, called TanStack Start. We have a router. We have a new library called Pacer, which is about, like, debouncing, throttling, rate-limiting stuff. We
(02:03):
have the controversially named TanStack DB, which is a reactive client store. I mean, it's a database, but it's only for, like, the client, on the front end. It's pretty sweet, guys — except that it isn't, right? Yeah, it's an awesome library. That's all you need to know.
(02:25):
And we've got, like, Table and Form as well, and Virtual. We've got a couple. We're even working on something called TanStack Devtools right now, which is — oh yeah, about that.
Speaker 2 (02:37):
We will have to bring either you or somebody else on the team on to talk about that — maybe somebody different than you guys already, huh. And I have to also mention Create TanStack Start, or whatever Jack calls it these days.
Speaker 3 (02:53):
Whatever we're gonna call it — it might even just be called create-tanstack, something like that.
Speaker 2 (02:58):
For those of you who don't know what that is: we actually had Jack Harrington on the show to talk about it. It's an awesome replacement for Create React App, at least.
Speaker 3 (03:08):
Yeah, we have some fun plans for it.
Speaker 1 (03:10):
So there's TanStack Ranger — I'm looking at the list.
Speaker 3 (03:13):
Ranger is kind of a throwback to my old open-source days. It's... I needed a library to do multi-range inputs — so, like, sliders with multiple handles. I made a headless version of it and called it Ranger, which is kind of fun.
Speaker 1 (03:34):
See, I'm thinking, like, Park Ranger and, uh, you know, Jellystone Park — I remember that.
Speaker 3 (03:42):
Yeah, it's probably the most niche library that we have, which is cool. I still use it every now and then.
Speaker 2 (03:51):
And one more thing about you, Tanner — correct me if I'm wrong, but you've now transitioned to working full time on your open source stuff?
Speaker 3 (04:00):
Right, yeah, about a year and a half ago.
Speaker 2 (04:03):
That is awesome. So basically, you're the guy not making a living so the rest of us can.
Speaker 3 (04:10):
I mean as of nine months ago, I'm making a
good living now.
Speaker 1 (04:17):
So yeah, so what is your business model for that?
Speaker 3 (04:19):
Then? Well, it's different than most others, I think. So, I abhor VC around OSS. I've done venture capital for my last company, and I've also bootstrapped plenty of things. I didn't want to do that for TanStack. So
(04:40):
TanStack is basically run like a media company right now, on, like, the business side of things — so marketing, advertising, partnerships, brand deals and the like. And that's how TanStack is making money and paying both me and also now
(05:01):
contributing a lot towards contributors as well. And that's just going to grow over time, hopefully.
Speaker 2 (05:08):
Does that mean running ads within your website, within your
products or something else?
Speaker 3 (05:15):
Yeah. So tanstack.com is the revenue generator, and the libraries are just guests on the tanstack.com show every day.
Speaker 2 (05:28):
That's interesting. Yeah, I'm really happy that it's working for you.
Speaker 3 (05:33):
Yeah, it's working so far, so we'll see. I don't know if it's scalable or not, but it works for me, and I think that's good enough for now.
Speaker 2 (05:43):
And how about you, Ryan, how are things with you?
Speaker 1 (05:46):
Pretty good?
Speaker 4 (05:47):
I've been at Sentry now for almost a year, so — okay, yeah, like, this week coming up marks a year at Sentry. They've been great; they've been able to obviously support my, uh, R&D. I've been doing a lot this year, because I've been working in a lot of new areas, kind of for upcoming releases of Solid, and they've been
(06:10):
supporting me in lots of other ways. I've been getting closer with the Syntax.fm guys.
Speaker 1 (06:15):
I was gonna ask if you worked with them.
Speaker 4 (06:17):
Yeah, So we've been talking about how to up my
game on video content and different kinds of media stuff
like better promotion.
Speaker 2 (06:25):
So I'm guessing they told you you need to make
longer videos.
Speaker 1 (06:30):
Yeah, that's funny, No, they were.
Speaker 4 (06:34):
Everyone's been really stoked on those shorter videos that I've been doing recently. They said they were, like, a little crunched, could use some more production value. But generally speaking, I usually stream for, like, five hours-ish, you know, maybe every other Friday or so. And I started in the last couple of months doing, like, ten-minute videos, and for me, doing ten-minute videos is very hard.
Speaker 1 (06:55):
It's condensed.
Speaker 4 (06:56):
It's like — because I pick really deep topics, but I just needed a place to point people to. When people are, like, you know, "signals, blah blah blah blah blah," you know, like, complaints — "I'm a React developer, I don't get why they're important" — it's just like, okay, well, can I make an FAQ, basically, as a video? And that's sort of what I've been trying to do: find content that's short enough that people will not, like, balk
(07:18):
at the whole length of the video, but then still get a lot of unique content in there. It's difficult because my expertise is, as I said, very, very deep. You don't really find content like I do in most video content. Like, you guys sometimes get really deep on your podcast, which is why I love this podcast. But
(07:38):
a lot of video content isn't the same sort of thing. You know, it's either, like, "how do you do this?" or it's, like, the most beginner intro-level stuff. So being able to ask the whys, I think, is really, really important, and, like, looking at the design stuff. So I'm going to try and bridge that gap — you know, maybe pick smaller topics that I can actually talk about,
(08:00):
you know, in seven to ten minutes.
Speaker 2 (08:03):
So you kind of remind me of that quote — I think it's Mark Twain who said, "I didn't have time to write you a short letter, so I wrote you a long one." So yeah, it's exactly that: it's much more challenging to create short, concise, and to-the-point content.
Speaker 4 (08:21):
Yeah, especially if you spend all your time, like, really just kind of exploring ideas and stuff, you know, and taking stuff every which way. To actually come up with a consolidated message is way more work. I can stream for five hours and then I'm, like, mostly done; the video is going to take me at least two or three days.
Speaker 2 (08:39):
You know, that's for sure. Well, let's put it this way: I think that once you get the hang of it, you'll be able to do it much more effectively. Maybe you should also consult about it with Theo.
Speaker 3 (08:51):
Yeah.
Speaker 4 (08:51):
Yeah, he's been waiting for me to, like, level up on that. He was like, yeah, just keep on doing what you're doing, but when it's time, come see me, you know — like, better gear, better stuff, you know.
Speaker 1 (09:03):
So yeah, yeah — I did one video for Vue Mastery a couple of years ago, and the amount of work it takes just to put together simple videos is crazy. Whether it's recording — I would assume that once you get your setup in place, you know, your software and your different artwork components and your
(09:23):
speakers and all that stuff, and you know what you're doing, it's probably a lot quicker. But that's got to be a learning curve, just in getting set up to do video production like that.
Speaker 4 (09:33):
Yeah, and I like editing stuff in. I'm a big fan of Fireship videos, and even my articles are like that: I've always done a lot of meme pictures and, like, images to back up what I'm saying — yes, to keep it going. Even my conference talks are like that too. So I find the video editing actually takes a long time, because I need to find just the perfect clip or the perfect, like, image to kind of overlay
(09:57):
on the different parts. So it's, like, almost on, like, a twenty-second cadence. So, like, you know, a twenty-minute video —
Speaker 1 (10:05):
You know, that's a lot of clips.
Speaker 3 (10:07):
Well, just let AI do it for you.
Speaker 1 (10:10):
Well, you know, I have the same struggle with always trying to find the perfect dad joke to publish on a daily basis, and to throw in here and there, so I can, you know —
Speaker 2 (10:18):
They're all perfect.
Speaker 1 (10:20):
Oh boy, I got to tell you some stories about
how they're not always, but that's a topic for another time.
Speaker 2 (10:25):
Yeah, I think we need to finally get to the crux — to the main issue.
Speaker 1 (10:31):
We had a topic for today? An actual topic that we're here to discuss? Gotcha.
Speaker 2 (10:34):
So again, I'll give a brief background. A while back, I had this idea of — well, let's go even further back. I started noticing a certain pattern that was occurring in a lot of frameworks, especially as they were becoming meta-frameworks, and I termed it the
(10:56):
comeback of RPC, and we'll talk about what RPC is and why it's making this comeback now. But when I noticed that, it occurred to me that this could be an interesting topic for a conference talk, which I created and already gave once at JSHeroes in Romania. But
I reached out to a lot of my friends — people
(11:17):
like Ryan, like Tanner, like Rich Harris and others, and Jack Harrington, whom we mentioned — and started talking with them about this whole thing. And I realized that a lot of them had very different viewpoints about what RPC is, how it should or should not be used,
(11:41):
why it's interesting, or maybe not. And what this ended up as is this conversation here. I basically asked both Ryan and Tanner if they'd like to come on the show and talk about this. So, you know, take it away, guys.
Speaker 4 (11:57):
Yeah, I'll start, because I feel like this starts with stuff that I was doing. But to be fair, this is stuff that Tanner definitely relates to right off the bat. Because the first thing you hit when you're writing a meta-framework — like, the most obvious thing — is actually that meta-frameworks are centered around routing, right, and the router. And
(12:19):
I know this has nothing to do with RPC right off the bat, but, like —
Speaker 2 (12:23):
Maybe we should define RPC before we dive into it.
Speaker 4 (12:26):
Yeah, okay, sure. A remote procedure call: basically, this idea that you can, in different environments, call a function from the other environment.
Speaker 1 (12:36):
Because I thought it was a really popular comedy for a moment. Glad you clarified.
Speaker 2 (12:40):
I like to use a slightly different definition. The way that I like to define it is basically as a method for invoking a service, or obtaining data from another process — even on another computer, across the network — using a function call paradigm. Are you okay with this definition? Yes. So
(13:04):
basically, it looks like a function call, it feels like a function call, it's as easy to use, as natural to use, as a function call — but instead of just actually being a function call, it translates to an operation across the network.
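The definition above can be sketched in a few lines of TypeScript. All names here are invented for illustration, and the "network" is an in-memory round trip standing in for a real fetch: the caller writes what looks like a local async function call, while a stub underneath serializes the arguments, ships them across a transport, and deserializes the result.

```typescript
// "Server side": the real implementation, living in another environment.
const handlers: Record<string, (...args: any[]) => unknown> = {
  getUser: (id: number) => ({ id, name: `user-${id}` }),
};

// Stand-in for the network: in a real app this would be fetch() hitting
// an HTTP endpoint; here it's an in-memory round trip with JSON on the wire.
async function transport(payload: string): Promise<string> {
  const { fn, args } = JSON.parse(payload);
  return JSON.stringify(handlers[fn](...args));
}

// "Client side": a stub that looks and feels like a local function call,
// but actually translates into an operation across the wire.
async function getUser(id: number): Promise<{ id: number; name: string }> {
  const response = await transport(JSON.stringify({ fn: "getUser", args: [id] }));
  return JSON.parse(response);
}
```

From the call site, `await getUser(42)` is indistinguishable from a local call; only the stub knows a network is involved.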
Speaker 1 (13:19):
Yeah.
Speaker 4 (13:19):
The funniest thing is, like, my first introduction to RPCs didn't feel as easy or natural to use as function calls.
Speaker 1 (13:25):
I was using a.
Speaker 4 (13:26):
like, SOAP in Microsoft stuff back in the early two-thousands. And if anyone's used that, it's, like, all XML-based, and they get categorized under RPC. But, like, in my head, when someone says RPC to me, traditionally that's what I thought of. I thought of these, like, XML endpoints from, like, Microsoft Server from, like, two
(13:50):
thousand and two.
Speaker 1 (13:52):
Memories.
Speaker 2 (13:53):
Yeah, I'm a bit of a history buff. So it's interesting, because it was that period in time where RPC was effectively dying. So RPC — the term was coined in the eighties. It became really popular in the nineties with the native platforms for it, like CORBA, or DCOM
(14:18):
from Microsoft. And then the Web happened, which kind of killed it. SOAP was an attempt to somehow continue to do RPC despite the Web — and we can talk about why the Web killed RPC, for a while at least. And now it's making a comeback in a much more natural way, which is in a lot of ways very
(14:40):
reminiscent of how it was in the nineties, but compatible with the modern technologies that we use nowadays.
Speaker 4 (14:48):
Right. Yeah, yeah, I mean, that sounds about right. Like, no one was touching RPC after that early-to-mid-two-thousands period until maybe, like, twenty twenty.
Speaker 1 (14:58):
It just wasn't there, like, in the web technology. But —
Speaker 4 (15:03):
I mean, yes, obviously — when we started designing these server-rendered meta-frameworks, where we actually have the server and the client being developed in the same place, you start going, like, well, if I control both sides, why not, like, make that pipe easier?
Speaker 1 (15:23):
But I mean that was the motivation.
Speaker 4 (15:24):
The thing for me personally, when I started looking at this stuff, was — I was aware of the SOAP stuff, you know, and that. But I was just trying to, like — I was looking at other meta-frameworks. This is the beginning of SolidStart, like, twenty twenty-one, actually before I even released Solid 1.0. And I was looking at other meta-frameworks a bit. I was looking at Next
.js and, you know, Sapper from Svelte, and Nuxt, and they all had these, like, API endpoints — like, where you'd do this file-based routing system with API endpoints. And the thing that
was weird for me is, I used to be a Rails guy for a bit. So, like, you would define,
(16:07):
like — in the file system, you'd define... I guess it wasn't a file system, it was auto-generated, a filesystem, but whatever.
Speaker 4 (16:13):
You'd skeleton out these, like, models, essentially, and API endpoints, and they would describe the whole collection — the whole model of things. And the file-system approach a lot of these APIs had taken was kind of — it felt weird to me, because you'd have, like, a different file for, like, the general groups versus, like, group
(16:35):
ID. Like, you didn't put all the CRUD commands together. It would, like, separate based on the URL path, not based on the conceptual model, if that makes any sense. Like, instead of talking about groups as a single thing, you would have, like, groups and groups-slash-ID. Like, it's a funny thing.
Speaker 4 (16:53):
But I was looking at these API shapes, and I was like, this is such a hassle, to, like, build out these files every time.
Speaker 1 (17:01):
Then I saw Remix. I think this was, for me —
Speaker 4 (17:04):
They did their big reveal in spring twenty twenty-one, and when I saw their loaders and actions, and the way that they doubled it up on the API side, I'm like, this is kind of nice. It's, like, turned into a single approach. Because what was kind of awkward was that people would build their app in one folder, and then they'd build their API in another folder, and it was getting kind of clunky. You'd
(17:26):
do everything twice, kind of connecting it up. And they were like, no, you can just kind of stick it all in the same place, follow the same kind of pattern.
The challenge I had with Remix is — because I've been working on routing for a while, and nested routing, with, you know, parallel data fetching and all these things — the Remix loader basically said that this part of the
(17:47):
URL loads this data, and that wasn't granular enough for me. I was working on SolidJS, which is based on Signals. So, as you can imagine, there are multiple different data sources. You want them to update independently, right? You want to be able to refresh them independently. You only want to update certain parts of the page. Treating the whole page like it was a single data flow was not particularly
(18:10):
nice for me. So I was like, okay, can I take my kind of route loaders — which, they're not really loaders, they're isomorphic, they run on both sides — these kind of nested-route areas where you can fetch data, and, instead of, you know, hitting API calls like we used to, instead of using one single loader, can we make it more ergonomic to define individual pieces
(18:32):
of data you can fetch? And when I was doing the client side, I'm like, I love this. You know, I can put, you know, Solid Query — this is, like... I'm getting off track in the conversation for a second — but, like, I can put, like, my different data-fetching things inside this loader, and I can manage them separately. If I'm on this page, I can grab users and their posts, and have
(18:53):
multiple things, and call them and invalidate them separately. And I'm like, this is a great pattern. It works on the server, works on the client. The only problem is you have to call these extra API endpoints.
I started working on a solution, which was, what if
I could use the file system in a nicer way,
(19:13):
like better than the API in points, where instead of
being quite as bulky and like having these URLs, I
could use like a proxy or something like where you
could maybe make some types for an object, and based
on what path you go through the object, it would
construct the URL on the client and then know what
endpoint to hit on the server. And I kind of
put this together and I call them actions after remixes actions.
(19:35):
And the difference is they weren't really exactly actions, or
they were both actions and loaders. You could basically hit
these URLs both on data fetching and on mutation, and
you could call them They're just like autogeneratid APIs. And
I remember I put that out around April. I made
some demos around April twenty twenty one, but then Solid
one point zero needed to get released, and I put
(19:55):
it aside, and I basically was like showing people and
talk like how cool this was going to be, but
you know, people just like, oh, yeah, yeah, whatever, Ryan,
and and then and then it was almost a year later.
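The Proxy trick Ryan describes can be sketched like this — an illustrative toy, not the actual SolidStart code: each property access extends a path, and the final call turns that path into an endpoint URL, hiding the fetch entirely.

```typescript
// Recursive proxy: api.users.get(1) builds the path ["users", "get"]
// and "calls" the endpoint /api/users/get. In a real framework, `send`
// would be a fetch(); here it just records what it would have requested.
type RpcNode = any; // each property access returns another proxy

function createRpcProxy(
  send: (url: string, args: unknown[]) => Promise<unknown>,
  path: string[] = [],
): RpcNode {
  return new Proxy(function () {}, {
    get(_target, prop) {
      // Property access: extend the path and return a deeper proxy.
      return createRpcProxy(send, [...path, String(prop)]);
    },
    apply(_target, _thisArg, args) {
      // Call: the accumulated path becomes the URL.
      return send("/api/" + path.join("/"), args);
    },
  });
}

// Record the URLs instead of fetching, so the sketch is self-contained.
const requested: string[] = [];
const api = createRpcProxy(async (url, args) => {
  requested.push(url);
  return { url, args };
});
```

`api.users.get(1)` resolves to a call against `/api/users/get` without any hand-written fetch code — the proxy wraps all the details of the fetch away from you, as described above.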
Speaker 2 (20:08):
To be fair, half of us often don't understand what you're talking about.
Speaker 1 (20:13):
That's perfectly fair. But some people did.
Speaker 4 (20:17):
And I was talking to someone in December, like, around Christmas time, and they're like, oh, isn't that what Blitz.js is? I'm like, Blitz — isn't that a Next.js thing? And I looked at it, and sure enough, that is actually what Blitz was. I basically independently came up with something similar to what Blitz was.
Speaker 2 (20:34):
Which is — Blitz? What is it?
Speaker 3 (20:37):
It was?
Speaker 4 (20:37):
It was kind of like a meta-framework on top of Next. I thought it was like Redwood — like, it was one of those more opinionated meta-frameworks on top of Next. But eventually they just became, like, a Next plugin and kind of downscaled it. Originally, though, it was, like, a separate product: you'd install Next, you'd install Blitz, and it
(20:58):
was a React meta-framework, essentially. But its cool feature was that it had these basically kind of generated API functions, in the same kind of way that I was describing, where you could just kind of put these basically API-call things in a file-folder structure and then call them directly, without, like, the fetch and the ceremony around
(21:21):
that. You could just call them basically as functions, via, like, a proxy, like I was talking about. Because the cool thing about the proxy approach is you can have the proxy return whatever you want, so it just wraps the details of the fetch away from you. So it is kind of the beginning of this RPC thing, because you still have the API endpoint, where you have file-system routing telling you, like, basically where these functions are.
(21:43):
But then you could use the path on the proxy, like getUsers.get or whatever, to actually construct the URL and obscure the fetching. And I know this is kind of crazy history, you know, beyond where we ended up, but it was important for me to get there. Because when I went back to SolidStart, after playing with this again — by this time I had met Nikhil Saraf, who was kind of
(22:04):
responsible for a lot of stuff in the Solid ecosystem. He did a lot of the heavy lifting on the original SolidStart release, and he also created Vinxi, which we used to actually base SolidStart and TanStack Start on initially. And he was like, man, Vite is amazing. We don't need to use all this proxy stuff. We can just use a compiler and split these things out — like, even
(22:27):
in the middle of files, like, it doesn't matter. He started doing crazy stuff where he was analyzing things further down the tree and hoisting them automatically, and doing, like, all this stuff. And I'm like, this is crazy, I can't even wrap my head around what's happening. But he's like, look, as long as we have a symbol —
(22:47):
like, some way of saying "this is a server function call" — we don't need to use the file system. We can just go, like, "this is a server function" or whatever, and we pass the function to it, and now that function will always be called on the server.
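What's described here can be approximated at runtime in a toy sketch (the name `server$` and everything else below are illustrative; the real frameworks do this split at build time with Vite, leaving the body in the server bundle and swapping a fetch stub into the client bundle):

```typescript
// Registry of "server-only" implementations, keyed by a stable id the
// compiler would normally generate at build time.
const serverRegistry = new Map<string, (...args: any[]) => Promise<any>>();
let fnCounter = 0;

// server$: the marker symbol. The original body goes into the server
// "bundle" (the registry); the caller gets back a client stub with the
// SAME type signature — which is why types flow across the boundary
// for free, with no schema to write.
function server$<Args extends unknown[], Result>(
  fn: (...args: Args) => Promise<Result>,
): (...args: Args) => Promise<Result> {
  const id = `serverFn_${fnCounter++}`;
  serverRegistry.set(id, fn as (...args: any[]) => Promise<any>);
  return async (...args: Args) => {
    // Stand-in for a POST to /_server/<id>: JSON round trip over the "wire".
    const wireArgs = JSON.parse(JSON.stringify(args));
    const result = await serverRegistry.get(id)!(...wireArgs);
    return JSON.parse(JSON.stringify(result)) as Result;
  };
}

// Always runs "on the server", but is called like a local function,
// and TypeScript still knows it takes a string and returns a string.
const shoutName = server$(async (name: string) => name.toUpperCase());
```

Because both "bundles" live in one process here, the flow is visible end to end; in a real build, only the stub would ship to the client.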
Speaker 2 (23:00):
This is where you guys invented Bling, I think you called it back then, right?
Speaker 1 (23:04):
Yes, almost.
Speaker 4 (23:06):
Bling would come very soon — soon when we brought Tanner in. On that: I actually presented our original version of this at ViteConf, the first ViteConf, back in September twenty twenty-two. And that's when people were like, oh, this is actually kind of cool, you know, because it had types. Like, when you start having functions, and you
(23:26):
can write them like functions, where you can just import them, the types match up.
Speaker 2 (23:32):
Yeah. And I'll get to that, because it brings up an interesting point, from my perspective, about why RPC is so attractive and what changed. Because we talked about how the Web kind of killed RPC, so it's interesting to talk about how RPC was able to make this kind of comeback. But before we go there — Tanner, I really want to
(23:53):
bring you into the conversation, finally.
Speaker 4 (23:56):
So, I mean, it is basically almost at that point, because Tanner is really into types. So after I demoed this stuff and the type stuff, what came out in the world — we realized, like, what's cool is you can literally write the function as-is. You don't — you know, with a lot of typed things, like GraphQL or whatever, you'd have to write these schemas — whereas these types, inherent off the functions, simply from importing them, automatically transferred.
(24:19):
You didn't have to do anything you wouldn't be doing usually for typing. So once we basically realized that, you know, I started kind of publishing it out there and talking to other people. I was like, man, I still couldn't get over how magical it felt. And then I looked over at Qwik, and I was like,
(24:40):
maybe we just need a better symbol. So I was like, let's add a dollar sign on it. And I actually told the Qwik guys — like, they didn't know I did this. I was talking to them at a conference that fall, and I was like, man, I think Qwik could add this: I honestly stole your guys's dollar sign, just because it's, like, a good indicator.
Speaker 1 (24:58):
And I'm like, we can just — it's just the reverse of what Qwik does.
Speaker 4 (25:01):
Qwik was really good at pulling out these little pieces on the client; this is, like, pulling out these little pieces for the server. It's just, like, inverse Qwik. And they added it to their meta-framework within a month after that conversation — like, they were just like, yeah, we already have everything here, and they did it almost immediately. And as I said, that's when Tanner got involved, around that point. Oh, actually, it might have been even a little bit later;
(25:23):
it's around when Next and React, like, saw this stuff too, and they came up with, like, "use server." But it was specific for their RPCs, for their server-function communication.
Speaker 2 (25:38):
Initially, they actually had it in the file names, if you recall — the .server.ts and .client.ts.
Speaker 3 (25:46):
That was the catalyst for me right there.
Speaker 4 (25:48):
Yeah, I mean, there was, like, a subtle difference to it, because technically server functions start on the server, and there's, like, a naming convention where they're like, okay, these are server, these are client, and these are shared. But then, like, it's a different scenario, because the server stuff can be done all in a single pass — like, in a single go. When they
(26:11):
originally came up with that, that was to denote — sorry — server components, not necessarily server functions. Right? Or —
Speaker 2 (26:19):
Like, they didn't have an alternative. "Use client" came as an alternative to the .server and .client file names. They didn't have anything for "use server," which came around
Speaker 4 (26:33):
later, right, exactly. So they started proposing this stuff, and, I mean, yeah, Tanner can jump in at this point, but I think he saw what they were doing, and it was just like, what the hell is this?
Speaker 3 (26:43):
I didn't — I mean, I thought it was neat. I was more interested in what they were doing under the hood with, like, the compilation — what was actually happening to the build, and transforming the AST and whatnot. But I did not really like the approach to, like,
(27:04):
"use client" / "use server," at least for, like, RPC. I looked at it, and I was excited about, like, oh, I can just throw this token in there and all of a sudden create an RPC — and, like, that part's cool. But then all these questions started bubbling up, about, like, okay,
(27:24):
clearly they don't believe that, like, configuring an RPC is as simple as one string. Like, that's ridiculous.
Speaker 2 (27:33):
So I want to touch on that briefly — again, to give a little bit of context to our listeners. So we talked about the fact that RPC is about invoking a function somewhere else as if it's a local function, and specifically, in the context of full-stack frameworks,
(27:54):
it's basically invoking a server-side function from client-side code, using the function call paradigm. So it looks and feels like a function. Now, in order to be able to do that, you kind of need to have, from my perspective, three things. First of all, you need to have type safety — otherwise, what's the point? And going back
(28:17):
to your point, Ryan, about when it disappeared: obviously, it only came back into being after TypeScript really went mainstream, because before TypeScript, the whole typing thing was irrelevant. And the other thing that you need is to be able to simulate a blocking call to the remote function. You
(28:41):
want it to feel like a local function, which means you don't get the execution back until that remote function finishes and provides a value. And we only got that after we got async/await.
Speaker 4 (28:56):
Basically, yeah. Yeah, I mean — and native support of promises. But, more specific to this conversation, when frameworks started having tools to do this kind of thing, you know.
Speaker 2 (29:06):
And which brings me back to the final point: ideally — not necessarily, but ideally — you also want what you mentioned about Vite: the ability to effectively instrument the code. Basically, you write something that looks like a function call, identify that this code is going to be running on the client,
(29:26):
this code is going to be running on the server, and generate all the magic code that makes this thing work. Basically, it's a stub on the client and on the server that handles the whole networking layer underneath that function call.
Speaker 3 (29:43):
Now I'm mentioning that you don't.
Speaker 2 (29:45):
necessarily have to have it, because tRPC does without it — which makes it less magical, and also less fun, potentially. Going back — I want to give Tanner... I basically stole the mic from Tanner. So, Tanner, you said that
(30:08):
you looked at the fact that they introduced this kind of magic string and all of a sudden it kind of worked, and you had thoughts about that.
Speaker 3 (30:18):
Yeah — because, like, details matter. Eventually, details matter a lot. Like, I think what you see is: you write "use server" in a function, and that's really magical. And it's magical both in the sense that
(30:40):
it's doing a lot under the hood for you, but it's also a magical experience for most, like, basic operations or basic use cases. It's so simple — it's deceptively simple — that it's like, oh, that's all it takes, right? You just put "use server" in there. But there are so many details that bubble up that you cannot get
(31:05):
away from the fact that this is happening over the network. There are implementation details there that matter sometimes — for example, yeah: caching, protocol, methods, headers, authentication. There's,
(31:27):
like, edge-case flexibility that needs to be inherently in the system, to allow you to do things that you maybe don't do ninety percent of the time, but will need to do. One of the basic ones was: "use server" defaults to a POST — sorry, not defaults,
(31:48):
it is a POST request, in, like, Next.js and React's kind of spec, right?
Speaker 2 (31:56):
You also don't have any control, like you said, either over the endpoint or the protocol.
Speaker 3 (32:01):
No, or headers or anything like that. You can kind of get at some of the implementation, but only at, like, a global place. But if you wanted to customize any of that other stuff, you can't really do that, and you have to break out of the system entirely. And I didn't like that, because immediately I was like, well, I want to use this with React Query, and that's
(32:24):
great for caching. But wouldn't it be great if the browser could actually just cache this GET request that I'm making? It's totally capable of doing that. And then there's questions of, well, how is that URL formulated in a way that's going to be stable for the browser to cache? And there's lots of questions that came up
(32:44):
for me around that, and I just knew at one point we were going to need, ninety percent of the time, the same experience as just 'use server'. Just create a function, give it some kind of a symbol, a dollar sign or something, that would make it easy for the bulk of the use cases, and then have at least the ability, or the option, to expand and
(33:09):
make that configuration advanced. I just don't understand when people create APIs that cannot be ripped away easily or, like, dug into easily. I don't understand it. And these directives of 'use server' or 'use cache' or whatever are, like, the biggest offenders of that principle for me.
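To make the contrast concrete, here is a hedged TypeScript sketch of the kind of API being described: instead of a bare 'use server' directive hiding the transport, the RPC is created by an explicit factory whose per-function transport details (method, headers) are configurable. The names `createServerFn`, `ServerFnConfig`, and `getUser` are illustrative, not any framework's exact API, and the "network" here is simulated by calling the handler directly.

```typescript
// Hypothetical sketch: an explicit server-function factory instead of a
// "use server" directive. Transport config lives on the function itself.
type ServerFnConfig = {
  method?: 'GET' | 'POST';          // GET lets the browser HTTP-cache the call
  headers?: Record<string, string>; // per-function headers, not a global hook
};

function createServerFn<In, Out>(
  config: ServerFnConfig,
  handler: (input: In) => Out | Promise<Out>,
) {
  // In a real bundler-aware build, the client half would serialize `input`,
  // issue a fetch with `config.method`/`config.headers`, and deserialize the
  // response. Here we just invoke the handler directly to show the shape.
  return Object.assign(
    async (input: In): Promise<Out> => handler(input),
    { config: { method: config.method ?? 'POST', headers: config.headers ?? {} } },
  );
}

// Usage: a GET-able RPC whose transport details are inspectable.
const getUser = createServerFn({ method: 'GET' }, (id: number) => ({
  id,
  name: `user-${id}`,
}));
```

The point of the sketch is only that the escape hatch (method, headers) is part of the function's definition rather than something you break out of the system to reach.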
Speaker 4 (33:28):
So it's funny, because as you guys may not know, I actually started with server functions. Solid uses 'use server' today for compatibility with React, or, like, conceptually it's there. Yes, wrapper functions are.
Speaker 3 (33:45):
By the way, like intense sex start you can write
use server ye and you'll get the same experience.
Speaker 4 (33:52):
But yeah, like, wrapper functions are, as Tanner said, almost necessary for configuration. And the positive, at least, is that because this configuration comes from the client, wrapping is definitely something you can do. Like, you can compose these kinds of pieces on top of server functions,
(34:13):
because the server part is on the server. The 'use server' represents the split between, like, this goes on the server, this goes on the client.
Speaker 3 (34:23):
So it's basically.
Speaker 2 (34:26):
One of the things that kind of enabled this, I'll give you the mic in a second, I'm just mentioning that one of the things that enabled this whole thing, like you mentioned when we talked about Vite, is that smart bundler that's analyzing the code, sees it, and then knows to generate different code for the server and different code for the client from the same source code. Right?
(34:49):
That's the sweet magic.
Speaker 4 (34:50):
We should talk about Vite here for a little bit more, because the thing is, at this point Tanner and I are talking to Nikhil, and we're like, every framework could use this.
Speaker 1 (35:00):
Why just use this.
Speaker 4 (35:01):
only in Solid? Why have, you know, Qwik make their own version? Why have Next make their own version? And Tanner was like, okay, let's just make a generic plugin for this, right? So that's where Bling came about. And then Nikhil picked it up and he was like, let's make this generic.
Speaker 1 (35:22):
The thing is.
Speaker 4 (35:23):
Bling didn't quite survive in the same form, because what we realized was, Bling was basically, was it a Babel plugin? It was basically a transform. It transformed the code without bundler awareness, which does work in the simple cases, but it
(35:45):
doesn't work in the more complicated cases. Mainly: how do you register these endpoints?
Speaker 3 (35:53):
Right?
Speaker 4 (35:53):
Real apps have code splitting, so you might not load all the code on the client, or you might not load all the code on the server. And what ended up happening with just doing a compiler approach was that we would sometimes navigate to a page on the client that hadn't been navigated to on the server, which means the endpoint wouldn't be registered yet.
Speaker 3 (36:15):
It didn't exist yet, so it didn't even occur to me.
Speaker 4 (36:18):
So basically, for this generic approach, it wasn't enough to just stop at Babel and a compiler. We needed something that would actually go through the whole module graph and hoist out these things, and it had to be bundler level. So that's actually why Bling died. In a sense, Tanner has remade.
Speaker 3 (36:38):
It was repurposed, right. Actually, what's funny is a lot of the code from Bling made it into Vinxi.
Speaker 1 (36:47):
Yes, yeah, yeah, I mean at the Vite.
Speaker 3 (36:50):
level, right. And then what's funny is, eventually we ripped it back out.
Speaker 4 (36:54):
Right, and the other part of Vinxi ended up going directly into Vite when Vite adopted the Environment API. Because we were building these things, and the Vite team had, I guess, their own problem with deployment that they solved with deployment environments. Like, you have Cloudflare, which is different than Netlify and Vercel, and you could build for different outputs, like Workers versus Pages, and all these
(37:15):
things people wanted to have, like, you know, server and client preserved. They want to have these different environments. Vinxi actually did that because Vite didn't. It was hard to manage these kinds of things, and.
Speaker 1 (37:29):
That's why we created it. And in the end.
Speaker 4 (37:31):
as I said, Vinxi has now kind of been dismantled, but the Environment API now exists in Vite, very similar in usage to how we used to do it in Vinxi. And we still have the server functions plugin, in the form of what's under TanStack, that's now used both by SolidStart and TanStack Start. Tanner.
Speaker 2 (37:51):
You're very polite. You raised your hand.
Speaker 3 (37:56):
Oh, I was gonna say, one of your questions went unanswered, and I wanted to answer it before we went too deep into compiler land. And it was about type safety, and that is a massive part, another massive reason, of why it all went this direction
(38:16):
for me. And it's also because, type safety, a lot of people think, well, it's just defining, you know, the types that this function can take on. But in reality, you're not just writing TypeScript types and expecting those to get extracted away into validation. Because that's
(38:39):
another part where just treating it like a function still is not totally safe. I mean, I like that we want to get to this functional syntax of just calling the function and having it be awaited, and it's really nice. But under the hood, there are these details, and you need to be aware of them, and one of them is that you're going
(39:01):
across the network. So type safety without validation is a complete lie. You have to validate anytime you go across a serialization boundary, even into the URL or local storage. Anywhere, you have to validate, or it's just a lie.
Speaker 2 (39:19):
So yeah. I'll let you continue in a second, I'm just clarifying the terminology again. It's that distinction between dev-time type safety, maybe even build-time type safety, and run-time type safety.
Speaker 3 (39:33):
Static type checking versus actual run-time validation.
Speaker 2 (39:38):
Exactly. And TypeScript very intentionally does not do any run-time type checking. So the fact that you're building on that, you're leveraging TypeScript types, is really cool. You get completion, you get squiggly lines during development. But at runtime, over the network, anything might be sent, especially if
(39:59):
it's, like, a public endpoint.
Speaker 3 (40:00):
Right. And you know, you might feel like you're just writing a function, and it might feel secure. But no one in their right mind would create a REST endpoint straight into their back end and not validate the inputs that they're getting from people. And so, the same way here,
(40:21):
you want to be validating the inputs going across these functions in a special way that isn't just, oh, it's a TypeScript type, so I'm just going to trust that I'm the only one calling it and that my system is the only one that has access to this public RPC endpoint. All this stuff is easily reverse engineered and abused.
Speaker 2 (40:42):
It's even worse, because when you're doing POSTs, like, straight-on RESTful APIs, you know that you're going to have to check the types. You're going to have to effectively cast the types.
Speaker 3 (40:55):
Yeah, because you're losing the types going into this serialization and transform mechanism, and pulling them back out on the server side, you have to do the same thing coming back out. Otherwise it's like, well, this is just a blob of whatever, you know.
Speaker 2 (41:13):
Yeah, and I forget if it was Rich Harris or Liran Tal who literally showed me an example, I think, where they used TypeScript, and instead of a certain value passed via RPC being, say, just a string, they said it needs to be, like, quote-unquote 'child' or 'teacher'. And nothing
(41:40):
at run time validates that some other string is not passed in. So TypeScript literally lies to you in this case if you're not careful.
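A minimal TypeScript illustration of that point (the 'child'/'teacher' values are paraphrased from the anecdote, and the `Role`/`greet` names are mine): the compile-time type narrows the value to two strings, but a cast, which is exactly what happens at any network boundary, lets an arbitrary string through with no runtime complaint.

```typescript
// Compile-time only: nothing enforces this union at runtime.
type Role = 'child' | 'teacher';

function greet(role: Role): string {
  return `hello, ${role}`;
}

// Simulates a request body arriving over the wire. TypeScript only sees what
// we *assert* it is, so this compiles fine, and "admin" sails straight in.
const body = JSON.parse('{"role":"admin"}') as { role: Role };
const result = greet(body.role); // runs happily with a value outside the union
```

This is the "TypeScript lies to you" failure mode: the squiggly lines vanish at exactly the boundary where validation is most needed.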
Speaker 3 (41:51):
Which is why TypeScript needs to come after that; it needs to be a byproduct. And that's exactly why we did two things. One, we adopted Standard Schema. I mean, we helped kind of push Standard Schema into reality. It was created by all the brilliant people that build these validation libraries, like Zod and ArkType
(42:16):
and stuff like that. But we needed validation, so we did validation first. And what's interesting is, with our APIs to create RPCs, we do not allow you, I mean, you can do anything you want, but the happy path of our API is that you do not get types in your server functions, in your
(42:39):
RPCs, unless you provide validation that gives you those types. We don't have a generic API to just cast the types coming back with just TypeScript, because we think it's unsafe. So that is the happy path. You have to use a Zod schema
(43:00):
or an ArkType schema or whatever to get that validation and turn it into types.
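A hedged sketch of that "validation-first" shape: the handler's input type is derived from the validator, never asserted. To stay self-contained I use a tiny hand-rolled validator as a stand-in for Zod, ArkType, or anything Standard Schema compatible; the names `serverFn`, `Validator`, `parseInput`, and `fetchUser` are illustrative, not the real API.

```typescript
// A validator both checks the raw payload at runtime and *carries* the type.
type Validator<T> = (raw: unknown) => T; // throws on bad input

// The RPC boundary always runs the validator on the deserialized payload,
// so the handler can trust its input even on a public endpoint. The handler's
// parameter type T comes from the validator, never from a cast.
function serverFn<T, R>(validate: Validator<T>, handler: (input: T) => R) {
  return (raw: unknown): R => handler(validate(raw));
}

// Stand-in "schema" for { id: number }, playing the role of a Zod schema.
const parseInput: Validator<{ id: number }> = (raw) => {
  if (
    typeof raw === 'object' &&
    raw !== null &&
    typeof (raw as { id?: unknown }).id === 'number'
  ) {
    return raw as { id: number };
  }
  throw new Error('invalid input');
};

// Usage: `id` is typed as number inside the handler because the validator says so.
const fetchUser = serverFn(parseInput, ({ id }) => ({ id, name: `user-${id}` }));
```

The design point is that there is no happy-path way to get a typed input without also getting the runtime check that makes the type true.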
Speaker 1 (43:06):
It's funny how full circle this is going.
Speaker 4 (43:09):
And this happens to me a lot, I have to admit, because of the clunkiness. Like, the reason that RPC died in the first place was because it was clunky. In my opinion there are many reasons, but part of it was that it was clunky to define stuff. And it's a challenge, like GraphQL going away, because you
(43:30):
know, you're defining these, like.
Speaker 1 (43:32):
To define the types.
Speaker 4 (43:34):
So people got these JSON APIs, and people did the JSON APIs and, like, you know, managed them themselves, but it wasn't schema-based through the code at run time.
Speaker 1 (43:45):
And then, like, there's lots of holes there.
Speaker 4 (43:47):
So now people have re-enforced types, which obviously again gets us back to RPC. But it's funny, on the framework author's side, like, deeper on the UI framework side, say guys like myself or people working on React, we're looking at this and we're like, okay, mechanically this works nice. It's ergonomic. People can build stuff on top of it if they need to. But
(44:08):
I have the pieces I need to do what I need to mechanically, and it's smooth. But then we get to a point where we add back our own friction because of, like, best practices.
Speaker 1 (44:21):
So that's kind of what it was. Kind of like the.
Speaker 2 (44:25):
We can't have nice things on the web, that's the truism.
Speaker 4 (44:28):
So yeah, it's kind of interesting, because, yeah, I mean, I love Standard Schema, a big fan of Valibot and all that kind of stuff. But it's one of those things where the people building UI frameworks, maybe the meta frameworks can, but the UI frameworks generally won't push those kinds of things on you, because they're not in our range to be enforced, because people will have
(44:50):
different opinions, things will change over.
Speaker 3 (44:52):
time, too much opinion coming from us, right, exactly.
Speaker 1 (44:55):
So it's like we.
Speaker 3 (44:57):
Being a meta framework, then you get to enforce a
little bit more those opinions.
Speaker 4 (45:00):
Yeah, it's just tricky, because meta frameworks should, in theory, have a shorter shelf life.
Speaker 1 (45:06):
Because of that, Like.
Speaker 4 (45:08):
This was always the struggle I had when I was working on SolidStart, because I didn't want to make a meta framework. I just wanted to make, like, a Create React App that worked on the server. This kind of tension is interesting for the space. There was something I wanted to say. You said there's three things you need for RPCs to come back.
Speaker 2 (45:25):
Yeah, I mentioned it, I'll say it again, maybe it'll jog your memory. Basically, it was: types, for the underlying system to be typed or type-safe, at least at development time, otherwise there's no point, so you probably wouldn't do RPC if you've just got JavaScript; await, so that you
(45:47):
can quote-unquote block until that remote function finishes; and optionally the compiler slash bundler, so that you can automatically generate the stub code to translate that function call into, you know, do the serialization, send the message across the wire, do the deserialization on the
(46:10):
other side and back again.
Speaker 4 (46:13):
Right, yeah, yeah. And you just mentioned it twice, because I said at the beginning I wasn't trying to build RPC. I think the final piece that actually cemented it for me was the serialization part. Because we don't communicate in JSON anymore with these, you don't.
Speaker 1 (46:31):
No, no, no, I think.
Speaker 2 (46:32):
You do, don't you? The call over the wire, is it JSON or is it something else for server functions?
Speaker 3 (46:40):
Yeah, yes, by default, yes, it's JSON. Yeah.
Speaker 4 (46:45):
See, it didn't have to be, though. Yeah, this is one of the attractive things for us, because you control both sides. So React loves this, we love this, frameworks love this. This is where you can go back to what frameworks wanted. At the exact same time I was working on this, we were updating our approach to doing streaming SSR and stuff, and we
(47:07):
were making it faster and more efficient. Every framework has their own serializer. Some are publicly available, like Solid's Seroval, Marko's warp10, devalue from Svelte, and we were.
Speaker 2 (47:21):
going to need to give a little bit more context, I think, okay, to some of our listeners, about why it is that you need this sort of serialization for SSR.
Speaker 3 (47:29):
I think that'll be really useful.
Speaker 1 (47:31):
Yeah, okay, sorry.
Speaker 4 (47:33):
Basically, you're sending HTML, which, you're like, okay, that's great. But the problem is now you have something async, so you can send the HTML in chunks, let's say. Streaming is basically the problem. You want to be able to not only
(47:54):
send the HTML markup, but you want to be able to send the data across. And if you wait for it all to finish and then just send the JSON and the HTML, then you show the whole page at the end. But if you can send different types of serialization values, it's possible to send pieces of the app in parts, and then for the client to go, oh,
(48:17):
you serialized a promise, for example, and the client goes, okay, I know this data is coming. That's the most fundamental way of doing it, where we basically describe a promise over the wire. So when it starts hydrating part of the page and the whole page hasn't loaded in from the server, we can go, okay, treat it like a client-side promise. And we know what will end up happening is, actually on the same page,
(48:39):
we'll stream that JSON later, and then when it comes in, we'll fulfill the promise on the client.
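A toy model of that "promise over the wire" mechanic, heavily hedged: the serialized payload carries a placeholder id instead of the value, and when the real value streams in later, the matching deferred on the client is resolved. Seroval and React's flight format do far more (full object graphs, cycles, streams); the `PromiseWire` class and `$promise` marker below are invented names, and both "sides" share one process here purely to keep the sketch runnable.

```typescript
// A resolvable promise handle.
type Deferred<T> = { promise: Promise<T>; resolve: (v: T) => void };

function deferred<T>(): Deferred<T> {
  let resolve!: (v: T) => void;
  const promise = new Promise<T>((r) => (resolve = r));
  return { promise, resolve };
}

class PromiseWire {
  private pending = new Map<number, Deferred<unknown>>();
  private nextId = 0;

  // "Serialize": swap the promise for a placeholder id and remember it.
  // In a real system this marker is what gets written into the HTML/JSON chunk.
  encode<T>(p: Promise<T>): { $promise: number } {
    const id = this.nextId++;
    this.pending.set(id, deferred<unknown>());
    p.then((v) => this.fulfill(id, v)); // the "server" streams the value later
    return { $promise: id };
  }

  // "Deserialize": the client gets a live promise for the placeholder, so it
  // can start hydrating before the value has arrived.
  decode<T>(ref: { $promise: number }): Promise<T> {
    return this.pending.get(ref.$promise)!.promise as Promise<T>;
  }

  // A later chunk of the stream fulfills the matching client-side promise.
  private fulfill(id: number, value: unknown) {
    this.pending.get(id)!.resolve(value);
  }
}
```

The essential idea is only the placeholder-then-fulfill handshake; everything else about a production serializer is elided.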
Speaker 2 (48:45):
So again, just to clarify the need, if I understand correctly, correct me if I'm wrong: HTML is intrinsically streaming. So HTML, unlike JavaScript, essentially you can stream it and it will just execute. The
(49:06):
problem, I think, is the order of things, the fact that HTML streams top to bottom, whereas you want to be able to stream things and inject them in the middle of things that have already arrived. Right? And Tanner has raised his hand again, so interrupt.
Speaker 3 (49:26):
I just want to punctuate that and say that it's about order. It's about componentization of these calls, to say, I have something that needs this amount of data, you know. And you can have multiples of those in whatever order you want, and inside of those responses you can have immediately returned partial parts of those responses
(49:52):
and have some of them be delayed. So it's almost like you get the same experience as inside of HTML, where you can give the important stuff really quickly and delay the other stuff. You get that same capability within each request-response life cycle.
Speaker 2 (50:07):
Just to say that, the joke that I like to make is that the big difference between old websites and modern websites is that we've replaced the single large spinner with a lot of small spinners.
Speaker 4 (50:21):
Yeah, we've been trying to solve that.
Speaker 3 (50:24):
I mean.
Speaker 4 (50:25):
The challenge is, you can see it in the shape of HTML itself, it's XML, or, you know, it's a markup language. So these things are naturally nested. You see it in your components. You have components in components in components. So doing it top to bottom doesn't even work, because the outer component hasn't closed, you know what I mean, like these envelopes. So what ends up happening is, the trick with the HTML
(50:46):
actually isn't that difficult mechanically, because you can put it on the end in something invisible like a template, and have JavaScript move it back into place as it loads. The hard part is the data. If you actually want to wake up parts of that page before it all loads, you need, like, a placeholder for the data. Essentially, you need to be like, look,
(51:08):
this data will be coming, you know.
Speaker 3 (51:10):
And that's a promise, or a substream.
Speaker 4 (51:15):
Right. So we started serializing, well, I mean, there are different types of data, right, like date objects, you know, and different things. So even stuff like tRPC has, like, SuperJSON for advanced data types, you know. But for us, promises were the key, and we realized that we could bring that serialization
(51:36):
not just into our original SSR, but use it in our server functions, so we could serialize anything: HTML elements, promises, whatever. And this meant that RPC let both sides understand. And this is in a sense how server components work in React, because they can go, we have our special JSON format and we can communicate on
(51:57):
both sides, and now we can communicate React components over the wire. As I said, our needs are a little bit simpler, with, like, promises and streams, but it works out kind of similar. It lets us build really powerful capabilities on this, because you can say, return some data with a promise encoded in it, and then have the data come back in chunks.
Speaker 3 (52:17):
You can make an.
Speaker 4 (52:17):
async iterable and basically return the stream, so the server can just keep on yielding data into your server function. And you write it literally as, like, an async generator function. So you write it as an async function, a generator with yields, and it will literally be yielding from server to client. So there's a lot of really cool mechanics here.
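The async-generator shape being described can be sketched like this. It is hedged: the network is simulated by just iterating the generator locally, whereas a real implementation would serialize each yielded chunk into the response stream as it is produced. The `progress`/`collect` names are illustrative.

```typescript
// The "server function" is written as an ordinary async generator: each yield
// would, in a real framework, flush one serialized chunk to the client.
async function* progress(steps: number): AsyncGenerator<string> {
  for (let i = 1; i <= steps; i++) {
    yield `step ${i}/${steps}`;
  }
}

// The "client" consumes it with for-await, exactly as if it were local code,
// receiving chunks as they arrive rather than one final payload.
async function collect(steps: number): Promise<string[]> {
  const out: string[] = [];
  for await (const chunk of progress(steps)) out.push(chunk);
  return out;
}
```

What makes this ergonomic is that the streaming protocol disappears behind the language's own generator syntax on both sides.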
Speaker 2 (52:39):
Do you support that? I know that Qwik supports that. I know that tRPC supports that. Do you guys also support the ability to stream a response from an RPC?
Speaker 4 (52:53):
Yeah, Solid does. Yeah, it's in Seroval. I don't know if SolidStart handles it the best in all cases, but the core technology supports it.
Speaker 3 (53:01):
So I guess a side quest here is, we started out building our own serializer, I did, for TanStack Start. It was a little bit lighter than something like Seroval, or the big giant serializer that can do, like, React
(53:23):
Server Components and stuff. And there's also SuperJSON out there. There's even one that the Remix team built, right, for React Router.
Speaker 2 (53:30):
Somebody in the chat called it, turbo-stream.
Speaker 3 (53:33):
Cha Jason, Right, Yeah, there's there's a couple of these,
and and I wrote a simple one that would kind
of get to m v P for tansex Start Alpha.
And even even the simple one could serialized promises and
serialized streams. But then we we realized that we want
to do more complex stuff. We've decided to adopt Sarahvell.
(53:58):
I think I told you that, Ryan, right, okay for you.
We so we have one that works for tansax start,
but we're adopting Sarall because is beyond just being able
to stream this stuff. Sarahville has some very unique So.
Speaker 2 (54:13):
What is Seroval? I think I missed the explanation.
Speaker 4 (54:16):
It's the serializer used by SolidJS, used both in our server rendering and in our server functions. It was built by Alexis Munsayac, a core team member of SolidJS, and yeah.
Speaker 2 (54:28):
It basically serializes stuff over the wire and then deserializes it. But it's not exactly JSON, it's something else. Like, React doesn't use JSON either, in case people don't know.
Speaker 3 (54:42):
For their.
Speaker 2 (54:44):
responses from server components, you.
Speaker 3 (54:48):
See a lot of them that have like it's it's
like a superset of Jason that they then encode and
decode special characters, special things.
Speaker 2 (54:57):
At the very least, you want newline-separated JSON, in order to be able to send multiple messages and not be kind of stuck on not closing the bracket or something.
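The "not closing the bracket" problem in one small sketch: a single JSON document can only be parsed once its final bracket arrives, so streamers commonly send one complete JSON value per line (newline-delimited JSON) instead. The `encodeNdjson`/`decodeNdjson` helpers are illustrative, not any library's API.

```typescript
// Encode each message as an independent, complete JSON value on its own line,
// so a receiver can parse messages as they arrive rather than waiting for the
// whole stream to close.
function encodeNdjson(messages: unknown[]): string {
  return messages.map((m) => JSON.stringify(m)).join('\n');
}

// Decode by splitting on newlines; each non-empty line parses on its own.
function decodeNdjson(stream: string): unknown[] {
  return stream
    .split('\n')
    .filter((line) => line.length > 0)
    .map((line) => JSON.parse(line));
}
```

Real wire formats (Seroval, React's flight protocol, turbo-stream) layer references and placeholders on top of this, but the one-value-per-chunk framing is the common foundation.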
Speaker 3 (55:10):
I think Seroval is very unique and really cool, because Seroval uses JavaScript, essentially, to serialize all of this. Seroval uses JavaScript and streams it and executes it over the wire, and it's incredible to watch it work. Because even beyond that,
(55:32):
I mean, is it eval? Yeah, a little bit. Yes, I wouldn't want to call it that.
Speaker 1 (55:40):
But yeah, I mean there is a bit of that.
Speaker 4 (55:42):
And in some cases, where security is a concern, he's been looking at alternate serialization formats so that you don't have the eval aspect of it. But yeah, there is, there's.
Speaker 3 (55:56):
With great power comes great responsibility.
Speaker 4 (55:58):
But I mean, the power is incredible, because this is what enables future-facing patterns we've never had, things like single-page apps being able to easily do this. Basically, picture, you know, people familiar with React Query, where you can do a mutation and invalidate, like, a couple of queries. What if you could do the mutation,
(56:21):
so send it to the server, and have the server figure out what to invalidate, and then, while it's still there, start fetching that data on the server and send it back? You know, like, return right away from your mutation, but then have it streaming back the data for, like, the next page you're going to visit, right? If you have the knowledge of, say, a router, and you have this capability of serialization, you can do,
(56:43):
like, single-flight mutations and stuff.
Speaker 1 (56:45):
Crap, it's gonna restart.
Speaker 3 (56:48):
Not just single mutations, but streamed. Yeah, for a minute or so, I'll be back, it looks like. But yeah, it's not that single-flight mutations haven't been around forever, I mean, that's how the web worked anyway. But
(57:08):
single-flight mutations that are streamed, I think that's the most important part of it. It's like, as you get the request on the server and you start processing things that need to change, you don't need to wait to know all of the changes that are going to happen to send them back to the client. You can start processing them and shipping them to the client as you find them. That
(57:31):
is incredible, so that, you know, your invalidation from the server can essentially be streamed in to the client as soon as you know about it, in the same request.
Speaker 2 (57:43):
The funny thing, though, is that kind of goes beyond RPC. That's more in the territory of SSR, isn't it?
Speaker 3 (57:51):
No, because we're still just talking about individual requests. An RPC, for instance, to say, you know, going beyond the base query, maybe, say I want to update a user, a user capability, right? I'm adding a capability to a user,
(58:12):
and you go to add that capability on the server side. You'll receive that mutation, you'll create that mutation, and then you're probably going to have a bunch of other things that need to be invalidated for the user, or refreshed, or streamed to the user, that are going to be new. With that new capability, they'll gain access to new items, new things, new collection types
(58:36):
inside of their application. And some of those items might take longer for you to fetch on the server and give to them than others. So out of the gate, you'll want to ship them a new user object. That'll be really fast, so you ship that down in the same response, you push that down, and then you're still
(58:56):
working on maybe their team data or their workspace data, and then as you get those, you ship those down as well. And so, in the same request of just a mutation, the response can contain, you know, immediate and streamed data. It's really interesting.
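The user-capability example sketched in code, with the usual caveats: this is a shape, not a framework API. The `addCapability` name, the `MutationResponse` type, and the hard-coded team data are all invented; over a real wire, the `teamData` promise would be a serialized placeholder fulfilled by a later chunk of the same response, and the timeout merely stands in for the slower fetch the server kicks off in the same request.

```typescript
// One mutation, one request: the fast part ships immediately, the slow
// invalidation data streams in afterwards on the same response.
type MutationResponse = {
  user: { id: number; capabilities: string[] }; // shipped immediately
  teamData: Promise<{ members: number }>;       // streamed afterwards
};

function addCapability(userId: number, cap: string): MutationResponse {
  // Fast path: the updated user object is available right away.
  const user = { id: userId, capabilities: [cap] };
  // Slow path: stands in for the extra data the server starts fetching in the
  // same request (team data, new collections, etc.).
  const teamData = new Promise<{ members: number }>((resolve) =>
    setTimeout(() => resolve({ members: 5 }), 10),
  );
  return { user, teamData };
}
```

The client can act on `user` immediately and simply await `teamData` when it needs it, rather than issuing a second round trip.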
Speaker 2 (59:16):
First of all, you're turning on a lot of light bulbs for me, so thank you for that. And it's interesting, because of what I'm thinking. I'm old enough to remember the original RPC implementations back in the nineties, and one of the main problems with them was the fact that they were built as synchronous-type operations,
(59:41):
because the assumption was that it is kind of like a function, and function calls usually are cheap, so you can make one function call for the fast stuff and another function call for the slow stuff, or get the callback or stuff like that. And they didn't really accommodate the scenario that
(01:00:02):
you're describing, where I'm requesting multiple data items and they are available at different times, and I can create this kind of sophisticated staggered response where data arrives as soon as it's available, and I don't need to make multiple calls in order to get all
(01:00:23):
this data. Yeah, I'm giving the mic back to you.
Speaker 3 (01:00:28):
I think it even goes, it takes that even further. Yes, it works for data that you request and may know about, but it also works for data that maybe you don't want to know about but needs to be sent. It leaves room now for this hidden layer of metadata, of meta-actions or meta-code
(01:00:51):
that could happen with a request-response. So, for example, the developer experience that I would want as a developer when I say, update this user capability: I just want to update that user capability. I don't want to have to be responsible for receiving and
(01:01:11):
acting on all of this extra data coming back to me, though. I just want to trust that it's going to work. And so naturally, what a lot of people would say is, well, I'm going to update this user with a new capability, and then they get it back, and what do they expect? They expect the response to be a user, the new version of the user that has the updated capability. What they're
(01:01:32):
not going to expect is to have a bunch of other random, you know, data invalidation payloads that they now have to handle. So what's nice about something like Seroval is that it's not just, you know, this superset of JSON or whatever, but it's the ability, for even frameworks or middlewares or things in the middle
(01:01:53):
of that operation, for more code to just be sent down to the client to execute, that the response doesn't even need to handle or know about. So it leaves a lot of room for optimizations and frameworks to get into this request-response life cycle, but still keep the developer
(01:02:14):
experience really lean.
Speaker 2 (01:02:17):
I mean, it's a little bit scary, because if I'm understanding correctly, what you're saying is: let's say I want to get an updated user object from the server. So I do an RPC call, I get back the new user, and I do with it whatever I do with it. But it turns out that
(01:02:39):
it also realized that some group that the user belongs to updated, so it sends that down as well. I don't handle it, because I don't expect it, so I didn't even write the code to handle it, but it also sent the code that will handle it for me. If I understand correctly what you're saying.
Speaker 3 (01:02:59):
It depends. It depends. A lot of this gets into how you use such a tool, right? Yes, you could stream down the code that handles it as well, but more than likely that kind of code would live on the client. It would live on the client side, to say, we are orchestrating this metadata layer, right, as
(01:03:20):
the client. We know that we're going to want this extra metadata to invalidate.
Speaker 2 (01:03:25):
So it's a callback that I attach, or middleware that I attach, that gets invoked with the extra data that happened to arrive because of some other call that caused it to be sent.
Speaker 3 (01:03:38):
And we're talking about middleware that can span the client and the server, just like a server function, right? It's middleware that can compose together inside of this system that's extracting into client and server. It's RPC middleware, essentially, is what.
Speaker 2 (01:03:53):
it is. Correct. I'm trying to think of an actual valid scenario, and you tell me if it kind of matches. So I know that a lot of times secure systems have this concept of a refresh token. So if your access token expires, you have a refresh token
(01:04:16):
that generates a new access token for you, and that happens behind the scenes. You don't need to be aware of this whole mechanism going on. So is it something kind of like that?
Speaker 3 (01:04:27):
I don't know if I would use this for that.
Speaker 2 (01:04:31):
I mean, you could. No, I'm just thinking about the scenario where you're making a function call for A, but that also triggers other stuff to take place in the system that you may not necessarily be aware of.
Speaker 3 (01:04:47):
Yeah. I think what we're mostly talking about here is optimizations, right? Kind of like the example that Ryan brought up, it's like prefetching. Yeah, you send off a request with a mutation, and the intent is known that you're going to be at this destination next. Instead of waiting for all of this, for the
(01:05:08):
dust to settle before you go request the assets for your next destination, you can do all of this in one request. You can send all of the assets that are going to be requested anyway.
Speaker 1 (01:05:19):
Yeah, it's kind of interesting.
Speaker 2 (01:05:21):
Kind of like resource hints, in a sense.
Speaker 3 (01:05:23):
Yeah.
Speaker 1 (01:05:24):
What I found kind of interesting about this resource.
Speaker 3 (01:05:27):
Pushing. Resource pushing.
Speaker 4 (01:05:29):
Yeah, when I was working on this solution,
the kind of realization was that people are
already writing their code mostly isomorphically, which means that
they're already creating these patterns. So, like, running, say,
the route loaders again on the server in a mutation and
then expecting the code to be basically the same on
(01:05:50):
the client, because it is the same source
code, basically gives you this ability. So what was
kind of cool is, yeah, the kind of optimization you
can go in with, because if you're in a nested route,
if you're familiar with nested routing, maybe the top
section doesn't change, only the two sections underneath
swap on this navigation. So you're staying on
(01:06:10):
the same main page and then you go to the
next page. Well, if that data from the top main
page isn't being updated, then you don't have to fetch
it again in the server loaders. Basically, we were
able to use the same sort of knowledge of what
route changes and what data is already available, based on
what things it would fetch when it navigates to those pages,
(01:06:35):
to the point that we basically go, okay, this is
already there on the client, doesn't need to be updated,
and it hasn't been explicitly invalidated; this is new, this
isn't on this page, this has been updated.
And then we fetch those things and put them on that request.
But the server responds right away. So
the server goes, okay, we're going to redirect to this
next page, and the client goes, okay, I'm going to
(01:06:58):
go to this next page. So the client goes to
that next page. But because those promise stubs are there,
when it goes to load the data, it goes, oh,
there's a promise stub right here, it's already in the cache, I'm
going to use this promise instead of fetching it again
for the stuff that's new. So it's kind of,
it's weird how you can kind of do this
(01:07:20):
kind of predictively. But in the end, what you end
up with, which is my favorite part about this, is
you're actually leveraging your knowledge of the client-side cache
on the server here, so you actually don't need caching
on the server in this mechanism. You could still
optimize it with, like, a Redis or something, but we
never went through that whole Next.js use cache dilemma
or cache stuff, because when you do server functions, you
(01:07:42):
re-render the whole page. It's kind of like Remix,
with that loader bit, you know, the whole route sections.
So when you hit it, if you have five data points, well,
guess what, you are fetching all five data points.
Speaker 1 (01:07:51):
There's no way for you to separate those apart.
Speaker 4 (01:07:54):
Which means that if you don't want to refetch all
five data points every single time, you are going to
cache them on the server. This approach lets you
basically use the knowledge of the route and the knowledge
of the invalidation from the mutation to actually only fetch
what's needed, so server caches aren't necessary. It feels like
a single-page app world. You know, the cache is
on the client, but it's actually doing this all in
(01:08:15):
a single flight.
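Ryan's point about leveraging knowledge of the client-side cache on the server can be sketched like this (hypothetical names, not Solid Start's actual API): the client reports which route segments it already has and which the mutation invalidated, and the server only runs the loaders that are new or stale.

```typescript
// What the client sends along with the navigation/mutation request.
type Manifest = { cached: string[]; invalidated: string[] };

// Server-side loaders keyed by route segment.
const segmentLoaders: Record<string, () => Promise<string>> = {
  layout: async () => "layout data",
  list: async () => "list data",
  detail: async () => "detail data",
};

async function loadForNavigation(nextSegments: string[], manifest: Manifest) {
  // A segment is stale if the client doesn't have it, or a mutation
  // explicitly invalidated it. Everything else stays in the client cache.
  const stale = nextSegments.filter(
    (seg) =>
      !manifest.cached.includes(seg) || manifest.invalidated.includes(seg),
  );
  // Run only the stale loaders, in parallel, and ship just those.
  const entries = await Promise.all(
    stale.map(async (seg) => [seg, await segmentLoaders[seg]()] as const),
  );
  return Object.fromEntries(entries);
}
```

Because the server knows exactly what the client is missing, there is nothing to cache server side; the client cache is the source of truth.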
Speaker 2 (01:08:16):
So again, let me see if I'm getting this, because
to an extent, it's kind of incidental to the fact
that it's RPC. It's basically the fact that you're controlling
both sides and the protocol that lets you do a
lot of magic. The fact that it's RPC, like I said,
is almost incidental. It might have been some other mechanism.
Speaker 3 (01:08:39):
But what you're.
Speaker 2 (01:08:40):
Saying is this: I'm triggering an operation to receive data.
I know that this operation will likely trigger a redirection or
transition to another page. I can prefetch, or push, more accurately.
(01:09:06):
I can, like you said, Tanner, I used the wrong analogy,
because it's not client instigated, it's server instigated. The server
side knows that I'm likely going to be needing that
additional data. I can send it down, hold it somewhere
on the client side, so that when I actually need
(01:09:27):
it, it's already there.
Speaker 3 (01:09:28):
Basically, and that becomes even more powerful as you stream,
because we could do this before with single flight mutations.
As you send it, if you spend enough time
on the server, you can gather up all of the
data and finally send it all back and then allow
the user to navigate. But streaming allows you to respond immediately,
(01:09:52):
but tick off those requests server side and then stream
them down to the user and utilize that connection
that's already open.
Speaker 4 (01:10:02):
It's because the frameworks support async natively. Things like suspense
basically will trigger and treat your cached promise, or your
pre-pushed promise, and a new fetch promise identically.
So in these kinds of scenarios, you actually get further ahead.
You can start rendering the next page before that data
has arrived.
Speaker 2 (01:10:22):
And the fact that we're componentized kind of necessitates this
whole thing, because otherwise one of the big problems with
components is that encapsulation, if you're following through on
it, often leads to waterfalls. So the fact, and React
Query back in the day was
(01:10:44):
so great in this regard, because you fetched from one
component and then the data got reused in other
components without you needing to break encapsulation or
be very explicit about it. It was totally implicit. So now
you're kind of taking it to the next level, in
(01:11:05):
a sense, and this, and this.
Speaker 3 (01:11:08):
Opens up a new can of worms. I don't know how
deeply you want to get into that, but component-level
data fetching, I still
think it's really the way to go. And
then thinking about the optimizations that need
to be made at the routing level and the SEO
(01:11:30):
level and the page levels to get rid of waterfalls,
and, you know, these are optimizations, and making those
optimizations as easy as we can is really important, and
we can do that through API design with our routing.
We could do it through compilers like Relay, but they
are optimizations. I think at that point, I like this,
I like everything that we've talked about is really.
Speaker 2 (01:11:55):
I'll interrupt you again, because I have to say that
one of the biggest downfalls of RPC back in
the old days, and one that I was so concerned
with, is that actually at the end of my presentation
on RPC, I talk about the potential downsides of RPC,
and one of the items listed there is the
fact that RPC kind of encourages waterfalls, because you make
(01:12:20):
a function call to get some data, and then you
make another function call to get some more data, and
another function call to get some more data. So you
end up with chatty protocols and waterfalls, because the implicit
assumption is that it works like functions, and functions are cheap.
And what you're saying is that you're overcoming it by
(01:12:43):
being smart. Realizing, you know, you can still write it
like it's a waterfall, but you've triggered
the requests, the generation of the code that's going to
be requested in the third request, in the first request,
and it's already even sent down. So by the time
I get to that third RPC call, it's returning immediately
(01:13:06):
because all the data is already there.
Speaker 3 (01:13:08):
Yeah, we're getting into what I would call orchestration of
RPC, like RPC orchestration, because, and I think that
you know, single flight mutations is one kind of flavor
of utilizing RPCs and orchestrating, like, data flow through RPCs.
There's other ones. Like, routing is a big RPC orchestrator, really,
(01:13:31):
anything that's providing lifecycle based on user input, so
even forms, or the framework itself.
Speaker 4 (01:13:37):
It's tricky, because the biggest cause of waterfalls, in
my opinion, in terms of co-location, is often these
two causes. There's the code splitting. We want to code
split at the route level generally. Like, you know, you
can do lazy components wherever, but because of the code splitting,
(01:13:58):
it's hard to fetch the data before you have the
code to fetch the data, essentially. So that's what pushes
us to hoisting. And it's necessary, because you don't want
to load all the JavaScript for your whole app.
Speaker 1 (01:14:08):
But if you didn't, if.
Speaker 4 (01:14:11):
You didn't actually do that, then the second thing
you hit on the client is that co-location of
UI and components can lead to waterfalls, depending on how
the renderer handles the async in the rendering.
So, for example, let's say every time you hit
a promise, you throw, and it then re-renders the
(01:14:32):
whole thing. Well, if there's a promise after that, well,
then you're causing a waterfall, right.
Speaker 2 (01:14:37):
And I'm saying that because RPC potentially brings in a
third cause, which is just the fact that people can
look at the code, think that if it looks like
a function then it is a function, and just write
the code in a way that's very inefficient and generates
a waterfall, regardless of the framework.
Speaker 4 (01:14:57):
Right, right, which is, which is, well, I mean, there's
two different types of waterfalls: necessary waterfalls and unnecessary waterfalls.
Necessary waterfalls can still be made unnecessary if you, say,
move everything into a single function call and, you know,
put them together, whatever.
Speaker 1 (01:15:11):
But there's a dependency.
Speaker 4 (01:15:13):
But there's a dependency, right? Like, you need
to get the user to get the posts
for some reason. But if you
could fetch them both off the user ID, like the
posts off the user ID and the user information off the user ID,
then that's an unnecessary waterfall. The problem is we cause
unnecessary waterfalls all the time in our UI rendering,
because we go, okay, well, the posts go under
(01:15:35):
the user section, and then by
using async in your component, you're basically putting
an await in the middle and blocking. Blocking rendering sucks;
awaits suck for rendering UI. You don't want async
components, really. React gets away with it on the
server, because it's on the server, and server waterfalls are
not as bad.
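The user/posts example can be sketched in a few lines. The fetchers are hypothetical stand-ins for real API calls; the point is that the serial version creates an unnecessary waterfall, since both requests only need the userId we already have.

```typescript
// Hypothetical fetchers standing in for real API calls.
const fetchUser = async (userId: string) => ({ id: userId, name: "Ada" });
const fetchPosts = async (userId: string) => [{ id: "p1", author: userId }];

// Unnecessary waterfall: posts wait on the user, even though both
// only depend on the userId we already have.
async function loadProfileSerial(userId: string) {
  const user = await fetchUser(userId);
  const posts = await fetchPosts(user.id); // could have used userId directly
  return { user, posts };
}

// Parallelized: both requests start immediately off the same key.
async function loadProfileParallel(userId: string) {
  const [user, posts] = await Promise.all([
    fetchUser(userId),
    fetchPosts(userId),
  ]);
  return { user, posts };
}
```

A *necessary* waterfall would be one where the second request genuinely needs data from the first response (say, posts keyed by a team ID that only the user record contains); that dependency can't be parallelized away, only batched server side.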
Speaker 1 (01:15:55):
But I'd argue against that.
Speaker 4 (01:15:56):
But generally speaking, it's this co-location of data and
UI that causes this blocking. The reason I'm stressing on
this is because it's solvable with signals. Like, basically, if
you're doing fine-grained rendering, there's no reason to await
or block except for where the data is finally being
used at the end of the data chain, which means
(01:16:16):
that you could parallelize this data fetching for unnecessary waterfalls
not only for, like, sibling components, but also nested components,
because the component aspect doesn't matter. You're not blocking to
re-render, you know, this component. You're literally
just going, oh, I actually need this in this text,
(01:16:37):
in this div, I'm going to block there, and everything beside.
Speaker 1 (01:16:40):
It's a sibling. You basically, you flatten the whole tree.
Speaker 2 (01:16:44):
And this is something, you know, just to be clear,
you do all this magic for us automatically, right?
It's not something.
Speaker 4 (01:16:53):
I mean, this is runtime, what I'm talking about.
The funniest thing is it's not actually magic. It's just
the way signals work. This is something that I kind
of discovered a while ago when I was doing fine-grained
rendering: that fine-grained rendering actually flattens
the tree. That's basically what it does. There's no components anymore.
You just have a bunch of independent things. So if
you flatten the.
Speaker 2 (01:17:14):
Tree, because signals don't care about component boundaries essentially.
Speaker 4 (01:17:18):
So if you flatten the tree and talk about
async that way, then it's the same thing. Nothing has
to block unless it actually has a data dependency on itself.
So necessary waterfalls still work, but things that are not
necessary waterfalls don't need to block. So I think this
is another win that you're going to see in the
signal libraries. People haven't really been leveraging this yet,
(01:17:39):
but it's coming soon and we know what it looks like.
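A rough illustration of the fine-grained idea Ryan describes (a sketch, not Solid's actual API): start every async read eagerly, and only await each value at the exact spot it's rendered, so sibling fetches never wait on each other even if they're nested in the view tree.

```typescript
// Records the order work starts in, to show both fetches
// kick off before anything blocks.
const order: string[] = [];
const fetchTitle = async () => {
  order.push("title:start"); // runs synchronously when called
  return "Hello";
};
const fetchBody = async () => {
  order.push("body:start");
  return "World";
};

async function render(): Promise<string> {
  // Both requests are in flight before anything awaits.
  const title = fetchTitle();
  const body = fetchBody();
  // Each await sits where the value is finally used (the "text node"),
  // so neither fetch delays the other from starting.
  return `<h1>${await title}</h1><p>${await body}</p>`;
}
```

An async component model that awaits at the component boundary would instead serialize these: the body fetch wouldn't even start until the title component finished rendering.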
Speaker 1 (01:17:43):
Right. So I'm going to jump in here because we're
running a little long here on time, and I have
a suspicion, with anything that Dan gets into, we could
talk for two or three hours when he gets
into a topic. So before we wrap up and head
to picks, is there any last word on RPCs?
In summary?
Speaker 2 (01:18:02):
All I can say is my summary is that
we absolutely, positively need to get Ryan and Tanner on
to continue this conversation, because we are scratching the surface,
it feels to me.
Speaker 3 (01:18:12):
Yeah, I mean, even just what Ryan brought up about,
like, all the async signal stuff, there's a lot that
we could go into.
Speaker 2 (01:18:21):
Yeah, but TanStack is built on React. What does
that have to do with async signals?
Speaker 3 (01:18:26):
Maybe I'll say something about that. Let's save
it for later, let's save that for later. We are
running long. There's a lot to talk about. But, and
you know, hold on, you brought up a good point
though about the necessary waterfalls as well. Like, necessary waterfalls
are still there and they still need to be handled.
And, like, I feel like I've been spending a lot
(01:18:48):
of time on that as well. Like, you can
still make mistakes there that you can't solve with a
bundler, or you can't solve with signals or anything
like that. But you can solve some of those
problems with good API design and prefetching and, you know...
So I think there's a lot more to
unpack there. I don't really want to... Okay, there's a
(01:19:11):
reason that TanStack everything is built to be agnostic.
I'm a very firm believer in React, and obviously it's
our most supported framework, but it's by no means perfect.
It's not, but it's amazing, it's great. There's
pros and cons to every UI framework, but I'm
(01:19:33):
very much preparing for a future where there are bigger,
better UI frameworks doing amazing things. And the things that
Ryan just teased and talked about, with, like, signals and
async and flattening signals, you know, around suspense
(01:19:53):
and things like that, that stuff's really exciting.
And I'm just gonna gush a little bit right here
before we end and say that the things that Ryan
is working on with Solid 2.0 have made
me the most excited about what's technically possible with a
UI framework since I discovered React and discovered hooks and
(01:20:16):
really got into it. So hopefully it's a good teaser
for maybe what we can talk about next time.
Speaker 2 (01:20:21):
Thank you. Look, I'm here for it each and every
time. You just say when, and I'll have you on the
show again.
Speaker 1 (01:20:29):
Alrighty, well, thanks to Ryan and Tanner for scratching the
surface on RPCs and whetting our whistle for more
discussion on the truly geeky stuff that we talk about.
With that, we'll move to picks, the part of
the show where we get to talk about anything we
want to talk about, could be tech related, could be
want to talk about, could be tech related, could be
not tech related. We run the gamut. I will start
(01:20:52):
off with what I consider the high point of any
of our episodes, which is the dad jokes of the week.
So the other day I was watching a show called
ten Ways to Avoid a Shark attack, and I was
surprised that stay out of the water wasn't on the list.
Speaker 2 (01:21:06):
There was actually a skit about that on YouTube where
they do a parody of a horror movie, where there's
a shark in the pool, and everybody's like, oh my god,
there's a shark in the pool.
Speaker 3 (01:21:19):
What can we do? And
Speaker 2 (01:21:20):
The one guy keeps on saying.
Speaker 3 (01:21:22):
Just don't go into the pool, right right.
Speaker 1 (01:21:25):
So yeah, my drum joke here is not working, and
I'm really bummed, because that adds all the impact. So
I'm gonna have to sue Riverside for that one.
So my wife looked at my feet this morning and said, honey,
your socks don't match. And I said, that's funny, I
have another pair just like them in the drawer. Ba-dum
boom. And then finally, in a surprising announcement, Head &
(01:21:47):
Shoulders has decided to discontinue their popular anti-dandruff shampoo line.
The decision left many people scratching their heads. As expected.
Not that I need that, as you can tell by
the video. So those are the dad jokes of the week. Dan,
you got any picks for us?
you got any picks for us?
Speaker 3 (01:22:03):
Yeah? So two funny things.
Speaker 2 (01:22:06):
First of all, Avishai Shalom, who we've had on this show
a while back, made an interesting observation on X, and
I'll literally quote him because he just wrote it very well.
He was talking about the question of will AI destroy
the World Wide Web, because it's destroying the web's
business model, and he says it's actually worse: LLMs marked
(01:22:29):
the end of open networks, because the cost of producing
junk and malicious content is now near zero. We already
see this in LinkedIn, Facebook, et cetera, but more alarmingly
in GitHub and NPM. And it's a very interesting observation,
I think. And so that would be my first pick,
(01:22:49):
that observation. And the second one is this funny story
that just came out about the fact that the UK
government is trying to sue 4chan. Turns out that
the UK passed a law that requires age verification
before you access potentially offensive content, and 4chan, being
(01:23:11):
an American company, is not complying with the UK laws.
So they're actually trying to force them to comply;
they sent them a threatening email, and they basically responded
with "First Amendment, bitches." And it will be interesting to
see where this goes. So yeah, that would be my
(01:23:33):
second pick. And I think you can tell which side
I'm on in this argument. Or, put it this way:
the worst thing that has ever happened to the web,
I think, is GDPR.
Speaker 3 (01:23:46):
Yes, and.
Speaker 2 (01:23:50):
Yeah, we don't need any more of that.
Speaker 3 (01:23:52):
So yeah, those would be my two picks for today.
Speaker 1 (01:23:55):
All right, Tanner, you got any picks for us?
Speaker 3 (01:23:57):
Picks? Picks, hmm, these picks. Do these picks need to
be random? The randomer the better, the randomer the better.
Speaker 1 (01:24:07):
Okay, while you think about it, we'll go to Ryan.
See if Ryan has anything.
Speaker 4 (01:24:11):
I'm just gonna think here, because I've been so busy.
If you guys don't know, I have a little baby
at home, so I have that as my pick. Yeah,
I mean, honestly, I don't get to watch
or check out nearly as much as I want to,
other than, like, kids' shows. I recently watched, what was
it, KPop Demon Hunters.
Speaker 1 (01:24:31):
I've heard real good things about that one.
Speaker 4 (01:24:32):
Yeah, you know, I mean, people are
gonna disagree with me, but it gave me, like, it's
got almost like Sailor Moon vibes, but, like, some big production,
you know, thing.
Speaker 1 (01:24:45):
It was fun, the kids liked it.
Speaker 2 (01:24:47):
Since you mentioned the show, and to give Tanner a
little bit more time, I have to say that I
was really disappointed with the second season of San. I mean,
they had an entire episode which was literally a funeral. I
mean, what the heck.
Speaker 1 (01:25:02):
I haven't watched that one, so I can't.
Speaker 4 (01:25:05):
Oh, I know, I know what else I want to
say, without giving away too much. I actually enjoyed the
ending of season three of Squid Game. They shouldn't have
split it, but, oh really, but I like the second
and third seasons together. It ended in a, I like the
way it ended. So maybe I'm in the minority. But
I actually, if you're willing to watch the
second season, then you should finish it with the third season.
Speaker 1 (01:25:26):
Well, yeah, he is really into that one, and he was
explaining the plot to me. I haven't watched it,
but he was explaining it to me, and I mean, it sounded,
the first season just creeped me out. It was
more of a Hunger Games type vibe and it just
sort of creeped me out. But he was explaining what
was going on in the second and third seasons to me,
and that's pretty intriguing, the way they're doing it
right now.
Speaker 2 (01:25:45):
I'll take your word for it, as long as you don't
tell me that you enjoyed the eighth season of Game
of Thrones.
Speaker 1 (01:25:52):
No, no, that was, that was not my favorite. All right, Tanner,
we gave you some time.
Speaker 3 (01:25:55):
What do you got? I mean, like Ryan,
I don't consume a whole lot of what's
out there. I'm more of a builder, but
there are a few things that I do enjoy.
So I'm overly hyped for the new season of Stranger
Things coming out, and so we've been rewatching all of
(01:26:17):
the seasons before it. So that means that I've seen
the first season four times, the second season
three times, and so forth, and we're watching the fourth
season for the second time now, and then we're going
to anxiously await number five. Yeah.
Speaker 2 (01:26:32):
That's an interesting argument that I had with AJ about
fantasy book series that tend to come out, you know, not
yet finished, and then the next book comes three years
after the previous book, and you don't remember anything. And
I never have it in me to reread those
previous books. So you basically, you do have it in
(01:26:53):
you to rewatch the entire series.
Speaker 3 (01:26:55):
They were, they were so good. And, you know, Stranger
Things connoisseurs will tell you, well, this season was worse
and this season was terrible. But, like, going back
and watching them, I mean, it's been a long
time since the first one came out, so, like, I
am very much more adult now than I was then,
(01:27:15):
watching them, and just the acting from these kids
is just incredible. It really is so good. And
the writing, for how quick they were taking out
the first couple of seasons, like, the writing in some places,
I was like, okay, that could have been better.
But for the most part, we're just spoiled with great content,
(01:27:37):
like, what a fun story, a great story. And yeah,
I was just telling my wife, season four got a
little dicey, because it started feeling really evil, and I
was like, well, I don't know if I want to
take part in this anymore. And then all of a
sudden it went super sci-fi and I was like, okay,
this is awesome. So yeah, we've been watching Stranger Things,
(01:27:59):
and that's about the only, like, one episode at night
and then I'm asleep. So that's about the amount of
time that I have to consume.
Speaker 2 (01:28:08):
How old are your kids, if I might ask?
Speaker 3 (01:28:11):
Have a a seven year old, a four year old,
and a almost two year old, So well, I'm not
they're not watching Stranger Things right now? For them? Is
I just really I just introduced them to spy kids
and they are like, this is the coolest series ever made,
(01:28:34):
So you know, they're they're living in a different world
right now. That's just as cool, but way more child friendly.
Speaker 2 (01:28:42):
No, I was asking from the perspective of your free
time, because, like Ryan said, you have little children
at home.
Speaker 3 (01:28:48):
Yeah, we both have little children, and I know, I
know from talking to Fran that they happily
consume a lot of our time. So yeah, it's
basically at night, for maybe an hour, that, you know,
we get to indulge in something.
Speaker 2 (01:29:06):
All I can say, though, from watching from
the other side, as it were: enjoy it while it lasts.
Speaker 3 (01:29:13):
Yes, oh yeah for sure.
Speaker 1 (01:29:15):
Yeah, mine are twenty-five, about to get married, twenty-two,
and a teen, and so yeah, we have a
little more free time. But yeah, I miss
some of those days. I'm looking forward to grandkids and being
able to have the kids in and then say, here you go,
you know.
Speaker 2 (01:29:31):
Yeah, well, my baby is twenty-three years old, working
in tech.
Speaker 3 (01:29:36):
Very cool.
Speaker 1 (01:29:37):
Yeah all right, So with that, we're going to wrap
this up and hopefully we'll have these two backing in
to continue our discussion in the meantime. Thanks for watching
slash listening, and we'll talk at you next time.
Speaker 3 (01:29:48):
Bye.