Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
Hey, folks, welcome back to another episode of JavaScript Jabber.
Speaker 2 (00:09):
This week, on our panel, we have Steve Edwards.
Speaker 3 (00:13):
You had to think about which one you were going
to go to first, didn't you?
Speaker 1 (00:16):
Well, I didn't want to say Dad Joker. Oh, sorry,
that's what my brain said.
Speaker 3 (00:20):
Oh well, that's what most people think. I'll do my
usual: yo yo yo, coming at you live
from cold and rainy Portland.
Speaker 2 (00:28):
We also have Dan Shappir.
Speaker 4 (00:30):
Hi from a freezing Tel Aviv, at fifty degrees Fahrenheit,
nine degrees Celsius, which is, you know, literally freezing for us.
Speaker 3 (00:38):
That's freezing? That's above where I'm at
now in the middle of the day.
Speaker 2 (00:43):
Yeah. I'm Charles Max Wood from Top End Devs. Yeah.
Speaker 1 (00:45):
It's forty-some degrees here. Yeah, and I went
for a walk, so.
Speaker 3 (00:50):
Yeah, I know it's far better for us.
Speaker 2 (00:52):
Man.
Speaker 3 (00:53):
Yeah.
Speaker 2 (00:54):
We also have Lee Robinson. Lee, welcome back.
Speaker 5 (00:58):
It's good to be here. It's fifty eight degrees fahrenheit here,
which is like a sauna because it was negative fahrenheit
like last week.
Speaker 1 (01:06):
So yeah, I was gonna say I'd come out there,
but if you're getting negatives, forget it.
Speaker 5 (01:13):
Yeah, that's classic Midwest. It's very cold.
Speaker 2 (01:17):
Yeah.
Speaker 1 (01:17):
So it's been, what, a year or so since we
had you on? What's new?
Speaker 5 (01:23):
A lot. There's lots we can talk about: Next.js,
Vercel, v0 and AI, all sorts of fun stuff.
Speaker 1 (01:30):
Yeah, sounds like you're mostly doing the same kind of
thing that you were doing last time.
Speaker 5 (01:34):
Yes, Yes, definitely.
Speaker 4 (01:36):
I was saying before we started that it's pretty amazing
to me that such a small company, relatively speaking, of course,
because you're a multi-billion-dollar company with a few
hundred employees, but it's still relatively small, is able to
have such a huge impact on web development. Like, it
seems like everything is kind of revolving around you these
(01:58):
days in one way or another.
Speaker 5 (02:01):
One thing that I think has worked well for us
is what Guillermo likes to call recursive founder mode, which I
think is kind of funny if you've seen all
the discourse about founder mode: trying to hire former
founders, or people who have that energy, and then giving
them ownership and agency over a domain and letting them
kind of run with it. So a good example of
this is just last week we released a Flags SDK
(02:24):
for feature flags, an open source toolkit you can use
to implement feature flags correctly in your app using the server,
preventing layout shift, with any feature flag provider that you want.
And this is kind of a founder-led thing. We
have Dominik, who is really owning this project from end
to end. It's a very small team working on this,
but small teams can do amazing things when empowered and
(02:45):
given the right direction and resources.
Speaker 4 (02:47):
I was curious if it's based on edge computing.
Speaker 5 (02:51):
So we do really recommend people use the server
in general, because most of the jank that you see
in websites that have feature flags is because they wait
for the initial document to get on the page, for the
first bundle of JS to load, and then they're evaluating
which feature flags are on, and you get that weird
(03:12):
shift of maybe one experiment and then the other. You
also see this with authentication, like the logged-out
state, and then it switches to the logged-in state.
So we are strongly pushing for server-side experimentation and server-side
flagging, and to do that, generally you want to
put your flags close to where your database is. So
it's not necessarily that the flags need to live at
(03:34):
the edge. Generally, they'd live at origin, by your database,
but you can still do some things early in the
routing layer at the edge. Maybe that's redirects or rewrites.
Maybe that's precomputing multiple different variants of the page
and putting the static shells of those pages at the edge,
like at a CDN layer, so you can fetch those early.
So it's kind of a bit of both. We want
(03:56):
to use the edge layer for the static assets,
and then put your data next to your compute at origin.
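A rough sketch of the server-side flag evaluation Lee describes, in plain JavaScript. The names here are illustrative, not the actual Flags SDK API; the point is just that the flag is resolved on the server, near the data, before any HTML is sent, so the page never flips between variants after load.

```javascript
// Hypothetical sketch of server-side feature flag evaluation. These names
// are invented for illustration; this is NOT the actual Flags SDK API.

// Deterministic hash so a given user always lands in the same bucket.
function hashToPercent(id) {
  let h = 0;
  for (const ch of String(id)) h = (h * 31 + ch.charCodeAt(0)) % 100;
  return h;
}

// A "flag" is just an async decision function that runs near your database.
async function checkoutRedesignFlag(userId) {
  const rolloutPercent = 50; // in a real app: read from your flag provider
  return hashToPercent(userId) < rolloutPercent;
}

// Server render: the variant is decided before the document is produced,
// so there is no client-side evaluation and no layout shift.
async function renderPage(userId) {
  const redesign = await checkoutRedesignFlag(userId);
  return redesign
    ? '<main data-variant="redesign">New checkout</main>'
    : '<main data-variant="control">Old checkout</main>';
}
```

The HTML arriving at the browser already reflects the chosen variant, which is exactly what prevents the "one experiment, then the other" shift described above.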
Speaker 4 (04:05):
Yeah. What some people actually do to reduce shift in
these scenarios when they use client side is to
intentionally put render-blocking scripts at the very top of
the page. But that obviously comes at a very high
cost for Core Web Vitals.
Speaker 5 (04:24):
Totally. I think the biggest offender that I see is teams
who don't realize that they're actually leaking all of their
flags on the client side. They're using client side experimentation.
But if you inspect the DOM or, you know,
do some magic in your dev tools, you can actually
see every single experiment or every single flag.
Speaker 4 (04:46):
You know, it's not necessarily bad.
Speaker 5 (04:48):
It's not necessarily bad, although sometimes, I think,
maybe on a large enough team, the developer thinks that
is a secret value, and in reality it is not.
They don't realize that they're actually exposing that to the
client side. So it can be fine if you know
what you're doing. I've just seen it misused a few times.
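A toy illustration of the leak Lee is describing: if the client evaluates flags, the full flag catalog has to ship to the browser, where anyone can read it in dev tools. All names and values here are made up.

```javascript
// Invented example of leaking flags via client-side evaluation.

const allFlags = {
  'new-pricing-test': { rollout: 10 },
  'secret-partner-banner': { rollout: 0 }, // "secret" -- but is it?
};

// Anti-pattern: embed the entire flag config so the client can evaluate
// flags itself. Every flag name is now visible to any visitor.
function leakyPage() {
  return `<script>window.__FLAGS__ = ${JSON.stringify(allFlags)}</script>`;
}

// Safer: resolve flags on the server and ship only the values this user
// actually needs, never the full catalog.
function resolvedPage(userFlags) {
  return `<script>window.__FLAGS__ = ${JSON.stringify(userFlags)}</script>`;
}
```

`leakyPage()` exposes the unlaunched "secret" flag by name; `resolvedPage({ 'new-pricing-test': false })` does not.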
Speaker 4 (05:09):
Don't put credentials in feature flags.
Speaker 1 (05:12):
Yes, yes, definitely, don't put credentials in your source code
at all.
Speaker 5 (05:19):
Yeah.
Speaker 1 (05:20):
One thing that I want to just touch on here,
because I love the idea of kind of the internal entrepreneur.
I have to say that when I've worked, like, contracts
or full-time jobs, a lot of times it's: we're
going to give you features and then you're just going
to implement them. And I really like that open feeling
(05:41):
of, hey, we need this solution, go make it. Like,
that would kind of be, if I didn't want to
take the entrepreneurial risk, that would be my ideal job.
Speaker 5 (05:53):
So yeah, we like to think about empowering startups within
a startup. So another good example of this is our
product v0.dev, which allows you to very quickly build
websites and web applications with AI, or just better understand
how to use web tools like Next.js and React
through AI. And that team is very small and it's
(06:15):
led by a former founder who really understands how to
build great products, and has been given, you know, budget
and autonomy to build this product end to end and really
monitor success and own their numbers. And it's working
pretty well.
Speaker 4 (06:30):
So, since you did mention it: v0 and AI seem
to be a fairly popular topic.
Speaker 2 (06:37):
Yeah, not around here.
Speaker 5 (06:40):
Yeah, what does AI stand for?
Speaker 4 (06:42):
Okay, Apple Intelligence? I think. There you go. Yeah, anyway,
what can you tell us about v0? I mean,
I've seen it, I played with it, it's really cool,
but, you know, what can you tell us about it?
Speaker 5 (07:01):
Yeah. So we've been working on v0 for over a
year now. The first version that we released was kind
of our first experimentation into the generative UI space: taking
these new large language models, which at the time were
getting really, really good at being able to produce code,
and figuring out how we could help them better craft
(07:23):
the front end, you know, do animations, do CSS. And
over time we've just been slowly adding more functionality.
So v0 now supports building full-stack applications. You can
integrate with databases through Vercel, you can add secrets or
environment variables through Vercel to connect to any external services,
maybe some AWS queue service you want to use, could be
(07:46):
really anything. And we've also launched a community that has
published many different v0 generations that you can fork and
get started with. So what once started as a very
small tool to let you quickly build some UIs
is now this kind of full-featured prototyping and app-building
platform that allows devs to take an idea, whether it's
(08:07):
something they want to build from a screenshot or a
prompt or a pre-existing kind of template, take it,
fork it, prompt it, turn it into something that they
would like to build or they would like to use,
and then deploy it to Vercel in just a couple of
clicks and have that thing actually live on the internet
to share. And since we've been slowly adding more of
these features, the growth trajectory of v0 has been, it's
(08:31):
been really wild to see. Obviously, AI is very popular
right now and lots of people are interested and
excited about how these tools can help them kind of
level up their careers as developers. But I think, more
interestingly, it's bringing a whole new group of people into
what it means to be a developer. It's less about
writing the code for these people. It's more that they
(08:52):
have great ideas and they now have tools that actually
allow them to build those ideas. So think about, for example,
the product marketers, the product managers, maybe some of
the designers who don't have as much coding experience. A
lot of them are able to use tools like
v0 and actually build and publish their ideas.
Speaker 4 (09:11):
So I have a ton of questions about that. So
first of all, do you want to start?
Speaker 2 (09:16):
No, go ahead, Actually let me.
Speaker 3 (09:20):
I'm gonna get in. So, I don't worship at the
React altar myself. I'm a Vue dev, and so
I tend to work more with Vue and Laravel,
Inertia and stuff. So I've heard Vercel talked a lot
about, but I don't really know it, and I suspect
there's a lot of people listening who might not know
it as well.
Speaker 3 (09:40):
So before you dive any more into Dan's question, I
was wondering if you could just give an overview of
what Vercel is and who its target audience is, I guess.
Speaker 5 (09:51):
Yeah, absolutely. Vercel is a developer cloud. So we're trying
to give you, as a developer, all the tools you
need to build websites and applications: from hosting and managing
your React or Vue or Svelte applications, to having observability
into your production infrastructure usage, to integrating with databases or other
(10:14):
back-end services and other cloud providers or hyperscalers, bringing
all of these tools into kind of one place that
you can use as, you know, you put all your
Lego bricks together to actually be able to build your
amazing piece of art. And that's Vercel at
a fifty-thousand-foot view.
Speaker 3 (10:31):
So now, you just mentioned Vue, but I'm looking at
your supported frameworks docs and I
don't see Vue listed anywhere.
Speaker 5 (10:39):
Yeah, we support Nuxt and Svelte. Nuxt is
the primary way we see people
deploying Vue applications to Vercel, but we also support
a more traditional, client-only Vue application as well.
Generally we like to recommend Nuxt; I think they're building
something really great with the framework. But also, if you
(11:01):
want to use, like, a Vite and Vue application, for example,
that's also supported.
Speaker 4 (11:07):
Okay. So going back to v0. First of all,
v0? Not v1?
Speaker 5 (11:16):
Yeah, right. It's like, okay, as we continue to iterate
on it, do we bump up the version number? No.
The naming, it's kind of funny.
Speaker 2 (11:27):
One.
Speaker 5 (11:28):
Yeah, yeah. I didn't expect this to happen, but
we've seen a lot of people do a play on
the name of v0. So we saw an Email Zero;
I think there's a YC company now doing a zero-something
or other. So it's been fun to see
other people take this idea of gen UI and apply
it to other domains: emails, mobile apps, et cetera. But really,
(11:50):
the whole intention behind the name was: you can get
started here. It's not necessarily saying it's gonna replace your
production code. It's not going to, you know, remove the
need for still writing a lot of code. It's helping
you get started quickly in building something great.
Speaker 4 (12:09):
By the way, there's also Void Zero, that's Evan
You's company.
Speaker 5 (12:14):
Yeah, yeah, it was funny. I think that name came
after v0, but it's pretty similarly named,
which is kind of funny.
Speaker 2 (12:22):
Yeah.
Speaker 4 (12:23):
So the fact that it's v0, and also
looking at the demos that you primarily show, seems
to indicate that what you're building is literally version
zero of a product: taking either an idea,
a concept, like you said, maybe an image or a screenshot.
(12:46):
Does it also support Figma or something like that?
Speaker 5 (12:49):
Yeah, yeah. You can paste in a Figma link and
it can understand your design system, colors, and what's on
the page, and then start building a UI from there.
Speaker 4 (12:58):
Yeah. And basically it turns that into code that interactively
replicates the design that you gave it, correct?
Speaker 5 (13:08):
Yeah, yeah. And the big difference, the way to
think about this versus a no-code or a low-code
tool, is that with the previous generation of tools,
kind of pre-AI, the code that you got
out of the low-code or no-code tool was
not necessarily production-quality code. It was not code that
(13:29):
used the popular frameworks of the world, that you
could then take and kind of eject and actually start
building your application into something that was a real app.
It was more so maybe just a big blob
of HTML with some JS in there. It wasn't using
the ecosystem of npm libraries that we have today;
it was kind of rebuilding everything from scratch. And it also
(13:52):
wasn't built on top of reusable, accessible component primitives, which
is something that v0 does use, through a component
distribution system called shadcn/ui.
Speaker 4 (14:04):
Yeah, I'm familiar of course with shadcn. It's
a UI component library, a popular one, probably
the most popular one.
Speaker 5 (14:17):
It has a unique approach, because with the traditional
way of distributing component libraries, you know, think about maybe
Material UI, which is a very popular one, for example, from
Google, they build these abstractions that are very good.
They get published as npm packages, and then you pull
them down and you use them in your app. But
what happens when you want to take that and kind
(14:38):
of transform it to your own design system, your own brand? Well,
you can extend them to an extent, but you don't
control the code. You're using the abstraction through npm, and
you can't go in and actually modify the source code
very easily. Of course, there's hacks, right? What shadcn is
trying to do is be more of a component library
distribution system, or a way for you to build your
(15:00):
own component library. So it gives you all of the code.
You can effectively copy-paste it into your, you know,
into your editor, into your project, tweak all the tokens
and the design system colors, install packages, delete packages,
and then there's a nice CLI that helps you add
new components as you need them.
Speaker 2 (15:18):
I gotcha, I have to say.
Speaker 1 (15:20):
I went on there and I mashed the button for
a landing page, and, uh, it gave me a Next.js
app, and then I clicked preview and it said
it couldn't load because it couldn't find the CSS file.
But I mean, the code looks good. It's just,
it's AI, and it generates
(15:41):
and sometimes misses things, I've seen.
Speaker 5 (15:43):
Yeah, you should try the... is there a fix button
in the bottom left? Ideally, if there's an error thrown,
there will be a little fix button. Which is a
good time to talk about the general philosophy of these tools,
part of the reason why we called it v0, too.
With these, you know, non-deterministic AI systems, they're
going to get things wrong, right? It's never going
(16:04):
to be perfect.
Speaker 1 (16:05):
And it's generative: it predicts the next word, or in
this case, the next token of your code.
Speaker 2 (16:12):
Yep, and so it'll miss stuff.
Speaker 5 (16:13):
Yeah, totally. So one thing we've tried to do, going from
the unbounded space of "you can generate any code in
the world" and narrowing that down into something
that predictably outputs consistent UI, is to add
a lot of systems and pieces in place to narrow
in and improve the quality ratio. Because ideally, at the
(16:34):
end of the day, as a consumer of this product,
you want quality generations, both in the UI itself and
in getting working, functional code. And the biggest lever
we've pulled to do that is really focusing in on
our niche, which is React applications. That's the pipeline where
we're reviewing and ensuring that the data that goes into
the system is really high quality, and also reviewing the
(16:54):
output if somebody has a bad time. So, yeah, I'm
curious if that works for you.
Speaker 4 (16:58):
So, questions about that. First of all, which model are
you using?
Speaker 5 (17:04):
There's a bunch, actually. Generally you can think about it
as a big decision tree. The user puts in
a prompt, and we first kind of have to
classify what they were even trying to do. Were they
just asking a knowledge question? Were they wanting
to generate some code? There's a lot of branches it
can go down, and then even as it goes down
(17:25):
the branches, we might want to flip between models on
the fly, depending on the quality of one model, or
maybe a new model comes out. I think the new
Claude model was just released today, for example. So we
try to make this system that helps us get the
best quality output, and then we abstract away everything else
in the middle and just focus on having really high
(17:47):
quality data in, and evals, or tests, to ensure that
the quality is predictable.
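A toy version of that decision tree in plain JavaScript. The intents, rules, and model names are all invented for illustration; a real router would be far more involved, but the shape is the same: classify the prompt, then pick a model for that intent.

```javascript
// Invented sketch of prompt classification and model routing.

function classifyPrompt(prompt) {
  const p = prompt.toLowerCase();
  const knowledge = /\b(what|why|how|explain)\b/.test(p);
  const codeish = /\b(build|generate|create|code|component|page)\b/.test(p);
  if (knowledge && !codeish) return 'knowledge-question';
  if (codeish) return 'code-generation';
  return 'general-chat';
}

function pickModel(intent) {
  // Swappable routing table: point an intent at a different model on the
  // fly as model quality changes or new models ship.
  const routes = {
    'knowledge-question': 'fast-cheap-model',
    'code-generation': 'strong-coding-model',
    'general-chat': 'general-model',
  };
  return routes[intent];
}
```

Keeping the routing table separate from the classifier is what makes "flip between models on the fly" a one-line change.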
Speaker 4 (17:53):
Or you could use DeepSeek, if you don't mind
a bit of Chinese propaganda on your page.
Speaker 1 (18:00):
Well, maybe, but you can also run DeepSeek, I
think, unbiased, on, like, a Llama or something, and
then you don't have all the filters in front of it.
I mean, they still trained it on their own data,
and so if you ask it questions about
Speaker 2 (18:17):
History that are inconvenient, it.
Speaker 1 (18:19):
Just doesn't give you a correct answer because it has
no idea.
Speaker 5 (18:22):
But yeah, I think a lot of the enforcement was
coming at the inference-time compute, the test-time compute. The
cloud-hosted version of DeepSeek was where they were doing
a lot of the biasing around the answers that they
didn't want to answer. So the open source models, definitely
in the training data, are still biased in a
certain direction, but you can get better
(18:46):
results by doing it that way. I've already seen
a few models on Hugging Face, and I think there's
a Llama-distilled version as well.
Speaker 1 (18:53):
So yeah, I've played with DeepSeek, and I've played
with some of the other ones, and
it's really good at a lot of things.
Speaker 4 (18:59):
So, the name v0 seems to indicate that
the focus of the product is on the initial generation,
and less so on taking an existing app and modifying
it or adding capabilities to it. Is that correct, or
is that just my interpretation?
Speaker 5 (19:19):
That was definitely where we got started. As we've added
more functionality to the product, we're now getting to a
point where, you know, in the future you'll be able
to connect to a Git repo and bring in the
code you already have in your application and make changes
from there. So I think in the future v0
(19:40):
will evolve to be more of a general-purpose AI
assistant that you can use for many different things. We're
trying to incrementally get there as we tackle one
problem and hopefully do it well before kind of moving
on and expanding scope.
Speaker 4 (19:52):
So if that's the case, how is it different from
something like Cursor or Copilot or something like that?
Speaker 5 (19:59):
Yeah, totally. I think of Cursor and Windsurf and some
of the AI-enabled IDEs as tools for, I'll call
them, professional programmers or professional coders, who are primarily, you know,
in their IDE, in their editor, all day. You know,
they're fantastic at that. And I use Zed personally, which
(20:21):
also has AI tools, but they're all incredibly good at this. Versus,
I think, the market for v0 and other tools
doing more of the generative UI space is more so
for the kind of adjacent folks who are
now getting into development. Certainly, obviously, I'm a professional programmer
and I still use v0, but I've seen a
(20:43):
lot of people either learning to build products through v0,
or maybe they had an idea and they weren't really
sure how to build it yet, all using v0
as a starting point to kind of kick off that
knowledge, where it's more about the design and the ideas
and the product experience than just the code itself. And
of course you can still view the code; it's still
(21:03):
important that you can observe the code and modify the code,
but first and foremost is the actual thing that you're building.
Which is kind of just a different model than the
professional programmers and the IDEs, just a different model.
Speaker 4 (21:19):
So I'll challenge you a bit on this, if you
don't mind. I've been told that I sometimes ask tough
questions, and hopefully you're fine with it. Yeah, okay.
If that's the case, if that's your target audience, the
way that you're presenting it, isn't React potentially too low
(21:42):
a level of abstraction? Because, you know, if I'm
more of a design person, you know, it's great that
I can give my code to a developer later on,
but if I'm playing with it, maybe I would like
to play with it, you know, at a higher level of abstraction,
(22:05):
with more sophisticated but less complicated boxes.
Speaker 5 (22:12):
Yeah. So, while we do use React, the default for
v0 generally is going to be a Next.js application,
so it is a higher-level abstraction on the underlying
React primitives, generally. Though, the reason why I think React
is a good fit for this type of model is
that when people are getting started with building their first application
(22:33):
and they don't have a lot of experience, the mental
model of components actually makes a lot of sense to people,
more so than a vanilla HTML or a vanilla JavaScript file.
Like, if we go back to the jQuery days and you
have a thousand-line jQuery file, this is totally
fine and it totally works. But, you know, looking at
specific query selectors that look up an element by ID
(22:54):
and then swap out, you know, the data from there,
the styling from there: is that as intuitive for
the first-time user as looking at some
React code? Maybe not at first, but I think you
pick it up a little bit more conceptually as you
kind of get into the code a bit. At least
that's been my experience talking with folks who are kind
of learning or looking at React code for the first time.
(23:17):
And secondly, the nice thing about the component model and
composition in React is, when an LLM generates a bunch of stuff,
you know, maybe there's ten different files, fifteen different files,
you want to be able to place those into different,
you know, spots in your application without there being these
global side effects. And that's one really nice thing that
the React model does pretty well. Now, I will say,
(23:38):
if you are a little bit more, uh, maybe not
a super beginner, but maybe intermediate, and you kind of know
a little bit, you know that you want to just
work with HTML and vanilla JavaScript, you can explicitly prompt
v0 to say, like, hey, I know what I'm
doing here, give me more of that low-level
primitive and I can kind of build from there.
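A minimal, framework-free illustration of the composition point above: a "component" is just a pure function from props to markup, and that purity is what makes LLM-generated pieces safe to drop into different spots without global side effects. The component names here are invented.

```javascript
// Invented example contrasting global mutation with component composition.

// jQuery-era style, for contrast (coupled to global document structure):
//   $('#price').text('$' + amount);

// Component style: self-contained and composable, no global side effects.
function Price({ amount }) {
  return `<span class="price">$${amount.toFixed(2)}</span>`;
}

function ProductCard({ name, amount }) {
  // Composition: the card embeds a Price without knowing anything about
  // the rest of the page.
  return `<div class="card"><h2>${name}</h2>${Price({ amount })}</div>`;
}
```

Because `Price` reads only its props and touches nothing global, it can appear in ten generated files or one without conflict, which is the property Lee is describing.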
Speaker 2 (23:56):
Cool, I want.
Speaker 4 (23:57):
If I may, just one more thing.
Speaker 2 (24:00):
Oh, go ahead.
Speaker 4 (24:01):
Will it be able to also generate, let's say, stuff
for Svelte?
Speaker 5 (24:06):
Yeah, I think right now it can
answer Svelte questions, and it can help you build Svelte applications.
In terms of the completely dynamic running of Svelte applications
in the browser, we don't have support for that yet,
but I would like to get there in the future
for sure. It kind of goes back to that quality
point: the unbounded space of next-token prediction is already
(24:30):
tough enough. So we're trying to slowly, you know, build
a system that helps us get predictable, repeatable success, and
then we can consider expanding further.
Speaker 2 (24:40):
Yeah, just related to that.
Speaker 1 (24:43):
So I told it what my error was in there,
and it fixed it, just gave me a global CSS.
But then the next thing I did is I said,
can you give this to me as a Nuxt app
instead of a Next app? And of course then what
it did is it gave me a Nuxt app with
the source folder for the Next app still in it,
(25:05):
you know. And so, you know, you
figure this stuff out with your prompt engineering and things.
But I think it's just fascinating how far you can get.
And I've done this with other projects with AI, where
I've essentially said, okay, give me an app, you know,
this kind of an app, you know, with ChatGPT
or some of the other ones. Give me this kind
(25:25):
of an app with these kinds of features and these
kinds of, you know, deals.
Speaker 2 (25:28):
And I'm using Tailwind. And it'll do it.
Speaker 1 (25:32):
And then if I go and run it, I can
come back and tell it I got this error, and
it will generally fix it in one go.
Speaker 2 (25:40):
Sometimes it just can't quite figure it out.
Speaker 1 (25:43):
And that, you know, that makes me feel good, because
I'm never gonna not be able to get a job
as a programmer. But yeah, it's pretty interesting to
see how far you can get with this stuff.
Speaker 4 (25:54):
And yeah, how maintainable is the code that it creates?
Speaker 5 (26:02):
How maintainable is the code? Mm hmm. I mean, this
is one of the biggest advantages, in my opinion,
of building on foundations like existing popular open source frameworks:
you're ejecting the code out into a system
that is an open standard. It can be deployed to
any infrastructure in the world, whether it's an S3
(26:23):
bucket, or deployed to something like Vercel. And there's this
thriving community of developers who are, you know, consistently, with
every release, making the docs a little bit better, making
the examples a little bit better, building all these templates
in the community, you know, fixing bugs, making improvements. And
it has the backing of a, you know, decently
sized staff on the Next.js team, who are
helping make it better every day.
Speaker 2 (26:45):
Yep, I did run into a rate limit.
Speaker 1 (26:48):
I asked too many questions and then I said, you
have to sign up for Versille to keep using.
Speaker 5 (26:52):
Yeah, yeah, sign up. Twenty dollars a month.
Speaker 4 (26:54):
I know.
Speaker 2 (26:56):
Yeah, no, I've used Vercel in the past and
I like it. Lots of things going on there.
Speaker 4 (27:02):
I saw a relatively new capability in it. I'm
guessing that you can select a component and ask it
to modify just that component, correct? That's a relatively new feature,
I think. I think the next step would be
to add some sort of drag-and-drop capability or something.
Speaker 5 (27:24):
Yeah, we'd love to find a good way to, you know,
bridge the gap with a little more visual editing, you know,
tooling that helps you think about the UI from a
higher level of abstraction versus just prompting. Of course, you
can get very far with prompting, but sometimes you'd like
the tools that, you know, something like a Figma might have.
(27:47):
There's some pretty helpful stuff in there.
Speaker 4 (27:49):
Yeah, okay. So, anything else you want to say about
v0 before moving on to the next one? Yeah,
you've got a whole bunch of tools.
Speaker 1 (28:00):
Yeah, I just wanted to jump in on v0.
So it's targeted at people who maybe don't have a
ton of coding experience, or, you know... I could see this,
you know, it gave me a nice enough layout on
the thing that I clicked on that I could see
this as just kind of a, hey, this is a
kickoff point for me, and I'm confident at this stuff.
Speaker 1 (28:23):
Are you seeing this as something that could conceivably become
a proper dev tool, or are you looking to go
in that direction with something else? Or where do you
land on that?
Speaker 5 (28:36):
Yeah, at least for me in my experience, I already
consider it a proper dev tool. It's something that I
use to build a lot of applications that or ideas
that I have that maybe I want to almost run
in parallel. But you can try out a couple different
ideas and like, Okay, I like this version better. I'm
going to prompt this one and go a little bit further.
I recently wrote a blog post about this called personal software.
(28:58):
I think other people have called it vibe vading, whichld
I like the name of. Basically, it's it's not necessarily
saying this is, you know, production software used by the government.
But there's lots of software and applications to be built,
and it can be very fun to do it in
that way. So I think we're seeing a lot of
adoption for those type of use cases. The only other
(29:20):
thing I'll mention on v zero is if you're really
curious about how it's built, everything that powers V zero,
we have open sourced a chat GPT like application doing
very similar things, so it's kind of build your own
chat GPT chatbot that can you know, generate code and
run code in the browser. It can generate spreadsheets and
(29:40):
create images, and you know have the canvas or artifacts
like features that claude and chatcheeps you have. And that's
part of our AI s d K, which is another
framework that we maintain. It's like lower level primitives to
help you build AI applications. So chatt for cell dot
ais that template if you want to try it out,
and it's all open source so you can fork it
(30:01):
and build your own.
Speaker 2 (30:03):
Awesome.
Speaker 4 (30:04):
I just want to mention that my connection has become
a bit spotty all of a sudden. So if I
drop off for anything, that's hopefully that won't happen, but.
Speaker 5 (30:15):
You know, no worries.
Speaker 1 (30:20):
If it happens, then people will miss you because I'll
ask all my dumb questions.
Speaker 4 (30:27):
So, anything else about v0?
Speaker 5 (30:30):
No, I think that's the majority of it. If
you haven't given it a shot, please do, and I'd love
to hear feedback. For anybody listening, feel free to shoot
me a message.
Speaker 2 (30:40):
Nice.
Speaker 4 (30:41):
So, of course, I think Vercel is best known for
Next.js, although it's important to note that
Vercel and Next.js are not the same thing.
So maybe that's actually something worth talking about: the
relationship between Vercel and Next.js.
Speaker 5 (31:01):
Yeah. Yeah, So going back to twenty sixteen, kind of
a ways away, when Vercell was getting started, Germa was
trying to build a really high quality front end, a
really high quality application, and at the time wasn't very
satisfied with the state of tooling and like you know,
(31:22):
every engineer, well, rather than building my product, I'm going
to build the tooling for the product. Instead, I spent
you know, a decent chunk of time building out what
was you know, the first version of next js back
in twenty sixteen, and the idea at the time was
putting together a React application is kind of hard, and
he wanted to build a server first or a server
(31:42):
side rendered React framework that would make that easy to create,
you know, application UI marketing pages, dashboards, e commerce sites,
and so on. And this was really about the same
time that Create React App came out, so only a
few months apart, and they both kind of served unique purposes.
Over the years since then, Next.js has grown to add
(32:04):
a lot more functionality. The community has grown, and it's become a React framework that can build pretty much any type of application. If you want to build an interactive dashboard, like some kind of single-page app, that's totally doable. If you want to build SEO-optimized marketing pages or logged-out e-commerce experiences, that's also possible. If you
(32:25):
want to build new AI chatbots where you can stream in responses from the server, that's another prominent use case of Next.js. So it all kind of sits on the foundation of React as the UI rendering layer, the UI engine for your components, and then we add some niceties in the middle to simplify working with images and optimizing fonts and optimizing third-party scripts and streaming content
(32:49):
from the server and building out your API layer and so on and so forth. So going back to your original question on the relationship: over the years, we've just continuously been investing in this MIT-licensed open source framework, to a point now where we have one point three million monthly active developers on Next.js, and I think we're at eight point five million weekly downloads on npm. So
(33:14):
really the trajectory has been great for folks picking up and adopting and putting Next.js in production for, you know, personal sites, small startups, and really large enterprises.
Speaker 4 (33:28):
I would just like to interject there with an interesting statistic that I saw. So, there's the Google CrUX database. Are you familiar with it? Yes. So, mostly it's used for looking at the performance of production websites that Google scans, but it also analyzes the
(33:52):
technologies with which those sites are constructed. So you can actually ask it, you know, show me the performance of all the React sites, or show me the overall performance of all the Vue sites and so forth. But you can also just basically ask it a simple question, like: how many sites do you see that use React? How
(34:12):
many sites do you see that use Next? And it seems to me, looking at the numbers, that if you're building a new website using React, it's probably being built with Next, because looking at the statistics from twenty twenty-
(34:36):
two up to now, the numbers seem to indicate that over that period of time, React websites that CrUX analyzes have grown by something like ten percent, but the Next.js websites have, like, tripled, or something along these lines.
(34:56):
So effectively, it means that if you're building a React website, then it's probably being built with Next.js. Does that kind of match what you're seeing?
Speaker 5 (35:09):
So, the last time I checked the HTTP Archive data, which backs the CrUX report, was in December of last year, twenty twenty-four, and at the time there were twenty-eight thousand of the top million sites that were using Next.js, which is pretty good. I'm pretty happy with that. And of those twenty-eight thousand,
(35:31):
fifty percent of them were using the new routing system inside of Next.js called the App Router, which under the hood uses some of these new React features like React Server Components and Server Actions, and some of the lower-level things like streaming, that overall help get better Core Web Vitals. So it's been awesome to see on a few fronts. One, just the general adoption of folks building React sites; two,
(35:55):
those who are choosing to pick Next.js as their React framework of choice; but then three, those who are now starting to build with the App Router, or incrementally migrating over to the App Router, are generally seeing better Core Web Vitals across pretty much all dimensions versus kind of the previous model of Next.js that still exists. But we're kind of investing in this on-ramp over time for people to take better advantage of the server for
(36:17):
some things and better advantage of the client for other things.
Speaker 4 (36:20):
So HTTP Archive actually identifies websites that use the new App Router?
Speaker 5 (36:27):
Yeah, yeah, by looking at the different tags in the body.
Speaker 4 (36:32):
Oh, that's cool, I didn't know that. I knew that it identifies Next.js; I didn't know that it's actually able to distinguish between the pages router and the App Router. That's really cool. But conversely, here's what I'm also seeing. So, about well over half a year ago, I switched jobs,
(36:54):
and while I was, you know, hunting for a new position, I was speaking with various companies, and also, because of the type of position that I was looking for, I was kind of looking at the technology stacks that they were using and whatnot. And what I saw was that, again, it's anecdotal,
(37:16):
but it was still fairly, you know, consistent, let's call it. It seemed that indeed, if you're building a website using React, and by website I mean something that, you know, Google would likely rank or scan, then it's highly likely that you're using Next.js.
(37:38):
But a lot of organizations that are building web apps are a lot of the time still predominantly client-side only in terms of their React usage, and then they're kind of not using Next.js or any other kind of full-stack framework, and they're, like, mostly just doing React
(38:02):
on the client side, and not necessarily even using JavaScript on the back end. They might be talking directly to, you know, RESTful endpoints implemented in whatever. Yep. So are you thinking about somehow addressing that market as well?
Speaker 5 (38:21):
Yeah, totally. I think our story today is maybe under-discussed, but we have a pretty strong story for building single-page applications in Next.js, or kind of fully client applications, most of the time. Obviously, I talk a lot about the server stuff because I'm particularly excited about making it easier for developers to use features on the server.
(38:44):
But for a lot of apps, especially, I don't know, an internal dashboard at some very large company, where they've already got an established, you know, it's probably not even a Node backend. Maybe it's a Go backend or Java or something else. They've got a RESTful API somewhere, and their infra team hands them down this option: you can deploy a site to S3. Right? You can't
(39:06):
run a server, but you can put some static files here. And doing that, their only option basically is to do a client-only application, and we want to make Next.js a great option for those folks who are wanting to use React as well. So we have inside of Next.js this thing called a static export, basically, where you can tell the app: strip away the Node.js server.
(39:28):
I just want to generate a bundle of HTML, JavaScript, CSS files, and then I can drop them in an S3 bucket and just kind of be on with my day. And, you know, even further, if I want to basically opt into one hundred percent rendering in the browser, client-side rendering, we have a way you can basically skip server-side rendering entirely, which for some
(39:49):
teams, they're like, yeah, I know it's maybe not the best UX, I know that maybe it's a little bit slower, but it's fine. Like, I'm just building something that has a bunch of heavy charting libraries and I just want to go straight client-only, and that can be okay too.
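The static export Lee describes is driven by a one-line config option. Here's a minimal sketch: `output: "export"` is a documented Next.js setting, while the rest of the setup (bucket, host) is up to you.

```typescript
// next.config.ts — minimal sketch of a Next.js static export.
// With this set, `next build` emits plain HTML/JS/CSS into `out/`,
// with no Node.js server required, ready to upload to an S3 bucket
// or any static file host.
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  output: "export",
};

export default nextConfig;
```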
Speaker 4 (40:02):
So you're basically saying that you can SSG, static-site generate, into an S3 bucket, and you can take it a step further and basically SSG a blank page into an S3 bucket.
Speaker 5 (40:16):
Totally, yeah. The way I think about it is, there is a strict SPA, which I'll define as a strict single-page application, which is basically that blank HTML shell that boots up some client JS. There's a ton of websites built this way, right? Like, a lot of the client-only React apps are just built this way, and that's fine. You can do
(40:38):
that in Next.js. What we try to do is build this gradual ramp. So you can do that. You can also statically generate or pre-render more of that page, so it's not necessarily a blank HTML page but, like, a good chunk of the shell, which then boots up React on the client. Taking another step further, you can pre-render multiple entry points. So maybe you have the slash
(41:02):
route, maybe you have slash dashboard, maybe you have slash users. Each one of them has a different shell, so you're not looking at, like, the page loads, you see one spinner while you're waiting for the client JS to boot up, and then you've kind of loaded your client app. You can still get a little bit better UX by loading those shells and having different shells.
(41:23):
And that's like three steps along this journey, but there are actually more steps if you want to take them. Like, if you decide, actually, I am okay with doing a little bit of stuff on the server, you can even run what's sometimes called a backend for frontend. You can use the Next.js server just to, effectively, you know, hold your secrets to talk to your API, hold your bearer token, and
(41:44):
still have your REST API somewhere else, so you don't have to do some kind of, like, you know, wild proxy situation. It's basically just a proxy.
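That backend-for-frontend proxy can be sketched as a Next.js route handler. This is a rough sketch, not Lee's code: the upstream URL and the environment variable name are illustrative assumptions; only `NextResponse` from `next/server` is the framework's actual API.

```typescript
// app/api/users/route.ts — sketch of the "backend for frontend" idea:
// the Next.js server holds the secret and forwards the request to a
// REST API that lives somewhere else. URL and env var are illustrative.
import { NextResponse } from "next/server";

export async function GET() {
  const upstream = await fetch("https://api.example.com/users", {
    headers: {
      // The bearer token stays on the server; it never reaches the browser.
      Authorization: `Bearer ${process.env.UPSTREAM_API_TOKEN}`,
    },
  });
  const data = await upstream.json();
  return NextResponse.json(data);
}
```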
Speaker 4 (41:51):
Interesting. It's especially interesting given that the React core team has effectively retired Create React App.
Speaker 5 (42:00):
Yeah. I think from their perspective, which makes a lot of sense, you know, going back in time, when Create React App came out, like I mentioned, it was basically the exact same time as Next.js, and the state of building a React app was way more difficult than it is today. Today, there's this vibrant ecosystem of frameworks and even lower-level tools like bundlers that are building
(42:22):
deeper integration into all of the bits of React. And I think for devs getting started, it's just dramatically easier to be productive right away and to have a great UX. And that's kind of why I mentioned there's this graduation curve. Yes, you can start here in the strict SPA mode, but along the way, you're probably going to want some of these things. Like, you probably want that little bit better UX, so you don't have
(42:43):
to stare at that loading spinner while the user's loading the page. And, you know, it'd be kind of great if I could prefetch some of that stuff on the server ahead of time, rather than having to wait for the client and, like, offloading all this work onto the client. So it's kind of like, we know, based on, you know, let's call it ten years of React, you're probably going to run into these three or four things: how you do routing, how you do data fetching, how
(43:05):
you load your assets. So that's kind of why Next.js gives you all the building blocks for that, but then it's up to you, and you get the choice as the developer on how much or how little you want to use. If you want to use Next.js just as easy-mode React that's all client-rendered, that's fine too.
Speaker 4 (43:22):
It's interesting how I'm seeing that. It's basically you guys, and then everybody else using Vite.
Speaker 5 (43:30):
Yeah, yeah. I think Vite has been a great tool for those people who want to kind of build their own type of system. So maybe they, you know, pick their own bundler in Vite, they pick their own router, maybe React Router, they pick their own other pieces of the puzzle, a data-fetching solution, maybe they're using something like React Query, for example,
(43:50):
and they can kind of put those pieces together based on whatever the needs are for their app. That's totally fine. I still see a ton of people doing that. What we've tried to do with Next.js is provide another option, which is: if you want to have all of that baked into the framework itself, that's one solution that we offer. But yeah, it's totally fine. I think it's a lot of a let-a-thousand-flowers-bloom type situation, where
(44:12):
it's not a bad thing to have a lot of options in the ecosystem. I think this is primarily why React is still so popular ten years later.
Speaker 4 (44:20):
But is it really that way? Because it does seem that a lot of the React core team has moved from Meta into Vercel, to the extent that it almost seems like the next version of React is Next.js.
(44:40):
So is it a kind of world where Next.js is kind of eating React?
Speaker 5 (44:47):
That's not really how I see it, because if you look at the percentage of the people who work on and contribute to React that work at Vercel, it's still pretty small in comparison to the broader scope of the React team. I think a lot of people don't know the extent of how big the React team is at Meta. They do a lot of fantastic work, not only for the
(45:08):
React core team itself, but for React Native and for supporting all the React development for other kind of pieces of the ecosystem at Meta. Vercel certainly is helping push React forward insofar as we're contributing some of the server features, things that we're helping with like Server Actions, for example. We've played a decent part in contributing a lot of
(45:28):
that work. But ultimately, at the end of the day, it's still a partnership between Meta and us, and hopefully more companies in the future as well. I think from the React team's perspective, they would love to have more kind of full-time contributors to help build this out. But it's a pretty big investment in terms of your company's R&D spend, and it's something that Vercel
(45:50):
is kind of motivated to do, because we're also trying to make Next.js a great framework that's built on top of React, so we're willing to invest the capital in terms of funding people on our team to help contribute to the core library. So yeah, that's the way I think about it. I think it's great that we help contribute to React, and I would love to see other companies join as well.
Speaker 4 (46:10):
Cool I like that approach.
Speaker 1 (46:13):
So, are there new innovations in Next.js that you want to tell us about, or things that are coming up that people might want to hear about?
Speaker 5 (46:23):
Yeah. I think for me, the whole journey of twenty twenty-five is basically two things. The first one is continuing to make one-percent iterations every release, every day, on the foundations, the fundamentals: really great stack traces, really great error overlays that are beautiful and help
(46:44):
you find the answer quickly. Really great performance when you're working locally: fast compilation, fast hot module reload, fast builds. A lot of the things that are the bread and butter of the framework, per se. Maybe not the most sexy features in the world, but we think that performance
(47:05):
and stability is a really important piece of what people who have signed up to use Next.js in their company appreciate, in terms of frequent updates and improvements. So that's a heavy, heavy focus for us. The next release of Next.js, here in a few weeks probably, or maybe shorter, we have a newly redesigned error overlay that looks so great. It's got better stack traces, it's easier to find
(47:27):
the information, it's got some nice animations, just the little delights of polish that make the everyday experience of working in the framework better. So there's a ton of stuff in that category. And then the second category is: how do we take, you know, years of feedback on the framework and kind of continuously innovate and simplify the existing model along the way, while still respecting that, you know,
(47:50):
a lot of people are building their apps in a certain way and we don't want to disrupt their workflows, but we want to give them a solution over time that's increasingly more simple for them to adopt, especially for new people who are just getting started in the framework.
So an example of this is in two ways: one around caching, and two around the story with data fetching.
(48:12):
So probably the biggest piece of feedback on Next.js for the past couple of years is: it's very powerful that it has caching, but we'd love to see the default experience be a little bit easier. It's like the ramp from, okay, I can do caching, to, oh yeah, this is why caching is really hard, was happening too quickly. So we've been working on an improved system that allows you to define directives. It's like
(48:34):
a string, like use cache, and cache a function or cache a React component in a much more granular, composable way.
That's going to be coming here very soon. Then the next piece that builds on top of that is the ability to write code that looks asynchronous and have it actually be a page that's dynamically rendered. So I'll give
(48:55):
you an example. You have a React Server Component. You might think, okay, that server component requires a server. The first misconception there is that you can take a server component, do that static export I mentioned, and run it as a pre-rendered component that just spits out some HTML, right? So it's kind of like, yeah, it's a React build-time component, it just has
(49:16):
access to the server. So you've got the server component. You say async function, await, get data from database. If you write a call like await get data from database, you're probably expecting that to be fresh data. Like, you're making a call and you didn't say that you wanted anything to be cached, right? But the inherent
(49:36):
confusion here comes with this idea of pre-rendering. If it's running during the build, you're trying to pre-render as much of the page as possible, and that's really good, because you can get that fast initial response when you have the pre-render. So the two kind of pieces in parallel in this next phase are: if I write code that looks dynamic, the page should default to
(49:56):
being dynamic. And then me as a developer, I get the opportunity to say, you know, which way do I want to go? Should this be fresh on every request? Okay, great, I put Suspense around it, React Suspense. If it's not, and it's something I want to pre-render, I put use cache on it. So you have these, like, two paths of which way you want to take. And those
(50:17):
three things, the first two, and then the third one being partial pre-rendering, which is effectively just a way to get a larger amount of the page pre-rendered, I think will simplify the experience of making Next.js apps quite a bit. So I'm looking forward to that future as well.
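The two paths Lee describes can be sketched roughly as follows in a `.tsx` server component. Note this is a sketch, not his code: the `use cache` directive is experimental and its semantics may shift before it ships, and all URLs and component names here are illustrative.

```typescript
// app/page.tsx — sketch of "two paths": cache it and pre-render it,
// or leave it dynamic and wrap it in Suspense so it streams in fresh.
import { Suspense } from "react";

// Path 1: marked cached — eligible for pre-rendering, with the cache
// key derived from the function's inputs by the compiler.
async function getProductList(): Promise<{ id: string; name: string }[]> {
  "use cache";
  const res = await fetch("https://api.example.com/products");
  return res.json();
}

// Path 2: left dynamic — runs fresh on every request while the rest
// of the page streams around it.
async function FreshStock() {
  const stock = await fetch("https://api.example.com/stock").then((r) => r.json());
  return <p>In stock: {stock.count}</p>;
}

export default async function Page() {
  const products = await getProductList();
  return (
    <main>
      <ul>
        {products.map((p) => (
          <li key={p.id}>{p.name}</li>
        ))}
      </ul>
      <Suspense fallback={<p>Checking stock…</p>}>
        <FreshStock />
      </Suspense>
    </main>
  );
}
```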
Speaker 4 (50:34):
It's interesting. Well, of course, you know, they say that there are two hard problems in computer science: naming things, cache invalidation, and off-by-one errors. So yeah, caching is hard. The need for caching, by the way, I mean, it's worthwhile to touch a little bit on why all this
(50:55):
discussion about caching. I think it has to do with the fact that we're encapsulating data access, and that can really easily lead to waterfalls. And so one way is to basically, obviously, just break encapsulation and get all
(51:15):
the data as efficiently as possible yourself and stream it into the components. But then the components aren't self-contained. So the alternative is to say: I will get the data at the component level, but I'll be really smart about caching, to avoid actually getting the same data multiple times, and to make all the data retrieval as efficient as possible.
(51:36):
And that's the route that you guys seem to be taking. It will be very interesting to see how well you're able to achieve this combination of efficiency, ease of use, and correctness. It's not going to be a trivial combination.
Speaker 5 (51:56):
Mm, yeah. Just to your point on why caching: the cache exists somewhere in the system. If it's not in your front end, then you're caching your API responses, or you have some in-memory cache in front of your database. Like, most of the time, there's a cache somewhere in the stack. And I think what
(52:17):
we're seeing with a Next.js App Router app, and being more involved across the full stack, basically React having more of an opinion about the network layer, is that those caches are moving into more user code that a developer, a product developer, is having to think about, which is very powerful, but it's maybe not something they had to consider previously, because, like, well, that was some other team
(52:39):
who worked on the REST API; they added the caching layer; I just hit that endpoint as much as I want, I don't really have to think about it. You can still build that model in Next.js.
Speaker 6 (52:48):
Comes out of the back of the computer right right
or until your cash expires, and you didn't realize that
your cash control headers were misconfigured, and then actually you
overloaded your database and you didn't have the proper deal
with air caching headers.
Speaker 5 (53:01):
So now your page is down and you have to do a rollback. And there's a lot of hard parts in getting that right, for sure, and building it into the system, like you said, while still maintaining correctness, is tough. One of the things with use cache, because we can use the compiler, which, shout-out to Svelte, which does a great job of using the compiler as well and making that kind of a core part of
(53:23):
the design of both the language and the framework with SvelteKit, is that we can look at the inputs into the function, the actual variables being used inside of the code where you've marked it with this directive, and automatically figure out the cache keys. So, you know, oh shoot, I forgot to mark this as a cache key for something that actually changes, so I'm not able to automatically
(53:45):
invalidate that cache when it changes. Like, a lot of those really hard things, we can automate some of that away by using the compiler. It's obviously not perfect. There are still cases where you have to be aware of what you're doing with the cache, and tag it correctly and invalidate it correctly. But the more of that we can build into the framework itself, the fewer little edge cases you have to deal with.
Speaker 4 (54:06):
Yeah, at this point, it's worthwhile mentioning that we actually had Joe Savona and Sathya from Meta on to talk about the React Compiler, which is all about memoization, which is another way of saying caching. So for sure, I'm totally with you on that.
Speaker 1 (54:25):
I wanted to point out one other thing, and that is that, you know, you're talking about kind of figuring these things out as you go, and I think sometimes we kind of take for granted that things are just going to work, without recognizing that, yeah, sometimes the first pass is not great, or at least not ideal. So just keep that in mind when you give
(54:46):
the feedback, right? It's like, hey, look, you've got some great tools here, and we're going to give you better tools, you know, based on your feedback and things like that.
Speaker 5 (54:54):
Yeah, I think in an ideal world, right, you would never ship, and you would just cook away in private until you had the perfect API, and, oh man, it was just the best thing in the world, people are gonna love this, and I'm just going to keep waiting until I get it just right. And the reality is.
Speaker 2 (55:11):
Sounds nice too.
Speaker 4 (55:13):
Well. My dad likes to say that the worst enemy
of good is better.
Speaker 5 (55:21):
The enemy of done is perfection.
Speaker 2 (55:23):
Yes, yeah, I am.
Speaker 4 (55:26):
I am amused, though. You know, I'm still old enough to remember that the big selling point of JavaScript was that it wasn't compiled. So it's really interesting to see how much of modern web development is actually based on bundlers and compilers. You can think of bundlers as the modern-day linkers. The fact is that we
(55:50):
are effectively compiling and linking the JavaScript code to get it to work the way that we need it to work.
Speaker 5 (55:56):
Mm, yeah, I love the ambition. I think there's a lot of push in the web development community, especially among those who have been programming for a while: can we get back to the roots of a compiler-free, bundler-free experience? And I love the ambition of that. Yeah, I think there's good ideas there.
(56:18):
I like the pragmatism of it. I think the reality is, where do you want to make the trade-off? And unfortunately, the trade-off with that approach usually comes at the user experience, and that's a trade-off that I'm not usually willing to make, because I'd rather have a compiler automate a lot of that work for me. But, you know, you have to revisit your
(56:39):
priors and recheck them every year as technology continues to get better and better. So I agree spiritually with some of what's being said there, but I disagree on skipping compilers and bundlers in twenty twenty-five.
Speaker 1 (56:53):
Yeah, I have to say, you know, just kind of coming from the other side of this too, maybe to a stronger degree.
Speaker 2 (56:59):
There's a lot of it depends in there.
Speaker 1 (57:01):
Right. If you're doing some things, you know, maybe you don't need it, and then in other cases it makes a lot of sense. And so you have to look at your approach, you have to look at what you're using, what your tools are, you know, what your framework expects of you. And then, yeah, I don't know that there's necessarily a right way or a wrong way. A lot of times
(57:23):
it's just, you know, this makes a lot more sense here, and that makes a lot more sense there.
Speaker 4 (57:27):
Yeah, it's about the trade-offs, really. I mean, you know, you can look at the extreme other direction, as expressed by somebody like Alex Russell. So for sure, you know, it really depends on what you're trying to build, the functionality that you're trying to ship, and whatnot. I mean, obviously the fastest website is going to be the blank page.
(57:49):
So, you know, it really depends on what you're trying to achieve. By the way, I saw another thing that might be worth mentioning in the short time that we have left, which you guys I think recently introduced, which is something called Fluid. Can you say something about that?
Speaker 5 (58:08):
Yeah, so a quick, well, I'll try to make it quick, little history on Vercel's compute. Back in the day, I'm sure you're probably familiar, but back in the day, we offered a serverful platform where you could deploy Docker containers. You know, this was many, many years ago, and in some instances that's really great.
(58:32):
You can have a lot of flexibility with what you bring to the platform. It's unbounded. You can do anything, as long as you can get it into Docker. It's harder, though, to get a consistently good experience, because, again, it can be literally anything. This is back in the day, kind of pre-Vercel moving to a serverless model. Along the way, we kind of found our footing, I think,
(58:54):
with focusing on frameworks and focusing on being able to automatically define and generate the infrastructure as an output of the framework. So we call this framework-defined infrastructure. Basically, it means you take Next.js or Nuxt or SvelteKit or whatever framework you want to use, you write code in the open source way, you bring it to Vercel, and Vercel looks at the output of the build process.
(59:17):
So you run next build and you get a bunch of files, and we do the work to actually convert that into cloud infrastructure. We turn it into distributing those files to a CDN, or running the on-demand compute through Vercel Functions, and so on and so forth. So for many years now, we've been doing serverless compute on top of AWS Lambda, and that also has a
(59:38):
lot of benefits, but also some notable downsides. I think the biggest downsides we've faced over the years: number one, I think serverless as a term has become increasingly unhelpful, because it means everything to everyone, and it's just, yeah, it is basically cloud. It's like, okay, so what exactly
(01:00:01):
do you mean by that? Do you mean auto-scaling? Do you mean cost-effective? What are the more specific things that you mean? So I've increasingly struggled to explain what I'm talking about when I say serverless. The second, and, like, the product feature that was the most damning or hard with Lambda, is: for every one request to my Node.js Lambda, I can only use one function,
(01:00:25):
so it doesn't have the multi-concurrency of what you can get out of a server, where you can send many requests into one box. Generally, that means it's going to be a lot less efficient with your actual infrastructure usage, you know, one request to one function, versus load-balancing many different requests into a box, effectively a server, which ultimately
(01:00:48):
ends up saving customers a lot of money when you can be more efficient with how you route the compute.
So over the years, we had these kind of warts of serverless that we were trying to fix and paper over. We would add new functionality to add streaming to Vercel Functions; we would add new functionality to prevent unbounded recursion that could rack up a denial-of-wallet;
(01:01:10):
we would add functionality to run very long workloads on top of functions. And then the latest thing here is actually optimizing the lower-level bits, so, like, the underlying infrastructure underneath the function. We rewrote it in Rust to make it very optimized, which, you know, of course, you just rewrite it in Rust and it's faster, right?
(01:01:31):
But we have really been optimizing every part of the stack, to where now the last piece was making it much more concurrent, and we've now taken all of this and kind of bundled it into a new computing model that we're calling Fluid compute. It's not necessarily something that is Vercel-only. We think you could build your own version of this on cloud primitives if you
(01:01:52):
would prefer. But generally, it is trying to be a hybrid of servers and serverless, building off of our experience and what some other people in the industry have done, with a lot of inspiration from things like Google Cloud Run, for example, which is kind of coming at this from the opposite side, from the server side, with auto-scaling servers, with the general goal of being very
(01:02:13):
cost-effective. So if I'm doing a bunch of network IO to a database or to an AI model, you know, I don't want to have to spend a bunch of extra money while I'm just waiting on all that network IO, which for a lot of apps is kind of the predominant usage of your compute: just talking back and forth, not the actual CPU time. So, very cost-efficient, very fast to
(01:02:37):
start up and scale down. It auto-scales based on your traffic, which I think is really important, and ideally you can eliminate this cold-start problem that has plagued serverless for a very long time. And we do that by keeping pre-warmed functions, which is basically like Google Cloud Run's minimum active instances of one. You always have compute running that you can load-
(01:02:58):
balance and send requests into. So all of that is Fluid compute, and it's something that Vercel customers can opt into using today, and we've seen some pretty awesome results from it.
Speaker 4 (01:03:07):
So it's kind of like Lambda functions, but they can live longer, service multiple requests, and don't have to immediately shut down when they're done. Am I missing something?
Speaker 5 (01:03:22):
Yeah, totally. And if you're doing network I/O, you're being
charged for the CPU, right? You can be a lot
more efficient when you're waiting on a bunch of calls
to your AI model or to your database, which ultimately drives
down costs for very concurrent applications that are doing a
lot of I/O. So in some instances we've seen people
(01:03:43):
saving as much as eighty five percent of their bill
previously on the serverless model. It obviously depends on
your traffic patterns, how concurrent your traffic actually is, but
in almost all cases it's better for the customer.
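To make the billing difference concrete with made-up numbers (the rate and durations below are hypothetical, not Vercel's actual pricing):

```javascript
// Hypothetical pricing illustration -- not Vercel's actual rates.
// Per-request serverless bills every request for its full wall-clock
// duration, including time spent idle waiting on network I/O.
function perRequestCost(requests, secondsEach, ratePerSecond) {
  return requests * secondsEach * ratePerSecond;
}

// An instance-concurrent model bills the shared instance's wall time
// once, no matter how many overlapping requests it served.
function concurrentCost(instanceSeconds, ratePerSecond) {
  return instanceSeconds * ratePerSecond;
}

const rate = 0.0001; // dollars per billed second (made up)
// 100 fully overlapping requests, each 1 s of mostly I/O wait:
const before = perRequestCost(100, 1, rate); // 100 billed seconds
const after = concurrentCost(1, rate);       // 1 billed second
console.log(`savings: ${(100 * (1 - after / before)).toFixed(0)}%`);
```

The real saving depends entirely on how much your traffic overlaps, which is why the episode hedges it to "depends on your traffic patterns."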
Speaker 1 (01:03:58):
You said, this isn't running on Lambda or things like that,
What is it running on?
Speaker 2 (01:04:04):
Is this your own platform or yeah?
Speaker 5 (01:04:07):
So we still use AWS, but over the years we've
had to, excuse me, over the years we've had to
kind of build our own custom pieces to handle support
for things before Lambda eventually had them. In some instances it never had them.
So for example, excuse me one second, got a cough.
(01:04:31):
For example, Lambda didn't have streaming until recently, I think
maybe like a year ago. But when we launched the
Next.js App Router, we wanted to have streaming and
had to build that kind of independently on our system.
So we still use AWS, still love AWS, but we've
had to build a lot of that stuff ourselves. Said
(01:04:57):
in another way, you can't go to AWS and purchase
Fluid compute off the shelf.
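The streaming capability being described can be sketched with the Web-standard APIs that App Router handlers use (ReadableStream and Response are global in Node 18+); the chunks here are hypothetical:

```javascript
// A toy sketch of a streaming response. This is the kind of
// capability that had to be built custom before the underlying
// platform supported response streaming natively.
function streamingHandler() {
  const chunks = ['<header>', '<slow content>', '<footer>'];
  const stream = new ReadableStream({
    start(controller) {
      // Emit each chunk as soon as it is ready instead of waiting
      // for the whole page to render before sending anything.
      for (const chunk of chunks) {
        controller.enqueue(new TextEncoder().encode(chunk));
      }
      controller.close();
    },
  });
  return new Response(stream, {
    headers: { 'Content-Type': 'text/html' },
  });
}

// Consume the streamed body:
(async () => {
  const res = streamingHandler();
  console.log(await res.text());
})();
```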
Speaker 4 (01:05:00):
Right. So you're using AWS as kind of
like the hardware, as it were, but you're building a
lot of the functionality on top of that: stuff that
others might try to get from AWS but
(01:05:21):
can't necessarily get from them, at least not yet.
Speaker 5 (01:05:24):
Yeah, and we've seen this exact same thing play out
in our build infrastructure as well. So yes, you can
go run a bunch of builds on EC2, but
we had done that for many, many years and found
there were a lot of little optimizations we wanted to
make to make it more efficient and more cost effective.
And it's not like we set out with the goal of, man,
we really want to rebuild our build infrastructure and do
(01:05:46):
something custom. It was more that innovation was a
necessity here. So we have a technical blog post that
talks about this. But our internal build system is called
Hive, and it's effectively that we outgrew EC2 based on,
you know, millions and millions and millions of builds, and
ended up kind of building our own system on top
of the underlying AWS primitives.
Speaker 4 (01:06:07):
Cool.
Speaker 5 (01:06:08):
So AWS is still great for, like, super reliable, fast,
efficient cloud infrastructure. We're just trying
to build the right abstractions for developers so they
can take the best advantage of that infrastructure.
Speaker 2 (01:06:25):
Gotcha.
Speaker 1 (01:06:25):
So for the stuff in Fluid, how much of it
is sort of transparent? You know, like I just build
the next app and then I fluid it and then
it just works and I get those optimizations versus Okay,
you're going to deploy this to Fluid, You've got to
use some of these APIs to get some of the goodies.
Speaker 5 (01:06:45):
Yeah, it's fully transparent. And this is kind of the
beauty of the framework-defined infrastructure model: you wrote
a Next.js app or any framework app, and you had
some kind of API and it works on your local machine.
You've got your API, awesome. You deploy it to Vercel. We
understand the output of the framework was an API. We
(01:07:06):
convert it into a Vercel function, and you flip on
the switch that says yes, I would like to use
Fluid compute. You now have more cost effective, more
concurrency-optimized infrastructure without having to write the infrastructure code yourself.
It's still an output of the framework.
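As a rough illustration of the "framework output becomes a function" idea: the handler below uses only the Web-standard Request/Response APIs (global in Node 18+). The route and payload are hypothetical; in a real Next.js App Router project it would live in app/api/health/route.js and be exported.

```javascript
// A plain Web-standard API handler. The framework emits this as
// part of its build output, the platform recognizes the output as
// an API, and it becomes a function -- no infrastructure code
// written by the developer.
async function GET(request) {
  // Stand-in for waiting on a database or AI model over the network.
  const data = await Promise.resolve({
    status: 'ok',
    path: new URL(request.url).pathname,
  });
  return Response.json(data);
}

// Local smoke test of the handler, outside any framework:
(async () => {
  const res = await GET(new Request('http://localhost/api/health'));
  console.log(res.status, await res.json());
})();
```

Because the handler is just standard request-in, response-out code, the same source runs on a laptop and as a deployed function; the platform decides how to host it.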
Speaker 2 (01:07:22):
So what frameworks do you support then?
Speaker 5 (01:07:24):
Sorry, say that again?
Speaker 2 (01:07:26):
What frameworks do you support then?
Speaker 5 (01:07:27):
For Fluid, any framework that can run compute on Vercel,
so any server-rendered framework, any framework that allows APIs,
or even, you know, if you want to do, for example,
a client-side Vite application that uses the Vercel Functions
API directly, which is a way to add server-side
code to just a client-side app, those functions
(01:07:50):
can also use Fluid as well. And also, you know,
maybe not as well known, but you can
deploy Express apps to Vercel, or, you know, newer
frameworks like Hono, which is like a back-end Express
replacement. And we also support Python, so you
can run Python APIs on Vercel too.
Speaker 2 (01:08:09):
Very cool.
Speaker 4 (01:08:13):
How about Ruby on Rails?
Speaker 5 (01:08:17):
Yeah, yeah, it would be interesting. Not something
that is kind of top of our list right now
for Ruby or for PHP, but there are community libraries
that do it, so some people have kind of hacked
around it in the community, which I think is pretty interesting.
Speaker 1 (01:08:32):
So what else should people know about what's going on at Vercel?
It's always interesting to see what you guys are doing.
Speaker 5 (01:08:37):
Yeah. So we talked about v0, we talked about
the AI SDK, we talked about Next.js and
some of our improvements there, talked about Vercel and
Fluid compute, we talked about the Flags SDK. We talked
a little bit about shadcn/ui, which I think
is interesting. shadcn works at Vercel, so it's
(01:08:59):
something that we are, you know, always investing in as well.
And v0 natively understands shadcn components. As well,
kind of funnily enough, ChatGPT and Claude,
like their UI applications on the web, also
understand shadcn and can generate shadcn components, which is
pretty cool. So it's awesome to see that standardization there.
(01:09:23):
I think that pretty much covers everything. I mean, we
can talk about other bits if you would
want around Vercel's infrastructure and/or other Next.js features,
but we've covered quite a bit of surface area.
Speaker 2 (01:09:38):
Yeah.
Speaker 1 (01:09:38):
Well, we're kind of getting toward the end of our
time too. If people want to connect, they have questions
about Vercel or v0 or anything else we've talked about,
how do they reach you or get help from other
people at Vercel?
Speaker 5 (01:09:52):
Yeah, if you want to reach out to me, please
send me a message on X (Twitter). If you
have feedback on any of our products, there should be
a feedback button in the UI on every single one
of our surfaces if you want to just go directly there.
That all gets read and routed to the right teams,
so just know your feedback is heard. And you can
also email me, lee at vercel dot com, for those
(01:10:14):
of you who still like email.
Speaker 4 (01:10:17):
All right. Well, I'll tell you one thing.
You know, we predominantly use Slack, and I keep losing
things on Slack. Somebody sends a message in one of
the channels; I'm subscribed to, like, a million channels
because of the type of role that I perform at
the company. Yeah, and if something is sent on
(01:10:38):
some channel and I didn't catch it when it was sent,
there's a good chance that I will never catch it.
Speaker 5 (01:10:45):
Hmm. Yeah. I've had to become a very skilled
Slack sleuth, I don't know the
right word, but I'm very good
at navigating the mess of Slack channels now and staying
on top of what I need to be on top of, because, yeah,
as a company grows, there's just lots of information happening
(01:11:07):
in real time in Slack.
Speaker 4 (01:11:09):
Yep. All right, a job for machine learning: yes, here's
the summary of all the things I should
know that happened in Slack since the last time I checked.
Speaker 5 (01:11:23):
Totally.
Speaker 1 (01:11:24):
Yeah, all right, let's have Dan start us off with picks. Dan,
what are your picks?
Speaker 4 (01:11:32):
I don't have that much. You know what, I'll mention
just the one thing. So I think I picked this
one before, a while back. It's a podcast called Revolutions
by Mike Duncan, the same guy that did The History
of Rome podcast. I'm very much a history buff, and
(01:11:53):
Revolutions is him going through big revolutions in history,
starting from the English Revolution, the less well known English
Revolution with Oliver Cromwell, and then going to the American Revolution,
the French Revolution, the Mexican Revolution, the Russian Revolution,
et cetera. And it's highly
(01:12:14):
recommended and very interesting. But he's now doing something that's
kind of out there. I mean, after the Russian Revolution,
he got to the point where if he kept on
talking about even more modern revolutions, it would stop being
so much about history and be more about politics, I guess.
So instead, what he did is he kind of invented his
(01:12:36):
own revolution. He's talking about a revolution on Mars in
the twenty-third or twenty-fourth
century or something like that. So it's really interesting how
he's inventing a revolution to talk about, and it's kind
of like a science fiction story in a way. But
(01:12:58):
it's very engaging and I highly recommend it. So that
would be... if you've not listened to any of that podcast,
I would recommend going back to the start. It's
got hundreds of episodes. It's of course not the same
type of podcast as ours. It's not a discussion, it's
not an interview. It's all scripted. He basically reads out
(01:13:22):
the script that he wrote, but he does a ton
of research. It's highly engaging, very amusing, very informative. Again,
highly recommended, and the new stuff is recommended as well,
And that would be my pick.
Speaker 2 (01:13:36):
Awesome, Steve, what are your picks?
Speaker 3 (01:13:40):
All?
Speaker 5 (01:13:40):
Right?
Speaker 3 (01:13:40):
So before I get into the high point of every
episode, the Dad Jokes of the Week, I'm going
to do a shameless plug here in terms of an
email I got through LinkedIn from somebody who's a listener,
and I won't share his name to protect him from
the abuse he would receive as somebody who likes my dad jokes.
But uh, he did say, hey, I'm a longtime
(01:14:03):
listener of JS Jabber and enjoy it very much, particularly
your dad jokes. I have retold a few of them
and got mostly eye-rolling but a few guffaws. So I'm like, okay,
that's good, making some good progress there. So now to
the reason for that email: the Dad Jokes of the Week.
So recently I changed the voice, you know, on my
(01:14:23):
car GPS. You can use different accents and stuff,
you know. I changed it to male. So now it
just says, it's around here somewhere, keep driving. Oh, my
dad joke drum roll is not working, bumming me
out here.
Speaker 5 (01:14:40):
Okay.
Speaker 2 (01:14:41):
So, uh, I was.
Speaker 3 (01:14:44):
Ordering a pizza the other day and they asked me
if I wanted to cut it into four or eight slices.
I said four, There's no way I could eat eighth.
Speaker 5 (01:14:55):
There we go.
Speaker 4 (01:14:56):
You kind of reminded me of the fact that I
say I don't care about the price of gas
because I always pay fifty dollars.
Speaker 5 (01:15:07):
Gotcha.
Speaker 3 (01:15:08):
Sorry, let me give you that... oh, dang, the drum roll's
not working again. Sorry, tried to give you a drum roll
there. I'm a little delayed. Sorry, we're back on Riverside again.
The issue is that there's a little delay there. It's messing
me up. So I gotta get back on the train again.
Speaker 4 (01:15:25):
You need to play the drum roll before you
tell the joke.
Speaker 5 (01:15:28):
And yeah, but.
Speaker 3 (01:15:29):
Yeah, there's enough of a lag there.
Speaker 5 (01:15:30):
I gotta do it.
Speaker 3 (01:15:32):
So, two more. So, my favorite: I came up
with a new superhero, and, you know, Chuck's Marvel shirt
that he's wearing today sort of inspired this. I
don't know if he's part of the Marvel universe or
maybe another one, but it's Typo Man, because he rights
all the wrongs. That's "writes," right? And then finally, I
(01:15:53):
told my wife I wanted to be cremated. She made
an appointment for Tuesday.
Speaker 4 (01:15:59):
Well, there was the Monty Python Meaning of Life movie
where they, you know, they knock on his door
and say, you've signed up to be an organ donor,
we've come to collect.
Speaker 3 (01:16:10):
Right. "Always look on the bright side of life."
Speaker 4 (01:16:16):
That's a different one though.
Speaker 1 (01:16:18):
All right, I'm going to go in with my picks.
I usually do a board game pick. I haven't played
anything new lately. I'm gonna pick one that I haven't
picked in a while. So my wife and I went
down to Saint George, Utah for the Parade of Homes.
We do this every year on the last weekend in February, basically,
(01:16:39):
and uh, my sister in law and her husband come
and they always bring the same game, and we really
enjoy playing it. It's called The Quacks of Quedlinburg, and what
you're doing is you're basically brewing a potion, and if
you get too many of the wrong elements, the white
elements, in it, it'll blow up. And so you're trying
(01:17:00):
to get as big a potion as you can,
often with as many unique elements as possible, without
blowing up your pot. And then you get bonuses for
playing it. I think we were playing with some of
the expansions, but the base game is actually pretty good.
(01:17:22):
Let me see, I didn't look it up on
BoardGameGeek, so let me grab that real quick and
I can tell you. It's got a board
game weight of one point nine four. So if you like
a game that'll make you think, but
it's not, like, super complicated, that's about where this clocks in.
I think we played it in an hour. Anyway, fun game,
(01:17:47):
so I'm gonna pick that. I'm also going to, and
I'm just that guy sometimes... So we are on Riverside.
It's been mentioned a few times. We were on StreamYard.
I was having some issues with StreamYard, working things
out to get the payments to work with them and
things like that, and eventually they downgraded the account and
(01:18:10):
deleted some of the episodes.
Speaker 2 (01:18:12):
That we hadn't released yet.
Speaker 1 (01:18:15):
Oh wow. And so, yeah, caveat emptor if you're going
to go use StreamYard.
Speaker 2 (01:18:23):
They have a great tool, but they might just stab
you in the kidney. So anyway.
Speaker 4 (01:18:30):
You think content is a big no? No?
Speaker 2 (01:18:33):
I mean yeah, And I tried to work with them.
Speaker 1 (01:18:37):
I tried to figure out how to get it back, and
they just came back and said we deleted it. So anyway, yeah.
So if you're gonna do this kind of a thing,
just be aware. We've never had that issue with Riverside.
I did go through a bit of a process just
talking to Riverside, and the process was them
showing me all the things it does now, and it
(01:18:58):
does like all of the things that I had wished
it had done before.
Speaker 2 (01:19:03):
It basically does now.
Speaker 1 (01:19:04):
You can actually edit your episodes in Riverside and it
uses AI to clean up a bunch of stuff, automatically
generates your transcripts, it'll write you show notes. You know,
all the things that I've either been paying for
(01:19:25):
other systems to do or wishing that I had a
system to do. It does all those things. So I'm
going to pick Riverside. Riverside is not cheap either, but.
Speaker 4 (01:19:37):
You know, anyway, anything's better than getting your episodes deleted.
Speaker 2 (01:19:46):
Yeah. So yeah, we lost a couple.
Speaker 1 (01:19:49):
I think we lost more on Ruby Rogues than we
did on JavaScript Jabber, but we may have to reach
out to a couple of people and say, hey, they deleted it,
we're going to re-record it. But yeah, I'm delighted to
be on Riverside. And apparently it's an Israeli company, which
is also cool. I don't know that I
care a ton where any company I use
(01:20:10):
is based. But I'm...
Speaker 4 (01:20:13):
Kind of curious because it was streamed live, so it's
it's on YouTube, it's on on Oh.
Speaker 2 (01:20:19):
Yes, we could get them that way. That's a good.
Speaker 1 (01:20:21):
That's a good one. Here we go, troubleshooting live on JavaScript Jabber. Yeah,
we should be able to pull them off of YouTube
because they're on the channel for the network,
and so we can download them off of there.
Speaker 2 (01:20:38):
You just saved us a little work, thanks Dan.
Speaker 4 (01:20:42):
Besides that, I still remember that episode that we
did without pressing the record button.
Speaker 2 (01:20:52):
Once or twice. I've done that before. Yeah, that's
a mistake.
Speaker 1 (01:20:58):
You only make a couple of times.
Speaker 3 (01:21:03):
I'll throw out that on the very first podcast episode I ever did,
I did that and had to re-record it. The
guy was gracious enough to re-record it with me. Yeah,
never forget that.
Speaker 4 (01:21:15):
By the way, you know, as an interesting pick, we
now use, for some reason, Google Meet at work instead
of Zoom, which is not necessarily what I would have chosen.
But they have a good feature, which is you can
pre-configure the session to be recorded, because you never
remember to press record in real time. So it's
(01:21:40):
one thing that I do now when it's
sessions that I schedule that I know need to be recorded.
Speaker 2 (01:21:46):
Yep, I think Lee needs to run. Do you have
anything you want to throw out real quick?
Speaker 5 (01:21:50):
I'll give a shout out to Zed, my code editor.
I mentioned it a little bit in the podcast, but
it's been a joy to use and they're shipping tons of
good improvements. So, big fan.
Speaker 1 (01:22:01):
All right, good deal. Well thanks again for coming. So
much interesting stuff going on. You're right in the middle
of it.
Speaker 2 (01:22:09):
All right. We'll wrap it up here, folks. Until next time,
Max out!