Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:08):
Welcome to the Angular Plus Show, where app developers of
all kinds share their insights and experiences. Let's get started.
Speaker 2 (00:21):
Hello everybody, welcome to another episode of the Angular Plus Show. Today
I want to change things a little bit, because
usually we talk with a guest, but today I'm extremely
excited for my favorite former host, Jay Bell.
Speaker 3 (00:36):
So there you go. I'm back, You're back.
Speaker 2 (00:39):
You're back today. How's it going?
Speaker 3 (00:44):
Oh, that's great. It's so good to be back. I
miss hanging out with you guys every week and talking
about stuff. But now I get to be the one
that's allowed to talk a lot. Right.
Speaker 4 (00:54):
We don't have to be like, Jay, can you let
the guests talk a little bit? Right?
Speaker 2 (00:59):
Right?
Speaker 3 (01:01):
You taught me.
Speaker 4 (01:03):
No, that never happened. We never talk over our guests.
That'd be rude.
Speaker 3 (01:08):
We absolutely... yep, definitely didn't do that.
Speaker 2 (01:10):
Ever. Today, I think, if I remember correctly, we are
going to talk about GraphQL, and more specifically about
Apollo GraphQL. Is that correct?
Speaker 4 (01:23):
That was the idea.
Speaker 2 (01:24):
That was the idea. So let me maybe start with
that thing, because that still is the thing that bothers
me the most about GraphQL. Jay, why do we have...
I mean, fundamentally, I get the idea that with GraphQL
we have a fancy specification and all that stuff, but
then we have all those five hundred thousand implementations, like
Apollo GraphQL, that are like somewhat in line with
(01:47):
the specification, some not, some add features, some don't. And
you're like sitting there and kind of feel like you're
reading a Java specification not knowing a single thing about Java,
and you're just like, yeah, I don't get this thing.
So why is it so complicated?
Speaker 3 (02:01):
Hmm, that's a great question. I don't really have an
answer for you, because... so, you know how,
like in every technology, there's usually the one that
everyone knows, right? Call it, like, the Apple of that technology.
Like, OpenAI is like the AI company.
Speaker 2 (02:20):
Oh whoa, whoa, whoa whoa whoa, whoa.
Speaker 4 (02:21):
Yeah, it's like the Band-Aid of plaster bandages.
Speaker 3 (02:25):
Right, exactly, because, like, they're the brand one, even though
they're not the only one and they're not the
only right decision. But they're the one that
everyone kind of... the first one that comes
up on Google, right? Like, that kind of thing.
That seems to be Apollo, from what I can tell. Yeah,
it's just, I didn't have to read or understand the specification, right?
(02:50):
Like, it just kind of did exactly what it promised. Like, yeah,
of course there's complexity, and I know one of people's
biggest gripes with GraphQL just kind of overall, not just
an Apollo thing, is the boilerplate and the,
you know, all that kind of stuff. And, like, yes,
there's a lot of boilerplate. I'm not gonna refute that.
(03:13):
That being said, a lot of that can be resolved
by good engineering practices and code generation practices and tooling practices.
So, not a problem in my mind, especially when you're
building really large APIs, complex APIs that pull data from
(03:34):
potentially dozens of sources. Like, it's hard to abstract wildly
complex things into something that is stupidly simple, but people
seem to think that's possible. I don't know.
Speaker 2 (03:45):
Let's maybe take like thirty steps back, because I've used
GraphQL and Apollo GraphQL once, so I
have like a rough idea of what you're talking about. Like,
let's assume, just for the sake of all our listeners,
I have no idea what Apollo GraphQL or GraphQL
in general is. Right? Okay, so that should be pretty
easy for you, because usually that's the default state for
me when we talk. Totally.
Speaker 3 (04:07):
Okay. So let's... so let's take a look at the
two, kind of the main, I would say, quote unquote,
the main ways to build an API: either REST or
GraphQL. Right? Those are probably the top two.
Speaker 2 (04:19):
What about SOAP?
Speaker 4 (04:20):
Obviously everyone's doing that.
Speaker 3 (04:23):
Way before my time. I've never used SOAP, I never
plan to use SOAP. Have you ever used one?
Speaker 4 (04:29):
I was given a SOAP WSDL that we were meant
to integrate with, and then we went back through our
code and realized we didn't actually need it. We could
just do a SQL request against the table in the Postgres.
So that's how I used SOAP, anyway.
Speaker 3 (04:42):
REST and GraphQL, they're kind of like the main
two. If we take a look at them... like, let's
just take a step back to a technology that most,
if not all, of the listeners here will understand, which
is SQL. Like, we're in a database, right? You
write a SELECT statement: you know, SELECT * FROM
customer WHERE last_name = 'Bell'. Right? REST is akin
(05:07):
to selecting from a single table, whereas GraphQL is
akin to selecting from multiple tables via a single command. Right?
So you could say REST would be, you know, SELECT *
FROM customer WHERE last_name = 'Bell', whereas with GraphQL
you could do SELECT * FROM customer WHERE last_name =
'Bell'... this is gonna be incorrect SQL, so don't
(05:29):
write this... but JOIN on orders, JOIN on, I don't know,
credit cards. And then that would give you back an
object of, you know, my customer record with an orders
property that's an array of orders, and a paymentMethods
property that's an array of payment methods.
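The SQL analogy above, expressed as an actual GraphQL query, might look something like this (the field names here are hypothetical, not from any real schema):

```graphql
query CustomerWithRelations {
  customer(lastName: "Bell") {
    id
    firstName
    orders {           # the "join on orders"
      id
      total
    }
    paymentMethods {   # the "join on credit cards"
      id
      brand
    }
  }
}
```

One request, one response: the customer record with `orders` and `paymentMethods` arrays nested inside it.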
Speaker 2 (05:48):
Right.
Speaker 3 (05:48):
So it's one call that gets you kind of a
tree of data, versus REST, which is, generally speaking...
I know there are different implementations; like the Stripe API,
for example, you can quote unquote expand certain properties and
it does something kind of similar. But generally speaking, REST
is like: here's the endpoint to get the customer, here's
(06:09):
the endpoint to list the orders, here's the endpoint.
Speaker 4 (06:11):
To do that, right. And a set response, right? It's
like you're always going to get these ten properties. They
might be... they'll satisfy the contract, so it's either defined
or not defined or whatever. But you can't pick and
choose off that.
Speaker 3 (06:26):
Whereas with GraphQL, you're kind of writing a
query akin to a SELECT statement with JOINs that gives
you back the data that that request is requesting.
So, as long as your GraphQL server is set up correctly,
you can query anything in the tree that's linked via
the relationships. So every request could be different. Maybe you
(06:48):
want the orders one time and maybe you don't want
the orders another time. That's just a different GraphQL query.
But it's all the same endpoint, everything structured the same way,
and the request is just choosing what data it
wants as part of that single request.
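Concretely, the "same endpoint, different selections" point might look like this: two queries against the same schema (hypothetical field names again), one pulling the orders relation and one skipping it:

```graphql
# Request A: customer with orders
query CustomerWithOrders {
  customer(lastName: "Bell") {
    id
    orders { id total }
  }
}

# Request B: same endpoint, same root field, no orders this time
query CustomerOnly {
  customer(lastName: "Bell") {
    id
    firstName
  }
}
```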
Speaker 4 (07:02):
Yeah, and it's all the same endpoint, as long as
you have it set up that way through your endpoints. Like, if
you have, like, thirty-five REST endpoints and you
make a GraphQL endpoint on each of those REST
endpoints... If you don't, you have to either federate
them together or you have to make... I can't remember
what they call it... it's like an orchestrator GraphQL
(07:24):
endpoint that knows how to call all the REST endpoints
to get all the stuff.
Speaker 3 (07:28):
Yeah. So, like, one of the, you know,
big benefits of GraphQL is that the consumer, the
client, doesn't actually know or care where the data is
coming from. And GraphQL allows you to merge and
pull data from all these different data sources into one resolver,
which is sort of like an endpoint within the endpoint.
(07:51):
One resolver could pull from the Stripe API, another one
could pull from the database, another one could pull from
a Redis cache, and another one could pull from whatever.
But all the client knows is it's getting its tree
of requested data back, right? So you can easily pull
from all these different sources, and each resolver can have
a completely different implementation. You can pull from Mongo in
one and Postgres in another, and it doesn't matter,
as long as all the data gets merged
(08:11):
together into that tree to return, right? So
usually there's only one GraphQL endpoint you would hit, though,
and you would have different root nodes. So you start
at, like... with Trellis, for example, there's
one root node for the organization, and under that there's fundraisers,
and then under fundraisers there's event tickets, and, you know,
et cetera. Right? So you always start at a root node.
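The "one resolver per data source" idea can be sketched in plain TypeScript. This is a hand-rolled resolver map in the style of graphql-js / Apollo Server; the entity names mirror the Trellis example from the conversation, and the data sources are stand-ins, not real integrations:

```typescript
// A resolver map sketch: each field resolver can have a completely
// different backend, but the client only ever sees one merged tree.

type Fundraiser = { id: string; name: string };

// Stand-in for a database call.
const db = {
  getFundraisers: (orgId: string): Fundraiser[] =>
    orgId === "org1" ? [{ id: "f1", name: "5k Run" }] : [],
};

// Stand-in for a Redis cache lookup.
const cache = {
  getTicketCount: (fundraiserId: string): number => 42,
};

const resolvers = {
  Query: {
    // Root node: the organization.
    organization: (_parent: unknown, args: { id: string }) => ({ id: args.id }),
  },
  Organization: {
    // This field pulls from the database...
    fundraisers: (org: { id: string }) => db.getFundraisers(org.id),
  },
  Fundraiser: {
    // ...while this one pulls from a cache. The client never knows.
    ticketCount: (f: Fundraiser) => cache.getTicketCount(f.id),
  },
};
```

The GraphQL engine walks the query tree and calls these functions field by field, then merges the results into the single response tree the client asked for.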
Speaker 2 (08:33):
So the thing that I think is funny is the
first argument that people bring up: well, you can query,
like, your fields and stuff. But how often does this
actually happen? Because, like, we developers, I hate to say it,
think very resource-oriented and very object-oriented. So,
like you said, like, okay, I want a fundraiser
(08:53):
or whatever that is. But how often does
it happen that you fetch that several times in
varying granularity within the context of one business application?
I understand the appeal of GraphQL when you're, like, a
general-purpose API, where you provide an API,
like Stripe or whatever, and different consumers consume that.
(09:14):
Like, totally makes sense for that use case. But for
the normal business applications that I've seen in ten
years of consulting, it's usually, more or less, there are exceptions,
but more or less, one back end, one consumer. And
even though people might be like, well, but what about microservices,
it's still the same thing: several servers, one consumer.
Speaker 3 (09:33):
Yeah, yes, yeah. I mean, it's obviously not the
right tool for every job. Even within a single application,
you shouldn't use GraphQL for everything. Like, our authentication
endpoints are still REST; we do not do authentication through
GraphQL. I mean, obviously you pass authentication headers through
your GraphQL requests, but you're not, like, doing user authentication
(09:55):
through GraphQL mutation resolvers, right? So there's that within there.
But then, even then, like, sure, GraphQL is probably overkill for
some applications where, you know, maybe there's only one client,
like you said, right? One consumer. Maybe they're relatively simple,
or even if they're complex... but, like, the data is very
(10:18):
you're viewing very defined slices and verticals of data in
your application, right? Like, the reason we like GraphQL at
Trellis so much is our pages are crazy dynamic, right? A
lot of it is user-generated; we're linking so many
different things. We've built a product that kind of
(10:38):
amalgamates, like, four or five different verticals of competitors
into one product, right? We built it to link
all those different things together in ways they don't have a link.
So GraphQL gives us this really, really flexible
ability to build out these UIs and dynamic things and
link all the data together that isn't traditionally linked together.
(11:00):
Because then the consumer doesn't need to worry about querying
this data from this endpoint and that data from that
endpoint and then merging it on the front end to
be able to show it.
Speaker 4 (11:08):
Right.
Speaker 3 (11:08):
The endpoint just has it all there for us.
Speaker 4 (11:11):
Right.
Speaker 3 (11:11):
The other reason why we like it so much, and
it might be a good option for some other people,
is we move so quickly as an engineering team at
Trellis that a lot of the time, like, our product
designer Robin will be like, hey, I want to add
this thing to the front end, how long do you
guys think that'll take? And it's like, oh, it actually
already exists, because you can query for that
(11:33):
thing through this relationship, right? Like, yeah, you can do
that with REST as well, but now you're hitting another endpoint,
that's another API call, you need to do more data
merging. With us, it's just like, okay, just go add
the property to that quote unquote SQL query, the GraphQL query,
and now your front end has access to it. Right?
It's like a couple-of-lines change.
Speaker 4 (11:51):
Right.
Speaker 3 (11:51):
So the ability to build out these very
heavily relational queries, effectively, right, allows us to move
really, really quickly, because all of our data is so
inherently linked together. It's just one graph to manage,
and then the clients can just do whatever they want.
(12:12):
They can always just query whatever they want; it's not
a burden. Like, in terms of the Angular space, you know,
one of our feature components will have a single component store,
and that component store does all of the querying, and all
of the querying is only one query. There isn't
all these multiple queries to pull this data to display
it in the way that the consumer wants. Right? So
(12:33):
it's not the right decision for everybody. As with everything
in technology and programming, right, you need to consider
if this is the right option for you, if the overhead's okay:
do you have the time to dedicate to building
good tooling? Because otherwise it's going to be a burden.
Speaker 2 (12:49):
Right, So have you done stuff with Gatsby just out
of curiosity? Never?
Speaker 3 (12:54):
No, I've never used Gatsby.
Speaker 2 (12:56):
Okay. So Gatsby, like, the whole architecture was
based around GraphQL. So I was wondering if you're like,
you know, if that led to their downfall, or if
it was the founders' fault.
Speaker 4 (13:09):
We used GraphQL heavily on a mobile application,
because... I was doing work for
a grocery store, and the way their database was structured,
each product had, like, fifty properties
that might show up on the product. So, like, with the
REST endpoint, the response would be huge, and we usually
(13:32):
needed the title and the price and maybe the picture URL.
Like, we didn't need to know what aisle it
was on, or all this other random stuff that
wasn't even really populated anymore but would still come back
in the response because it was empty, you know.
Speaker 3 (13:48):
Yeah, that's another great thing to point out. So,
let's say you have that customer object, right, for
the same case. So, in a REST one, you
would have the customer object and it returns all the properties.
Maybe some of those properties are computed, right? Whether they're
cached in a Redis or they're computed on the fly
when you make the request, whatever; but they're computed properties.
(14:11):
With GraphQL, those properties could actually just be a
separate resolver, so you've isolated the computed-ness of it
to a resolver, and then the consumer can pick and
choose what to query. They're not just picking
and choosing what relations to query; they're actually picking and
choosing what properties to query. So, some of
(14:31):
your queries actually end up quicker than a REST query,
because you're like, oh no, I only want these three
static properties, and it won't even trigger those computed ones.
Now again, you of course can do this in REST, right,
but it's not like a first-class
supported thing. You have to pass a query param for
what properties you want to query, and then you need to
(14:52):
do some work on the back end. But that's the
work that GraphQL does for you. So GraphQL knows, like, oh,
I'm just not even going to execute that computed resolver. But that
adds more overhead to building your resolvers correctly.
So, again, more overhead, but it
is more powerful.
Speaker 2 (15:12):
Right.
Speaker 3 (15:12):
You have to, like, not do your computed stuff in the
single resolver for your object. Like, if you have one
resolver for a customer and you're doing your computed stuff
in there, that's not going to save you from doing the
computed work if you only select the three static properties. You
gotta architect it correctly. And that's why it's an engineering problem.
It's not a technology problem.
Speaker 2 (15:32):
Right.
Speaker 4 (15:32):
So, like, GraphQL: is that a thing where you
can slap it into your app and be like, haha,
now it's faster? Oh god, no, not even remotely. Yeah, not.
Speaker 3 (15:41):
Even... you have to design and architect it well,
and we've done a ton of work on that at
Trellis to make it easy to do these things. But
you gotta do it. You gotta think about what
it means when building your resolvers, and what properties you're
putting in your core resolver versus your computed resolvers, so
that the clients aren't making these requests that are constantly
(16:04):
computing things or pulling from caches or whatever, but then
not actually returning any data, because the consumer doesn't actually
want it.
Speaker 2 (16:11):
Right.
Speaker 3 (16:11):
So typically the pattern we follow is: the core resolver
for whatever entity returns the static properties from the database table.
It's just a single read. You don't really save
anything by only selecting certain columns from a database table, right?
Like, that performance difference doesn't matter that much. Now,
of course, disclaimers: everything depends on your technology. So I don't
(16:34):
mean... but you know what I mean, right?
Speaker 5 (16:39):
Jay Bell said: don't optimize your SQL. The core resolver
returns the entity from the database table.
Speaker 2 (16:50):
Yeah.
Speaker 3 (16:50):
Anything beyond that is computed resolvers. Yeah, full stop. Yeah,
that's the general pattern that we follow, because then the
consumer can be sure that by not selecting those, they're
not affecting the performance of the server.
Speaker 4 (17:07):
Right right, that makes sense.
Speaker 2 (17:09):
Yeah. One thing that bothers me with GraphQL is
that the query is a very fancy string with, like, a
little bit of its own DSL.
Speaker 3 (17:18):
Oh god, yeah, that's like one of the biggest gripes with GraphQL,
for sure. It's only one POST endpoint,
and you send the query as a stringified version of
their DSL in the POST body's operation property, or whatever
property it is.
Speaker 4 (17:35):
And then, like, it always returns two hundred, right? Even
if it errors.
Speaker 3 (17:39):
It always returns two hundred. So you have to make
sure you're, like, hand-parsing the body for a success
versus a failure. So, like, again, we've built utilities for this.
It's not a problem for us. We have custom RxJS,
you know, typed operators. So it's like tapQueryResponse
or tapMutationResponse; we're not using just tapResponse, right?
(18:00):
So then those would parse the body and then either,
you know, next-and-complete or error the observable, so
that the user, i.e. the developer writing the code for
the Angular application, can use tapResponse like they
normally would. Right? There's a next callback, there's
(18:22):
a complete callback, and then error, or finalize, whatever. But there's
the three, and then the error callback, right. Right, so we
built all these utilities so they don't need to worry
about that quirk. Right? Yeah, we've quirked it into the
normal pattern; or we've tried to quirk it into a normal pattern.
But yes, again, that's one of the big gripes with
(18:43):
GraphQL: like, oh, it doesn't follow the standard for errors.
I'm like, yes, but the benefits that we're getting far
outweigh some, you know, sense of virtue that I have
about not following the standards. Like, I'm for whatever's going
to help us deliver our product the quickest at the
(19:06):
highest quality. That's my job as a co-founder and
CTO of Trellis: I need to deliver the product, and
if building a GraphQL API helps me
do that, which it has... we've drastically improved our engineering
speed and quality since we've implemented it. Yeah. Right? Like, I
(19:27):
don't care.
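The two quirks discussed here, the query as a stringified DSL in a POST body and the always-200 response, can be shown in a short self-contained sketch. The response shape is the standard GraphQL `{ data, errors }` format; the query itself is hypothetical, and this is roughly the check the custom operators mentioned above would wrap:

```typescript
// GraphQL over HTTP by hand: the query is a string in a POST body, and
// success must be detected by inspecting the body's `errors` array,
// because the HTTP status is 200 either way.

const postBody = JSON.stringify({
  query: `query Customer($last: String!) {
    customer(lastName: $last) { id firstName }
  }`,
  variables: { last: "Bell" },
});

interface GraphQLResponse<T> {
  data?: T;
  errors?: { message: string }[];
}

// Throw on GraphQL-level errors even though the HTTP status was 200.
function unwrap<T>(res: GraphQLResponse<T>): T {
  if (res.errors && res.errors.length > 0) {
    throw new Error(res.errors.map((e) => e.message).join("; "));
  }
  return res.data as T;
}
```

A client library (or a custom RxJS operator) does exactly this kind of unwrapping for you, which is one reason not to hand-roll GraphQL over a plain HTTP client.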
Speaker 4 (19:27):
And so, like... so in theory, someone could just make
GraphQL requests using an HTTP client?
Speaker 3 (19:35):
Oh yeah, absolutely, you don't need a special client at all.
All it is is a POST request. Yeah, as long as it's
formatted correctly and stuff like that.
Speaker 2 (19:41):
Right, but yeah, that's just horrible.
Speaker 3 (19:44):
Yeah, it's awful. That's why you.
Speaker 2 (19:50):
Yeah, yeah, yeah. I could also jump out of
the window right there, but I won't.
Speaker 3 (19:55):
Yeah, please, please, please do not be making GraphQL
requests with a plain HTTP client. Jeez, for the
love of God, go find yourself a GraphQL client to use.
Speaker 4 (20:06):
Well, something else on that topic: what do GraphQL clients
bring to the table?
Speaker 3 (20:13):
That's a great question. Yeah. So, one of the... so
this is both a strength and, I guess, some people
see this as a drawback: GraphQL has a
strong, like, codegen story. Or at least Apollo... or no,
it's GraphQL that has the codegen story, and the
GraphQL codegen works with Apollo,
(20:33):
where you can define your queries. So, whether these be
mutations (POST, PUT, DELETE in REST terms) or queries (GET
in REST terms)... so those would go to resolvers or mutations, right?
So you define that in the GraphQL DSL, and
(20:54):
then you would write your codegen. And so, for
example, Apollo Angular: it'll generate all the TypeScript types
for you, the response types, the enums, you know,
all of that kind of stuff, the query type, the
mutation type, all of that. And then it'll actually generate
you services, injectable services, where you can just call dot
(21:17):
query, and you pass in your query properties, right? And
then it abstracts away all of that: format the query
this way, POST the query this way,
and stuff like that. And you're now interacting with just
an Angular service, right? And you're saying, you know, customer
service dot query where last name contains 'bell', insensitive, right?
(21:42):
And now you're just working with Angular services. You can
use that in your component store, whatever kind of store
you're using, and stuff like that. But you do have
to stitch together certain things. So, again, more overhead. But
if you're tooling for it like we have... like, everything's
automated at Trellis: your codegen, like adding a codegen target
to an Nx library, automated; even managing the codegen files, automated;
(22:06):
compiling your codegen targets, all linked together through your
dependency graph and your Nx graph, right? Like, I've spent
a lot of time, and my team has spent a
lot of time, building up this tooling so it doesn't
get in the way. It's a boon for us, not a detractor. Right?
But if you're just getting into it, there's a lot
to consider. There's a lot, right? It could be overwhelming,
(22:27):
and you're like, this seems like a lot of boilerplate
to solve my problem. But we've gotten past that part,
because I always push for tooling first when we implement
a new technology, process, whatever, because I'm not having that
kind of stuff interfere with my engineering operations.
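For reference, a minimal GraphQL Code Generator (`graphql-codegen`) config for the kind of setup being described might look like the following. The file paths are placeholders, and the plugin names are the standard `@graphql-codegen` packages for the typed Apollo Angular workflow:

```yaml
# codegen.yml for one consumer library (paths are hypothetical)
schema: ../../server/schema.graphql   # the generated server SDL on disk
documents: src/**/*.graphql           # this library's queries and mutations
generates:
  src/generated/graphql.ts:
    plugins:
      - typescript                # scalar, enum, and object types
      - typescript-operations     # per-operation response/variable types
      - typescript-apollo-angular # injectable Angular services
```

Running the generator against this config emits the typed, injectable services the conversation describes, so application code never builds query strings by hand.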
Speaker 4 (22:45):
Yeah, I don't want my, like, my mid-level devs
to have to ping me and be like, can you
please tell me how to use the codegen API, because
I need to generate types and a service for this. Like, yeah...
So we are in the very early
stages of adding GraphQL to an application. And so
(23:07):
I phoned a friend, and his name is Jay Bell.
Because, you know, of course, obviously, we
use an Nx monorepo, because we have,
like, we have a large application. I say obviously;
there are people that don't use Nx monorepos.
But I'm like, they should be using one
(23:28):
now, right? Exactly. And we're not paid, we're not paid
for these comments.
Speaker 3 (23:33):
We might be Nx Champions, but we're not paid. Anyway.
Speaker 4 (23:38):
But there isn't an existing plugin for this, that...
Speaker 3 (23:43):
No, no, there isn't. So we've built all of
our own Nx executors and Nx generators, all that. So,
like, when we run, like, nx compile thomas...
Thomas is our dashboard application. We name all of our apps
after trains here at Trellis, so all of our internal
applications are named after, like, Thomas, Orient, Frontier,
(24:03):
Snowpiercer. Hogwarts is our internal magic deck.
Speaker 4 (24:08):
What happens in that one?
Speaker 3 (24:13):
So when you run nx compile thomas, it'll,
you know, it'll generate the schemas, it'll compile the codegen,
it'll compile the service worker; like, we linked the whole
dependency graph together, and then all those targets
are cacheable. So, generally speaking, when you run nx compile
(24:34):
insert-front-end-app-name, those one hundred and fifty
codegen targets are just pulling from cache. Because we
have, like... I just looked, it was like, we have
like thirteen hundred Nx libraries at Trellis. You know, some
portion of those are back end, some portion of those
are shared utilities, and then a good portion of them
are client-side libraries for our Angular applications, right? And
a lot of them have codegen targets. Like, yeah,
(24:56):
you could do one massive codegen, but then you
don't get the incrementality, right? So again, you have to
invest in the tooling so that when a dev serves
the application, right, it's not just compile that runs the
codegen targets; nx serve thomas also runs the codegen
targets, so that your serve works correctly, right? Because
it's all relying on those compiled TypeScript types, injectable services,
(25:21):
enums, query, response, and mutation types, et cetera, et cetera. Right? So
we've linked the whole dependency graph together, top to bottom. Serve, compile, build,
all of those would run all of your codegen
targets before actually getting to the target that
you called.
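The "serve also runs codegen" wiring can be sketched as an Nx target configuration. This is a hypothetical `project.json` fragment (the executor path and target names are placeholders), using Nx's standard `dependsOn` and target caching to chain codegen ahead of serve:

```json
{
  "targets": {
    "codegen": {
      "executor": "./tools/executors:graphql-codegen",
      "outputs": ["{projectRoot}/src/generated"],
      "cache": true
    },
    "serve": {
      "dependsOn": ["codegen", "^codegen"]
    }
  }
}
```

With `"cache": true`, unchanged codegen targets are restored from cache, which is how a hundred and fifty of them can run in effectively no time; `"^codegen"` also runs codegen in the project's dependencies first.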
Speaker 4 (25:38):
Yeah. Yeah, so okay. So a lot of us are
used to using some sort of code generator to generate
services from REST endpoints. How is the process
different using GraphQL? Have you done
enough codegen with REST to... I guess you've
probably done some.
Speaker 3 (25:57):
But only, like, a small amount. I mean, it
all kind of depends on the tooling that you're using, right?
But, like, I wouldn't say it's dissimilar.
Speaker 2 (26:08):
Right.
Speaker 3 (26:08):
You have to annotate your GraphQL in a certain way,
just like you'd have to annotate your REST in a
certain way. Or, I guess, with REST a lot
of times it's Swagger, right? But you gotta annotate your
stuff right so that your Swagger docs
get generated, right? Like, you have to say this property
is a string. And, you know, a lot of the
codegens can infer some stuff based on, like, the TS
(26:32):
type and, you know, whatever, right? But a lot
of times you have to say, like, this is optional,
this is nullable, this is a string, this has a
custom validator on it, this is a nested object or
an array of these objects, right? Regardless of how
you're doing codegen, you need annotations, right?
GraphQL is no different than that. You would
(26:52):
just annotate each of your different, effectively, response types, your DTOs, right?
And then there's one additional step, though... well, I guess
there's an additional step: you're just swapping the Swagger generation
for generating a server .graphql file, which is kind
of just like a compiled GraphQL schema of your application.
(27:14):
And then that is what all of your consumer
GraphQL codegens point at, so they know how to write, and
you get autocompletion and stuff like that. JetBrains
has a great plugin for, you know, pointing a
library at a certain server .graphql file, because you point it
at the generated .graphql file; you don't point it at
the server itself. Exactly. I'm sure you can, but we compile
(27:37):
ours so that it can be cacheable and in
the repo and, you know, offline and stuff like that,
so you get all these type annotations. But yeah, it's
like: you annotate the server, you generate the server
.graphql file that is now a file on disk, and all of
your client-side codegen files point to that, or a
different one, right? Like, we have two different GraphQL servers
(27:57):
at Trellis. So for certain ones, when we say, like, nx
add graphql-codegen, uh, insert project name, because
we're adding GraphQL codegen to the library, right, it'll ask, like, okay,
what schema does this one point to? Because the library
should only point to one server, right? It's either the
back-end server, back end from our users' point of view,
or the front-end server for their users' users, right? Oh, gotcha.
(28:21):
And then it would ask that, right? Like, the donors
or the charities. It's effectively... it's kind of like there's
a donor API and there's a charity API.
Speaker 4 (28:28):
Yeah, so Trellis has, like... like, they have
two very distinct types of users.
Speaker 3 (28:33):
Exactly right. We sell to charities and then the charities
use our product to collect donations from their donors.
Speaker 2 (28:39):
Yeah.
Speaker 4 (28:40):
So if you're, like, a grocery store, you would probably
just have one GraphQL server.
Speaker 3 (28:44):
More than likely, exactly right. So our codegen...
not the GraphQL codegen, but our generators... actually ask the developer: hey,
what server .graphql file do you want to point this
codegen at? Right? It's an interactive prompt in your terminal,
and blah blah blah, and then it would write the
codegen files for that consumer onto disk in that
(29:06):
Nx library's root, right? And now you have your client-side
codegen file pointing at your server schema file that
was generated from your GraphQL server annotations.
Speaker 4 (29:17):
Right. And so that's the schema definition language; they
sometimes call that SDL, right? That's
where you go in and you're like: for my type Customer,
it has a property that's name, and it's a string,
it's not nullable... uh, this is a number, it's
not... I can't remember exactly, yeah, whatever.
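The SDL being described might look like this (a hypothetical type, just to show the nullability syntax):

```graphql
type Customer {
  name: String!      # String, non-nullable (the !)
  age: Int           # Int, nullable (no !)
  orders: [Order!]!  # non-null list of non-null Orders
}

type Order {
  id: ID!
  total: Float
}
```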
Speaker 3 (29:38):
And there's two ways to do it: you can either
do schema-first or code-first, right? So if you
go schema-first, you can generate code from your schema,
and then that ends up as your server code. Or
if you go code-first, you write the code and
then generate the schema. We've done code-first, because we
have a relatively complex, like, permission system. We have all
(29:58):
these computed resolvers, right? We're linking all this kind of data.
We have, like, huge control over what the inputs are
and what your filter conditions are, and we
have all these utilities built, so it's not... it's
not really overhead for us. It's a
very well-defined, clean process now, because we have so
much, like, dev-friendly utility and shared code and stuff
(30:21):
built for it. Yeah, those are the two ways, right?
You either write that .graphql file yourself
and generate your server, or you write your server and
generate the .graphql file.
Speaker 4 (30:32):
Yeah, but regardless, there's generators through codegen that exist to
do it either way. And then, yeah... so
we're actually starting with schema-first,
because we're still building the APIs, so we have
nothing to generate from. And the benefit I've read
of that is that that way you can kind
(30:53):
of work... if you're a full-stack team, you know,
you can start to really nail down what your APIs
need to satisfy the needs of your UI. We're
not; like, we are kind of siloed
front end and back end, and so it gives us a
common place to start discussions about what we might need.
Speaker 3 (31:14):
That's actually a good point to bring up, too, right?
Like, everything that I've been talking about so far has
been from the context of Trellis, where everyone on my
team (there's three devs and me) is a full-stack developer. Yeah,
all of our APIs, all of our consumers and clients,
all of our codegen, all of our Nx targets live
in a single repository. So that's how we're able to
(31:37):
link all these things together in a really easy way
with generators and casual targets and depends on task crafts
and all that kind of stuff. Right, So like and
even then we can do full sack tickets ourself. Right
if we're giving a ticket of add this property to
the graph QUL and then go use that property on
the client, one person can do that and you don't
need to go cross teams. Right. So that's another reason
(31:59):
why this works so well for us and we're able
to move so quickly is we don't have dependencies on
other teams. Yeah, it goes a great tool for us
because we can just go do the whole thing top
to bottom. That we can do the cojen, we can
add the properties, we can write the resolvers, we can
write the mutations, we can add the component store, we
can build the UI and then ship that ticket across
(32:20):
a single person, which some a lot of teams can't
because you have that. Yeah, we sun a team and
you have that back end team in the front end
needs to request something from the back end for the
graph quo L and you know et cetera, et cetera.
Speaker 4 (32:31):
Right. Yeah, yeah, for us, it's been... so,
actually, I was using SDL. I was asked to gather
the API requirements that we thought we needed, which is
kind of a different way of working. I
prefer to just go from the product requirements. But yeah,
this way we, you know... But I chose to write
(32:52):
it in GraphQL. First of all, I was being
a bit of a turkey, but I realized that, like,
it's actually easy to write. It's probably the least verbose
way to write it.
Speaker 2 (33:05):
So, but that's the thing that bothers me so much
with it. Yes, it's not verbose, but it's also
not an extensive API description or anything. It's literally just
an input and an output. It doesn't specify anything, any kind of
behavior that might be triggered. Like, does this mutation send
out an email?
Speaker 3 (33:27):
What do you mean?
Speaker 2 (33:28):
Well, like, an API is usually more complex, and that's
also the big issue that I have with Swagger, for instance:
it literally just describes "this is the data in and
this is the data out." With GraphQL, it's a
little bit more abstract, because it's more like, this is
all the data that we have, and, like, what fields,
like, the structure and stuff.
Speaker 3 (33:45):
So is your concern that, like, say, the mutation
definition in your server GraphQL doesn't describe what the mutation
is doing fully, like all this...?
Speaker 2 (33:54):
Well, it completely lacks any documentation on the behavior of the
API. And does REST do that well? No, Swagger's also stupid.
Speaker 3 (34:04):
But so how does REST? How does a REST API do it?
Speaker 2 (34:08):
Well, if you do, like, proper architecture work
and, like, create documentation.
Speaker 3 (34:14):
That's part of the... yeah, of course. I mean, that's...
again, that's an engineering... that's not a
technical problem, that's an engineering problem.
Speaker 2 (34:22):
Oh yeah problem.
Speaker 3 (34:24):
You can document GraphQL APIs.
We document ours super well.
Speaker 2 (34:29):
I'm just saying that's an issue that I have with GraphQL
and Swagger. I'm not saying that REST is so
much better about that. REST is absolutely not better about
that, because you still...
Speaker 3 (34:37):
Need to rely on devs to do their jobs and document.
Speaker 2 (34:41):
Right. But, like, for instance, part of behavior descriptions:
for instance, what happens if I
call an API request multiple times in a short time?
Is that a thing that is allowed? Like, what are
the rate limits on it? Those kinds of
things, that information, uh, that's really, really...
Speaker 4 (35:01):
Three dollars, I think we're okay, okay, yeah.
Speaker 2 (35:03):
But that information is important, even for front-end developers,
just to do their work, of course. Yeah, yeah. And if you
just have, like, a technical specification, "well, this is what
I put in and this is what I get out,"
you don't have that information, sure. And so people are
just like, "well, but I need this Swagger documentation," and you're like, well...
that's very stupid.
Speaker 3 (35:23):
There's always ways to document things. It's just up to
the devs, if they're spending the time, to do them correctly, right?
Like, we document our mutations. Like, you can go look
at our generated server GraphQL, or even, to extend beyond that,
the generated types in the client-side code. You can
go and look at that, and, like, they'll be annotated
with what the mutation is doing.
Speaker 2 (35:44):
So now, okay, I have opinions on generated code.
Speaker 3 (35:47):
Yes, I'm sure you do. You have a few opinions on.
Speaker 2 (35:54):
How do you feel about the generated code that
those, uh, codegen tools produce?
Speaker 3 (36:00):
Like, the quality of the generated code? I mean, it's
wildly simple. It's, like, some types and interfaces and
then, like, a...
Speaker 2 (36:08):
Well, but services, for instance, in Angular.
Speaker 3 (36:11):
Yeah, so it's a service that has a constructor that
extends, like, an Apollo Angular construct. It's like each service
is, like, four lines plus the decorator. That's it, because all
it does is it passes the GraphQL fragment,
like, the DSL query, into Apollo Angular. All those services
(36:34):
are effectively wrapping the internal Apollo Angular construct
to provide you type hinting. That's really all they're doing.
They don't actually have functional code in them.
Speaker 4 (36:45):
Yeah, because you can call... you don't
need codegen, necessarily, to make Apollo Client work.
Speaker 3 (36:52):
Yeah. So, like, the quality of the code is fine,
because there's, like, no functional code.
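To illustrate the "four lines plus the decorator" point, here is a rough stand-in sketch of the shape such a generated service takes. The `Query` base class below is a simplified mock of the kind of construct apollo-angular provides, and `GetCustomerGQL` is a hypothetical example, not real generated output:

```typescript
// Minimal stand-in for an Apollo Angular-style Query base class, just to
// show the wrapper pattern (names are illustrative, not the real API).
class Query<TResult, TVars> {
  constructor(public document: string) {}
  // the real class would call Apollo's watchQuery; stubbed for illustration
  fetch(_vars: TVars): TResult | undefined {
    return undefined;
  }
}

interface GetCustomerResult { customer: { id: string; name: string } }
interface GetCustomerVars { id: string }

// This is essentially the whole generated service: a typed wrapper that
// passes a query document up to the base class.
class GetCustomerGQL extends Query<GetCustomerResult, GetCustomerVars> {
  constructor() {
    super(`query GetCustomer($id: ID!) { customer(id: $id) { id name } }`);
  }
}

const svc = new GetCustomerGQL();
console.log(svc.document.includes("GetCustomer"));
```

The value is the `GetCustomerResult`/`GetCustomerVars` typing, not the body.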
Speaker 2 (36:58):
Well, but if you say, like, okay, this generates
like twenty services that are all four lines, that sounds
like a super unnecessary amount of code. And, like,
if a dev comes to me and creates
code like that, I would easily reject that pull request.
And just because it's generated, we lower our standards for that?
(37:19):
And there are some... I agree, but, like, the bundle
size impact, for instance, is still very much an aspect.
Might be different for, like, B2B businesses.
Speaker 3 (37:29):
Yeah, I can concede that for sure. Like, is it
more code than if you went and handwrote things? Yes.
Is the trade-off in engineering speed and quality we're
getting, like, overall product quality we're getting, worth the
fact that there's a bunch of four-line services?
Speaker 2 (37:46):
Yeah, and that's the thing where I was curious,
that I wanted to hook into, because you described in
the beginning that you developed all this tooling to make
it work nicely in your monorepo. So, would you
still say... I don't know how much time it took
you, if that was just, like, a "oh, with Nx
it was, like, three lines of code, and within ten
minutes everything worked." Usually tooling is not quite that simple,
(38:07):
and it's more like, very frustrating: "does it work now?"
Commit, and it's still not working. So if you consider
all that time, like, how do you evaluate, at the
beginning of a project, "okay, this is still worth it"?
Because you don't know exactly how long it will take
you to implement all this tooling, and you cannot really
assess the output, because at that point you don't know, "oh,
(38:28):
it will increase our velocity, of course."
Speaker 3 (38:30):
Yeah, I mean, there's definitely a certain amount of,
like, pray and hope, but you also need to be
smart about prototypes and spikes, right? Everything we do at
Trellis, we try and do an initial spike, just to
see how much work it's going to be. And
just to give you an idea: all of
our tooling is, like, I don't know, a couple hundred
(38:54):
lines of code. Like, it's not thousands of lines of
code with all this complex stuff. It's, like, for the
codegen, it's, like, you know, one executor target
that calls the codegen CLI but just populates the
project path and the config file, like, you know, the
server GraphQL one. All it does is it just spins
(39:15):
up our existing API with the configuration set to output
the server GraphQL schema in a certain location. Like, a couple hundred
lines of code, maybe three hundred lines of code total
across all of our front-end and back-end codegen
executors and generators.
Speaker 2 (39:30):
The way you talked about it, it sounds like, oh,
like, the whole team was waiting on it, and
it was, like, this huge moment when, like, out of
nowhere, everything worked, and, like, popcorn was flying through the room.
Speaker 3 (39:39):
No, no, no, I mean, like, it was iterative. Obviously
it was iterative, and we've improved it over time, right?
And as we've, you know, as Nx has improved too, we've
gone back to update things, and, you know, stuff like that. Like, yeah,
we spent a lot of time improving and optimizing the tooling.
But the tooling resulted in something simple, because we didn't
(40:03):
have to write the codegen logic. We didn't have to.
These are just wrappers to make it simple to interact
with within our repository.
Speaker 4 (40:13):
Right. So, and, you know, the code that it...
So we used OpenAPI code generation
against REST endpoints, Swagger. Yeah, the code that we're
generating for the GraphQL stuff is dramatically less than for
the REST endpoints. I mean, that was almost always
(40:34):
our biggest package, especially depending on how the APIs were architected.
Like, we had one, uh,
API that had so many endpoints, and
the OpenAPI generator wouldn't split those up for us.
So it was like, well, if you need anything out
(40:54):
of here, you're getting it all. Yeah. And that was
hard, and that was a hurdle
we couldn't get over, because there was no hunger on
the API side to split those up. So
now, using GraphQL, I think that we'll be
able to have a much smaller footprint
(41:17):
of generated code versus, like, the OpenAPI code that
we were generating.
Speaker 3 (41:22):
Totally. And you can generate at a
smaller, incremental level too, right? I mean, like, I'm going
to preface this by saying I haven't used that
many REST-slash-OpenAPI-slash-Swagger code generation tools,
so I don't know exactly what the power is there.
I'm assuming it's pretty mature. Correct me if I'm wrong. No, yes,
(41:42):
it's... well, it's mature.
Speaker 2 (41:43):
I already said my opinion: I think they're garbage. Okay,
there were always...
Speaker 4 (41:49):
We always get a new dev who'll come in, they review
pull requests with generated code, and they're picking apart
the services for why they're doing it a certain way.
Speaker 2 (41:59):
Yeah.
Speaker 3 (41:59):
So, like, knowing that, I would say the generation story
in GraphQL is better. It's kind of
like... it's a first-class citizen, right?
Like, GraphQL was built with that in mind, whereas
with REST, like, I don't know if it was built with
it in mind, but, like, it's a first-class
(42:19):
citizen within the ecosystem now, right? Like, GraphQL codegen is
a thing; REST codegen really isn't.
Speaker 2 (42:27):
I'm confused on that, because GraphQL was originally by Facebook, right? Yes.
And they're not using TypeScript.
Speaker 3 (42:34):
But I didn't say TypeScript codegen, I said
just codegen in general. I'm sure Facebook has some kind
of internal one. Like, it was built with codegen in mind,
right? Like, GraphQL... I'm sure, as with everything
in the web, you know, the technology industry that was
created back in the nineties and early two-thousands, like,
I'm sure it was hacked together at the start, and
the specification evolved over time, and, you know, et cetera.
(42:57):
Like, think about the JavaScript programming language, right? Okay,
I think GraphQL was designed at a time
where we had the benefit, as with everything that comes
out a decade or two later, right, the benefit of
looking at, you know, what was hacked together the first
time around. And I don't know the actual history of
GraphQL that much. I actually met the guy that created
(43:18):
it one time, but that was before we were using it.
But the codegen...
Speaker 4 (43:23):
Story... but I forgot all of it, really. Right, the
codegen story is better, right? It is.
Speaker 3 (43:32):
It's very well defined. The tooling is really mature for it, right? Like, yes,
you're still codegenning versus writing it yourself. It's... like I
said before we started recording today, we were just talking
about artisanal code versus AI-generated code.
Speaker 2 (43:52):
So like this should.
Speaker 3 (43:56):
Those... and that told us the ship has sailed on
us being hyper-efficient, on not having generated code anywhere. That
ship has sailed.
Speaker 2 (44:11):
We still can have opinions on, like, the quality
of the...
Speaker 3 (44:14):
Tooling? Absolutely. And we should always be pushing for more quality.
I will absolutely concede that point. Yeah. But the
trade-off in having a bunch of four-line services
versus the benefits we get from it... yeah, it's
not even a conversation I need to have in my mind.
Speaker 2 (44:29):
Does this seem okay? I mean, we can
also... like, if that is the argument, then we could
also bring in, like, things like tRPC or something that
has type safety over the wire.
Speaker 6 (44:39):
Built in. Good morning! You know that moment when your
coffee hasn't kicked in yet, but Slack is already
blowing up with "hey, did you hear about that new
framework that just dropped?"
Speaker 2 (44:50):
Yeah? Me too. That's why I created the Weekly Dev Brew,
the newsletter that catches you up on all the web
dev chaos while you're still on your first cup. Oh, look,
another Angular feature was just released. And what's this? TypeScript's
doing something again? I look also through the pull requests and
(45:12):
changelogs, so you don't have to. Five minutes
with my newsletter on Wednesday morning, and you'll be the
most informed person in your standup. Ah, that's better. The
Weekly Dev Brew: because your brain deserves a gentle onboarding to
the week's tech madness. Sign up at weeklybrew.dev
and get your dose of dev news with your
(45:32):
morning caffeine. No hype, no clickbait, just the updates that
actually matter. Your Wednesday morning self will thank you. You
still have that benefit of, okay, you can select in the
query, like, you can select the fields and stuff, which
for some applications is a really great benefit. But other than that,
I personally don't see quite the appeal of
(45:53):
GraphQL anymore. And, like, with modern architectures, if you know
your constraints, you can easily build those things. It's not
like GraphQL does not have constraints. Like I said, like,
authentication is wild in GraphQL.
Speaker 3 (46:05):
There's tons of constraints, yeah, all over the place, right?
They're just more complex.
Speaker 4 (46:08):
Well, and then there's, like, a ton of different ways.
Like, you can write really bad GraphQL, just like
you can write really bad...
Speaker 7 (46:16):
Boy.
Speaker 3 (46:17):
Yeah you can.
Speaker 4 (46:18):
Like, there's, you know, you can do data loaders, you
can do batching. Like, there...
Speaker 3 (46:22):
It is way easier to shoot yourself in the foot writing
GraphQL than it is REST. Yeah.
Speaker 4 (46:29):
Well, like, even something like Apollo Client. I worked on
a team where we had Apollo Client, but nobody read
the documentation that told you that if you don't
use the Apollo ID key, which is `id`...
I think they also accept, like, whatever, like, Asset... like,
(46:51):
if your type is Asset, I think they would accept
`assetId` as well. But if it's anything else, you
have to tell it what it is, or else you
don't get any caching. Which was the entire point of
using Apollo Client.
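For reference, the cache setting being described is Apollo Client's `typePolicies`/`keyFields` configuration on `InMemoryCache`. A sketch of what telling the cache about a non-standard key looks like; the `Coupon` type and `sku` field are made-up examples, not from the show:

```typescript
import { InMemoryCache } from "@apollo/client/core";

// By default Apollo normalizes objects by `__typename` plus `id`/`_id`.
// If your objects don't carry one of those fields, you must tell the
// cache what to key on, or normalization (and the caching that depends
// on it) is silently skipped for that type.
const cache = new InMemoryCache({
  typePolicies: {
    Coupon: {
      keyFields: ["sku"], // cache key becomes Coupon:{"sku":"..."}
    },
  },
});
```

This is configuration rather than runnable logic, so treat the field names as placeholders for your own schema.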
Speaker 3 (47:03):
And that's what I was getting at, right? Like, there
is a crazy amount of overhead, yeah, in comparison to
a REST API.
Speaker 2 (47:10):
Quick question on that, because Apollo does come with the
data layer, like, the client-side state management too. Yeah.
Is that something you're using? Oh yeah.
Speaker 3 (47:20):
It's built in, as long as you're defining your
primary keys.
Speaker 2 (47:23):
Like... but you did mention also that you're using signal store,
so how does that story look?
Speaker 3 (47:33):
Yeah, so we're on component store. Yeah, we're using component
stores still. We love component store. We have utilities for it,
but that's a whole different... I don't have a business
benefit to rewriting to signal store right now, and we
have all these component store utilities that make working with
GraphQL easier. So what the component stores are there
for is they're kind of like the middleman between the
(47:55):
feature components and the Apollo services that get generated.
So the component store is what's responsible for pulling the
ID from the URL, building out a query observable that
gets passed into the watch query for Apollo, and then
maybe transforming that data that comes back into selectors, so
(48:15):
you can select specific things and stuff like that, right?
We also have generic component stores, like, abstract
ones that you can extend from, that will handle pagination
for you, that'll handle loading, and watching loading flags, error flags.
All of that is abstracted. So if you go to
(48:35):
build a new UI that displays a table of this entity,
all you do is extend our GraphQL paged list
component store and then implement a couple of methods: like,
what does the query look like, where do you pull in the
IDs from, what does the, you know, response transformation look like.
(48:57):
It's all strictly typed and all that kind of stuff,
so that it's the middleman. So we're not using
Apollo directly in our components. It allows better, easier testing, easier mocking,
easier, like, combinations and pulling of what your query
looks like. Like, maybe this ID comes from here, but
this other one gets set from the feature component or
something like that, right? So they're all just, again, another
(49:19):
layer to make it really easy for our devs to
work with our GraphQL API.
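A framework-free sketch of that generic-store idea: an abstract base owns the loading flag, and a concrete store only fills in the query specifics. The real utilities described here build on NgRx component store and Apollo observables; all names below are illustrative, and it is synchronous only for brevity:

```typescript
// Abstract base: owns cross-cutting concerns (loading flags; the real
// version would also cover pagination, error flags, observables).
abstract class GraphQLListStore<TItem> {
  loading = false;
  items: TItem[] = [];

  protected abstract runQuery(page: number): TItem[];

  load(page = 0): void {
    this.loading = true; // base class manages the flag for every subclass
    try {
      this.items = this.runQuery(page);
    } finally {
      this.loading = false;
    }
  }
}

interface Donor { id: string; name: string }

// Concrete store: only the query specifics. In the real setup this would
// call a codegen-generated Apollo service instead of returning stub data.
class DonorListStore extends GraphQLListStore<Donor> {
  protected runQuery(_page: number): Donor[] {
    return [{ id: "d1", name: "Grace" }];
  }
}

const store = new DonorListStore();
store.load();
console.log(store.items.length, store.loading);
```

The feature component talks only to the store, which is what makes mocking and testing straightforward.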
Speaker 2 (49:24):
Yeah.
Speaker 4 (49:25):
So, like, for your devs, their daily workflow would be
then defining whatever queries or mutations they need for their
feature, codegenning the services that know how...
Speaker 3 (49:35):
To make those, the fragment and query and stuff like that.
Speaker 4 (49:40):
Yeah, yeah, and then interacting with the component stores. So
really, all your devs need to know about GraphQL
is how to write the queries that they need.
And the queries are not the same as SDL.
Like, SDL is where you're defining out types. Queries are
literally, like, "I want a customer and I want its
(50:00):
name and its description, and that's all I care about."
Speaker 3 (50:03):
Just send me that. The SDL is your database definition,
and your queries are your select statements.
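That analogy as a concrete, made-up example: the SDL defines the whole shape (like a table definition), and the query selects just the fields one screen needs (like a SELECT statement):

```typescript
// A client-side query document: not SDL, just a selection of fields.
// The `customer`, `name`, and `description` names are illustrative.
const query = /* GraphQL */ `
  query CustomerSummary($id: ID!) {
    customer(id: $id) {
      name          # only the fields this screen cares about;
      description   # nothing else comes over the wire
    }
  }
`;

console.log(query.includes("customer(id: $id)"));
```

Writing documents like this is the only GraphQL skill the day-to-day feature work requires.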
Speaker 4 (50:09):
Yes, yes. And so your devs really can get away
with, if you're using Apollo Client, using the tooling that
you have set up with codegen, doing the optimization
you've done on the platform... if you had, like, say,
just an intern coming in to write some front-end code,
(50:30):
they could just write... yeah, they know how
to write queries and mutations, and they need to probably
know that they have to run the generator if they
change it. And probably they won't even have to, because
how often, once you get your features set up...
Speaker 3 (50:45):
Yeah, like automatically when you run serve.
Speaker 4 (50:48):
Anyway, Oh yeah, yeah that's nice.
Speaker 3 (50:50):
It's in our task graph. So as soon as you
change that and run serve, that generator is going to
see, "oh, I'm different now," because the schema you changed,
the files that define your GraphQL, are part
of the input cache, right? "So I have to rerun
this target," rerun it, "okay, now I can serve my application."
So the only piece of GraphQL that a new
(51:11):
developer needs to know, if they're coming in to write,
say, a front-end feature, is how to write a GraphQL
query or a mutation. The rest of it is just Angular.
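A hypothetical sketch of what that task-graph wiring could look like in an Nx `project.json`. The target names, command, and schema path here are assumptions for illustration, not the team's actual config; the point is that `serve` depends on `codegen`, and the schema file is part of the codegen target's inputs, so changing it invalidates the cache and reruns the generator:

```json
{
  "targets": {
    "codegen": {
      "command": "graphql-codegen --config apps/web/codegen.ts",
      "inputs": ["{workspaceRoot}/schema.graphql", "{projectRoot}/src/**/*.graphql"],
      "cache": true
    },
    "serve": {
      "dependsOn": ["codegen"]
    }
  }
}
```

With this shape, `nx serve` transparently regenerates the typed services whenever the schema or a query document changes.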
Speaker 4 (51:19):
Yeah, and that is because of the tooling that you
set up, and that you have people that set up.
I feel like it's like anything with any technology we
pull in: there's always going to be a team that
has to go in and set up, like, how, you know,
how are we handling feature flags? How are we handling authentication?
And then the daily IC is only going to
(51:42):
be touching just barely any of...
Speaker 3 (51:45):
That, interacting with the systems that get built for them.
Speaker 2 (51:48):
Yeah.
Speaker 3 (51:49):
Right. Like, there's, like, two different levels of development
inside of a company, or more, but, like, we'll just
talk about the two.
Speaker 2 (51:55):
Right.
Speaker 3 (51:55):
It's like, the people that build the systems that all
the other devs use, right? Authentication, codegen... like, a lot
of it's tooling, right? Or it's these core shared systems.
Authentication is a great one, right? Like, we have decorators
to use in our Nest app that turn on authentication,
(52:18):
but there's this whole system behind it that no one
needs to really know about. Everyone does at my company,
because we're small and everyone has worked in it and
solved bugs and fixed things and stuff like that.
Speaker 4 (52:28):
I think that's actually, like, better. Like, it really,
really sucks when the one person who knows how it
works either goes on vacation or doesn't work there anymore,
for whatever reason. So yeah, the more you can spread that...
but you can't effectively isolate some of your team from
having to take on that responsibility. So, okay, so Apollo Client:
(52:50):
we kind of started talking about what it brings to
the table, but then we talked a lot about codegen.
So let's talk a little bit about Apollo Client.
Speaker 3 (52:57):
All right, yeah. So Apollo Client brings all
of the great things that Apollo does for you, you
know, automatic caching, querying, formatting the queries, you know,
watch queries, mutations, all that kind of stuff, but it layers an
Angular API on top of it, right? So you get...
(53:18):
I don't actually know if observables are anything that Apollo
provides itself, but, like, there's a full observable story in
Apollo Angular, because it's turned all of these operations
into watch queries that have value changes and all that
kind of stuff, right? So really what you're getting is
an Angular API layered on top of Apollo.
Speaker 4 (53:40):
Yeah, and so, like, we were actually looking at urql,
like, u-r-q-l, I think that's how you say it.
You can use that with Angular, but it has its
own observable type, so that's a little bit confusing.
Speaker 3 (53:52):
Yeah, that's annoying.
Speaker 4 (53:54):
It doesn't generate... it doesn't have a generator. Codegen
doesn't support, like, Angular service generation for it. So you
get, like, half of the... you get the types
and stuff, because those are not specific to Apollo,
they're using different codegen plugins. But yeah, it won't
generate Angular services for you like you get with Apollo. So
(54:18):
Apollo Angular is the only Apollo... or the only GraphQL
client that we found, when we were doing our
research in the last six months, that fully supports Angular specifically.
Speaker 3 (54:31):
I think it's kind of the go-to for Angular, is my
understanding.
Speaker 4 (54:34):
Yeah, yeah. I mean, because I did a quick
poll of the GDEs, and I don't think... it's also,
like, there aren't a lot of... I guess I'd
be interested to know how many teams are actually
using GraphQL versus using REST. I think it's more
REST than GraphQL.
Speaker 2 (54:54):
GraphQL definitely saw, like, a steady decline over the last years.
There were several startups that tried to provide, like, the
infrastructure aspects of, like, the whole GraphQL story, and
all of them failed or got acquired by bigger companies.
All of them... I obviously don't know
all of them, but all the ones that I know of, uh,
(55:16):
that's how they ended eventually. So, obviously, I get it.
Speaker 3 (55:22):
Like, yeah, there's a lot of overhead and a lot
of complexity with GraphQL, right? Like, and I'm
sure a ton of people were told, "oh, GraphQL solves
all your problems," and then they went and used it,
and they, like, screwed their entire company or engineering org, right?
And it's like, no, no. Like, it will solve a
lot of problems if you invest the time into it
(55:42):
and accept the constraints that you're getting with it, that
it is solving for you, in comparison to REST.
Speaker 2 (55:50):
Right.
Speaker 3 (55:50):
But there's constraints in anything you do. There's trade-offs
in everything you do. There is no... there's no
silver bullet.
Speaker 2 (55:55):
Yeah. I like to look at it kind of like Rails,
where it has, like, a very stable community of people,
and it does actually solve real-world problems, absolutely. But
you need to have those problems for it to be
a viable solution.
Speaker 3 (56:06):
Yes, exactly, absolutely. As with everything: don't over-engineer. At
the start, we started as REST. We still have lots
of REST, right? But we're moving towards GraphQL, because
a lot of our... a lot of the stuff on
the customer-facing, the donor-facing side of the product,
a lot of that is built out by the charity, right?
(56:27):
It's dynamic UI. Every fundraiser page is different, yeah, all
the structure is different, all the components are different, the data
is different, everything's different, right? So now there's a unified
API that allows us to query each of the different things,
but with a single API, and you don't need to
make all these different requests, right? The organization defines what their
(56:48):
fundraiser page looks like, and the client just knows
how to query the right data for that case. It
selects these things for this case.
Speaker 4 (56:57):
That is the exact use case at Cisco. Because, talking
about Cisco: it's a giant company that has acquired so
many other small companies, and our biggest hurdle is data,
getting data and normalizing it. So GraphQL gives us a
way to pull data from different sources and streamline it through.
(57:18):
You can do that with REST, but we have not
traditionally done that well with REST.
Speaker 3 (57:22):
So, yeah. And to be clear, all these problems I'm
saying GraphQL has solved for us can be solved with REST. Again,
I am not saying this is a silver bullet. I'm
talking to our listeners right now: GraphQL isn't just going
to solve all those problems that I've talked about. You
can absolutely do that with REST. We just chose to
do it with GraphQL, because we liked working with it
(57:44):
and we are finding that it is wildly beneficial to
our engineering process and product development life cycle.
Speaker 4 (57:50):
Okay, so what are some of the optimizations you can
do with GraphQL that have really sped up performance
in your applications?
Speaker 3 (57:59):
Yeah. So one of the big things is being able
to pull from the... like, having those computed fields and
stuff like that, and being able to pull from different
data sources. Again, you can do this with REST, but
then you're always computing those fields, unless you put in
your own property-selection mechanism of some sort, right? But
we've built out this really strong caching system that will
(58:21):
cache entire entities that can be used throughout
the entire product, not just for computed fields. But also,
like... so I know Laura mentioned data loaders just
kind of offhand, I don't know, like, ten minutes ago
kind of thing. So a data loader is effectively an
in-memory store for that request. And the reason they
(58:42):
exist in GraphQL is because, as you're querying down...
I'm using my hands to display things, but no one
can see.
Speaker 4 (58:47):
He's starting at the top, and he's going down.
Speaker 3 (58:50):
Starting at the top. When you're querying your root node,
and then you're getting the next node, and you're getting the
next one, getting the next node, right? Like, those are
different function calls in JavaScript, effectively, right? It's not...
it's not like a REST call, where
it enters your function, and then that function does everything
and then returns the data, and then that is literally
the response from your HTTP server, right? With GraphQL,
(59:12):
it comes in and then kind of figures out what
resolvers it needs to process, builds a query plan, effectively,
of what order they should be processed in. You know,
child resolvers are processed after parent resolvers, but sibling resolvers
get processed in parallel, et cetera, et cetera, right? So,
because of this... like, so, for example, at Trellis,
(59:34):
one of the root nodes is organization, and below that
is fundraiser, and below that is, you know, event tickets.
If you need, in your fundraiser node, the fundraiser
that was resolved at the root node,
that's a different function call, right? So it isn't necessarily
getting the response... like, there are
parents and, you know, et cetera, et cetera.
(59:55):
But one of the things with data loaders is they're
shared across the whole request. So you can prime
a data loader in one resolver, and then that data
is available, without needing to read it from the database
again, in your child resolvers, right? So you can, like,
kind of build this "okay, here's the data that we've
queried so far." So what we've done is we have...
there's, like, multiple layers of caching and optimizations, where our
(01:00:18):
resolvers are using data loaders. Those data loaders look at
a Redis cache first. If it doesn't exist in the
Redis cache, then it goes to a read replica, right?
But we've built it such that the dev just
imports the data loader. Yeah, everything else is handled for them.
Speaker 4 (01:00:37):
Right.
Speaker 3 (01:00:38):
So now, like, we're working on load tests to
actually see... I'm going to... I'll report on them on, like,
my Bluesky and stuff like that when we're done,
when we're done with it. But we're
building all these massive load tests to run with the
model cache completely disabled, and then run with the model
cache completely enabled, so you can see the benefit of
(01:00:58):
implementing this caching mechanism with data loaders. Yeah, data loaders
also batch, right? Yeah, so they batch. You can
batch request keys, they're shared across the request. So if
you batch your GraphQL queries on the client as well,
and say, like, oh, Apollo on the client has made these
three requests within a hundred-millisecond window, right, it'll send
(01:01:20):
those as a single request to your GraphQL server and
then process them top down.
Speaker 6 (01:01:25):
Right.
Speaker 3 (01:01:26):
Data loaders are also shared there, because you should be
architecting things such that the entity for an ID is the
same entity across requests, right? So you can
share that data. So data loaders are beneficial in a
lot of cases, and required, absolutely required, for system performance
in other cases.
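A minimal sketch of the mechanism just described: collect all keys requested during one tick, fire a single batch load for them, and cache results for the rest of the request. The real `dataloader` package adds ordering guarantees, error handling, and custom cache keys; `TinyDataLoader` here is only an illustration of batch-per-tick plus per-request caching:

```typescript
// Minimal data-loader sketch: batch every key requested in one microtask
// tick into ONE backend call, and cache promises per key for the request.
class TinyDataLoader<K, V> {
  private cache = new Map<K, Promise<V>>();
  private queue: { key: K; resolve: (value: V) => void }[] = [];
  private scheduled = false;

  constructor(private batchFn: (keys: K[]) => Promise<V[]>) {}

  load(key: K): Promise<V> {
    const hit = this.cache.get(key);
    if (hit) return hit; // per-request cache: each key is fetched at most once
    const promise = new Promise<V>((resolve) => {
      this.queue.push({ key, resolve });
      if (!this.scheduled) {
        this.scheduled = true;
        queueMicrotask(() => this.flush()); // batch everything from this tick
      }
    });
    this.cache.set(key, promise);
    return promise;
  }

  private async flush(): Promise<void> {
    const batch = this.queue.splice(0);
    this.scheduled = false;
    // one backend call for all queued keys, resolved back individually
    const values = await this.batchFn(batch.map((entry) => entry.key));
    batch.forEach((entry, i) => entry.resolve(values[i]));
  }
}

// Simulated backend: three sibling resolvers asking for coupons in the
// same tick trigger a single batched call instead of three.
let batchCalls = 0;
const loader = new TinyDataLoader<string, string>(async (ids) => {
  batchCalls += 1;
  return ids.map((id) => `coupon-for-${id}`);
});

Promise.all([loader.load("a"), loader.load("b"), loader.load("a")]).then(
  ([a, b, aAgain]) => console.log(batchCalls, a, b, aAgain === a)
);
```

Priming works the same way: a parent resolver's `load` fills the cache, so child resolvers that ask for the same key never touch the database again.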
Speaker 4 (01:01:46):
Yeah, we had... with the grocery store,
we needed to match coupons to products. And so, as
you scroll, we would send individual requests, like, "now get
me the coupons for this, now for this, now for
this." And when we put the data loader in there,
it was able to batch those requests instead of making
individual calls. Exactly, batched together, like, however it works off JavaScript ticks.
(01:02:12):
So, however many requests came in during that tick. And
if it had seen that request before, it was already
in the cache, so it could just hand it back
right away, and it didn't...
Speaker 3 (01:02:20):
And that's on the client side. But then on the
server side, well, say you have a GraphQL endpoint that
lists products. Right, once that resolver completes and you have
a list of, say, ten products, each of those products
is going to be a GraphQL object, a GraphQL
Product object, right. That Product object may have resolvers
attached to it that you can then query off of
(01:02:41):
using your query language, right. But if a resolver that's
attached to the object isn't using a data loader, and
you need the object in that data loader for some reason,
or you need something from higher up in the tree,
now you're making that query once per resolver per product.
With data loaders, the data is already going to be in
(01:03:02):
the cache, in the server-side data loader cache,
so you just load it from the data loader once.
So you're not running that resolver ten times and making
ten more database queries. So when you're doing lists and
pagination and stuff like that, data loaders are absolutely
required for server performance. They're not technically required.
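The N+1 shape Jay is describing can be illustrated with a toy example (all names hypothetical): ten per-product resolver calls each costing a database round trip, versus one batched lookup shared by all of them.

```typescript
// A fake "database" that counts round trips so we can compare strategies.
let dbRoundTrips = 0;
function couponsByProductIds(ids: number[]): Map<number, string> {
  dbRoundTrips++; // each call is one database round trip
  return new Map(ids.map(id => [id, `coupon-${id}`] as [number, string]));
}

const productIds = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];

// N+1: the coupon resolver runs once per product, so ten separate queries.
const naive = productIds.map(id => couponsByProductIds([id]).get(id));
console.log(dbRoundTrips); // 10

// Loader-style: collect every key requested during the pass, query once,
// and hand each resolver its value from the shared, cached result.
dbRoundTrips = 0;
const batch = couponsByProductIds(productIds);
const batched = productIds.map(id => batch.get(id));
console.log(dbRoundTrips); // 1
```

Same results either way; the difference is ten database round trips versus one, which is exactly what a per-request data loader buys you on list queries.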
Speaker 4 (01:03:21):
They're not technically required.
Speaker 3 (01:03:23):
Not technically, no.
Speaker 4 (01:03:25):
Depends how slow you want your stuff to be, exactly.
Speaker 3 (01:03:29):
If you want, you can make database calls in every resolver.
You shouldn't, but you can, right. Which again is
just another case of the overhead and complexity that GraphQL
sort of inherently imposes on you because of its
architecture. But if you can figure out how to
(01:03:51):
reduce that via tooling and automation, the benefits are huge.
Speaker 2 (01:03:57):
Yeah.
Speaker 3 (01:03:57):
So just being able to incrementally separate out
caching and optimizations and, you know, adjust for what
is most likely to be queried, you can
really improve the performance of the server. Again, there are other
(01:04:18):
trade-offs, et cetera.
Speaker 2 (01:04:19):
Et cetera.
Speaker 4 (01:04:19):
Right, but you use Apollo on the server side
as well?
Speaker 3 (01:04:24):
Yeah, we use Apollo Server as our API server.
You don't have to. I mean, I'm
sure there are other GraphQL servers and GraphQL clients;
I've never used any, but that's what we're using.
Speaker 2 (01:04:36):
But it's a JavaScript implementation, so if you're not doing
JavaScript on the server side...
Speaker 3 (01:04:41):
Yeah, yeah. So we use it as our server, and then
we use Apollo. Like, Nest has a first-party Apollo
and GraphQL implementation, which just uses Apollo Server internally, right.
Speaker 4 (01:04:53):
I think that we didn't use Apollo Server on this.
We only used Apollo Client, so you can do that.
Speaker 2 (01:05:00):
Yeah. And that was one of the first things that
pissed me off. I used Apollo Client
in an Angular app, but we had Java on
the back end. No Apollo Server there. So one of
the big things that you read very quickly, and
this was twenty seventeen, so things might have changed
by now, but one of the big things at that
time was, well, GraphQL supports streaming of data out of
(01:05:22):
the box, which is nice, but that is
a benefit of Apollo, not of GraphQL. So when
you're not using Apollo on the server, streaming says bye-bye,
no streaming for you. Yeah.
Speaker 3 (01:05:33):
So GraphQL is the underlying technology and specification. Apollo
is an implementation of it. So they've of course layered
on their own improvements, features, you know, tooling.
Speaker 4 (01:05:47):
Like optimistic responses, all that; it handles a
lot for you. And then, yeah, you can really use it
effectively. Like, our NgRx global
store implementation was effectively trying to replicate the caching that
Apollo Client does.
Speaker 3 (01:06:06):
Yeah, and we did the same thing. We were like, this
is wildly complicated, merging all this data in
memory after making the API calls so that we could build
out these highly dynamic UIs. That's why
we went with GraphQL.
Speaker 4 (01:06:18):
Because now it just does it, as long as you,
you know, set things up correctly, instead of...
Speaker 3 (01:06:25):
You need to architect it. And that's the problem with
a lot of software out there.
Speaker 4 (01:06:30):
Why do we need architecture, Jay? This sounds
kind of oppressive.
Speaker 3 (01:06:34):
Just write the code, right? Just write the code.
You just build the product, right, Jay?
Speaker 4 (01:06:38):
I'm not even going to write the code. I'm just
going to have Copilot do it, just vibe
code it out. I can't wait, nothing
will go wrong. All right. Oh, and then you can
do batching on the client side.
Speaker 3 (01:06:53):
Right, you can do that.
Speaker 4 (01:06:57):
We talked about that, you and I.
Speaker 2 (01:06:59):
Yeah.
Speaker 3 (01:07:00):
Yeah. So you effectively tell Apollo Angular, like, okay, batch
these requests in a hundred-millisecond tick. So
if a page load happens and, like, five components render
in parallel, right, and they all send their own GraphQL
request, then Apollo Angular, instead of sending
(01:07:20):
an operation as an object in the body, will send
an array of objects in the body, and then
Apollo Server will see, like, oh, this is an array
of operations, not just a single operation. Okay, I'm going
to process them sequentially.
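On the wire, that batching looks roughly like this sketch (operation names and queries are invented; apollo-angular ships a batching HTTP link that does this for you, with a configurable batch interval and maximum batch size):

```typescript
// Shape of a GraphQL HTTP operation, and of a batched request body: the
// batching link posts an array of operation objects instead of a single one.
type GraphQLOperation = {
  operationName: string;
  query: string;
  variables: Record<string, unknown>;
};

const op = (operationName: string, query: string): GraphQLOperation => ({
  operationName,
  query,
  variables: {},
});

// Several components render in parallel; within the batch interval the link
// collects their operations and sends one POST whose body is an array.
const batchedBody: GraphQLOperation[] = [
  op("GetCustomer", "query GetCustomer { customer { id name } }"),
  op("GetOrders", "query GetOrders { orders { id } }"),
  op("GetCoupons", "query GetCoupons { coupons { id } }"),
];

// A batching-aware server checks for an array and processes the operations
// in order, returning an array of results in the same positions.
console.log(Array.isArray(batchedBody), batchedBody.length); // true 3
```

The server's response mirrors the request: an array of results, matched back to the original operations by position.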
Speaker 4 (01:07:33):
That's cool, Okay, all right, Joann, you were going to
say something.
Speaker 2 (01:07:37):
Yes, I have one last question for you, Jay. Yeah,
so, super serious mode, okay.
Speaker 3 (01:07:42):
Serious, serious. We are so serious, so serious.
Speaker 2 (01:07:48):
How long... that's actually not the question. How long have
you, you know, been using GraphQL in your codebase? How
long? Yeah?
Speaker 3 (01:07:55):
Two or three years? Three years?
Speaker 2 (01:07:58):
So, with what you know now, would you change anything about
your implementation? Like, assuming this is the Wild West
and you have unlimited resources. I mean, you're a startup,
you're basically unlimited.
Speaker 3 (01:08:12):
Yeah, so... our CEO... sorry, Justin, if you're
Speaker 4 (01:08:16):
listening to this, it wasn't... I mean, he was talking
about this other company.
Speaker 2 (01:08:21):
That sounded like a save. Jokes aside, like, with
what you know now, two years later, would you
do things differently? What would you change? Would you
spend some time optimizing some things, if you had
the time?
Speaker 3 (01:08:37):
I would. I'd be a little bit more strict
in determining how we want our responses to be structured.
Because, like, GraphQL doesn't just define the response type for you. You
define the input type, you define the arguments,
the query params, effectively. You define the response type, right? So,
(01:09:00):
you want to design the system such that the
response types can be extended on and are flexible. Right? So,
instead of, for example, just returning the Customer
object as the response type, maybe you return the Customer
object as a key of the response type, so you
can extend on that without having to change that structure later.
(01:09:20):
Because, like, maybe you want to return some additional data
alongside the customer that would then get updated in the cache.
You can't do that now without changing that whole response
type to maybe having a customer key with the Customer object,
but the clients are still going to expect just
the full Customer object. So really spend time there. And this
is also a problem with REST APIs, right? You need
to be thoughtful with how you define your requests and responses,
(01:09:45):
because otherwise you have to version your APIs, or you
have to push breaking changes, or whatever, right? Which
isn't a problem for some companies, but it is
for most. So I would spend some time considering how
we want our response types to be structured now.
I think that may have been a small learning on
our part, because we didn't
(01:10:06):
fully realize what we could do with response types in
GraphQL. Everything is tracked via, like, its
object type and ID, right? So Customer, Order, Coupon, whatever.
So if, say, Apollo GraphQL gets an object back, it
(01:10:26):
knows that object is a Customer, and it knows that
Customer's object ID is this, right? So it's always able
to update the cache and stuff like that. So
if you're mutating a customer, you don't just have to
return the customer. You can return whatever object types you
want from that mutation, and then Apollo will update all
of the object types that it finds in that response. Right?
(01:10:50):
So it doesn't need to be only, return this. Like,
maybe your mutation changed the PaymentMethod object type on
that customer, because it's part of the customer mutation, right?
You can return an object that is, customer: this, payment
method: this, and then Apollo will update both of them.
Speaker 4 (01:11:08):
Right.
Speaker 3 (01:11:08):
So, like, we have a bit of a
mix of the two, right, because we haven't gone back
to change things. Some responses are just the entity,
and then some are a more extensible format where we
can add optional keys in the future to improve
our responses or to do something else or whatever.
Speaker 4 (01:11:26):
Right.
Speaker 3 (01:11:26):
So that's definitely one of the things that I would
have changed.
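A hypothetical schema sketch of the extensible payload pattern Jay describes (all type and field names invented for illustration):

```graphql
# Instead of the mutation returning Customer directly, wrap it in a payload
# type so new keys can be added later without a breaking change.
type UpdateCustomerPayload {
  customer: Customer!
  # Added later, optionally. Apollo normalizes every object it finds in a
  # response by its __typename and id, so returning this too updates both
  # cache entries from the one mutation.
  paymentMethod: PaymentMethod
}

type Mutation {
  updateCustomer(input: UpdateCustomerInput!): UpdateCustomerPayload!
}
```

Existing clients keep selecting `customer` from the payload, while newer clients can also select `paymentMethod`, with no version bump or breaking change.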
Speaker 2 (01:11:29):
Okay, that makes sense. And I think that is one
of the difficult parts, because we're so in this, like,
REST way of thinking that, like, okay, everything's a nail,
so everything needs to be a...
Speaker 3 (01:11:39):
Hammer, yeah, exactly. Yeah. Okay, it's not a one-to-one
with REST, absolutely not a one-to-one. Like,
do not try and think it's a one-to-one.
You need to understand the differences and what it's doing
differently from REST to use it effectively.
Speaker 2 (01:12:02):
Is there, like, a GraphQL for Dummies kind of
thing, like the best, or from your perspective the
best way to get into GraphQL?
Speaker 3 (01:12:13):
I mean, at this point I would just use ChatGPT
and just ask it, like, give me, give me
a getting-started guide, you know?
Speaker 4 (01:12:22):
Yeah, yeah, and there's graphql.org/learn.
Speaker 3 (01:12:28):
Like, the Apollo docs are great, like the Apollo Server docs.
Honestly, they're great. And if you're using some server-side stuff,
like if you're using Nest, for example, right, there's a
first-party Apollo plugin for Nest built by the
Nest team. It's in their documentation, super easy to get started.
They go through schema first, and they go through code first.
They go through both, and then they talk about resolvers
(01:12:50):
and mutations and authentication and guards and all that kind
of stuff. Right. So find whatever framework you're using and
see if that framework has documentation for GraphQL or Apollo.
Speaker 4 (01:13:03):
Yeah, and then codegen too. They support a
ton of different things. Like, if your
back end is using, like, Yoga, they have generators for
that. They have, you know, TypeScript generators, and, you know,
Apollo-specific generators. And there's just, like, a
(01:13:24):
whole, what did they call it, it's like, there's
like a GraphQL consortium or something. There's, like,
literally, like, a board, kind of like TypeScript but
not TypeScript, where they approve tooling for things
and, like, support tooling for things. I can't remember.
Speaker 2 (01:13:44):
Is it like a foundation? What's that, like a foundation
kind of thing?
Speaker 4 (01:13:49):
Yeah, yeah, like where
if things start to be unsupported and they're important to
the community, they adopt them, that kind of stuff. But
I can't remember what it's called. So I'm very helpful.
Speaker 2 (01:14:03):
So helpful. It's so cool.
Speaker 4 (01:14:04):
Yeah, do you remember what it's called? I feel like
such an idiot. Well, let me just google GraphQL consortium.
That sounds like the right... GraphQL... no, wait, that's federation.
Federation is different.
Speaker 2 (01:14:21):
Oh no?
Speaker 4 (01:14:22):
Wait?
Speaker 2 (01:14:22):
Okay, hold on, okay, are you talking about The Guild?
Speaker 4 (01:14:29):
That's what I'm trying to say, thank you. GraphQL, The Guild.
Speaker 2 (01:14:32):
Okay, well, but that's a company, though.
Speaker 4 (01:14:37):
They adopt things that fall off, though. Like, they've adopted...
they adopted one of the, I want to say maybe
it was graphql-request or something. There was some
library that they recently adopted. And this is just me
doing random googling.
Speaker 2 (01:14:50):
But the guy that runs it used to be an
Angular GDE. What's his name? He
has, like, really crazy hair and a...
Speaker 4 (01:15:00):
Wait, I'm going to find his picture. No, it's not
in there, he's not... they're not in the About. But
it is, thank you. See, that's, like, consortium,
it's kind of the thing. But there are some resources
through there too.
Speaker 2 (01:15:19):
Uri Goldshtein, that's who's running the company, and he used to
be an Angular GDE, like, way back, Angular 2 beta
phase kind of thing.
Speaker 4 (01:15:30):
Okay, like a million years ago, old school. But anyways, yeah,
I think they own codegen,
maybe that's where it came up. I don't know, it
doesn't matter. I'm not going to
keep talking, I'll just sound like a dumb-dumb. But
there are resources out there. There's also a ton
of courses, like LinkedIn has courses.
Speaker 3 (01:15:53):
So there's so much out there.
Speaker 4 (01:15:55):
Yeah, and I don't know, like, in your opinion, Jay,
has there... I don't feel like there's been a lot
of change, not to my knowledge. Like, it feels like
everything else has changed, like, super fast, especially in Angular. We've
seen so much happening so fast. I don't feel like
GraphQL has suffered from that same... Like, I'm
sure there are things where people would be like, but what about this,
(01:16:16):
and I'll be like, I didn't know about that because
I haven't done it for four years.
Speaker 2 (01:16:19):
But yeah, it's also a little bit different
to, like, change a schema, right, like a specification.
Like, if you compare it with, like, JPA or Hibernate, that hasn't
changed in, like...
Speaker 4 (01:16:33):
Yeah, yeah. I'm just saying, like, don't worry, y'all. If
you learned it four years ago, it's probably still applicable.
Speaker 2 (01:16:40):
Don't do GETs.
Speaker 3 (01:16:42):
Don't forget. Yeah, don't do that. No, yeah, exactly.
Speaker 4 (01:16:47):
But yeah, I should probably go back to
my real job at some point today.
Speaker 2 (01:16:53):
But this is not your real job.
Speaker 4 (01:16:54):
It's not. Nobody pays me for this. That's why I
can shill Nx and definitely go to ng-conf,
because I'm not getting paid to do that.
I'm just here because I
like to tell people what to do. So, amazing, one
(01:17:15):
more outlet for me to control people.
Speaker 3 (01:17:18):
This is just, yeah, this is just a way for you
to talk...
Speaker 4 (01:17:22):
about nerdy stuff without my spouse being like, uh-huh, yeah, okay,
that sounds reasonable, you know. It's bad, though, when,
like, they know the name of every person you work
with because you've clearly been ranting about stuff too much.
So yeah, anyways, Jay, we appreciate you being here today.
(01:17:43):
I feel like if you don't know how to reach
out to Jay Bell, you aren't trying hard enough. But
Jay, what's the best place to reach out to you?
Speaker 3 (01:17:51):
What's the best place? Probably on Bluesky now,
jaycooperbell.dev. Yes, yeah, it'll be in the show notes
if anybody wants it.
Speaker 4 (01:18:00):
Yeah, and definitely, if you aren't on Bluesky, I
am gonna shill for them, because it's literally like
I got back that great, like, Twitter vibe that I
had before Twitter turned into a weird neo-Nazi bar,
so I enjoy being there. But yeah, Jay, thank you
(01:18:21):
so much for joining us. Thank you for everything you
do for the community. You're always willing to help people out,
and I think that's amazing. We also love having you
on the podcast, and we know that listeners
love having you on the podcast as well. So definitely
subscribe if you have not. And if you have not bought
(01:18:43):
your tickets to ng-conf yet, that is happening
in October, in Maryland this time, so it is a
venue change.
Speaker 3 (01:18:54):
Subscribe button.
Speaker 4 (01:18:55):
Smash it, just smash it, like, get out a hammer
and smash it. Okay, I need to go, because
my blood sugar is getting a little bit low.
Speaker 3 (01:19:08):
You have our Nx Champions meeting in five minutes, Laura.
Speaker 4 (01:19:12):
I went to the seven-thirty one, because I'm not reliable.
I'm usually taping this podcast at the same time.
Speaker 3 (01:19:18):
So yeah, that one's too early for me.
Speaker 4 (01:19:20):
Yeah, no, that one is a little bit too
early for me.
Speaker 3 (01:19:25):
I'm not paid enough for that, right?
Speaker 4 (01:19:29):
All right. Well, thank you so much for joining us, Jay,
thank you for giving your time to the community, thank you
for listening, and we will catch you next time.
Speaker 7 (01:19:37):
Thanks, everyone. Hey, this is Preston. I'm one of the
ng-conf Champions writers. In our daily battle to crush out code,
we run into problems, and sometimes those problems aren't easily solved.
ng-conf broadcasts articles and tutorials from ng-conf Champions like
myself that help make other developers' lives just a little
bit easier. To access these articles, visit medium dot com,
(01:19:59):
forward slash.
Speaker 1 (01:20:02):
Thank you for listening to the Angular Plus Show, an
ng-conf podcast. We would like to thank our sponsors, the
ng-conf organizers Joe Eames and Aaron Frost, our producer Gene Bourne,
and our podcast editor and engineer Patrick Kayes. You can
find him at spoonfulofmedia.com.