
December 18, 2024 48 mins

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:06):
Hey, everybody, and welcome to another episode of Ruby Rogues.
This week on our panel, we have John and Luke.

Speaker 2 (00:13):
Hello, Dave.

Speaker 1 (00:15):
Hey, everyone. It feels so much more familiar to not say
your last names. I'm Charles Max Wood, or Chuck.

Speaker 3 (00:20):
I'm Chuck.

Speaker 1 (00:20):
And this week we're talking to Dmitri. Dmitri, do you
want to say hi and let us know who you are?

Speaker 2 (00:25):
Yeah, sure. Hi, everyone. My name is Dmitri, and
you can call me Dima if it's simpler, because I
know that Russian names can be hard sometimes. I started my software
development career in 2012 and decided to focus
on Ruby and Rails in 2014. Before that, I
did some front-end development and iOS,
and since 2018, I guess, I've worked as

(00:48):
a back-end developer at Evil Martians. I spend a
lot of time working on open source. I've contributed to Rails, graphql-ruby,
and some smaller gems, and I also maintain some of
my own gems. And I started working in the GraphQL field,
I guess, in 2017, so it's
like four years so far.

Speaker 1 (01:07):
So the discerning listener probably heard GraphQL about four
times in your introduction, and that's what we're going to be
talking about: specifically, GraphQL and Rails. We talked about
Graphiti a couple months ago, and so this will be
interesting to kind of compare the approaches and things like that.
I know that some people love GraphQL and some
people love to hate GraphQL. So I'm a little
curious as we get going, can you just kind of

(01:29):
give us the elevator pitch for GraphQL? I know
most people are probably at least somewhat familiar with what
it is, but if we can start there, then we
can start talking about why substitute this for REST, or
use it in conjunction with REST, or what the trade-offs
are, or how to set it up, or all
that other stuff. But knowing what it is first is
a good place to start.

Speaker 2 (01:47):
Yeah, sure, let's start out. So if you're using REST,
you've probably gotten used to the concept of a resource:
each kind of data you have is a resource,
and you have a list of actions you can perform on them,
and you have these REST endpoints that allow you
to fetch the data you need. But the problem is
that sometimes you need a different set of data for
each platform, for instance, when you're working with your own

(02:09):
website and your mobile application. And in this case we have
two problems which come from the same place. The first
one is under-fetching, which means that when you fetch
some data and you want to get additional data
about an additional resource, you have to make another request,
or sometimes you have to make a lot of requests
to get the full page. Imagine that you're working on
Facebook or something. You have a feed, a list of friends,

(02:31):
all that stuff, and you have to make additional requests,
or you can, you know, have a special request that
allows you to fetch all the things you need. But it
also sounds wrong, because when you have the mobile app,
you don't need to load all your friends and the feed
at the same time, so you have, you know,
separate APIs for the mobile application and your website. The
second problem is over-fetching. It's almost the same thing. When
you work on the mobile app, you get all

(02:54):
this data meant for the website that you don't need,
but you have to download it, process it, and, you know,
use the network for that. So GraphQL tries to
solve this problem. You can get any data you want,
but you don't get any additional data that you don't
want to get. A special benefit you get from GraphQL
is the documentation, because the heart of the GraphQL

(03:15):
system is its schema. The schema defines your graph. So you
have a list of entities, which are called types,
and these types can have connections between each other. So you
have all these connections described, and you know what data
you can get from each point. And in this case
you don't need to have separate documentation because you
already have it. You just need to explain what it

(03:35):
means in your system. For instance, what is a user,
what is a customer, all this stuff, and that's it. You
cannot have outdated documentation, because in that case your
GraphQL system just won't work at all. So I guess
that's the main selling point for me. And also one
more thing, but not the last one, I guess: there
is a special thing called subscriptions. Sometimes you don't want
to get the data right now, but you want to

(03:57):
get the updates, for instance, new messages in the chat
or new notifications, that kind of stuff. And when you work
with REST, you have to build something from scratch,
like make the Ajax requests, make the polling, make sockets
by yourself, all that stuff. GraphQL solves this problem with subscriptions.
In this case you can describe the query you want
to make and wait until the event happens. When it does,
GraphQL will send the data to you, and you

(04:19):
don't need to think about how it came to your
client application, because it doesn't really make any difference between a
regular query or a subscription call. And this is part
of the technology, so you do not even need to
care about how it works, because it's usually a part
of the library. I think those are the benefits of GraphQL
I wanted to mention to kick it off.
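
To make the under- and over-fetching point concrete, here is a rough sketch of the kind of query a client might send; the field names are purely illustrative and not from any real schema:

```ruby
# A hypothetical query a mobile client could send. It names exactly the fields
# it needs, so nothing extra is downloaded (no over-fetching) and no follow-up
# requests are needed for the feed or the friends list (no under-fetching).
MOBILE_FEED_QUERY = <<~GRAPHQL
  query {
    currentUser {
      name
      friends(first: 5) { name avatarUrl }
      feed(first: 10) { title }
    }
  }
GRAPHQL
```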

Speaker 1 (04:38):
I'm a little curious as we get going: John, Luke,
and Dave, what all have you done with GraphQL?

Speaker 4 (04:43):
You mean, other than avoid it or complain about it? Yeah,
no, I'm joking. So I think that GraphQL is
really cool from that perspective. Like Dmitri was saying, it
has a lot of benefits, especially to end users consuming
the API, where they can only really fetch the
data that they need, not everything, especially when you're talking

(05:06):
about really large data sets and really complicated data sets
which maybe have a lot of associations associated with the
main record that you were wanting to get back. So
having the ability to just make one API call to
this back-end application serving GraphQL is really nice
as opposed to having to make ten different calls. So not

(05:28):
only is that more efficient from the perspective of one
versus ten calls, but you also then have more ability
to, let's say, poll it more often, especially if the back
end has some kind of rate limiter and, in order
to serve this one page or this one application, you
had to make fifteen to twenty simultaneous requests to get

(05:51):
all that data. Well, if you're doing any kind of
Rack::Attack or rate limiting, then you were consuming twenty
requests for just one page, you know. So it definitely
has some benefits from that perspective. But now here's my
big "but." Despite all of those glorious benefits, I think
if I am developing an API back end that's going

(06:11):
to be consumed by an internal or an iOS or
Android application that I'm publishing to the store, or by
another in-house product, or by a third party who
is very comfortable working with REST APIs, I'm going
to go with a REST API every day. But if I am
going to create an API where I'm expecting all of

(06:35):
my customers or users of the application to heavily write
their own interfaces to everything, where I'm really just providing a
data source in a formatted way, then GraphQL might make a
bit more sense.

Speaker 2 (06:50):
I can totally agree with Dave; there is no
reason to just start rewriting all the applications we have
with GraphQL. And also, if you have a client
that wants to work with REST and it's still
comfortable to work with REST, that's completely fine, I guess.
But sometimes it makes sense to start new projects with
GraphQL even if you don't have any experience
with it at all, because sometimes you can find benefits.

(07:12):
And by the way, there is a special trick in
GraphQL which also allows you to avoid all these different
small requests, which is called batching. I guess all modern
front-end GraphQL frameworks can do that.
Batching means that requests aren't made immediately. The client just
builds a list of requests and sends them all at once, let's
say once a second or something. It helps a lot

(07:34):
when you have to render a lot of things on
the same page, at least in the initial load.

Speaker 5 (07:38):
Yeah, I don't disagree with Dave at all. I think,
for whatever reason, and maybe this has always gone on, but
I feel like the last few years there's been a
lot of: hey, sweet new tech comes out, like, I
have this new hammer and everything is a nail. And
I feel like GraphQL totally was a thing like that,
where everyone was using GraphQL for everything, even when

(07:59):
I couldn't find any of the benefits of GraphQL
lining up with their problems, right? Like, it was like
everyone using Mongo to try and replicate relational databases, which
Mongo is not good at.

Speaker 3 (08:10):
You know, that's my only beef.

Speaker 5 (08:12):
I think that GraphQL is awesome, to be frank.
I've even said this before, and I still think it's probably true.

Speaker 3 (08:18):
So REST is kind...

Speaker 5 (08:19):
Of like a standard, right? GraphQL is a technology,
and I think that distinction, to me, is important, because
if we can create a standard on top of GraphQL
that looks a little bit more like REST, I
think, in my opinion, it's likely that
we'll see that GraphQL is a lot more powerful
than it looks right now.

Speaker 3 (08:36):
Because to me, it looks very wild westy.

Speaker 5 (08:38):
It's like, you have this cool new technology and everyone's
all over the place with it, and so it's just
a gigantic mess, and everyone's blaming the technology instead of
blaming the discipline of the people that are using the
technology, which to me appears to be the source of
most of the problems. I've only used it on toy
apps so far. I haven't actually had a real problem
in my life that made sense to match it up with. Yeah,

(09:00):
but I still think it's sweet. It changes the way
that I think about things.

Speaker 1 (09:03):
At the very least. Luke, have you done anything with
GraphQL or no?

Speaker 6 (09:06):
I interviewed for a position with a company that made...

Speaker 7 (09:09):
White label, white label apps, and they were in
the UK. They were heavily, heavily into their GraphQL, and
I was talking to the CTO and asked him why, because
again, they were a Rails shop, why they kind of
put everything into GraphQL. And they were very mobile focused,
and they felt that one of the benefits of using

(09:31):
GraphQL was the kind of huge benefit on mobile,
especially with more efficient network usage. So for them, any
amount of complexity was worth the trade-off to get
their apps running more responsively. So the fact that it was
kind of a newer technology, they didn't care. They just
wanted that improved customer experience. So I'm really interested to
hear what Dmitri's saying, where you're saying this is not

(09:53):
just about efficiency; this is actually a better way of
organizing the interface between the client and the server. And
I think you said something like having a schema was
one of the big wins, which you don't get with
the REST-based approach. So it's interesting to hear...
why didn't schemas go away with XML?

Speaker 6 (10:14):
That was the last time I had to deal with schemas.
Are schemas good now?

Speaker 2 (10:18):
Yeah, that's a good question, and I totally agree that
it sounds like more schemas, but yeah, schemas are
better now. I guess I'm too young to understand all
the pain with schemas because I didn't work with them
a lot. But yeah, the schema language in GraphQL is
very simple; it looks like JSON, just regular JSON. You just
define a type, you just call it type User, and define

(10:40):
a list of fields, like title is a string and, I don't know,
orders is a list of Order, where Order is
another type, for instance. As I said, you define all
this stuff with text, and, for instance, in Ruby you
don't even have to define the schema by yourself. You
just define classes and it makes all the magic behind
the scenes. You don't need to do anything at all.
And then you can use this schema to send to your clients.

(11:02):
For instance, some IDE tools use schemas to have
nice autocomplete and to validate the client requests, so that
they at least request the fields you have, for instance.
And also you can use any documentation tool you want;
you just show it the place where it can find the
schema, and it will show a beautiful screen with all the fields, connections, types,
all that stuff.
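
For reference, a minimal sketch of what a type definition looks like with the class-based graphql-ruby API; the User and Order types and their fields are illustrative assumptions, not code from the episode:

```ruby
# app/graphql/types/user_type.rb
module Types
  class UserType < Types::BaseObject
    field :id, ID, null: false
    field :name, String, null: false
    # A connection to another type: each user exposes a list of orders.
    field :orders, [Types::OrderType], null: true
  end
end
```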

Speaker 1 (11:22):
Yeah. Now, speaking of the schema, I mean, this was
the painful part, at least when I've played with GraphQL
in Rails: I wind up essentially fighting
the engine a little bit because I have to set
up that schema for every type in the system. At
least, and I haven't tried it for a year or so, so the tooling might have gotten better.
But yeah, so essentially I had to tell it, you know,
the user's name is a string, and the user's user

(11:44):
name is a string, and the user's, you know, this,
that, and the other is a number. And you know,
in Rails, if you're doing REST, it just kind of
all works, you know, and I didn't have to do
any extra work to get that kind of an endpoint
up like I did within GraphQL. So yeah, that's my major beef with it.
It's just that it created a whole bunch of work
on the back end that I wouldn't have had to
do if I was doing REST.

Speaker 5 (12:04):
I may argue you had to do that on the
front end, right? Because you got all that through JSON.
It was all a string, and then you had to decide, well,
what is this? Is this a number? Should I be
parsing ints, you know, or is this actually a string,
or is it something else? If it was objects, you
were cool. But otherwise, yeah, you had to play with
it there.

Speaker 1 (12:21):
Yes and no. I mean, I agree with you, and
I don't. One thing that JavaScript has going for it
is coercion, and so it winds up doing a lot
of the work for you on that front, right?
And so unless you need, like, some very specific functionality out
of it, you can just count on, you know, JavaScript
doing the right thing when you do a coercive comparison

(12:43):
or things like that. And so, yeah, I don't do
a lot of type casting in JavaScript. So for the
most part, that doesn't matter as much to me.

Speaker 5 (12:50):
Yeah, I don't want to pretend like I had to
do it all the time, but it was a thing
that I did have to care about more often than
I wanted to. Probably at least a few times a
year I was dealing with something where I was doing
a comparison or something and you know, for whatever reason,
something that came from the data was still a string and
I had to fix that.

Speaker 2 (13:08):
And also, when you work with plain Rails and you
want to set up some kind of documentation, for instance, Swagger,
you have to write all these comments near your actions
by hand, and there is a chance that you'll forget
to change something and your Swagger will be outdated and
no one will notice. When you have schemas, there
is no chance that it will be outdated, because it's

(13:29):
your schema that's used for executing your code.

Speaker 5 (13:32):
So, going down the schema route a little bit more:
I used to use SOAP back in the day,
and I mean, I don't miss it at all.

Speaker 1 (13:38):
This is me crying for you.

Speaker 5 (13:41):
I don't miss it at all. But this isn't my
first rodeo with schemas. Sorry, I guess Alexa thinks
I'm talking to her because I said SOAP.

Speaker 3 (13:48):
I don't know whatever.

Speaker 5 (13:49):
Anyway, I used to use SOAP back in the day.
And I mean, there are some benefits that you get with schemas,
which I think is kind of what you're talking about here.

Speaker 3 (13:57):
For me, the main thing that I.

Speaker 5 (13:59):
Always felt like with schemas is that I could pull information
from a completely new source. And I know that you're
not doing this all the time, but I could
know nothing about what was going on underneath, yet
I knew everything that I was allowed to do. That's
a lot harder to do with, you know, RESTful APIs, as,
you know, we don't really provide that kind of thing.
It's not impossible to do. I guess you could provide

(14:21):
a page that does that or something, but nobody does. It's
not a standard by any means. But I guess, why...
For me, I can see benefits to it, and I
feel like people are happy with the exchange when they're
doing GraphQL. Can you talk more about, like, why
the schema in this case is a good thing, as
opposed to, like, SOAP, where I always felt... and Charles,

(14:43):
actually, Chuck just expressed this a few minutes ago.
He's like, well, I hate that in Rails I had
to, like, do all this setup, like, every time, right?
Like, we really hate writing boilerplate. That's why we write Rails, right?
So why is this exchange good? I guess, like, what
are we buying in exchange for it?

Speaker 2 (15:00):
Well, I get it. The problem is that in the GraphQL
world, they prefer things to be explicit. So they
want you to define the list of fields you want
to fetch, the list of fields you allow to be
fetched, all that stuff. And it could be possible to
just generate the list of fields based on your table structure.
That's not impossible; I believe there is even a gem
for that. But there is a huge benefit from having

(15:21):
a schema. When you want to change your data in
the GraphQL world, it's called mutations. A mutation is a special
query that can change your data, and it looks like
a regular query, but it's usually expected to change the
data on the back-end side. And the thing is
that when you define arguments... For instance, you want to
change the price of an item in your internet store,

(15:43):
and you want this price to be a number, and you
want it to be required: you can be sure that
it will be present in the arguments, because otherwise the GraphQL
execution engine will just send an error to the client
telling them that they're wrong. So from one point
of view, you add some boilerplate, but on the other hand,
you just remove the part where you validate the data,

(16:03):
because you can ask your execution engine
to make all the checks you want for you. I
think that's a fair exchange.
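
A rough sketch of the price-change mutation described here, assuming graphql-ruby's class-based API and the base classes its generator creates; the Item model and field names are hypothetical:

```ruby
# app/graphql/mutations/update_item_price.rb
module Mutations
  class UpdateItemPrice < Mutations::BaseMutation
    # Typed, required arguments: the execution engine rejects the request
    # with an error before `resolve` ever runs if they are missing or mistyped.
    argument :item_id, ID, required: true
    argument :price, Float, required: true

    field :item, Types::ItemType, null: true
    field :errors, [String], null: false

    def resolve(item_id:, price:)
      item = Item.find(item_id)
      if item.update(price: price)
        { item: item, errors: [] }
      else
        { item: nil, errors: item.errors.full_messages }
      end
    end
  end
end
```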

Speaker 3 (16:10):
So is the big bonus then on the writing side?
Is that what we're saying?

Speaker 2 (16:13):
Yeah, that's one of the benefits of having a schema, because
you can make sure that the things you expect will
arrive in the way you described in your schema.

Speaker 4 (16:21):
Yeah, it's been a year since I've really touched GraphQL,
but back when I did, one of the biggest issues
that I had with it is that my code wasn't pretty.
It seemed like GraphQL complicated it beyond what a REST
API would do, and from that perspective, I just found
it easier and more maintainable to stick with a REST API,

(16:44):
despite some of the benefits that GraphQL provided. Now, I
believe Facebook created the concept of GraphQL,
or created GraphQL, and what they were dealing
with, I'm sure, is a lot worse than what GraphQL provides.
But I think with what Rails provides just out of
the box, or with just one or two serializer gems,

(17:07):
then you have a much cleaner look without the GraphQL,
and you can just stick with REST.

Speaker 2 (17:13):
So, if I get it correctly, you mean that you don't like the code
you write when you're writing GraphQL in Ruby, right? Yeah, basically. Yeah,
I guess it's a matter of taste, and I can agree
that writing a list of fields each time we add
a new entity can be not that great as a
developer experience.

Speaker 1 (17:28):
Well, it wasn't just that for me. It was writing
the fields and then writing the resolvers for all the
extra stuff. I mean, it felt like it added a
ton of work that I wouldn't have to do if
I was doing REST.

Speaker 4 (17:38):
Yeah. I think the gem that was really popular a
year or so ago for GraphQL had a very
strange DSL that they used, or at least it just
seemed like it was overcomplicated, but that was probably
just due to the necessities of what was required by
GraphQL.

Speaker 7 (17:55):
I'm going to ask some really dumb questions here, so
stand by. All right, this running of the GraphQL... I've
looked at it and I don't quite understand. I understand
the process: you put your schema together, you write some
supporting code.

Speaker 6 (18:09):
Yeah, is the GraphQL

Speaker 7 (18:10):
query hitting... It can't hit Rails directly, can it?
It's going through something? Is it hitting a Node server
running on the server?

Speaker 6 (18:18):
No.

Speaker 4 (18:18):
GraphQL is basically just, as John mentioned, a technology.
So you're not having to pass it through another service,
per se, or an external service. It's just that you have a
standard of what the GraphQL should be structured as
as it's consumed, and how you are posting back to
it for mutable changes and requests and stuff. But it's

(18:42):
not actually going through an external service or anything.

Speaker 3 (18:45):
It could. But your Rails server can also be your
GraphQL server.

Speaker 1 (18:50):
Yeah, it just translates it down into calls through your model,
just like anything else.

Speaker 5 (18:55):
It's kind of like, you could pick Puma, or you
could run Apache or nginx. Whatever you want
can be your web server. You just have to adhere
to certain things in order to be that.

Speaker 7 (19:04):
Oh, this is my weird bit, one of my
weird bits for this week. So I've been working on... Notably,
GraphQL has got a kind of subscription service you mentioned,
which looks like kind of WebSockets-based live updates.
I've been trying to make my own without using WebSockets,
using long polling, and I was using Thin because

(19:24):
it's quite easy to use Thin to do asynchronous stuff
using EventMachine. But Thin appears to be missing, presumed dead,
and everyone's using Puma now, right? You're getting nods here.
So, EventMachine...

Speaker 1 (19:36):
Yeah, I guess people can't hear my nods.

Speaker 6 (19:38):
Yes, yeah, yeah, there's absolute agreement.

Speaker 7 (19:40):
So I've been trying to get it working using the
Rack hijack, yeah, which is the kind of way of
keeping a connection open with Puma. My word, that is
a total nightmare. I can't tell you how long... I mean,
it took me from about two a.m. to six
a.m. to get a hijack working. And now my web
server stack is nginx into Passenger, into Puma, into

(20:04):
my app.

Speaker 6 (20:04):
Is this excessive, do you think, to run
all of those for one web request?

Speaker 3 (20:09):
So this is not GraphQL, but yeah, one
hundred percent agree with this.

Speaker 5 (20:14):
I actually reverse proxy to Puma in almost all of
my setups, but I don't use nginx plus Passenger
and then also add Puma on top.

Speaker 7 (20:22):
You wouldn't believe how long it took me to get
it working. But now that I have got it working,
I quite like it. And it kind of feels like
I've got a bit of a gang on the server
instead of a kind of lonely process serving web pages.

Speaker 4 (20:33):
You like it until there's a problem in production.

Speaker 7 (20:37):
Well, I like Passenger. If I'm in production, Passenger tells
me, like, help. You know, I just run passenger-status
and it tells me what's going on, and you don't
get that with Puma. There's no kind of happy
"here is what I'm doing" Puma command.

Speaker 4 (20:52):
Yeah, so nginx is going to be your web server
that's going to serve the HTTP content, and then Passenger,
Puma, Unicorn are going to be your application servers,
which are essentially going to talk to the Rails application,
convert all that ERB, Slim, or Haml code over into HTML,

(21:12):
and then give that to the web server to then
serve to the client. I know it's off topic.

Speaker 7 (21:16):
Well, the reason I'm diverting here is because I've got
my happy long polling setup.

Speaker 6 (21:22):
And that works really well, apart

Speaker 7 (21:24):
from the huge, huge stack of processes that it goes through.
But now I've got that, I'm kind of looking at...
What I want to achieve is a site that sends
state simultaneously to lots of different clients, live. So it's
a production system. Maybe you've got sixty terminals, and I
want state changes to propagate automatically so that when somebody

(21:47):
at one terminal hits a button, the others can see that, yeah,
in real time.

Speaker 6 (21:52):
So this is my motivation.

Speaker 7 (21:53):
So, if there was a slightly less
arcane way of doing that using GraphQL, that would
interest me, you see, because my choices at the moment
are: roll my own WebSocket system, which I've done,
but it feels a bit homebrew, or look at
something which has already solved this problem, which is a
bit more standard.

Speaker 2 (22:12):
So in GraphQL there is a special subscription
type, which looks like a query, and it can be run
on different transports. As far as I remember, along with
Action Cable, it supports some other paid services. So you
set up your transport engine, and after that your
client application will just start using subscriptions as regular requests.

(22:34):
Because in this case it doesn't really matter where the
data came from, because the data has the same type. So,
for instance, if you have a list of notifications, and
you have a query to get all the,
let's say, notifications for the current user, you get
the list from the query, and then as soon as
a new notification arrives, you get the new notification from
the subscription. So it can be done with Action

(22:56):
Cable, as I said.
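
A minimal sketch of that wiring with graphql-ruby's built-in Action Cable transport; the notification field and trigger call are illustrative, and newer versions also let you back each field with a dedicated subscription class:

```ruby
# The schema declares a subscription root and uses Action Cable as transport.
class AppSchema < GraphQL::Schema
  query Types::QueryType
  subscription Types::SubscriptionType
  use GraphQL::Subscriptions::ActionCableSubscriptions
end

# app/graphql/types/subscription_type.rb
module Types
  class SubscriptionType < Types::BaseObject
    # Clients subscribe to this field just like they would run a query.
    field :notification_added, Types::NotificationType, null: false
  end
end

# After creating a notification somewhere in the app, push it to subscribers:
# AppSchema.subscriptions.trigger(:notification_added, {}, notification)
```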

Speaker 5 (22:57):
So is this a thing that I can get out
of the box really easily, or is this a thing that
I have to set up? Right, so I have to
go set up some Action Cable stuff, tie it in
to whichever GraphQL gem I'm using, or maybe I'm using Graphiti,
which we talked about recently or whatever, but I haven't
tried yet. Is this a thing that's, like, easy to
set up with some out-of-the-box Rails gems

(23:18):
that are easy to find, or is this a thing
where I'm going to have to go read three tutorials
and kind of bumble my way through it?

Speaker 2 (23:25):
Let's say it's not super hard to set it up. Fortunately,
I have a video tutorial on how to set up subscriptions and all
the stuff you need to start your GraphQL application.
And yes, that's a part of the standard gem. The
only thing you have to change is if you're using AnyCable.
Have you heard about AnyCable, by the way? So
in that case you'll have to set up one more
additional gem and it will work.

Speaker 3 (23:45):
Okay, sweet.

Speaker 1 (23:46):
So one thing that I'm wondering about is, yeah, you know,
we've talked a bit about the boilerplate that has to
be written. So is there a generator or something that
I can run that will take care of a lot
of that stuff for me?

Speaker 2 (23:57):
There is an official generator in the graphql gem that
will create all the stuff you need to start: all
the base types, the controller, and a URL. So yeah,
after that you have a very simple schema with one type.
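
For context, running the gem's install generator (`rails generate graphql:install`) produces roughly this skeleton; the schema class name depends on the application:

```ruby
# app/graphql/my_app_schema.rb (generated)
class MyAppSchema < GraphQL::Schema
  query Types::QueryType
  mutation Types::MutationType
end

# config/routes.rb also gains the endpoint the generator wires up:
# post "/graphql", to: "graphql#execute"
```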

Speaker 5 (24:10):
I guess I'm just trying to track: what is the gem
that's kind of been around for, I don't want to
say, like, three or four years for Ruby stuff?
Is it graphql-ruby, or is it a different one?

Speaker 2 (24:21):
It's graphql-ruby. You guys mentioned that there was a
strange syntax. I guess you worked with the previous
version, with blocks that looked like JavaScript, right? A lot of braces.

Speaker 4 (24:30):
Yeah, that's very possible.

Speaker 2 (24:32):
So some time ago they completely rewrote the API, and it just
uses classes like all other Ruby gems do.

Speaker 4 (24:38):
Yeah, as long as I can write clean, maintainable code,
then I think GraphQL, you know, is viable in
a Rails application. But as John said at the beginning,
it's not a hammer that will fit every nail. You
have to use it when it's appropriate, not just because
it's what you're familiar with or because it's the latest
and greatest. If it's the right tool for the job, then

(25:00):
absolutely use it.

Speaker 2 (25:01):
Yeah.

Speaker 5 (25:02):
I think for me the issue is trying to
understand a sort of nuanced view of when is the
right time to use it. I think my understanding thus
far, from a very simple standpoint, is very similar to
what you said earlier, Dave. If I have control of
the client, right, so I can write it myself, and
I have resources that very easily fit sort of the

(25:22):
REST kind of framework, then I don't really have a
lot of motivation right now to diverge from that and
go to GraphQL. But if I start to leave that,
if I start to look like Facebook, where I need
to know about, you know, person X and a whole
bunch of stuff about person X's friends, that's where
I feel like I start to go down the road
where GraphQL is good. And so I don't know

(25:44):
if any of you guys, or Dmitri, have
a more nuanced view where you're like, oh,
here's the line in the sand when you might want
to consider that. But I don't have a very good
line at all.

Speaker 2 (25:56):
Well, as I mentioned before, when you have more than
one client, or there is a chance that you'll have more
than one client in your future, that might be
the baseline for switching to GraphQL, because
in this case you don't have to worry about what data
would be needed on each platform.

Speaker 1 (26:10):
Yeah, yeah, that makes sense. I mean, the way that
I'm kind of envisioning it is, say I build a DragonRuby
or a React Native app or something, you know, build
an app with those. And I'm actually looking at some
of this stuff for some of my personal things, right?
So I could build a front end off of the
GraphQL, you know, in React or Vue or Angular,
or, you know, have some kind of tie-ins with

(26:31):
Stimulus or something, depending on how I wanted to structure
the front end of the web app. But then the
mobile app... I'm finding more and more people want to
interact with apps on their phones, right? And so, you know,
I may have a slightly different interface or slightly different needs
on a mobile interface than on the web, or I may
offer an app, you know, like we said before, written
in GraphQL or something... or sorry, in React Native.

(26:53):
And so if I'm doing that, then having that option
available as well also works. One thing that I'm wondering about, though,
is the authentication flow and like permissions and security and
things like that. Right, So, one thing that I'm working
on right now is I'm trying to pull together like
basically a dashboard for the podcasts that essentially, you know,

(27:14):
I can say, okay, you guys are all hosts on
Ruby Rogues, and so, you know, here are all the
upcoming episodes, here's all the prep information. Right? So instead
of it going through Zapier and Google Docs, which
is what we're doing right now, you know, it just
pops up a notification on your device and says, hey,
you've got this new thing, you know, come
check it out, come prep, whatever, and stuff
like that. But I'd like to be able to manage

(27:35):
all that stuff on my phone, and so I'm not
sure how to set things up so that the device
can actually, you know, authenticate. So they go to the
login screen in React Native and it sends the login
information over the wire to GraphQL. Do I do my
authentication through GraphQL, or do I do it through a
Devise REST endpoint and then just kind of pass around

(27:56):
a JSON Web Token, or how do you manage all
that stuff?

Speaker 2 (28:00):
The fun thing is that when you try to find this
out in the documentation, they say that we don't care
about the authentication, just do it as you usually do it.
In this case, yes, you'll have to set up
some token or Devise endpoint or something else. And in
this case, when you do some OAuth stuff, you don't
have any choice, you have to use REST, because when

(28:20):
you get the callback from, let's say, Facebook, you'll have
to set up this REST endpoint. But then, when you
have the token, you just put it somewhere into a header,
into cookies as usual, and perform the authentication on the
Rails level, on the REST level, because GraphQL claims
that it's transport agnostic and it doesn't care about the
protocol we use.

Speaker 1 (28:40):
So I would put my token, say in the header
or something like that, and then is there some level
of checking that I can do before it executes a query?

Speaker 2 (28:49):
Yeah, so let's briefly talk about how a GraphQL request
is processed in Rails. So, after you
run the generator we discussed before, there will be a
special route called /graphql. It will be a POST, because
sometimes the query is big enough to not fit into a GET
request. So there will be a POST to a
special controller, which is called the GraphQL controller, with a single

(29:09):
action called execute, and this controller action contains a single
call to the GraphQL schema. There is an object called the GraphQL schema;
with the execute method, you pass your query, your variables
that came from the client, and then you can pass
a context. And the context is just an object, a hash,
which can contain anything you want, and in our case

(29:30):
it will contain the current user object. So you can authenticate
the user, to make a decision about their permissions, and put
it into the context. And this current user in the
context will be available everywhere inside your GraphQL types, resolvers,
all that stuff. So it's pretty similar to just the regular
application controller we all write every day. You also mentioned

(29:50):
some authorization questions.
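
A condensed sketch of that generated controller action with a current_user placed into the context; how the user is actually authenticated (session, Devise, a token in a header) is up to the app, and the token lookup below is purely hypothetical:

```ruby
# app/controllers/graphql_controller.rb (simplified; the generated version also
# normalizes params[:variables], which is omitted here for brevity)
class GraphqlController < ApplicationController
  def execute
    result = MyAppSchema.execute(
      params[:query],
      variables: params[:variables],
      operation_name: params[:operationName],
      # Everything placed here is readable from every type and resolver.
      context: { current_user: current_user }
    )
    render json: result
  end

  private

  # Hypothetical token-based lookup, e.g. "Authorization: Bearer <token>".
  def current_user
    token = request.headers["Authorization"]&.split(" ")&.last
    User.find_by(api_token: token) if token
  end
end
```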

Speaker 6 (29:51):
This is the N+1 query optimization question.

Speaker 1 (29:55):
Did you freeze?

Speaker 6 (29:56):
Yeah, I guess people do that a lot when I
ask some questions. I think it might be big.

Speaker 2 (30:00):
Yeah. So the question was about N+1 optimization. There
is a big problem in GraphQL with N+1,
because sometimes you get requests to fetch some entity and
a list of entities inside of this entity, and if
you miss something, there is a chance that you won't
make a join in your database and you will
have a lot of small database calls. Just a classical

(30:21):
N+1 problem, but it's a little bit different
in the GraphQL world, because there is a chance that you
will make a ton of smaller requests during the one
GraphQL request. And there are a lot of solutions
to this problem. So the most simple and naive approach
is to just eager load everything you need. When your
application is small enough, you can just, you know, list

(30:43):
all the possible associations on the top level in your
query type and they will be loaded. Sometimes it works,
because some applications are small enough. And also there is
now an alternative solution: I have a gem called ar_lazy_preload.
It works in the exact same way, but it
doesn't really load everything you listed right away. It
loads it as soon as someone tries to access the association.

(31:05):
In this case, you can list everything you need, but
it won't make a lot of requests, only the requests
that you need to make. But of course it doesn't
really work on huge applications, so there is a
different solution. The solution is called lookahead, and it's
a part of the graphql-ruby gem. The idea is
that you always can ask the execution engine if it

(31:25):
wants to fetch a specific field, on any level you want. So,
for instance, when you're resolving a user who can have orders,
you can ask: hey, are we going to load orders right now?
And if it says that you're going to load orders,
you can make a special request to the database to
avoid the N+1. And the last, most complex solution,
but one that works in huge applications, is called graphql-batch, which

(31:46):
is a gem by Shopify. The idea is that it
uses lazy resolving. It stops resolving your field as soon
as it understands that you're going to have a possible
N+1. It collects all the IDs that it wants
to load, and then makes a single request for all
the entities it wanted to load, and then just puts
the data into the slots, let's say. So I think

(32:07):
the N+1 problem is solvable in this case.
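
A sketch of the lookahead approach on a hypothetical users field, using graphql-ruby's `extras: [:lookahead]` option; the User model and its orders association are assumptions for illustration:

```ruby
# app/graphql/types/query_type.rb
module Types
  class QueryType < Types::BaseObject
    # Asking for the lookahead lets the resolver inspect the incoming query.
    field :users, [Types::UserType], null: false, extras: [:lookahead]

    def users(lookahead:)
      scope = User.all
      # Eager-load orders only when the client actually selected that field,
      # avoiding both the N+1 and an unnecessary preload.
      scope = scope.includes(:orders) if lookahead.selects?(:orders)
      scope
    end
  end
end
```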

Speaker 1 (32:10):
So another thing we have on our list of discussion
points is caching. So, I guess, where does that
caching occur? Because if you can kind of request anything
with GraphQL, then I'm assuming that it happens at a
lower level, where it caches the fields or the result
that it gets somehow and then builds the response from there.

Speaker 2 (32:28):
Yeah, there are a lot of problems with caching. So
first of all, as I mentioned, GraphQL requests are usually
POST requests, and it's impossible to use HTTP caching with POSTs,
but there is a solution for that. There is a
special technique called GraphQL persisted queries. The idea is
that when you send a request to the server from
the client, sometimes you know that you're going to send

(32:49):
this request more and more, the same one, and in
this case you can just send a fingerprint of this request.
There is a special way to calculate this fingerprint that
is known by server and client, and this way you can
send the fingerprint and the server knows what it needs to
return. And if it doesn't know what query you want
to make, it will respond that it doesn't know
about the query and ask for the full version. When you

(33:10):
use this technique, you can use GET requests, because there
is no chance that you will send this huge query
using GET. In this case, you can use HTTP caching if
you want. As far as I remember, it's possible to
do it using graphql-ruby, but only in the
pro version. So I've written my own gem for that.
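
The fingerprint idea itself is simple; as a conceptual sketch (not the API of any particular gem), both sides can hash the query text and exchange only the digest once the server has seen the full query:

```ruby
require "digest"

query_text = "query { popularItems { id title } }"

# The client computes a stable digest of the query text and sends just this
# hash, which is small enough for a GET request. If the server has never seen
# it, it replies asking for the full query once, stores it, and from then on
# the fingerprint alone is enough.
fingerprint = Digest::SHA256.hexdigest(query_text)
```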

Speaker 1 (33:27):
That's interesting. So essentially what you're saying is that, yeah,
you can create a fingerprint for the query that you made,
and then you can make a GET request and take
advantage of the HTTP caching anyway, as long as you're
making the exact same query.

Speaker 2 (33:38):
Yeah, and because of the variables, you usually make the same
request not only from a single client, but from all
the clients of the same version, so most of the
time you have a cache hit.

Speaker 1 (33:48):
Then does it invalidate the cached responses if some data changes?
Like, does it keep track of it the same way
that Rails does for other caching?

Speaker 2 (33:56):
So in this case we don't really care about the
invalidation, because we cache on the query itself, on the
text of the query, and the HTTP caching has to be invalidated
by Rails. So for now there is no solution there; I tried
to implement something, but stopped and figured out that we
need to use a different solution. There is an idea
that we can cache GraphQL responses. At first glance,

(34:18):
it seems like it's impossible, because each time the client makes a request,
it can ask for any data it wants, but it's
possible to cache the part of the query that you
don't want to resolve each time. For instance, when you
have an ecommerce site and you have a list of
popular items, you probably want to show the same list
of popular items to everyone, and you don't want to
go to the database to fetch them. That's why we

(34:40):
created a special gem for that called graphql-fragment_cache.
It was extracted from one of Evil Martians' projects, initially
implemented by Vladimir Dementyev and Nikolay Sverchkov. The idea
was that we want to remember a part of the response,
and when a client wants to get the same thing,
we just take that part of the response from the

(35:01):
cache, from Redis or somewhere else, and just send it
to the client without resolving it. Unfortunately, it wasn't possible
to do it in graphql-ruby, because there were
no mechanisms to stop execution, so I had to make
a pull request to add this mechanism. So right now
it's possible to use this thing even if you're not
using the gem. But when you use the gem, you
can specify the cache key, or it will be built by

(35:23):
the gem, and of course you can always invalidate this
cache if you want, and you can also specify
a time to live.
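
As a rough sketch of that fragment-caching idea with the graphql-fragment_cache gem; the exact API may differ between versions, so treat the popular-items resolver below as an approximation and check the gem's README:

```ruby
# The schema opts in to fragment caching.
class AppSchema < GraphQL::Schema
  use GraphQL::FragmentCache
end

# app/graphql/types/query_type.rb
module Types
  class QueryType < Types::BaseObject
    include GraphQL::FragmentCache::Object

    field :popular_items, [Types::ItemType], null: false

    def popular_items
      # The block's result is stored in the cache (Redis, memory store, etc.)
      # and reused for later queries instead of being resolved again;
      # expires_in sets the time to live mentioned above.
      cache_fragment(expires_in: 10.minutes) { Item.popular.limit(20) }
    end
  end
end
```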

Speaker 5 (35:29):
That seems super useful. All right, so is there anything
else, Dmitri, that you feel like maybe we should be
doing with GraphQL that's sort of not common practice? Like,
is there a best practice that we've sort of missed, or
anything along those lines that we just haven't gotten to today?

Speaker 2 (35:45):
Yeah, I think we could talk about some best practices.
Some of them are very similar to things we do
in REST, but a little bit different because it's a
different technology. So there are some possible security issues that
can happen when you're using GraphQL. So,
as we mentioned before, it's possible to ask for everything you want,

(36:05):
but there is a chance that a client will make
a request that will just blow up your database, because it
will be a huge select with a lot of joins.
And there is a solution for that. As far as
I remember, it's not a part of the specification, but
most of the libraries have this built in. The idea
is that they calculate the depth of the query and
the complexity. Depth is the number of entities you touch when

(36:27):
you go down the graph, and complexity is a more complex
metric that includes the number of fields you requested, the depth,
and so on. And you can limit these two variables
on your execution engine. In this case, when someone tries
to make a huge request that you know won't
be made by your real clients, it will just ask
the client to make a simpler one. I guess we

(36:48):
didn't face such problems with REST. And also there
is one more interesting thing. As we mentioned before, it's
possible to expose your schema to have, let's say, public documentation.
But also, in this case, it's possible to see your
entire schema using GraphQL, because there is a special
thing called introspection: you can make queries to see the schema.

(37:08):
I'm not sure why it's dangerous, but some security specialists
think that it's a security problem and that we should hide introspection
from public users, because they don't really need to see
it, and use it only in development and maybe staging environments.
So sometimes it makes sense to turn off introspection in
the production environment. There is a special setting for that
in graphql-ruby with which you can just turn it off,

(37:29):
and it disappears, so it becomes impossible to ask for
this data.
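
A sketch of those schema-level guards in graphql-ruby; the numeric limits are arbitrary examples you would tune to your own clients:

```ruby
class AppSchema < GraphQL::Schema
  query Types::QueryType

  # Reject queries that nest too deeply or select too much in one request.
  max_depth 12
  max_complexity 250

  # Hide the introspection entry points in production so the schema
  # cannot be crawled by arbitrary users.
  disable_introspection_entry_points if Rails.env.production?
end
```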

Speaker 1 (37:33):
I'm going to change the topic unless there's more to
the best practices.

Speaker 2 (37:37):
There's one more thing to mention. There are some best
practices related to code, which would be hard to discuss
without looking at code, like we shouldn't expose data
from other entities, all this stuff.

Speaker 4 (37:48):
So I have a small...

Speaker 2 (37:49):
RuboCop extension for that, which will help you to check
your code. Of course these rules are opinionated, because they
were created by me, but sometimes it may be helpful.

Speaker 3 (37:58):
Sounds good.

Speaker 1 (37:59):
How do you test your GraphQL? Do you just make
a bunch of queries? Is there a better way to
do it?

Speaker 2 (38:04):
So it depends. There are two ways that I'm aware of.
The first one is making queries. So you can just
write a test for, like, the GraphQL controller. You define
your query, then you make the
request to call your controller, and then you check the
response, whether it matches the data you expect
or not. But when you write a lot of such tests,

(38:26):
it sounds repetitive. So there are two ways you
can avoid it. The first one is to write a
shared context in RSpec; in this case you have some
kind of DSL that will handle things for you.
Or there is a different approach that I've heard of:
there is a gem called Fixturama. The idea is
that you put all the data you want to check
into YAML files, define a single test, and just have

(38:48):
a lot of folders with these YAML files, like what
you ask for and what you expect, and it will just
compare the result with the data you expected to
get. It would be cool to test types directly,
but right now it's impossible because of the graphql-ruby architecture.
It's impossible to just instantiate a type object
and ask it to serialize something for you, because it

(39:11):
heavily relies on the execution engine. So right now it's
not possible, but maybe it will change in the future.
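
A minimal sketch of the first approach, a request spec that posts a query to the generated /graphql endpoint and checks the JSON that comes back; the Item model, its fields, and the query are hypothetical:

```ruby
# spec/requests/graphql/items_spec.rb
require "rails_helper"

RSpec.describe "GraphQL items query", type: :request do
  it "returns the item titles" do
    Item.create!(title: "Keyboard", price: 42.0) # hypothetical model

    query = <<~GRAPHQL
      query { items { title } }
    GRAPHQL

    post "/graphql", params: { query: query }

    json = JSON.parse(response.body)
    expect(json.dig("data", "items", 0, "title")).to eq("Keyboard")
  end
end
```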

Speaker 5 (39:16):
What about Chuck or Dave? You guys were complaining
a little bit about it earlier. Did you guys do
any testing with GraphQL when you did your implementation?

Speaker 4 (39:25):
What is this testing that you speak of? "Test" is
a four-letter word, right? All right, all right. No,
I did tinker around with it, but I didn't dive
into it too much, and it was really just simple
Minitest tests. Just make a request, expect this back, I
got it back. I mean, it's really not too, too
different than doing other tests with a REST API.

Speaker 1 (39:47):
Yeah, most of my testing was manual, because I was
more playing with it than actually trying to get something
out of it. But having spent time on some of
the front-end systems, I mean, that's where GraphQL
is really, really nice, especially if you're using something
that just gives you a whole bunch of powerful
features out of the box. Anything else? Any other thoughts
or advice?

Speaker 2 (40:06):
Nothing off the top of my head. The only thing
I wanted to mention is that I'm surprised that no
one asked about the performance, because sometimes people think that
it's so hard to parse these huge GraphQL requests and
then respond to them, and that GraphQL adds some
overhead to your processing. I tried to prove it and
figured out that it's not that big a problem. I

(40:28):
have a benchmark that tests the memory consumption by graphql-ruby
compared with jsonapi, I mean the
JSON:API standard, which does a similar thing to what GraphQL does,
and there is no significant difference between them. And also
I tried to process a really big query: there were
eight fields, and each field had a nested entity

(40:51):
with eight fields, and so on, four times. So that
was a really huge request, and it took like one
second to process, which is a big amount of time. But
we usually don't make such requests, and that's why I
do not recommend the batching that I mentioned before.
And also, after I published this benchmark, things became better,
because right now there is a new execution engine in

(41:13):
graphql-ruby, so things are even faster now. So I
can say that it doesn't really add any overhead to
your system.

Speaker 1 (41:19):
Yeah, we didn't bring up performance because Rails can't scale.

Speaker 3 (41:22):
So it's cool. People that don't know how to use
the tool can't make it work. I think that's normal.

Speaker 4 (41:28):
Yeah, so instead they reach for stuff like React.

Speaker 3 (41:30):
Oh, oh, I'm definitely not trying to... No, no, it's fine.

Speaker 1 (41:36):
We can all pity the Django devs. Anyway, any other
thoughts or things that we want to bring up before
we go to picks?

Speaker 6 (41:42):
Yeah, why can't we just send SQL over WebSockets?

Speaker 3 (41:47):
People do this probably somewhere.

Speaker 4 (41:50):
We're supposed to put everything in a CDATA tag.

Speaker 3 (41:52):
Right, Yeah, there we go. Sweet.

Speaker 1 (41:54):
Yeah, all right, let's do some picks. Dave, why don't
you start us with picks?

Speaker 4 (41:57):
All right. So I recently got a StarTech under-desk
computer mount to free up some desk space, and
this thing is really cool. I mean, granted, I now
hit the computer with my knee a bit, but having
it off the desk is really nice. So that's my
first pick. My computer weighs probably thirty-

(42:17):
five, almost forty pounds, and it supports the weight nicely.
And then the second pick is Noctua fans. So
with the age of digital learning, and two of my
three, soon to be four, kids... we're expecting, if you didn't know.

Speaker 6 (42:32):
Know, they congratulations.

Speaker 3 (42:34):
Thank you. Yeah.

Speaker 4 (42:36):
I think after three kids it's supposed to be condolences,
but thank you.

Speaker 1 (42:40):
Anyways, I have five. You're going to name the next
one Perl?

Speaker 3 (42:43):
Yah.

Speaker 4 (42:43):
No, I've been x-nayed on any more programmer kid names.
I already got Ruby, so that's a win. But
with these computers that I ended up building for them, I
just went to my local Micro Center and bought a whole
bunch of parts. One thing I forgot to get was
case fans. It came with one case fan on the
back for exhaust, but I didn't have any intakes, and

(43:06):
so I had some spare RX 580 graphics cards,
and did some load testing and let the kids play
some games, and that case got super hot. So I
went on Amazon and ordered a couple of Noctua
case fans, and those things are amazing. Not only are they
quiet, even when they ramp up, but they made a
huge difference in the cooling of the PC case.

Speaker 6 (43:29):
So Noctua

Speaker 4 (43:30):
fans are my second pick, as well as just,
like, case fans in general, having enough of them.

Speaker 5 (43:35):
Congrats on your kid, by the way. And I'll also
give a thumbs-up to Noctua; I think they make pretty reasonably
good fans.

Speaker 3 (43:40):
They're usually brilliant fans.

Speaker 1 (43:42):
I was going to say something dumb, but I didn't. John, what

Speaker 3 (43:45):
Are your picks?

Speaker 5 (43:45):
I feel like I should recommend, like, some Gentle Typhoon
fans, which you can't, like, buy anymore or something like that.
But they used to make these really cool fans that
were, like, super silent, but they're, like, impossible to get,
super expensive now. So this week I just, like, want
to pretty much give a shout-out to...
because I think that a lot of people are

Speaker 3 (44:03):
Familiar with Sticker Mule. Yes, thank you. I just had
the name, like, five minutes ago and I just forgot it. Whatever.

Speaker 5 (44:09):
Anyway, best experience ever getting stickers, every time. I have
gotten stickers from multiple places in the past, and then
a few years ago I switched and started using Sticker
Mule, and I just got stickers again recently, and they're
just so, like, flipping amazing to work with. And they
let you freaking cut out your stickers in the shape
of your sticker, which is exactly what I want.

Speaker 3 (44:29):
So they're awesome. Just saying they're doing icons or something. Yeah,
they're great.

Speaker 4 (44:34):
I put them out in the sun. Rather, I do
the bumper test: so I put a sticker on my
bumper and see how it fares in the weather. I've
had a bunch of stickers which just faded over the
course of two months. Any Sticker Mule sticker I put
on the back of my car has always lasted years
and years.

Speaker 5 (44:52):
Yeah, they're pretty awesome. And as far as pricing goes,
like, I don't know, I feel like they somehow end up
managing to be cheaper, too, than, like, ordering
them from, like, Moo or something.

Speaker 3 (45:01):
So yeah. Also Moo doesn't do the weird shapes that
I want. That's it. That's my pick.

Speaker 1 (45:05):
Awesome. Yeah, I like Sticker Mule, they're awesome. Luke,
what are your picks?

Speaker 7 (45:09):
I've got one for fans. This is an ESC;
it's a speed controller for a drone motor. You can get them
for about twenty or thirty dollars and up. And this is
controlled by the same PWM signal that your computer fans
run off, and the voltage going in on this particular
one will

Speaker 6 (45:26):
run off a twelve-volt rail.

Speaker 7 (45:27):
If you want to put it in your PC case,
you can hook up the minus twelve and the plus twelve
and run one of these at really quite insane speeds. So if
you want to make a really ridiculous computer fan, you
can hook one of these up with a cheap drone
motor, which you can get for, again, about twenty bucks, and
put a carbon-fiber propeller on it, and that will
move the most amazing amount of air your PC has

(45:47):
ever seen. It's really unsafe, obviously, because it's designed to lift.
It's designed to lift things, not to provide gentle airflow.
But if you want, if you miss truly silent fans,
that is a great way to get a totally silent fan.
I've used one in the past. It's really hot here,
so I've used one as, like, a desk fan.

(46:08):
Obviously it's not safe around cats.

Speaker 3 (46:12):
How many of those does it take to lift up
your computer case?

Speaker 7 (46:15):
Oh, sure. So this particular model, you can get about
three kilos of thrust out of it if you really
push it and you get everything matched up.

Speaker 3 (46:23):
So if you.

Speaker 7 (46:26):
Okay, if you... yeah, as long as you've got a
big enough propeller, you can lift anything, right? So my
pick for the week is not the very
dangerous computer fans. It is my server request stack, my
much-maligned nginx, Passenger, Puma stack, which I'm using, and
that is my pick.

Speaker 6 (46:46):
You might not like it, but I picked it.

Speaker 7 (46:47):
And part of the motivation for this was a post
on the Passenger blog which talks about combating Slowloris
DDoS attacks by stacking up nginx and Passenger and Puma,
and they explain part of the motivation behind it. So that's
my pick: a blog post by Phusion, partying like it's twenty eighteen.

Speaker 1 (47:07):
Awesome. I'm going to jump in here with a couple
of picks myself. The first pick is, I've been reading
these books by Brent Weeks. The first book is called
The Black Prism, and it's high fantasy. I've really, really been
enjoying these books. I'm on book four right now. I
can't remember what the name of the book is, so
don't ask. But anyway, The Black Prism is the first book,
and I guess the series is called the Lightbringer series.

(47:29):
And anyway, I've been listening to it on Amazon. It's
narrated by Simon Vance, who's an awesome narrator. So I'll
put links in for the book and I'll drop my
affiliate link for Audible, because that's where I'm consuming it
and really, really enjoying it. And yeah, that's going to
be my pick for today. Dmitri, do you have some
picks for us?

Speaker 2 (47:46):
Yeah, I just thought about the email I got yesterday,
that this year, as usual, we'll have Hacktoberfest
in October, so that will be my pick, I guess.

Speaker 1 (47:56):
Awesome. And if people want to connect with you online,
do you have a blog or GitHub or Twitter or whatever?

Speaker 2 (48:02):
Yeah, I will send you links to my GitHub and
Twitter profiles. And also, I sometimes write articles on the
Evil Martians blog, mostly GraphQL- and database-related stuff.

Speaker 1 (48:13):
Cool. All right, folks, we'll wrap this one up, and
until next time, Max out!