
May 2, 2025 102 mins
In this episode of Ruby Rogues, Charles Max Wood and Ayush Newatia welcome back guest Stephen Margheim to dive deeper into the evolving world of SQLite. Stephen explains that with Rails 8, SQLite has reached a major milestone: it now supports a fully production-ready, server-driven web application experience with no compromises. He walks us through the big improvements, like better transaction handling and SQLite’s integration with Rails, which now supports background jobs, caching, and WebSocket messaging—all powered by SQLite without additional configuration. These enhancements mean that deploying a Rails app backed entirely by SQLite is not only possible—it’s efficient, stable, and simple.

Become a supporter of this podcast: https://www.spreaker.com/podcast/ruby-rogues--6102073/support.

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Hey, folks, welcome back to another episode of JavaScript Jabber.
I am... let me try that again. I am so sorry.
Anyway, hey folks, welcome
back to another episode of the Ruby Rogues podcast. This week,

(00:26):
on our panel, we have Ayush Newatia.

Speaker 2 (00:29):
Hello.

Speaker 1 (00:29):
Hello, I'm Charles Max Wood from Top End Devs, RubyGeniuses.com.
We have a special guest this week, and that is Stephen Margheim. Stephen,
welcome back.

Speaker 3 (00:41):
Yes, thank you for having me again.

Speaker 1 (00:43):
Yeah, we had you on, I need to go look up
exactly when, but we had you on and we talked about SQLite
last time, and I think it was mostly a
"can you really do all this stuff in SQLite?"
And the answer was pretty much yeah. So yeah. Is
there anything else that you want to bring up as
far as an introduction before we dive in and start

(01:06):
talking about even more SQLite awesomeness?

Speaker 3 (01:11):
No, not much. I'm happy to be back here, just
braving the cold, dark Berlin winters. Happy to have
the opportunity to chat with some other Ruby and Rails
enthusiasts, and hopefully maybe by the end of this talk
convert you into SQLite enthusiasts as well. Let's see. Awesome.

Speaker 1 (01:36):
So yeah, let me get the link into the comments
for the videos. But yeah, we talked quite a bit
about SQLite last time. Are there any new developments
as things go along with SQLite, or...

Speaker 3 (01:55):
Yeah, so there's been one major development. So
if I were to summarize the conversation last time, it
would be: can we really do this with SQLite in
a web application built with Rails? And the answer is yes,
but you need to finesse it. And the big change

(02:18):
which is really exciting is that the release of Rails
8 marks the first time that Rails, and as far
as I'm aware, any major web application framework, has provided
a fully production-ready experience for a server-driven web

(02:43):
application backed entirely by SQLite. You can drive your cache
through SQLite, your background jobs through SQLite, your WebSocket
messages, and obviously your primary data, and you don't have to
fiddle with any configuration. You don't have to futz about

(03:04):
with any setup or installation. You get a no-compromises,
high-performance but incredibly lean, mean, and stable application when
you run rails new. Like, you literally can run rails
new and kamal deploy, and you won't have any
business logic, so it might not be very useful. But

(03:26):
the bones of that application are as sturdy and as
simple as they have ever been in the two-decade
history of Rails.
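A rough sketch of the no-configuration path Stephen is describing here; the app name is hypothetical, and kamal deploy assumes the generated config/deploy.yml already points at your server:

```bash
# Rails 8 defaults to SQLite plus the Solid gems (cache, queue, cable)
rails new myapp
cd myapp

# First deploy provisions Docker on the server and boots the app;
# later deploys are just `kamal deploy`
kamal setup
kamal deploy
```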

Speaker 1 (03:35):
That's awesome. I have to admit that a lot of
the stuff that's in Rails 8 that just kind of
comes baked in, I am so excited about. On Friday,
I was, how do I put it? I did some
therapeutic coding in anger, and I thought, okay, I'm just

(03:58):
going to pull together this app. I mean, admittedly I
used PostgreSQL, but I literally, yeah, I did the
exact steps you were talking about. Other than that, and yeah,
I had to deploy an accessory on Kamal for the database,

(04:18):
but then I only had one other accessory to deploy,
and that was my Rails app. I mean, that was it,
and it's so refreshing to be able to do that.
And then I was looking at search and I was thinking,
so I have to do Elasticsearch, and it turns
out that I am figuring out how to do the
vector search in Postgres. So even that's awesome. And I
think there's a vector extension for SQLite, isn't there?

(04:40):
There is. I mean, this is the beautiful thing, right?
And so if you want to deploy it and make it run and all of that stuff,
you definitely can. I have a question just to throw
out there. You said you can just do the
kamal deploy. Do you set up an accessory for SQLite
or do you just store it on a volume

(05:02):
so that when you kill and restart your Docker container
it just connects to the same SQLite file?

Speaker 3 (05:08):
The latter.

Speaker 1 (05:11):
That's awesome, so then you only need one container, Yes, exactly.

Speaker 2 (05:18):
That's the coolest thing about the whole SQLite thing: it's
literally just a file. You're literally reading a file. I
love that.

Speaker 3 (05:27):
Yep.
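For context, a minimal sketch of the volume approach being described: in Kamal's config/deploy.yml you mount a named Docker volume over the Rails storage directory so the SQLite files survive container restarts. The service, image, server, and volume names here are illustrative; check the deploy.yml that rails new generates.

```yaml
# config/deploy.yml (excerpt)
service: myapp
image: myuser/myapp

servers:
  web:
    - 192.168.0.1

# The SQLite database files live under /rails/storage inside the container;
# mounting a named volume there means kill/restart reconnects to the same files.
volumes:
  - "myapp_storage:/rails/storage"
```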

Speaker 2 (05:29):
Just a quick question about the changes from Rails
7 to 8 which kind of made SQLite
production ready. One thing that I know is something called
a busy handler that was, I think, baked into the
connection now, and there's a couple of other bits about that
I remember reading in a long blog post from you. Could
you just summarize what are the changes from Rails

(05:50):
7.2 to 8 that now make SQLite production
ready that weren't the case earlier?

Speaker 3 (05:55):
Yeah, so there are two foundational changes that take Rails
from "you can sort of use it, but you
need to work around some pain points" to "it works
out of the box." So the first one concerns how
SQLite's transaction management system works in the context of a

(06:21):
multi-threaded web application. So I'm going to try to
do this as succinctly as possible. The default way
that SQLite works with transactions, it calls working with deferred transactions.
So to give one piece of context, SQLite only allows

(06:42):
one writer to be doing a write operation at a time,
so it has linear writes. Now, you can open up
multiple connections to the database, and you can have connections
reading in parallel, like that's totally fine. You can have
five connections and they're all doing reads and they can
just do that at the same time. But if you

(07:03):
have five connections open and they all try to write,
they have to line up and go one at a time.
And when you use a transaction, the default behavior, so
if you just write begin transaction, some SQL commands, and

(07:25):
then commit, SQLite doesn't attempt to acquire the write lock.
This is the way in which it figures out which
connection is able to write, like there's a lock, and
each connection has to pass it around, so it doesn't
attempt to acquire the write lock until it sees a
write operation inside of your transaction. So if you do

(07:48):
begin transaction, select star, select star, update set from select,
at that third operation it will say, okay, let me
try to acquire the write lock. And the problem that
you hit, and this is where anybody who tried to
use SQLite in a Rails application before Rails 8 saw

(08:10):
this error in their logs: database is locked. You know,
you got that error all the time, and that error
was happening so often because when you begin the transaction,
that is when SQLite takes a snapshot of your database
to use for that transaction, right, to keep the isolation guarantee.

(08:35):
And so when you have a write operation inside of
that transaction, in order to ensure that isolation guarantee, right,
for the ACID guarantees of this relational database, SQLite does
not allow that attempt to acquire the write lock to retry.

(08:57):
So you get one shot. If the write lock is
being held by another connection, you immediately error. And so
when you have a multi-threaded web app and you have
many writes, there's no retry mechanism. It's like either
the lock is free and you can
grab it, or it isn't and you error. And so

(09:20):
the first change was to ensure that when Rails generates
transactions for SQLite, it explicitly makes them what SQLite calls immediate transactions.
And all that that means is that you attempt to
acquire the write lock at the begin transaction statement, not
at the update statement. And the begin transaction statement is

(09:41):
when you capture the snapshot, and so if the
lock is being held, then we retry the begin transaction.
So we retry taking a snapshot, and it's not
until we can acquire the write lock for that particular
connection that we begin the transaction. And then we have
the write lock for the entirety of the transaction, everything

(10:01):
will be fine and we pass it along. So that
immediately solved the major problem of just getting a
ton of database is locked errors, by allowing our connection pool
to deal with transactions. The second problem crept up thereafter,

(10:22):
which is, okay, now you are retrying, so retries
are happening at the SQLite level, and they're happening often
in our multi-threaded Rails application with decent write
throughput. But how does SQLite retry stuff? It
uses a very simple mechanism. Connection A has the write

(10:44):
lock, connection B attempts to acquire it and can't. SQLite says, okay,
just sleep for one millisecond and then try again. So
it sleeps one millisecond, tries again. It's still being held
by A. So B is told sleep for two milliseconds
and try again. And it has a kind of polynomial
backoff.
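A minimal sketch of the deferred versus immediate distinction Stephen is describing, using the sqlite3 gem directly; the posts table is hypothetical, and Rails 8 issues the immediate form for you.

```ruby
require "sqlite3"

db = SQLite3::Database.new("app.sqlite3")

# Deferred (SQLite's default): the write lock is only requested when the first
# write statement runs, and if another connection holds the lock at that
# moment, SQLite raises "database is locked" rather than retrying.
db.transaction(:deferred) do
  db.execute("SELECT * FROM posts")
  db.execute("UPDATE posts SET title = 'hi' WHERE id = 1") # lock acquired here
end

# Immediate (what Rails 8 generates for SQLite): the write lock is requested at
# BEGIN, which is also when the snapshot is taken, so the busy handler can
# retry the BEGIN until the lock is free.
db.transaction(:immediate) do
  db.execute("UPDATE posts SET title = 'hi' WHERE id = 1")
end
```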

Speaker 1 (11:04):
It sounds like my kids. Dad, Dad, Dad.

Speaker 2 (11:09):
Are we there yet? Are we there yet? Are we
there yet?

Speaker 1 (11:12):
And then they sit there and listen for a second. No,
he's in there. Dad! Right, that's the backoff exactly.

Speaker 3 (11:20):
And this is simple and it works incredibly well, right?
Like, you can serialize concurrent operations without the application
needing to do any custom logic. That's great, right? The
problem, that's when you eventually go, "what?" Yeah, exactly. The
problem came from how Ruby and SQLite work together. Because

(11:44):
the magic and the beauty of SQLite is that it
is an embedded database. It's not a client-server database,
and this means that it runs inside of your application process.
You don't spin up a SQLite process that's running next
to your Rails server process. It is code that's executed

(12:05):
inside of that process. And what that meant for Rails,
or Ruby more generally, is that, by default, the way
that Ruby is integrating with SQLite, the Ruby interpreter did
not see the SQLite connection sleeping as IO that it

(12:27):
can release the GVL on. So the GVL gets retained
as SQLite retries. And so what that meant was if
you spin up ten Puma workers and one Puma worker
acquires the write lock and it's doing a little bit
of longer running work, another Puma worker could and should

(12:50):
theoretically be able to do Ruby work while SQLite is
doing SQLite work, but they weren't able to coordinate correctly.
So this is the busy handler thing. So what we did, I'm...

Speaker 1 (13:04):
Just going to back up for a second, because you
mentioned the GVL, yeah, which is something that we've talked
about at length on this show before, but not for
quite a while. So the GVL is the Global VM Lock,
and it says essentially, hey, I've got this thread that's trying
to do a thing. And what Ruby does is, when

(13:25):
it's doing IO, right, so if it's sending off to
the network and waiting for a response or talking to
the database and waiting for a response, what it does
is it releases the GVL and lets something else, another
thread, pick things up and work. And so what Stephen's
talking about is essentially Ruby didn't detect this as IO,

(13:49):
is what he said, which meant that it wouldn't release
the GVL. In other words, it's going to sit there
and it just goes, I'm still waiting, instead of, you know,
letting another thread pick up work and go do work.
Because if it's not doing IO work that it has to
wait for, then it can do other things, right? It
can calculate stuff or, you know, sort your arrays or whatever,

(14:12):
right, because that all happens in the VM. And so yeah,
so what Stephen's talking about is you would have things
essentially just hang there until they got an answer.

Speaker 3 (14:22):
Yeah, and this is particularly important for typical Rails applications,
because your typical Rails application is using Puma. Puma's going
to spin up a number of worker threads. Let's just
say it's ten. And the way that it is expected
to work is that you are increasing your throughput by

(14:42):
having highly saturated concurrent work. So one Ruby thread can
take an incoming HTTP request and do Ruby work to,
like, parse the stream, get the Rails request object, and then it makes a
call to the database. It's like, okay, now that worker
is going to wait for the database. So the next

(15:03):
worker over can start and pick up the next request,
and so your CPU can be doing work constantly. We
can really saturate our resources, take advantage of all of
our cores, and do high-throughput work by really
leaning into concurrency. That's essential to how Puma is performant.

(15:24):
And you could imagine that if talking to your database
did not give the opportunity to your other Puma workers
to do the other parts of the web request processing,
you have ten workers, but effectively you have one. Like,
you have removed the advantages of the concurrency. And so

(15:47):
what we needed to do was, instead of using SQLite's
built-in mechanism for retrying, they expose a
lower-level API to write your own, and that
allows us to write our own in Ruby. And so

(16:07):
instead of calling the OS-level sleep, we're calling Ruby sleep,
and Ruby sleep does properly communicate to the interpreter, hey,
I am just sleeping, that is IO, you can release
the GVL. And the third related thing, just to finish
off the triad: when we were doing that rewrite
and writing it in Ruby, I was doing benchmarking and

(16:30):
I saw that SQLite's implementation had a sort of subtle,
nasty side effect that I call punishing older queries or
older connections, because it was a polynomial backoff
that's just telling you, don't ask
me again for another five milliseconds. If you had high

(16:53):
write throughput, once a connection needed to retry two or
three times, so now it's waiting five milliseconds and then
ten milliseconds, new connections have a much higher likelihood of
acquiring the lock, because they will retry more immediately, at

(17:14):
one millisecond and then two, so they get to try
four times before my older connection even checks in again.
And so if I have a steady stream of new connections,
that old connection will go from the fourth generation to
the fifth to the sixth, and you'll always have
one which goes until it errors out. And so your
P99 will have these big spikes,

(17:37):
because you'll have one connection that just never acquires the lock.
So we also rewrote the logic to just have every
connection check in every millisecond, so there's no
penalty for being quote unquote older, having more retries, as
compared to younger, having fewer retries. And this flattened out

(17:57):
the latency that we saw at P99
and P95. So those three fixes are
the major changes in the Rails 8 release that make
Rails production ready out of the box. You're not going
to get these database locked errors, you're not going to
see these latency spikes in your P95 or

(18:20):
your P99 statistics. You're going to get a
high-performance, stable experience by running rails new.
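Here is a rough sketch, in plain Ruby against the sqlite3 gem, of the idea behind those last two fixes; this is not Rails' exact code, and the timeout value is hypothetical. The point is to retry from Ruby so Kernel#sleep can release the GVL, and to check back in at a flat one-millisecond interval so older connections aren't punished.

```ruby
require "sqlite3"

db = SQLite3::Database.new("app.sqlite3")

timeout_seconds = 5.0 # hypothetical overall deadline for acquiring the lock

db.busy_handler do |retry_count|
  # Give up (SQLite raises SQLite3::BusyException) once we've waited too long.
  next false if retry_count * 0.001 > timeout_seconds

  # Kernel#sleep is visible to the Ruby VM as a blocking call, so the GVL is
  # released and other Puma threads can keep doing Ruby work while we wait.
  sleep(0.001)

  # A flat 1ms check-in for every connection: no polynomial backoff, so an
  # older connection is never starved by newer ones retrying more often.
  true
end
```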

Speaker 1 (18:31):
Awesome. So I have another question. I seem to remember
seeing somewhere that you were fairly involved in a lot
of this. So when SQLite exposed the lower-level thing,
what was that for Rails? Like, did you or David

(18:51):
or somebody else go and actually talk to him and say, hey,
we have this problem, can you let us do this
other thing? Or was that something that was already there?

Speaker 3 (19:00):
That was something that was already there. This is one
of the things I have come to really appreciate about SQLite.
You know, it's thirty years old at this point, and
it's used everywhere, and so it really is today

(19:20):
very feature-rich and very powerful. The big difference that
trips people up, because it's quite uncommon in our modern
software culture, is that SQLite cares far more about backwards
compatibility than it does about enabling newer, better defaults and features.

(19:44):
And so what that means is that if you don't
do anything, if you don't do any configuration or any tuning,
when you are using SQLite today, you are effectively using
SQLite as it existed in 2004, which is
when version 3 was released. And so these myths and

(20:07):
sort of fears that have grown up in the web
development community, like SQLite is not a production-ready database,
have a root in reality, because in 2006
SQLite was not a good database for web applications. It
was squarely built for embedded devices and that context. Over

(20:27):
the last two decades, a lot of features and a
lot of work has been done to make it a
far more flexible and powerful database engine, but literally none
of those features are enabled by default. You have to
manually configure SQLite to run properly in a web application.

(20:50):
This is the other big thing that Rails does. We
started doing this in Rails 7.2. It has
a set of six or eight configurations that it applies
to every connection you make, so that it ensures that
SQLite is running in a web-application-friendly mode. So

(21:12):
this is just one of many features that SQLite has
had sort of added over the last two decades that
you can take advantage of to make SQLite run really
well in the context of a multi-threaded web application.
But you have to do the work to figure out
how you need SQLite to behave for your context.

Speaker 1 (21:37):
It's interesting that.

Speaker 2 (21:38):
Can you summarize the connection parameters? Or is that going down a
deep rabbit hole, the configuration stuff?

Speaker 3 (21:45):
I can summarize the key points. So the first, most
important one is that the original implementation of SQLite's journal mode,
so the way in which it deals with stashing data
to keep durability guarantees, was limited in that it required

(22:10):
any operation to be linear, so reads and writes both
contended with each other, so you could only do one
thing at a time. So you could open multiple connections,
but it's kind of useless. You were single-threaded, quote
unquote, from the point of view of SQLite. Over the years,
they added a new journaling mode, write-ahead logging, which is

(22:32):
the same sort of algorithm that Postgres uses, which allows
for concurrent reads. So that's like major change number one.
You need to be in write-ahead logging mode. When
you use write-ahead logging mode, you have a much
naturally safer environment, and so if you don't care about

(22:53):
like the absolute strictest durability constraints, you can flush to
disk less often and get much faster performance. So the other default,
like by default SQLite flushes to disk after every write,
so you're doing the file system calls on every write,

(23:14):
which adds a lot of overhead. So Rails also defaults
to flushing to disk less often, which is encouraged when
you're using write-ahead logging, so you get better performance.
It also expands the default size of the in-memory
sort of space that SQLite uses. This is another

(23:36):
misconception people have, right? Like, every database is working with
data on disk, like that's how we get durability for
the ACID guarantee. But no database exclusively uses data on disk.
Every database generates a pool of memory. It's like, here's
a memory space I'm going to use, and they keep

(23:57):
all of the hot data in that pool of memory
such that they can just read and write directly in memory.
SQLite does the same, and you can control how big
that space is. And its default from, you know, ten
years ago is paltry compared to the memory that modern
machines have, so we bumped that up. So for a

(24:20):
lot of databases, you can actually run your entire
database effectively in memory. There's literally no
way any database can beat the speed of SQLite in
that context. That's actually what the Wafris guys found. They
ported their client from using Redis to SQLite. He messaged
me at one point. He's like, hey, I think I'm

(24:40):
doing something wrong because I ran these benchmarks and it's
actually faster with SQLite, and we switched to SQLite because
we need a richer query language and we just knew
we were going to pay for that with slower speeds, but
somehow it's faster. What am I doing wrong in my benchmarks?
And we dug into it. It's like, no, your whole
data fits in memory, so SQLite can be more expressive

(25:06):
and faster than Redis in this case. And then
we do a few other small little bits and bobs.
I have a whole blog post on my
blog that breaks down all of the different settings with
links to the documentation and explanations. Just the last one

(25:27):
I'll flag here, Rails has been doing this forever,
though you have to do it manually: by default, SQLite does not
enforce foreign keys. Originally, when it
was built, foreign key constraints were just, you know,
just sugar. The engine didn't do anything with them. They
added that feature later. For backwards compatibility it's not enabled

(25:49):
by default, so you have to explicitly say, SQLite,
please enforce my foreign keys.
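To make that concrete, here is a hedged sketch of the kind of per-connection tuning being described, issued as raw pragmas through the sqlite3 gem; the values are illustrative, not Rails' exact defaults.

```ruby
require "sqlite3"

db = SQLite3::Database.new("app.sqlite3")

db.execute("PRAGMA journal_mode = WAL")    # write-ahead logging: readers no longer block on the writer
db.execute("PRAGMA synchronous = NORMAL")  # flush to disk less aggressively; the safe pairing with WAL
db.execute("PRAGMA cache_size = -64000")   # ~64 MB in-memory page cache (negative value means KiB)
db.execute("PRAGMA foreign_keys = ON")     # enforce foreign key constraints (off by default)
db.execute("PRAGMA mmap_size = 134217728") # optionally memory-map up to 128 MB of the database file
```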

Speaker 2 (25:56):
Interesting. What's the title of the blog post that you
referred to?

Speaker 3 (26:01):
Let me look up the exact title. So my blog
is at fractaledmind.github.io, and I have a
little search up at the top. So I'm going to
search for pragmas, which is the name. I think it's
called Fine-tuning Your Database.

Speaker 2 (26:22):
Cool, I can see it now. I shall dive into
that later.

Speaker 1 (26:29):
Yeah, I need to.

Speaker 3 (26:31):
I want to.

Speaker 1 (26:32):
I just want to capture it too so that we
can put it into the show notes.

Speaker 2 (26:35):
And so let's move on to something a bit more Rails
specific: the Solid trifecta, which I am quite excited
about, because it's like more stuff baked into Rails rather
than reaching for third-party dependencies. It was obviously kind

(26:57):
of conceptualized with SQLite in mind, which is awesome, but
there's still quite a lot of Postgres out there in
the world. So my question is, when
you think about web applications, how differently do you have to
think when you're using SQLite versus if you're using

(27:17):
the Solid trifecta with Postgres? Are there any things you
really need to be aware of? Because instantly things like
Solid Cable, where you're kind of polling for messages, it's
making my spidey sense tingle that this is less efficient
than Redis. But is that a misconception when it comes
to Postgres? Because in SQLite, I know you can
poll that much and it'll be fine.

Speaker 3 (27:40):
Yeah. So there are differences. The short answer is yes,
like I think that your spidey sense there
is onto something, right? So let's start with Action
Cable. So Rails has had a Postgres-specific Action
Cable adapter for a while now, and that adapter uses

(28:02):
Postgres's built-in LISTEN/NOTIFY functionality, which is faster,
it's more performant, it will scale more naturally as your
throughput increases, but it does come with a trade-off
of a maximum message size of, I think,

(28:25):
eight kilobytes. There's a gem that expands that max size.
And if I were using Postgres and I wanted to
keep everything simple, like not bring in Redis just for

(28:45):
that use, I would one hundred percent use the Action
Cable Postgres adapter and not Solid Cable; it is better
tuned to that architecture. Now, if I'm all in on
MySQL already and I don't want to bring in
the Redis dependency just yet, and I know that I

(29:06):
don't have a ton of WebSocket traffic, right, like I'm
just doing a few basic things, pushing down
alerts when background jobs finish, and I've only
got a handful of background stuff, it's going to be
totally fine. But you should go into it being aware
that you are going to have some different pain points,

(29:27):
the biggest one being that, of course, when you're dealing
with a client-server architecture, your primary bottleneck is the
network latency. Every database is fast, really fast. Like,
databases are really magical pieces of technology.
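As a rough illustration of the two options being weighed here, a config/cable.yml might look like the following sketch; the solid_cable keys are the ones I recall from the Rails 8 defaults and may differ slightly in your version.

```yaml
# config/cable.yml (sketch)
production:
  # Option A: Postgres-native LISTEN/NOTIFY, no polling, ~8 KB message cap
  adapter: postgresql

  # Option B: Solid Cable polling a dedicated database (the SQLite default)
  # adapter: solid_cable
  # connects_to:
  #   database:
  #     writing: cable
  # polling_interval: 0.1.seconds
  # message_retention: 1.day
```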

Speaker 1 (29:48):
I want to just interject. So not 2024,
but 2023, when David, or DHH, gave his
keynote talk, he talked a lot about why databases
are faster now and how the disks have gotten faster.
And you know, you were talking about memory capacity. That's
another thing that's expanded, and you've alluded to that. So

(30:09):
I love all of the allusions to you know, modern
computing and just what the capabilities are. But anyway, go ahead.

Speaker 3 (30:17):
Yeah. And so you are penalized in a client-server
architecture for making a large number of small queries. And
this is one of the biggest mindset shifts you need
to make if you want to embrace SQLite, because it's
really deep in Rails developers' limbic system, right? I

(30:38):
want to prefer a small number of large queries. But
when you're working with SQLite, you actually want to flip that.
You are going to get a better performance profile by
having a large number of small queries. And so that's

(30:59):
like one consideration. And so if we think about something
like Action Cable, the nature of Action Cable is
you can't aggregate those queries, right? Like if you're
just doing the polling, you're just making that GET, well,
I'm using an HTTP verb there, but you know, we're
making that read. Yeah, and you have to eat the

(31:22):
network latency. You just have to. So for something like
Solid Cable, it's really well tuned to SQLite. The
way that SQLite is built, and the fact that we
isolate it in a separate database, means you don't even
have to worry about concurrency. Of course, if Basecamp

(31:43):
were to use this, they would experience very real
pain, because now polling for Action Cable messages is contending
with serving all of their other application database needs, right?
So if they just used a single big beefy
MySQL instance and they're doing all of their
application reading and writing as well as this WebSocket

(32:05):
polling, now their connection pool, the resources of that
instance, are all being used up for both of
these use cases. And so you do need to be
more thoughtful, especially if you're unsure about how your usage

(32:25):
might scale over time, if you're using something like Postgres
or MySQL, especially around Action Cable. For something like
the cache, I actually think that there are real advantages
to using a database-backed cache, regardless of the database
engine you are using. At the scale of Basecamp,

(32:46):
they, through experience, and they talk about this in their
Rails World talks from last year introducing Solid Cache, came
to see, yeah, that contention is too high. So they
split out a separate MySQL database for their cache,
and that I think is going to be the case at

(33:10):
a really decent size for your cache. But
the point that they make, which is, when
you just massively expand the size of your cache, I can
just keep way more things in the cache, and I
can keep them in the cache for longer, then your
cache becomes more useful because you just have more hits. Naturally,

(33:31):
there's just more things to hit against, and that's
what the database offers you. And because of modern computing,
because the file systems are faster, the RAM is faster,
the networks are faster, the difference that you have between
even some of your cache hits, like needing to go
to disk to fetch the data to send it back,

(33:54):
is much smaller than it used to be. So I
would recommend looking at Solid Cache for any application, regardless
of the database engine you're using. For Solid Queue, if you
are all in on Postgres, this is another case where, honestly,
GoodJob is probably a better fit; it's explicitly

(34:18):
tuned to Postgres and using its features. The really nice
thing that makes Solid Queue quite competitive is just that
it has more useful features out of the box, like
recurring jobs. And they have
done a ton of work to make it super performant
on Postgres and MySQL. Obviously, Basecamp and 37signals

(34:41):
use MySQL, and they are using Solid
Queue in production doing tens of millions of jobs,
so it's totally production ready at really large scales as well.
So I would also feel very comfortable using Solid Queue
with MySQL or Postgres at larger use cases. But

(35:03):
in general, all three of these gems lean on
polling, and SQLite is just the best sort of perfect
fit if you want to use a database engine and
you just want to poll all the time, because those
reads are effectively free. I mean, they're so fast and
there's really no contention or penalty. So it's like, yeah,

(35:29):
just poll all the time, you basically have real time.
There's not a ton of complexity and there's not really
a ton of trade-offs.

Speaker 1 (35:41):
So one thing that I'm wondering then is, because I
just switched over to Solid Queue, and it has a
plugin where you fire up Puma and it runs all your
workers. So everything just runs on the same
server, essentially, right? You just tell it run the server
and it runs the server and the workers and it does everything.

(36:03):
So what you're telling me is, if I'm still
really in love with PostgreSQL, I can do that for
all of my, I guess, primary-level data, and then
for the queue and the cache and the cable stuff,
I can still use SQLite and just avoid all
of that network latency on these tiny polling queries. Is

(36:24):
that what I'm hearing?

Speaker 3 (36:25):
Yeah. So if you just run rails new with defaults,
you're going to get SQLite everything. You're going to get
all the Solid gems. And the way that it will
be structured is that Rails will create four different database files.
It'll create, we'll just talk about development, a development
dot sqlite3, a cache, a queue, and a cable

(36:47):
dot sqlite3.
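Roughly, the generated config/database.yml for production looks like this sketch, with one file per service so writes to the queue, cache, and cable databases never contend with the primary; the exact paths and keys may differ from what your rails new produces.

```yaml
# config/database.yml (sketch of the Rails 8 SQLite layout)
default: &default
  adapter: sqlite3
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
  timeout: 5000

production:
  primary:
    <<: *default
    database: storage/production.sqlite3
  cache:
    <<: *default
    database: storage/production_cache.sqlite3
    migrations_paths: db/cache_migrate
  queue:
    <<: *default
    database: storage/production_queue.sqlite3
    migrations_paths: db/queue_migrate
  cable:
    <<: *default
    database: storage/production_cable.sqlite3
    migrations_paths: db/cable_migrate
```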

Speaker 1 (36:48):
So yeah, four different files. I find that I was using
Postgres and it still created the other three with SQLite,
I believe.

Speaker 3 (36:57):
Yeah, that might be true. I honestly haven't spent a
ton of time initializing Rails applications with Postgres,
so I don't know those details as well. But what that means is
that, so, SQLite has this constraint of linear writes.

(37:18):
That constraint is at the file level. So you can
write to your primary database, add a job to the queue,
add a message to the backlog for cable messages, and
add a cache entry in parallel. You can do all
four of those operations at exactly the same time, so

(37:40):
there's no contention across the different services. And all three
of those additional services will run directly inside of Puma,
even the ones that need a separate process, like Solid Queue
does, and so you don't have to set anything up and

(38:00):
you don't need anything more than that one single machine.
So I build and deploy my applications on Hatchbox
on a single DigitalOcean droplet. I
had the big first app that I really went all
in on SQLite with, and it led to a lot of
these additions to Rails. It's still running in production and

(38:22):
it is on a twenty-dollars-a-month droplet, and
that app has made multiple millions of dollars in revenue
in the last four years. It's really kind
of beautiful how cheap you can run these applications like this.

(38:42):
Like, this is a targeted B2B application that
just isn't ever going to have more than a thousand
concurrent users. And so I just don't need a
huge box. I don't need to run a whole bunch
of things. I was a single dev. I
built it and maintained it for years. It was super straightforward, lean,

(39:07):
easy to operate, easy to deploy, easy to evolve, super
fast. For those kinds of applications, I think
particularly B2B applications are a great fit.
You typically have a
sort of geographic boundary, right? Like, you're building a business

(39:28):
in the UK, you're mostly going to have UK customers.
You're building a business in Europe, you'll mostly have European customers.
You're building a business in the US, you'll mostly have
US customers. If you're totally virally successful, it means
maybe you have ten thousand customers, right? It's just
totally different than trying to build the next TikTok. SQLite
and vertical scaling are such high-leverage decisions for

(39:52):
those kinds of businesses. You can operate your business so
efficiently and add so many high-power features to your
application so quickly and so cheaply. That's where I get
particularly passionate, just wanting more people to appreciate that. Yeah, objectively,

(40:14):
Postgres is a great database. I have nothing negative to
say about it, but the idea that it is the
only viable option for a database in 2025
in production is objectively laughable. SQLite, MySQL, and
Postgres are all really great databases that there are very
legitimate use cases for in 2025.

Speaker 2 (40:38):
Yeah, that's quite a pitch, and I'm not surprised to
hear that from you, given how passionately you've been
waving this flag for the last couple of years.
And I'm giving SQLite a go on a side
project I've just started, so I'm looking forward to diving into that.

(41:00):
How do you feel about the architecture that Chuck briefly
mentioned, where the primary database is Postgres, but you're using
SQLite for jobs and caching and Solid Cable or whatever?
How do you feel about that kind of architecture?

Speaker 3 (41:17):
I think that it makes sense and it can work,
especially if you have an existing application and you want
to start exploring some of these things. I think that
Solid Cache driven by SQLite is a really great place
to start. Like, take an existing application, leave your Postgres on Heroku,

(41:38):
well, not on Heroku. SQLite doesn't work on Heroku, so
not on Heroku. If you're using Heroku, just hug your Redis and
Postgres and you'll be fine. But in a context where
you're not using Heroku, yeah, just spin up a
cache database with SQLite. It can get really huge and not

(42:01):
have any problems. There's zero need or reason to
go all in on one immediately. It's totally fine to
mix and match. For me, one of the primary
benefits of SQLite is the opportunity to have a maximally

(42:26):
operationally simple application. So for me, I start all new
applications going all in on SQLite, because I don't
want to run two machines. I don't want to
figure out how to secure and network two machines. I
don't want to, okay, I'll just do one machine. And
it's like, let me make sure I'm running the two processes.
Let me make sure I'm doing that resiliently. What

(42:47):
happens if my Postgres process goes down? How do I
have my app process handle that gracefully? I need
to have another process running that watches the Postgres process
to bring it back up when it goes down. It
just gets a little bit out of hand
rather quickly. But if you have an existing application, this

(43:09):
is not in any way an all or nothing. I
think cache is a great place to start. Cable can
also be a good place to start, though if you're
already on Postgres, the Action Cable Postgres adapter, I would
start there. You still don't have to reach for Redis.
But yeah, mixing and matching is one of the core

(43:33):
philosophies of Rails, right? This is omakase at
its best. So I do embrace it and encourage it,
particularly for migrations. But I will beat the dead horse
once again: if you're starting something, if you're running rails new,
just give it one month. Give it one month.
Just try it for one month. That's all I ask.

(43:55):
And if you say, you know what, I just really,
really love Postgres, awesome, go use Postgres. But if more devs
just tried it, I think we would see a real change
in the makeup of Rails applications in production, because there
are very real benefits for bootstrapped businesses to lean on

(44:20):
this technology.

Speaker 1 (44:23):
So I got a couple of questions here. One is,
you've talked about some of the optimizations that you
can use in SQLite, you know, whether you're enforcing foreign
keys or, you know, some of the other pragmas that
you can turn on to kind of tune your database.
And it sounded like Rails does some of this for you. You

(44:44):
can correct me if I'm wrong there, but I guess
that's my question: do I effectively get all the
goodies for free, or do I have to go in
and set up SQLite and/or Rails in a particular
way to get any, you know, to optimize for whatever
I'm doing with it?

Speaker 3 (45:05):
You get all of the essential goodies for free for
having a production-ready experience on day one. All of
that is by default. You don't have to touch anything. But
like with most of Rails, you have levers available to you.
So, for example, there are dozens of SQLite pragmas.

(45:30):
You can configure all kinds of things about how SQLite
runs. The database YAML file accepts a pragmas key and
you pass a nested hash, and you can just set
whatever pragmas you want in your database.yml. So that's
built into Rails now. So you can add configuration, you

(45:51):
can overwrite configuration. If you're like, hey, I'm
looking at my VPS, I know the services I'm running
on it, I can commit way more memory to my
SQLite database. I know I have four workers, I know I'm
going to do five connections each, I know I've got
this much RAM, I know all the other services.
Let's give every single connection one gig. I'm on

(46:12):
a big beefy server, let's give them all one gig,
override that with huge memory allowances for every connection. You
can do that. But there is not a single required
action that you need to take. You can literally run
rails new and put it on a production server and

(46:37):
everything will be good. So just write your
business logic. But you can do extra things. You
can also pull in extensions. I have a gem to provide
access to the SQLite extension ecosystem and make that easy
to bring into your application. So you can bring in
SQLite extensions like the vector search extension and integrate that

(47:01):
into your application very easily. Doing backups, that's not something
that is a Rails default, but there is a
gem to do that easily. So if you really
take it seriously, like if I was building a business,
there are a few extra steps that I would take.
But that's true of every Rails application, right? Like,
there's the Rails-plus subset of gems that

(47:25):
everyone turns to to really solidify an application for showtime.
But the core fundamentals: with Rails 8, every single essential
detail is packaged into rails new and requires no additional
work to get a production-ready experience.
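For example, the pragmas key he mentions can be used to override or extend the defaults per environment; a hedged sketch, with values that are purely illustrative and sized for a hypothetical beefy single server:

```yaml
# config/database.yml (excerpt)
production:
  primary:
    adapter: sqlite3
    database: storage/production.sqlite3
    pragmas:
      # Give each connection a large page cache (negative value means KiB, so ~1 GB)
      cache_size: -1000000
      # Memory-map a chunk of the database file for faster reads
      mmap_size: 268435456
```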

Speaker 1 (47:51):
Makes sense. The other question I have is, I'm just imagining,
I'm trying to think through some of the apps that
I've either built or am building or things like that.
I mean, one of them, you mentioned that, I guess
I'm worried about workload, right? And I know that we've
talked in the past about, hey, you can handle a pretty
large workload. But let's say I

(48:14):
do hit some kind of limitation, right? I don't know
how likely that is, but let's say that I do
hit some kind of limitation. So I want to shard
my database, or, you know, maybe I want to break
it up by tenant because I'm doing a multi-tenant
database or something like that, or a multi-tenant app.
Like, how hard is it to do that kind of
a thing, and how likely am I to even need

(48:35):
to consider it?

Speaker 3 (48:37):
Yeah, I'm going to start with the second question first.
So the team at 37signals did a lot
of different load testing for the Campfire application, and the
Campfire application uses SQLite for the primary database and
Redis for all of the accessories. This is because they

(48:59):
built it in Rails 7.1 days, when
all the Solid gems didn't exist, and for various reasons.
But I took that code base, swapped out all the
Redis for SQLite, ran my own benchmarks, and
everything was at least as fast, if not faster. In
their benchmarking of a chat application, using the primary chat,

(49:23):
sending messages and getting the responses back, so write-heavy,
not like a typical eighty-twenty read-write split, when
they benchmarked their beefiest server, and in their benchmark,
you can go and find their post to get the
details of that machine, I don't recall off the top
of my head, they supported fifty thousand concurrent chat participants. Wow.

(49:49):
So imagine that's basically like a fifty-fifty read-write
split, and that's fifty thousand. So you go eighty-twenty,
you can get to one hundred thousand concurrent users. I
am one hundred percent confident an application can support one
hundred thousand concurrent users on SQLite. So I just
want to be very clear about where the limit is,

(50:11):
because, like, I know, I understand it. I have the
impulse too. I had the impulse for a long time.
It's like, ah, maybe, just maybe, I'm building the
next Facebook, you know, and then what am I going
to do if I have a billion users? It's like, okay, yes,
but I promise you you aren't. Like, I can mathematically

(50:31):
guarantee you you're not going to win the lottery and
you're not building the next Facebook. And the odds of
your application having one hundred thousand concurrent users are astronomically small.
So that's step one, to really state very clearly:
you aren't going to hit a performance limit. You won't.

(50:52):
Now, that being said, let's imagine you did accidentally build
the next Facebook on top of a SQLite-on-Rails
application. More power to you if you do. Then, well, I
think that sharding is indeed the natural place to go.
And if you will permit me a bit of gentle teasing,

(51:15):
I happen to know that there is a project underway
right now that will make the SQLite sharding story much better.
Right now, it would be a fair bit of manual
futzing about to get it set up right, and at

(51:36):
some point in 2025 this project will be
released and it will be way simpler. So as long
as you don't become the next Facebook, as long as
you don't have more than one hundred thousand concurrent users
in an eighty-twenty read-write CRUD application before the
end of 2025, you have nothing to worry about,

(51:58):
because it will become really straightforward to shard to the
level of giving every individual tenant of your application their
own SQLite database. So the sky really is the limit,
and by the end of 2025, I think
there will basically be no performance problems aside from hitting

(52:24):
the bottleneck of: I have vertically scaled to the biggest
possible machine that exists, and I still have too much traffic.
Like, I'm Facebook going from one billion to two billion
people now, and now I need to horizontally shard my application.
But there are a number of tools in that space.

(52:46):
They're all necessarily younger than tools in the Postgres and
MySQL space, which were built in that
world from day one, but they do exist. They are
getting more and more battle-tested each year, so there is
a story available to you to horizontally shard. Whether that's
using libSQL, which now has a Ruby adapter and

(53:09):
an Active Record adapter, whether that is waiting for the
Limbo project, where the Turso team, you know, they are
basically rewriting SQLite in Rust with async built in from
the ground up, waiting for that to fully come to maturity,
whether it's using LiteFS to get Consul-driven

(53:32):
multi-node cluster management for SQLite databases, there are a
number of tools here in this space, all sort of
maturing at their own pace, with their own communities.
So there is a story to be told for horizontal
sharding as well. So, short summary of the answer: you
aren't going to hit computational limits on SQLite; statistically,

(53:56):
I can say that almost with a guarantee. If you did,
there are paths available to you, whether that is sharding
your database so that you can deal with more write
throughput, and you can get a beefy server and the
beefy server will handle your users just fine. Or you
go supernova and you have a million concurrent users

(54:19):
and you start horizontally sharding your application and you pick
up a tool like LiteFS and bring a
distributed SQLite architecture into the mix. There are tools and
communities and solutions on all of these points that are
present and open source and available to us all. So

(54:40):
there really is not an actual ceiling.

Speaker 2 (54:46):
So one thing where I would still be a little reticent,
and you can tell me if I'm wrong, and it's
not necessarily for my own projects, because I completely get
where you're coming from that I'm not going to be
building the next Facebook. But if you're using SQLite,
then would I be right in saying
that scheduled downtime is something you kind of need to

(55:09):
just accept if you're making big changes? Because one hundred
percent uptime is not a thing, but if you need
to be as close to that as you can
possibly be, then you kind of need to do blue-green
deploys, bringing another server up, making sure it's accepting requests,

(55:30):
and then taking the old one down just to make
sure you don't go down. That kind of system is
not great for SQLite. Would I be right in
saying that?

Speaker 3 (55:43):
A little bit. Not quite in the generic way that
you said, but you're getting to a core point.
So on the generic point, like, I use Hatchbox for
all of my apps. Hatchbox uses the old Capistrano-style deployment,
where you just add another directory to the releases directory

(56:05):
while your Rails server process is running and you just
swap out the symlink, and that one hundred percent works.
Like, I have zero-downtime deployments. I've not, you know,
oh, I'm deploying, I have to bring it down for
five minutes. So that is totally fine. When you embrace

(56:28):
vertical scaling, if your machine dies and you need to recover,
then yeah, you have downtime, because your machine went down
and you need to recover. So it's not so much
around deployment as it is around disaster recovery, where you
will have downtime. But that's much more a factor of
embracing vertical scaling than it is a factor of SQLite.

Speaker 3 (56:49):
Though if you embrace SQLite, I recommend you embrace vertical scaling
as well.

Speaker 1 (56:54):
When you say vertical scaling, you're saying you add memory,
add disk, yeah, to your application server, everything.

Speaker 3 (57:02):
You're getting a bigger box as you need it, yeah, exactly.
But on the deployment front, it is worth being explicit
about the fact that if you have an old, successful
application with SQLite that has some very large tables and

(57:26):
you make certain kinds of schema migrations to those tables,
you're going to have some pain, because if your
migration requires making writes, the migration writes contend with the
application writes. SQLite only allows one write operation at a
time, and the way that we typically write these migrations,

(57:51):
they're very, very greedy, so they will tend to beat
out your application writes, and so your application will suffer
and very likely experience some form of downtime. So there
are migration situations in which the simplest way to avoid
angry pain from customers is to do scheduled downtime.

(58:14):
But that situation is doing writes against a very
large table, and by very large, I mean over
a million rows, let's say, which, of course, is
not crazy to get to.

Speaker 1 (58:30):
But adding an index or calculating a value on a
new column for every row.

Speaker 3 (58:37):
Yeah, so adding an index would be the most
common one, supplying a new default value, backfilling data
to a column that you added, work like that. Yeah, exactly.
So it's a little bit more nuanced. But it is
true that vertical scaling means, if you need five nines

(59:03):
of uptime, A, you don't. I should start there. Like,
if you need five nines of uptime, you have been
sold marketing from AWS. That is a lie. Literally, AWS
is the only service that needs five nines of uptime,
and they lie in order to keep it. So even

(59:23):
AWS doesn't have five nines of uptime. It doesn't exist,
so you don't really need it. But let's say you actually,
truly need three nines of uptime. If that is
essential to your business, I would not recommend a vertical
scaling approach. I would recommend going in the complete opposite
direction and distributing your compute and your persistent data to

(59:49):
the edge, and really recognizing that for you to be
successful in your business, you're going to need to solve
the distributed systems problem. I will make the note, though,
that SQLite is particularly good if you need to push
compute and data to the edge. It's not going to
help you any if you put one big Postgres server

(01:00:11):
in Virginia and you put ten application instances on every continent.
In fact, it's going to kill you, because your customers will make one HTTP
request from their laptop to your service, which is right
next to them, and then your application will make, probably
on average, ten requests to your database server for all

(01:00:33):
the different queries. And it would have been much better
for your customer to make one long network request to
Virginia and then your application be right next to the
database and do ten really short queries than the opposite
way around. But if you're going to the edge, you
have computational constraints, and it's really perfect actually to just

(01:01:02):
have a file on disk. And so just as one example,
we can find the blog post and add it to
the show notes: there was a company that's doing authentication
as a service. They're called Oso, and as a part
of their business success, it's like, A, we need three nines
of uptime, and B, because our customers will be hitting

(01:01:24):
our API for every one of their customers' web requests,
our P95 needs to be ten milliseconds. So
how do we do that? Well, the only way
that we do that is we put ten application instances
around the globe. So we just have to do distributed services.
So then the question becomes, well, how do we do that? Well,

(01:01:45):
obviously, I don't want to put one database in Virginia,
that'll destroy my... there's no way I have a P95
of...

Speaker 1 (01:01:51):
Ten milliseconds. You can't make that round trip that fast, right?

Speaker 3 (01:01:54):
And you're going to make ten of them. So I
have to put my data right next to my application.
Now I'm going to have ten instances of my application.
So what is going to allow me to have all
of the data access patterns I need from
my application but will work in the constrained environment of
the edge instances I'm using? SQLite. It will be great.

(01:02:16):
So then how do I solve the problem of distributed SQLite?
They did a homegrown solution, so they built on
top of Kafka, and so they distribute the SQLite data
and get to eventual consistency across all of their instances
through Kafka. So anyway, all that to say, I really

(01:02:40):
think that the world is moving towards the edges, and
that the sanest thing to do moving forward is to
either fully embrace vertical scaling, like, I'm building a business
in Germany for Germans, I just need
the one server in Frankfurt and it's going to
be great, or embracing the other extreme and saying,

(01:03:02):
I'm building a global business, I'm going to have customers
that are as valuable to me in Japan and in
São Paulo, and so I just need a fully distributed thing.
And I think that SQLite is fairly uniquely positioned,
with the set of trade-offs and value propositions that

(01:03:24):
it makes, to be a really solid choice in those
two extreme scenarios, and Postgres and MySQL are
really well positioned for the middle case, where you're trying
to have as much of both as you can. But
my sense is that the world is moving away from
the middle case, and embracing the edges and the extremes
is going to be likely a better business decision, and

(01:03:46):
so really thinking about the kind of application you're building
and what kind of architecture makes most sense. But SQLite
is kind of weirdly perfect at the extremes. Nice.

Speaker 2 (01:04:01):
I have one last burning question regarding database backups. And
I know, if I remember correctly, the last time you
were on the show, it was before my time, but
I remember watching the video, I think you deleted a
production SQLite database by accident. If I recall correctly,
I think with SQLite it's easier to do that

(01:04:24):
than with any other database system, because it's just a
file and you can quite easily delete it. I know,
I know Litestream is a thing. But let's say you're
me and allergic to dependencies, because I hate putting
stuff in my Gemfile, and I'm like, it's just a file. All I need to

(01:04:46):
do is make copies of this file to do backups.
I'm sure I can write something myself. What's the recommended
way to do SQLite backups? Do you just copy-paste
the file out, or is there some kind of native thing?

Speaker 1 (01:05:00):
To do backups, the fabled utility: cp.

Speaker 3 (01:05:04):
Exactly. So I just I just provided a link here
in our little chat. We can add it to the
showtes as well. But one of the other great sequeled
enthusiasts in the Ruby community old mo As. He goes
by on Gethue. So he has a great blog post

(01:05:26):
on different backup strategies for sequel I, and he he
really covers them very thoroughly, but just to just to
hit you with a few things. So sequel like comes
with three different backup strategies you can call backup, you
can call dump. These are slightly slightly different. They also

(01:05:53):
have like the Seql command vacuum into. So he breaks
down those differences. I won't reatrat of that right now,
but indeed genuinely you can call CP. And one of
the points he makes, which is I will reiterate here,
is that most production Linux servers are running file systems

(01:06:16):
that have copy on right functionality available, and so if
you do CP with the opten ref link always you
get actually like a really really fast, very very space

(01:06:36):
efficient backup of your database. And then of course you
do have something like light stream. So the benefit of
light stream is that you can get really fine grained backup,
so you get point in time recovery, but it does
come at the cost of you have to pay for
the S three compatible bucket stores service that you're going

(01:06:58):
to use. Pay for that storage, you have to run
another process. So if a light stream process to copy
over the data. If you don't need like resolution to
the second level for backups and you just want to
set up a crime job, you absolutely can use one

(01:07:19):
of the other four methods. And his blog post really
breaks down the pros and cons, like which ones are
more space efficient, which ones have faster restore performance, what's
the complexity to use them, what's the level of durability
that you get. So I would just read through that
article and think about like your use cases. But if
you don't want to bring in a dependency, whether at

(01:07:41):
the file system level or at the SQLite level, you have four different options for generating a backup, with different pros and cons around durability, restoration speed, and space efficiency.
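For reference, here is a minimal sketch of what those options can look like from Ruby using the sqlite3 gem. The paths are illustrative, not from the episode, and a plain file copy is only guaranteed consistent when nothing is writing to the database at that moment.

  require "sqlite3"

  src = "storage/production.sqlite3"   # hypothetical database path

  # 1. The online backup API: copies the live database page by page.
  source      = SQLite3::Database.new(src)
  destination = SQLite3::Database.new("backups/production-backup.sqlite3")
  backup = SQLite3::Backup.new(destination, "main", source, "main")
  backup.step(-1) # copy all remaining pages in one go
  backup.finish

  # 2. VACUUM INTO: writes a compacted, consistent copy in a single SQL statement.
  source.execute("VACUUM INTO 'backups/production-vacuum.sqlite3'")

  # 3. A SQL text dump; .dump lives in the sqlite3 CLI, so shell out to it.
  system("sqlite3", src, ".dump", out: "backups/production.sql")

  # 4. A plain file copy; --reflink=always is near-instant on copy-on-write
  #    filesystems such as Btrfs or XFS.
  system("cp", "--reflink=always", src, "backups/production-copy.sqlite3")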

Speaker 1 (01:07:53):
Yeah, for me, the thing that I'm concerned about, and granted, my first job out of college I was working for an online backup company, right, back up all of your pictures of your kids to the cloud kind of thing.

Speaker 3 (01:08:06):
And.

Speaker 1 (01:08:09):
So I'm imagining, you know, if a meteor hits the data center and I want to deploy it somewhere else, then I need that file in the cloud somewhere. I guess I could just have it on my local machine, but anyway, I need it somewhere that's not the data center, and so that's what I'm imagining. But yeah, I mean, this gets me a file that I can push to the cloud. So I guess I'm wondering

(01:08:32):
what those options are. Do you just extract the file at whatever time granularity you like, and then figure out how to get it up into AWS?

Speaker 3 (01:08:41):
Yeah. So if you just wanted to use one of these sort of simpler built-in options, and you want different levels of durability, the basic level you have is, like, let me put it somewhere else on this machine, so it's in a totally different directory. So, the way in which I accidentally deleted my directory: there was a feature

(01:09:03):
in Hatchbox, which has been changed, by the way, where in the web UI you can rename an application. And so I was just futzing about, like, I'm going to standardize all my application names, because I'm OCD (not clinically, I should say, that's a colloquialism). So I just changed the name of the application. And the way in which

(01:09:26):
they pushed that change to the servers was: they ran rm -rf on the application directory and then ran a new deploy. And so the fact that my SQLite directory lived in the shared subdirectory did not help me at all. So I could have a backup in just some other place that's not likely to be accidentally rm -rf'd. But then

(01:09:46):
it's like, well, if my machine goes down, that's not so great, so I want to put it somewhere else. You can use network-attached storage. Now, like, all of these services provide volume storage, so the other thing you can do is just cp it onto the volume storage. So now it's on a different machine, but it's in the same data center. So if the data center goes down, it's not so great. So then you could set up

(01:10:07):
a third layer to pick up those files and put them on some other service. Right, you could just set up Backblaze on those machines, however you wanted to do that. Or, if you're like, well, I really want the durability, then I would embrace the

(01:10:29):
dependency of Litestream, like, it's a really low-overhead dependency, and so now your backups can, A, be fine-grained, but most importantly, B, be in a completely other service, in a completely other data center that has no relationship to your application server, data center, or machine. But generally

(01:10:52):
that's, like, the layers of durability that you have available to you, and, you know, however far you want to go for your use cases, you can work your way out without a ton of effort.
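As a rough illustration of those first couple of layers, the kind of thing being described is just a scheduled copy to another directory or to an attached volume. A sketch, with made-up paths and a made-up schedule:

  # backup_db.rb -- hypothetical script run from cron, e.g. hourly:
  #   0 * * * * cd /var/www/app && ruby backup_db.rb
  require "fileutils"
  require "sqlite3"

  db_path = "storage/production.sqlite3"       # illustrative path
  volume  = "/mnt/backup-volume/app-backups"   # attached-volume mount point, also illustrative

  FileUtils.mkdir_p(volume)
  stamp  = Time.now.strftime("%Y%m%d%H%M%S")
  target = File.join(volume, "production-#{stamp}.sqlite3")

  # VACUUM INTO produces a consistent copy even while the app is writing.
  SQLite3::Database.new(db_path).execute("VACUUM INTO '#{target}'")

  # Arbitrary retention: keep only the 48 most recent copies.
  copies = Dir[File.join(volume, "production-*.sqlite3")].sort
  copies.first(copies.size - 48).each { |f| File.delete(f) } if copies.size > 48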

Speaker 2 (01:11:03):
Cool. We've spoken quite a lot today about, like, you won't need this, like, you don't need three nines of uptime and stuff. So from my point of view, when I was thinking about this for one of my own projects, I was like, all I need is a Hetzner box with an attached volume, and I'll just write backups to the attached volume. It's in the same data center, of course.

(01:11:25):
But for my small scale, I think that's fine. Like, would you agree with that?

Speaker 3 (01:11:32):
Yeah, I mean, so for most things, I think the trade-off is: if everything goes to shit, everything has gone to shit, right? It's like just presuming things don't go to shit. And, you know, it's the old adage that it's the last twenty

(01:11:53):
percent to finish something that takes ninety percent of the time.
That really is true. And most services do not require
the level of resiliency that would take the next ninety
percent of effort. And so just saying, like, you know,
how actually critical is this. And so for anything where

(01:12:17):
I'm not trying to make money, you know, it exists until it doesn't, right? And so I'm not going to put in a ton of extra time and effort and money. For things that are trying to make money, then you do need some durability, because, like, you have to keep making money if something bad happens. But that's basically it.

(01:12:38):
That has been my own rule of thumb. So for every single non-money-making application I have, I don't do fancy Litestream backups. For all of those applications, I actually don't do any backups, to be honest with you, because I know the one way in which it failed, and I talked with Chris Oliver and he changed it.

(01:12:59):
And so unless I manually go in there and rm -rf, like, my data is fine, it's all set up correctly. So I don't even run cp or anything. For anything else, I personally just jump to Litestream. But, like, network-attached storage with cp? Great. You know, it's a

(01:13:22):
nice middle ground.

Speaker 1 (01:13:23):
So is Litestream a paid service, or is it just...?

Speaker 3 (01:13:26):
No, it's an open source utility. You do have to pay for storage, though: it pipes your data to some bucket storage service like S3, R2, or DigitalOcean Spaces, so you have to pay for that storage, but the tool itself is an MIT-licensed open source utility.
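For a sense of what that looks like in practice: Litestream is configured with a small YAML file pointing a database at an S3-compatible replica, and then you run its replicate command as a long-lived process. The bucket name and paths below are made up, so treat this as a sketch rather than a copy of anyone's config:

  # /etc/litestream.yml (illustrative)
  dbs:
    - path: /var/www/app/storage/production.sqlite3
      replicas:
        - url: s3://my-backup-bucket/production

  # run continuously alongside the app
  litestream replicate -config /etc/litestream.yml

  # restore the most recent replicated copy to a local file
  litestream restore -config /etc/litestream.yml -o production.sqlite3 /var/www/app/storage/production.sqlite3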

Speaker 1 (01:13:45):
Awesome. One other thing I just want to throw in,
because we've mentioned it a few times now. I did
a quick Google search on like how many minutes or
how many hours of downtime per year is five nines?
So five nines is about six minutes of downtime per year,

(01:14:09):
and yeah, I mean, nobody's gonna notice that. I think even if it's six hours... well, three nines is about, what, eight and three quarters hours per year? So you know, if you have an outage here and an outage there, right, you know, for fifteen minutes or twenty minutes, you're going to be well over the three nines and you're gonna

(01:14:31):
be fine.
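To put numbers on that: the yearly downtime budget for N nines is just a year times one minus the availability. A quick sketch:

  SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60

  [3, 4, 5].each do |nines|
    downtime_seconds = SECONDS_PER_YEAR * 10.0**-nines
    puts format("%d nines => %6.1f minutes/year (%.2f hours)",
                nines, downtime_seconds / 60, downtime_seconds / 3600)
  end

  # 3 nines => roughly 526 minutes/year (about 8.8 hours)
  # 4 nines => roughly 53 minutes/year
  # 5 nines => roughly 5.3 minutes/year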

Speaker 3 (01:14:33):
Yeah. And I think the other key point that I
like to preach on is that there is a massive
difference to the health and sustainability of your business between
scheduled and unscheduled downtime. Right, true, scheduled downtime is a beautiful, simple,

(01:14:54):
powerful tool in your business's tool belt. It is not very difficult to communicate to your users in
a way that makes them feel respected and appreciated to
take thirty minutes of downtime to deal with a complicated
database migration, or to vertically scale your single machine up

(01:15:19):
to a bigger instance. And so I feel like, in a lot of ways, the conversation around
web development has gotten to a place where we have
sort of simplified things and we just talk about downtime
as if it's all the same, and it isn't. And
attempting to again like swallow all of the complexity to

(01:15:41):
have a system that theoretically never ever ever goes down
is a really deep rabbit hole of complexity. And we
can win back so much time and so much mental
space if we just accept the leverage available to us to say: I am going to control when my application goes down.

(01:16:03):
And it's true, like a vast majority of applications have
really regular usage patterns and as a part of their
usage pattern, have multiple hours where no one is using
the application, like you have time available to you to
just like go down and literally no one will ever notice.
And even if you have people using it all the time,

(01:16:25):
like you're going to have low periods and you can
just let people know, Hey, I'm going to be down
for thirty minutes two weeks from now, and everything will
be fine, like you can continue making money.

Speaker 1 (01:16:36):
Yeah, well, also, in my opinion, I mean, you know, I don't remember if it was you or I that usually brought up five nines, but the diminishing returns, right? I mean, because you go from, what, six minutes to almost an hour for four nines. You're just optimizing away a tiny bit of unplanned outage. And you know,

(01:17:00):
I don't know, I feel what you're saying there. Yeah, if you know that you might cause it, then you just let people know, hey, you know, we're doing scheduled maintenance. And hey, maybe you get lucky and your index doesn't take more than a few minutes to build anyway, but just letting people know it might be slow or go down, yeah, you get a ton of grace out

(01:17:22):
of that. One other thing I wanted to ask you about: so the word is that you're building a course, or you've built a course, about some of this stuff. Do you want to tell us about that?

Speaker 3 (01:17:36):
Yeah, so, you might have been able to tell, but I'm pretty passionate about the general concept of giving individuals, giving small teams, leverage. And I personally think that Rails

(01:17:56):
eight and SQLite, when paired together, are a really powerful lever for making things on the Internet. And so I partnered with the Internet's favorite dad, Aaron Francis, and at the end of last year I recorded a roughly sixty-

(01:18:20):
video course called High Leverage Rails. So you can find it at highleveragerails dot com, and it will be coming out in a couple of weeks. And it is a course to walk you through everything that you need to know, but nothing you don't. So we try

(01:18:41):
to keep it as lean and focused as possible, to build applications with Rails and SQLite. So we walk through, like, an introductory sort of module on, all right, Active Record and SQLite: what is really happening here under

(01:19:02):
the hood, feeling comfortable with how SQLite is working, and sort of some of these differences around preferring a large number of small queries to a small number of large queries. And then we just start building a simple application to go through all the details, like to really understand: what is Solid Queue, what is Solid Cable, what is Solid Cache,

(01:19:23):
how are they set up with SQLite, what are the trade-offs, what are we doing?
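For context, wiring Solid Queue, Solid Cache, and Solid Cable to SQLite mostly comes down to giving each one its own database. The production section of the config/database.yml that Rails generates looks roughly like this; treat it as a sketch of the shape rather than the exact generated file:

  # config/database.yml (production section, sketch; the `default` anchor with
  # adapter: sqlite3 is defined earlier in the file)
  production:
    primary:
      <<: *default
      database: storage/production.sqlite3
    cache:
      <<: *default
      database: storage/production_cache.sqlite3
      migrations_paths: db/cache_migrate
    queue:
      <<: *default
      database: storage/production_queue.sqlite3
      migrations_paths: db/queue_migrate
    cable:
      <<: *default
      database: storage/production_cable.sqlite3
      migrations_paths: db/cable_migrate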
So we step through the whole process of building a basic application,
focusing on giving people the knowledge and the tools and
the confidence for them to then go and do this themselves,

(01:19:44):
and we go all the way to shipping it to production.
So we use Hatchbox, which is the tool that I use most and think is perfectly well suited to SQLite on Rails applications, and Chris at GoRails has actually sponsored the course, which is exciting. So from rails new to running

(01:20:09):
in production, we build a whole host of different features to really exercise all of the functionality available to us in the Rails application. So we set up jobs, we have a cache, we send Action Cable messages, we do some fancy JSON stuff, we do full-text search.
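As a taste of the full-text search piece: SQLite ships with the FTS5 extension, and from Ruby it is just SQL. The table and column names below are made up, and this assumes your SQLite build includes FTS5 (most modern builds do):

  require "sqlite3"

  db = SQLite3::Database.new("storage/production.sqlite3")  # illustrative path

  # A virtual table that indexes a couple of text columns.
  db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS posts_fts USING fts5(title, body)")
  db.execute("INSERT INTO posts_fts (title, body) VALUES (?, ?)",
             ["SQLite on Rails", "Backups, Solid Queue, and full-text search"])

  # MATCH queries support prefix, boolean, and phrase syntax; rank orders by relevance.
  rows = db.execute("SELECT title FROM posts_fts WHERE posts_fts MATCH ? ORDER BY rank", ["sqlite"])
  puts rows.inspect   # => [["SQLite on Rails"]]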

(01:20:33):
We take advantage of SQLite's full-text search engine and dig into all these different details. I think it's going to be a really excellent resource, particularly for people who are interested in a lot of these kinds of philosophies I've been harping on here on this call, around, like, I
want to find leverage, I want to generate value. I

(01:20:57):
want to move fast, but I care about quality. I
care about understanding what I'm doing, right, Like, sure, AI
can help you move really fast, but in two months,
when you need to make a change or there's a
bug and you're wading through the mountain of slop that
makes no sense to you, and you really don't know
any of these tools, you're going to be in for
a world of pain. So really, I really want to

(01:21:20):
push this idea that you can just master two very approachable,
simple tools. Right, It's super easy to get started. There's
it's straightforward to really genuinely master these tools, and then
you have the world at your fingertips. Because not only
can you move fast and leverage AI with tools

(01:21:43):
that have been around for decades (right, like, all of these AIs have been trained on millions and millions and millions of tokens about these topics, these tools), but you can be a true craftsman. You can have
full control over what you're doing, such that your system
and your business can evolve over time without you
feeling anxious about like what the hell's going on. So

(01:22:04):
this middle ground of, like, finding your leverage, mastering your tools, but still moving fast, being responsible, and adapting to the market.
I think this is my sense for like where the
future of the sort of web economy is going, and
this is my attempt to sort of help as many
people as possible to get their foot in the door

(01:22:24):
and start making noise on the internet for the betterment
of us.

Speaker 1 (01:22:28):
All awesome. So where do you get the course?

Speaker 3 (01:22:35):
Yep, check it out at highleveragerails dot com. You can sign up for the waitlist. We're going to have an early bird discount, so get on the waitlist. Planned release is February eighteenth. All the videos have been recorded,
we've done the first pass of editing, so we're just doing our final pass of editing right now. So I expect that it'll come out on February eighteenth, and if you are

(01:22:58):
on the waitlist, you get a nice discount. So, highleveragerails dot com, check it out, and I look forward
to hearing everyone's feedback.

Speaker 1 (01:23:09):
Awesome. All right, Well I was going to go to picks,
but you mentioned that you're going to be speaking at
some conferences in April before the show, so you want
to just let people know where those are. Two and
then we'll do our picks.

Speaker 3 (01:23:24):
Yeah. I literally just before the show found out about the third one. So in three straight weeks in April, I'm going to be bouncing from São Paulo, Brazil, where obviously I'm speaking at Tropical on Rails, then hopping back over the ocean to Poland to speak at wroc_love.rb, and then the

(01:23:48):
week after that I have to fly over to Japan. I'll be at RubyKaigi this year and speaking there. So three straight weeks, three different continents. It's going to be a fun and tiring month. Awesome.

Speaker 1 (01:24:04):
Well, yeah, if you are going to be at any
of those conferences, definitely look for Steven. Let's go ahead
and do some pics. Are you have some picks?

Speaker 2 (01:24:15):
Yeah? I do. It's funny. Last week I was so
distracted for about half this podcast because I broke production
for my client.

Speaker 4 (01:24:22):
And no, I didn't delete the database, it was just causing some mail errors, not the best. And I did a pick for my app which I relaunched, called Scattergun dot email.

Speaker 2 (01:24:36):
And I neglected to say what it was about, what it actually does. So I'm just gonna go ahead and re-pick Scattergun dot email, which is a mailing list platform with a focus on privacy and simplicity. So it's like my take on stuff like Mailchimp, but without all the extra marketing stuff. Like, literally, all you want

(01:24:56):
to do is drop a subscription box on your website and email that list. That's kind of what Scattergun does. So you can just drop a web component into your website to collect email addresses, and then Scattergun will give you an email alias, and you just fire up your favorite email editor and send an email to that alias. My app will receive it, it will tack on an unsubscribe

(01:25:18):
link at the bottom and send it out to your list. That's literally all it does. To be honest, it's fairly simple. It's always going to be fairly simple. It's never going to have fancy marketing features. But if that's the kind of thing that you fancy, then give it a go. So I've done the proper sales pitch this time, so I'm going to leave that to the side for my next picks, because last time it's just like, after I got

(01:25:41):
the college, like I mentioned it and then I forgot
to say what it actually did. Yeah, that's what breaking
production will do to your brain. And I'm going to
do another couple of picks. I'll do a music pick.
So I think I might have mentioned them on this
podcast before, but I love a band called Solstice. They're a really grassroots-level band here in the UK, and they

(01:26:03):
do kind of folk music mixed with rock, mixed with progressive rock, weird shit. They've got a new album coming out in a couple of months, and I've got early access to it. I got it two days ago and I've pretty much had it on loop ever since. There's a track on there called Twin Peak that is the most hauntingly moving piece of

(01:26:26):
music I've heard in many, many years. So if that sounds vaguely interesting, maybe go check out their last album, which is Light Up. And their upcoming album is called Clan, which will be out in early April, I believe. And
I've been going to quite a lot of comedy gigs

(01:26:46):
in the last week. So there's a comedian called Rob Auton who I really like. He's got an older special of his on YouTube for free. So if you like weird stand-up comedy, then go check out Rob Auton and The Time Show. It's on YouTube for free. So, yeah, those are my picks.

Speaker 1 (01:27:10):
Awesome. I'm gonna throw out a few picks here. So
I've gotten really deep into the launch for Ruby Geniuses
and JavaScript Geniuses, so just to give you a quick
rundown on those, and then I'll do my board game
pick and stuff like that. So and I'll try not

(01:27:31):
to be too long winded. When I got into programming
and the times where I felt like I was really progressing,
I had a handful of things really going for me.
I had meetups that I could go to and people
I could talk to at the meetups. I had a mentor that I was working with at the time, I was watching RailsCasts. Anyway, there are a bunch of

(01:27:54):
things that I wanted. We kind of did book clubs,
I got some mentorship from the guy I was doing
Ruby Rogues with way back in the day. There were
conferences and anyway, So what I'm looking to provide to
people is that kind of experience because a lot of
these things have kind of gone away or I've just

(01:28:16):
kind of advanced to the point where I'm not working
with anybody who really pushes me on tech or anything
Ruby or anything like that. And so you know, I
get a certain level of that from this show, you know, because we're talking to Steven. Stephen knows way more about SQLite and some of this stuff than I do, right, so I can ask him all my questions. But what

(01:28:37):
I wanted to do was just put together this system.
And so I'm scheduling meetups right now. We'll do a
meetup every week. One of the meetups every month is
going to be something about AI because that seems to
be where people are at. And then we're going to
be I'm going to be putting out videos on how
to do stuff, primarily with Rails. Like I said, I'm

(01:28:59):
also doing the same on the JavaScript end, and so that'll probably focus more on React. But yeah, we're
going to have a place where you can come and chat.
There are going to be premium podcast episodes, and you'll
get discounts to any of the summits that I put out.
So last year I did a summit on just where

(01:29:21):
Ruby was headed, and so this year I'm looking at
doing some more of that stuff, probably in the summer,
and you'll also get discounts on retreats. I want to
start doing retreats where you know, we kind of get
like a big giant Airbnb that has a room that
can hold twenty or thirty people and invite you know,
a handful of people, so you know, maybe Steven or

(01:29:42):
maybe Ayush, or, you know, if we can get
Uncle Bob Martin or DHH or Aaron Patterson or some
of these folks that you all know, right, so we
all get into a room and just talk about how
we can program better. But you kind of get that
face time with somebody who's, I guess, sort of at that higher level of access, and also, you know, somebody that

(01:30:06):
you want to sit and pick their brain. So anyway, that's the deal. You can go to rubygeniuses dot com and sign up. I've also been sending out emails to my email list. I signed up for the system I'm using before I knew about Scattergun (sorry, Ayush), but anyway, just go to rubygeniuses dot com, and if

(01:30:29):
you sign up before Valentine's Day February fourteenth. I'm also
throwing in a one-hour coaching session, and my coaching rate's about two hundred dollars an hour, and so that's
a two hundred dollars value right there. And then the
other thing I'm throwing in is either over WhatsApp or
something similar, I'm offering text and voice coaching. So you'll

(01:30:52):
send me a message, hey, I got stuck doing this right,
and then I'll get back to you and say, Okay,
this is how I would do it, or you know,
these are the things that I would look at or
you know whatever, right anywhere up and down the chain
from performance to how do I do this particular thing?
And if I don't have the answer, the nice thing

(01:31:13):
is, I've been doing things like Ruby Rogues long
enough to where I probably know somebody who knows the answer,
and so I should be able to track it down.
So anyway, that's what we're putting together with
rubygeniuses dot com. You can sign up for a month,
you can sign up for a year, or you can
sign up forever. Those are the prices, So go to
the website check that out. And yeah, I'm super excited

(01:31:35):
to be collaborating with people. One thing I did leave off: we're going to be doing a monthly book club as well. So we're going to be picking up something like The Pragmatic Programmer, or my book, yeah, or your book. We might
do that, right, and so then it's you know, you
show up, so you've read the book, probably worked through it,

(01:31:56):
and then, you know, we show up. Usually the author's there, because I know a lot of them too, and we'll do that, so that's all part of the deal. Some of them might be combined with the JavaScript group if the book is more generic programming. But I'm
not going to shy away from picking you know, a
solid Ruby book or a solid JavaScript book for the

(01:32:18):
other group, and we'll probably do both of those in
the same month so that we can have separate meetings.
But anyway, that's what we're pulling together. I just know people are looking to connect. A lot of times people find jobs through these things. There's going to be some level
of career growth, job hunt content to it, just because

(01:32:39):
I've talked to a whole bunch of people who are looking for that. But that's what we're putting together. So, rubygeniuses dot com. Now, as far as board games go, I'm trying to remember... I don't think I did pick it last week, so I'll pick it this week. I played a game with my friends.

(01:33:00):
So one of my friends is teaching the hot games
at SaltCON, which is a conference in March in Layton, Utah,
north of Salt Lake where you show up and you
play games with people. I went last year. I'm probably
not gonna be able to go this year. But anyway,
he's teaching the game, so he's trying to learn the games.

(01:33:20):
And we played a game last week called Cascadero, and Cascadero is, you're effectively placing little horsemen, and they're carrying word about the new king's ascension to the throne.
Not that that has any bearing on anything, it's just
kind of the premise of the game. And then yeah,

(01:33:46):
so when you place your piece next to a city,
then if it's part of a group, so if it's
your second piece in the group or higher, then you get to move up the progress track. And as
you move up the progress track, you get points and

(01:34:07):
you win at the end of the game if you
have the most points and you've gotten your color of
your marker all the way to the end of the board. Board Game Geek weights it at two point five to three. I liked it. I don't know if I would go buy it, but I did like it. It was fun.
I kind of want to play it again, just because
some of the mechanics were a little bit odd to

(01:34:28):
pick up, and now that I understand them, right, I'd
like to play it again and see if I can
do better. It says its age is fourteen plus. The community says ten plus can play it. That's probably fair. We played with three players. It's two to four players, and anyway, it was kind of a fun game.

(01:34:50):
It's made by, what's his name, Reiner Knizia. I'm sure I'm destroying his name. I think he's German or something. But anyway, he's done a bunch of other games that you've probably played if you're in the board game

Speaker 3 (01:35:10):
Area.

Speaker 1 (01:35:11):
So another one that I picked was lem. Anyway, he's
put out dozens and dozens of games, and so Lost
Cities I think I've picked on here before. So anyway, yeah,
go check it out. I'll put a link to Board Game Geek where they have that rating, and then I'll

(01:35:32):
also put a link in for Amazon so you can go buy it with my affiliate link, which doesn't cost you any more, but I get a kickback if you use my link. So anyway, I've gone on way longer
than I wanted to, so I will let Steven do
some picks.

Speaker 3 (01:35:50):
Yeah, so my first pick is a little bit of a, I don't know, play on the word, I guess. But my first pick is going to be the city of Philadelphia,
and there are two reasons why. The first reason is
that I am picking the Philadelphia Eagles to become Super

(01:36:11):
Bowl champions this Sunday in Super Bowl fifty-nine. Just for a quick bit of context: born and raised in Louisiana,
but lived in Philadelphia for about eight years before I
moved to Berlin six years ago, and I just can't

(01:36:33):
I don't know how anybody lives in Philadelphia and doesn't become an Eagles fan. I think they put something in the water. So I'm a big Eagles fan. But the second reason that Philadelphia is my pick is that I am super excited for RailsConf this year. For anyone who doesn't know, this is the final RailsConf. Ruby

(01:36:55):
Central has been putting on RailsConf for over a decade now, I don't even know how many years, and they have decided to focus their efforts on a single conference each year. And so this year they are doing RailsConf but not RubyConf, and then moving forward they will only be doing RubyConf. And so the

(01:37:16):
very last RailsConf ever is being hosted in Philadelphia, Pennsylvania, which will be the only time that a Ruby Central conference has ever been in Philadelphia. I know the hotel that the conference is happening at, I know the area
where the conference is happening. There's like really great food options,

(01:37:41):
really great experience. It's going to be an absolute blast.
I am really looking forward to that. The call for
papers just opened up and that's going to be open
for all of February. I recommend everyone take the time to, like, sit down and gather your thoughts, try to present an interesting, cogent narrative for something that you

(01:38:07):
have thought about or done recently. Like all of us
have done interesting things, all of us have had interesting experiences,
and there is so much value in even just preparing
a proposal, let alone getting accepted and then
preparing a talk and getting involved in that side of

(01:38:30):
the conference experience. So, big year for Philadelphia. Let's go Eagles, and hopefully I'm not too sad on Monday. But that's going to be my first pick. Now, for a technology-related pick, I have been using a couple of

(01:38:56):
gems from my friend and collaborator in the open source world, Joel Drapper, for some of my more recent projects, and I just want to shout them out because I've really enjoyed using them. So, two in particular. For those who don't know, Joel Drapper is the creator of Phlex. Phlex is the view library in Ruby that allows you to write

(01:39:21):
HTML with pure Ruby, so no string templating syntax, no ERB, you just write normal Ruby. That's a great project.
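For anyone who has not seen it, a Phlex view is just a Ruby class whose element methods emit HTML. A minimal sketch; the class and content here are invented, and the exact method names can differ a little between Phlex versions:

  require "phlex"

  class GreetingComponent < Phlex::HTML
    def initialize(name:)
      @name = name
    end

    def view_template
      div(class: "greeting") do
        h1 { "Hello, #{@name}!" }
        p { "No ERB here, just Ruby methods for each element." }
      end
    end
  end

  puts GreetingComponent.new(name: "Ruby Rogues").call
  # => <div class="greeting"><h1>Hello, Ruby Rogues!</h1><p>No ERB here, just Ruby methods for each element.</p></div>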
But he also has a library called Literal, which we're going to be talking more about at wroc_love.rb in Poland in April. It's a really, really fascinating project.

(01:39:46):
So I'm just going to tease more of the details
of our talk there. The name of our talk is
going to be Ruby has always had types, and for
those of you who don't know, Ruby has always had types, uh,
and Literal sort of finishes the type system that has
existed in Ruby for a long time and the way

(01:40:06):
that it does that, and the way that Ruby has types, is that there is a runtime type system. So what does that mean? What can you do with that? How is that different from a statically typed language, a compile-time type system? There's a lot of interesting stuff to say there, and I'll just leave you with that tease. Come check it out at wroc_love.rb in April, or just

(01:40:28):
go check out literal dot fun and read the documentation. That is a really excellent library. And then a newer library that is still sort of in beta but is really exciting is called Quickdraw. That is a testing framework and test runner, so basically a direct competitor to RSpec and Minitest. And the big selling point is that

(01:40:52):
it is built from the ground up specifically for our
modern multicore environment, and so it has a highly optimized
handwritten test runner that distributes your tests as efficiently as
possible across all of your computer's cores, and so it

(01:41:15):
runs, you know, if you have like a ten-core machine, about ten times faster than the equivalent in RSpec, and it is just incredibly fast. It is so cool. It has a watch mode and you can just, like, leave it on, and you're just saving your files and

(01:41:37):
it just finishes. I am working on a SQLite parser, and I have a test suite currently at about thirty-five thousand assertions, and I just leave watch mode on, and it runs, on my M1 Mac, in like seven hundred milliseconds, so I just am refactoring away
and letting it sit in watch mode. So really, really

(01:42:01):
awesome projects. I've really enjoyed working on them with him and using them in my work lately, so I definitely recommend people check those out.

Speaker 1 (01:42:10):
Very cool. Yeah, a lot of stuff to check out there. Well,
thanks for coming. I'm not gonna talk about the Super
Bowl because I like both teams. I'm trying to decide
which way I want to go, So anyway, thanks for coming, Steven,
thank you so.

Speaker 3 (01:42:27):
Much for having me. It was a blast to talk
about all this stuff with y'all. Yeah.

Speaker 1 (01:42:31):
Absolutely. Well, let's wrap up. Until next time, folks, Max out!