
January 23, 2025 • 65 mins
In this episode of Top End Devs, host Dave Kimura is joined by panelists Luke Stutters and John Epperson, along with special guest Jesse Spivak, a senior engineer at Ibotta. Jesse shares his experiences and insights from a challenging project at Ibotta, where he navigated through four critical mistakes. These included choosing the wrong technology, siloing work, falling into premature optimization, and making too many changes at once. Jesse explains how these mistakes jeopardized the project but ultimately led to valuable learning experiences. The conversation also touches on the importance of discussing and learning from mistakes openly, the complexities of transitioning to new technologies, and the significance of making systematic, verified changes. Additionally, they delve into the evolving landscape of developer interviews, aiming to create a more inclusive and positive experience. Join us as we explore the trials, lessons, and growth that come from navigating the highs and lows of software development.

Become a supporter of this podcast: https://www.spreaker.com/podcast/ruby-rogues--6102073/support.

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
Hey, everyone. Welcome to another episode of Ruby Rogues. I'm
Dave Kimura, and today on our panel we have Luke
Stutters, and we have John Epperson, and today
we have a special guest, Jesse Spivak.

Speaker 2 (00:18):
Great to be here.

Speaker 1 (00:19):
So, Jesse, would you mind explaining just a bit about
who you are, some of the things that you're doing,
who you work for, what you're famous for, and
all that good stuff?

Speaker 2 (00:27):
Yeah, absolutely. So my name is Jesse Spivak. I'm a
senior engineer at a company called Ibotta, which is a
cash-back-for-shopping app based in Denver, Colorado. I've
been working there for about three and a half years.
We are looking to do some hiring, so check out our
careers page. I guess I'm famous, as it were, because

(00:47):
I gave a talk at the first remote RailsConf
this past May, and I talked about kind of how
crummy of a developer I am.

Speaker 3 (00:57):
Yeah, I think we can all relate to that on
a daily basis sometimes. So would you mind giving
a bit of a, you know, highlight of what
you covered at the conference and stuff, so we can just
kind of pick it up from there?

Speaker 1 (01:11):
We'll put a link to the conference in the show
notes, but just for those who maybe didn't
see it.

Speaker 2 (01:17):
Sure, and there's no substitute for actually watching this fantastic talk.
But more seriously, the talk is really about my
experience as a tech lead at Ibotta working
on a pretty critical project over the course of about
six months or so, and over that time I

(01:40):
made four very big mistakes that put the project in jeopardy.
And hopefully these are mistakes that I will learn from
and not make again as I continue to lead projects
at Ibotta in the future. And my hope is
that by sort of articulating these mistakes and what I

(02:00):
learned from them, other folks can benefit. And so the
four mistakes that I made are: first, we picked the
wrong technology; I can get more into that. We also, as
a team, siloed work, so work was divided up
in not the best way. We fell into the premature
optimization trap. And then, maybe worst of all, we made

(02:22):
way too many changes at one time. So I can
go into detail on any of those.

Speaker 1 (02:27):
Yeah, and I think it's fair to say that our
listeners and other members of the panel have been there before.

Speaker 4 (02:35):
Yeah, I've worked with loads of people who made those mistakes.
Obviously I've never made them myself, but oh my word.

Speaker 5 (02:43):
Luke is the perfect developer. Let's be fair.

Speaker 6 (02:45):
No, I was actually going to jump in and say that, like,
I feel like, I don't feel like I'm alone, but
usually when you make a mistake, it cascades into more
for whatever reason. I generally feel like you're either
kind of rolling along and you feel pretty good
about stuff, or you're just, like, in a bottomless pit,

(03:07):
and there's very little in between. Sometimes you just suddenly
notice that you're at the bottom. So yeah, just saying.

Speaker 2 (03:13):
Yeah, absolutely, and I just want to kind of call
out that there's, like, a certain amount of privilege that
comes with being able to talk about our mistakes, right?
I'm not worried that my boss is going to fire me,
and I'm also not worried that folks won't take me
seriously for giving this talk. If anything, it probably improves
my reputation, as evidenced by getting to talk to three

(03:35):
of you gentlemen. So I just want to call that
out because I think it's important.

Speaker 6 (03:41):
I think that, okay, so speaking of that, I think
there's a give and take there too. So I think
this will get into a thing
that I care a lot about, this particular subject, because
for most of my career, shoot, I still feel it today
or whatever, it doesn't really matter. I still always feel
this pressure to not let people know about the mistakes

(04:01):
that I made, right? Because letting people know about the
mistakes that I made makes me more vulnerable to them
not wanting me to work for them, right? Like, fire
me, whatever. The thing is, it's a boogeyman.
It's not real. It's just a pressure, right? And so
I guess what I'm trying to get at is, like,
I kind of feel like one of the things that

(04:22):
I always care a lot about with mistakes is telling
people about your mistakes. I've actually discovered in reality
it makes people respect you more, but it doesn't change
the fact that it's, like, really hard, and I still
feel that pressure to this day.

Speaker 2 (04:39):
Yeah, John, I agree with what you're saying. I
think that for a lot of us, talking about mistakes
openly and honestly, and what we learn from them, actually
builds our credibility. But that's not always the case, depending
on sort of how you present or what you look like.
I think that you guys recently did an episode. I

(05:00):
think you talked about issues of hiring and getting equality
in a workplace, and I think that that kind of
plays in here for sure.

Speaker 1 (05:08):
And I would go as far as to say that it
depends on what the mistake is, what kind of mistake
it is. If it is just utter and complete negligence,
then those aren't really the kind of mistakes I would
really want to be forthcoming and outright about, you know.
But like, if I went in and always had a

(05:29):
VPN tunnel into my production environment, and then we
got malware in our local work environment, and that transferred
over to production and encrypted all of our production data,
I don't know if I would really say, like, oh yeah,
you know, that was a silly mistake of mine. No,
it's like, okay, not only are our customers affected, but

(05:50):
now, you know, my job's in jeopardy because I decided
to always take shortcuts. But what I'm
really interested in is your first mistake, using
the wrong technology. Can you explain a bit more about
the scenario?

Speaker 2 (06:08):
Sure, absolutely. So, this is the high level. At
Ibotta, we have just a wonderful, majestic monolith Rails application. Actually,
originally the application was written in Scala, and then after
about a month of that, they switched it over and
rebuilt the whole thing in Rails, and it's been Rails
ever since. So it's going on almost ten years

(06:29):
at this point. So it's a large application;
it serves millions of users. Very cool. And about three
years ago, when I joined, we started growing the
team and thinking about how we could, like
many people have, pull pieces out of the monolith into
microservices. So this project in particular was about taking
a piece of billing logic from the system, from the
monolith, and pulling it out into a microservice. The
hope was to make it better encapsulated, easier to iterate on,
you know, isolate dependencies, every reason that you'd think
to build a microservice. So we chose... actually, before
I say the technology, because before I get trolled by

(07:11):
all the lovers of this technology, I'm going to preface
this by saying I don't think that this technology is wrong,
and I don't think it's bad in and of itself.
I just think it was not the right technology for
the problem.

Speaker 4 (07:25):
And the team... hold on a second, hold on a
second, Jesse, because the slide I'm looking at says big mistakes.
It doesn't say small mistakes. It says big mistakes. So
let's just keep this in mind before we reveal
what this technology is. What a big mistake.

Speaker 2 (07:41):
It absolutely was. Whoever uses this technology is definitely making
a big mistake. So, spoiler: we weren't building, like,
a simple Rails app microservice, which probably
would not have been as big a mistake. But the
issue really was that our team, the small team of
developers, and then the larger team of engineers in the company,

(08:04):
really did not have a ton of experience with the
framework that we chose, and as a result, we ended
up having to do a lot of plumbing and reinventing
the wheel, and just not benefiting from the institutional experience
that exists at Ibotta. And unfortunately, right, this could

(08:24):
work if you're doing kind of like a proof of concept,
like, let's show what this technology can do, let's pick
a pretty isolated use case. But the billing logic that
we were pulling out of the monolith was basically, like, do
or die. Right? If it did not work, it would cost
millions of dollars to fix, or, you know, it ends

(08:45):
up costing the company millions of dollars. So really, we
were walking on a tightrope and there was no
net underneath us, and unfortunately we decided to, I guess,
like, walk on our hands instead of go across normally.
So the framework that we used is called Akka, and I
think for a team that knows Akka, this probably would
have been really a perfect tool for the job. But

(09:07):
our team and our company really did not have a
ton of experience with Akka, and so unfortunately we weren't
able to sort of take advantage of it and use
it in a way that sort of professional Akka developers
likely can.

Speaker 4 (09:20):
So this is kind of like a functional programming thing, right.

Speaker 2 (09:24):
It deals with data streams and passing data along in
a functional paradigm, and it's meant to accommodate high-volume
data across highly parallelized systems. So, you know, at the
time we went there, well, I'll talk about when we
went there in a second, but in retrospect, it was

(09:46):
it was something that could handle basically ten thousand x,
really, what we needed, in terms of what it was
designed to handle. So just sort of on paper, it
probably wasn't the best move in that respect,
but I could also talk about the team as well
and why it wasn't a great fit.

Speaker 6 (10:02):
Yeah, can you talk for just a minute
about, like, what was the problem that you were trying
to solve? Why did you choose it?

Speaker 5 (10:09):
Like, what did you need Akka for?

Speaker 2 (10:11):
Yeah, perfect. So, the problem that we're trying to solve...
you have to know a little bit about the
Ibotta app. So I assume all of
you guys have downloaded it and are actively using it.

Speaker 4 (10:23):
Absolutely, it's the best app that I've used.

Speaker 2 (10:25):
Good.

Speaker 6 (10:25):
I've definitely looked at it and I was like, nope.

Speaker 2 (10:30):
That's funny. We did a... well, I'll get to
that later. But basically, Ibotta is a way
for you to get digital coupons. Brands
put offers in the app. You click on the offer,
you show us evidence that you bought the thing that
is on offer, and then Ibotta will pay
you cash back. They'll send it to your
PayPal account, give you a gift card to Amazon, whatever

(10:52):
you want. So the problem that we're trying to solve
is how do we make sure that the offers in
the app don't exceed the budget that is allocated to
them by the brands that put those offers in the app.
And that sounds maybe like an easy problem, like there's
an easy way to just say, okay, there's five hundred
thousand dollars in budget for Oreo coupons, just divide five

(11:13):
hundred thousand dollars by how much money we're
giving out per coupon. And then, you know, it's actually
obviously much harder than that. And in
order to preserve a good user experience, we need to
make sure that we're not yanking content and surprising our users.
Like, you would be really upset if you went to
the store specifically to buy Oreos to get the coupon

(11:36):
and then by the time you checked out, the coupon
was no longer in your application. So we have to
run some predictive algorithms to basically guess when we're going
to run out of money, and kind of slow the coupon,
slow the velocity down, as we approach that point.
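The pacing approach described here (track spend in real time, project when the budget runs out, and slow an offer's velocity as that point approaches) can be sketched roughly in Ruby. This is a minimal illustration of the idea only; the class, method names, and numbers are invented, not Ibotta's actual implementation.

```ruby
# Hypothetical sketch of budget pacing: record redemptions, estimate a
# trailing spend rate, and throttle an offer as exhaustion approaches.
class BudgetPacer
  def initialize(total_budget:)
    @total_budget = total_budget.to_f
    @spent = 0.0
    @events = [] # [timestamp, amount] pairs
  end

  # Record one redemption ("content awarded") event.
  def record(amount, at: Time.now)
    @spent += amount
    @events << [at, amount]
  end

  # Rough spend rate over a trailing window, in dollars per second.
  def spend_rate(window: 3600, now: Time.now)
    recent = @events.select { |t, _| now - t <= window }
    return 0.0 if recent.empty?
    recent.sum { |_, amt| amt } / window.to_f
  end

  # Seconds until the budget is projected to run out at the current rate.
  def seconds_remaining(now: Time.now)
    rate = spend_rate(now: now)
    return Float::INFINITY if rate.zero?
    (@total_budget - @spent) / rate
  end

  # Throttle factor: 1.0 means full speed; it ramps toward 0 near
  # exhaustion, so content slows down instead of being yanked suddenly.
  def throttle(ramp: 86_400, now: Time.now)
    [[seconds_remaining(now: now) / ramp, 1.0].min, 0.0].max
  end
end
```

A real system would persist this state and feed it from an event stream, but the shape of the prediction (rate, projection, gradual slowdown) is the same.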

Speaker 4 (11:52):
Oh dear, that's a dangerous recipe to think about, I
must add. Is it available in
the UK at the moment? Is it available in Canada?

Speaker 2 (12:03):
We are only in the United States right now. Sorry.

Speaker 4 (12:07):
Yeah. So, in my defense, this looks like the perfect
storm for me, because you've got the kind of
strong financial component, so if you get it wrong, you
lose money. And if you get it wrong, then people
have wasted their time going to the shop. You know,
in Britain it's so small, I can walk to the shop.
You know, sometimes I just open the window and shout

(12:28):
at them and tell them what I want. But, you know,
I know when I was living in the States,
people maybe drive for two hours to go to Walmart.
So this is quite
a problem you've got there.

Speaker 2 (12:41):
Yeah, it was. It's interesting that you say that, because
right when I joined the company, I was kind of
put on the team, and I was actually given a
tech buddy, my mentor. When I joined the company, this
was his project, and as someone new to the company,
it was a really overwhelming problem
space, because basically these campaigns in the application almost have, like, a

(13:04):
physical momentum to them. So if you imagine trying to
stop a moving train, you can't just hit the brakes
and expect it to stop on a dime. You have
to apply the brakes over some distance to slow the
train down. And that's really how the content in the
application is modeled. And I'm not a physics person,
so it's really confusing.

Speaker 6 (13:26):
All right, so this seems... I mean, it's a
twist, right, on your classical inventory problem,
is what it kind of sounds like. So the way
that Akka came into this is, what, you were taking,
like, all these requests from users and trying to
decide whether or not you're going to, I guess, give

(13:48):
them the coupon or not or something.

Speaker 2 (13:50):
Yeah, that's sort of right. So we have
an event-based architecture, where, for
folks who aren't familiar with that, it means
that basically your system publishes events, which is data that
signifies that something of interest has happened in the system.
So maybe, like, you know, shopping cart loaded is an

(14:12):
event that you might have in, like, a typical inventory
space or something like that. So we have events that
we're interested in, like content awarded, which means that John
went to the store, submitted a receipt through the app,
and got cash back. So the content has been awarded.
So we listen for those events in order to keep

(14:32):
track in real time of how much budget is being used,
and we basically track that over time to make a
rough prediction about how fast things are moving. And so, sorry, John,
to get back to your original question, Akka is really
good at streaming data, at streaming large amounts of high-volume data.

(14:54):
And Ibotta will get on the order of
several hundred thousand of these content awarded events per day,
which seems like a lot, but it's actually much lower
than, I think, what Akka can kind of deal with,
or is designed to deal with, out of the box.
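As a rough sketch of the event-driven tracking described above, a consumer of content awarded events might look like the following Ruby snippet. The event shape, field names, and handler class are hypothetical; in production the payloads would arrive via SNS/SQS rather than as in-memory strings.

```ruby
require "json"

# Illustrative consumer: parse each "content awarded" event and keep a
# running total of spend per offer, so budget usage is tracked in near
# real time as events stream in.
class ContentAwardedHandler
  def initialize
    @spend_by_offer = Hash.new(0.0)
  end

  # payload is a raw message body, e.g. pulled off an SQS queue.
  def handle(payload)
    event = JSON.parse(payload)
    return unless event["type"] == "content_awarded" # ignore other events
    @spend_by_offer[event["offer_id"]] += event["amount"]
  end

  def spend_for(offer_id)
    @spend_by_offer[offer_id]
  end
end
```

At a few hundred thousand events per day, a plain consumer like this (or a background job) keeps up comfortably, which is the point Jesse makes about Akka's capacity being far beyond what the problem needed.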

Speaker 5 (15:09):
Yeah, did you, so...

Speaker 6 (15:11):
Did you consider alternatives? Did you reject them for various reasons?
And I guess, so obviously Akka didn't work out
for you, so you must have picked something else that
you like better. How did you arrive at that
new thing, and what was that? If that makes some sense.

Speaker 2 (15:28):
Yeah, perfect. So yeah, basically this comes down
to some team issues again, and not an issue
with Akka. So the team issue was basically that at
the start of this project, we scaled up our team.
Like, this is a lot of work, we need, we

(15:50):
need to bring in some artillery. And we brought in
a new developer from outside of the company who is awesome.
She's a rock star, and she was from the
ad product space, dealing with volumes
of streaming data at a scale much higher than
what we needed or were going to be dealing with

(16:11):
in any near future. And, you know,
she was coming from, I believe, an Akka shop, and
so she joins the team. We're excited to have her.
We think she's a rock star. I mean, she is a
rock star. And she's like, this is a perfect use
case for Akka. We're like, okay, never heard of that,
but, you know, we trust you. You crushed our interview

(16:34):
and we think you're amazing. So yeah, that sounds
pretty good. And again, this isn't a knock against Akka.
I think this is just that in our company, we
have a ton of infrastructure set up to support the tools
that our company has sort of blessed, that are
kind of frequently in use. In fact, we have, like,

(16:55):
a name for that. We call it the paved road,
which is, like, if you're an engineer, you have a
lot of autonomy at Ibotta about what you use, but
if you stay on the paved road, it should be
an easy path. And Akka, which is a JVM framework, you know,
typically used with Java or Scala, is not on
Ibotta's paved road. So we had to kind

(17:16):
of have a contentious meeting, a contentious conversation, with the
architects at Ibotta and say, no, we really believe
in our developer, and we think that she knows
what she's doing, and we're ready to sort of make
our bed and lie in it. And they were like, well,
you know, as long as you know that. So they

(17:37):
let us, kind of... they gave us just enough rope
to, I forget how the rest of that goes, hang
ourselves with.

Speaker 6 (17:43):
Okay. So it sounds like you had an advocate,
and that's kind of how you landed on it. Where
did you go after you decided that Akka was the
poor choice? Like, what did you end up with?

Speaker 2 (17:53):
Yeah. So this kind of gets into the
next problem, which was the siloed work. So we're starting
to work on building this Akka-powered race car of
a microservice to pull billing logic out of our
Rails monolith. And there are sort of two streams of work.
There's the development of the Akka microservice, which

(18:15):
we were writing in Kotlin, which is a JVM language,
kind of a nicer Java. It's what Android apps
are written in. It's actually pretty nice. So we have
that work going on, and then we have to integrate
that microservice, sorry, into the Rails monolith. So I
ended up taking on a lot of the integration work,

(18:36):
and the other developer took on most of the Akka
and Kotlin work. So we have these sort of two
very isolated pieces of work that are siloed. And then
something terrible happened. And I don't blame this person at all,
because I wouldn't want to be on my team anyway.
She decided that she was much more interested in data

(18:57):
engineering, and she moved to a different team in the company.
And that's actually, you know, a great thing about
working at Ibotta. Like, we've got tons of really cool problems in
different spaces, and people are almost encouraged to move around
to find things that they like and are
good at. So she made this move, and now
the problem of using a framework that none of

(19:21):
us had experience with, and that the general company didn't
have a ton of experience with, really became self-evident,
because there was still work left to
do on the microservice. And I'm just
a Rails developer. Like, I had to go in there
and start reading Kotlin, start reading the Akka documentation, and
try to wrap my head around what this whole,

(19:42):
you know, actor system meant. And it was tough,
and that's why I call it a big mistake. So sorry, Dave.

Speaker 1 (19:53):
You know, from my experience too, yeah, Ruby
is slow. You know, there's no getting around that when
you compare it to a compiled language. But holy crap, is
it fast too. You know, I have a production application
which processes over five hundred thousand Active Jobs every single day,

(20:14):
and it does it extremely quick. I don't need
it to be faster. I mean, that's plenty fast on
the current setup that it's on, which is two cores
and four gigs of RAM, and we have two servers
dedicated to the background jobs, so two VMs. It's able
to handle that load, and we don't have to worry
about it crashing or anything like that. So I mean

(20:36):
that's good enough for us. Now I imagine that it
would be able to double that workload before we ever
ran into any kind of performance issues where we needed
to start scaling up.

Speaker 2 (20:48):
Absolutely, Dave. And originally, in the monolith, the
way the system worked was it ran on a background
scheduled job using Resque, and the scheduled job basically took
roughly forty-five minutes to run. So it was running
kind of like a waterfall: every ten minutes
we'd start a new job that would take forty-five
minutes to run. And so going from that to basically

(21:12):
completely real time is a huge improvement. And, you know,
real time for five hundred thousand events per day versus
five million events per day are different things, but if
you're at real time, it's already just this enormous improvement
over what we were working with, which is, like, this
forty-five minute loop. So at that point, and I

(21:32):
guess I'll say another thing before I get into
how we fixed my mistake. This is
actually getting to what Dave said: this is a case
of premature optimization. We sort of didn't do the back-of-the-envelope
math well enough, or didn't have a
clear picture of, like, okay, this is really designed
to handle literally a thousand times more traffic than

(21:55):
our best day. So, you know, that
was definitely a mistake. Like, we should have asked
that question and realized, okay, maybe this is a
little too much horsepower. We don't need this. It's too
much trouble for what it's buying us. And
on my end, also, I was imagining, you know, getting
all these events. So the microservice

(22:16):
was producing, like, prediction events, okay? So predicting every time
a piece of content should come down, and making an
adjustment about when it thinks that should happen. And
so I'm imagining the monolith is listening to these prediction events,
subscribed to an SNS topic or SQS queue, and I imagine,
you know, thousands and thousands and thousands of events every second,

(22:40):
which is way, way too much. And so I was
thinking of, like, all these clever ways to do caching,
to try to figure out when can I
drop events so I don't need to hit the database.
How can I
come up with these clever ways to only make round
trips to the database when I really need to? And
that added time to the dev, and it also added

(23:01):
a ton of complexity, so that when our two systems...
when we were like, okay, let's turn them on, let's
see what happens, the first error that comes out, because
of course there's going to be an error when you
first try it out, that was really hard to debug.
It was really hard to understand, like, is this a
caching issue? Is this actually an issue with the data
coming from the microservice? It was really hard to isolate that.

(23:25):
And then that obviously moves into the third problem. The
third mistake was that we made too many changes at
one time. So all these were ways that I tried
to shoot myself in my own foot
while working on this very important project.
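The kind of clever event-dropping cache described here might have looked something like the following Ruby sketch (the class name, fields, and the 5% threshold are all invented for illustration). Every branch like this is one more suspect when the first error appears, which is what made that debugging session so hard.

```ruby
# Hypothetical premature optimization: skip database round trips when a
# prediction hasn't moved "enough" since the last persisted value.
class PredictionCache
  def initialize(threshold: 0.05)
    @threshold = threshold # ignore relative changes smaller than 5%
    @last_seen = {}        # offer_id => last persisted prediction
  end

  # Returns true if the event should be written through to the DB,
  # false if it can be dropped as close enough to the last value.
  # Each code path here is extra state that can mask an upstream bug.
  def write_through?(offer_id, predicted_value)
    last = @last_seen[offer_id]
    if last.nil? || (predicted_value - last).abs / last.to_f > @threshold
      @last_seen[offer_id] = predicted_value
      true
    else
      false
    end
  end
end
```

When an error finally surfaces, you now have to ask whether the data was wrong or whether the cache silently dropped the event that would have corrected it, which is exactly the ambiguity Jesse describes.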

Speaker 4 (23:38):
Yeah, makes sense. I'm not sure if I understand it.
So you're saying there's kind of two separate things going
on here. Firstly, you're moving to a completely new technology,
so you're taking a large part of the app out,
you're putting it in a microservice. It's not very
micro if you're doing, what, seven hundred thousand things a day.

(23:58):
That's not a microservice. That's... this
is a macroservice, or a "medcroservice."
And the second thing you're doing is you're actually changing
the whole interface. So you're saying, you're going from...
you previously had a Resque batch job, and now you're
saying no, no, no, we're not going to do batching. We're

(24:19):
going to do it all immediately.

Speaker 2 (24:22):
Yeah, that's a really astute observation. So we were
changing structure and we were changing behavior at the same time,
which is something that since that point I've really tried to
avoid doing, even at, like, the micro PR level. If
I have a PR that I'm going to push up,
it's either going to add behavior, change behavior, or it's

(24:42):
going to change the structure of the code, and I'm
going to try to not do the two things at
the same time. So in this case, we did the
two things at the same time, and at multiple points
we should have known, in retrospect, to pause and verify, right? And if
we couldn't verify, take that failure as feedback and iterate.
And I think that that's really what really strong engineers,

(25:05):
experienced engineers, know to do: make one change at a
time and verify. I guess it's
kind of like when I run RSpec, right?
RSpec feels like supernatural to me. So every time I
hit my test suite, I'm like, okay, I expect this
to happen, and sometimes I'm right and then sometimes
I'm wrong, and that's, like, really interesting too. But making

(25:27):
that initial guess of what's going to happen is super important.
And I think we didn't slow down and stop and say, okay,
here are our guesses as to what's going to happen, here's how
we're going to verify it, and if we can't verify it,
here's, like, the corrective action we can take.
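At the micro-PR level, the "change structure or behavior, but never both" rule can be sketched in Ruby with an invented payout calculation: the structural PR only extracts a method and proves the old and new code agree, while any behavior change would land in a separate, separately verified PR.

```ruby
# PR 1: structure only. Extract the cashback calculation into a named
# method without changing any output. (The payout logic is made up.)

# Before the refactor: one inline expression.
def payout_before(amount)
  (amount * 0.05).round(2)
end

# After the refactor: same behavior, clearer structure.
CASHBACK_RATE = 0.05

def cashback(amount)
  amount * CASHBACK_RATE
end

def payout_after(amount)
  cashback(amount).round(2)
end

# Verify the structural change before moving on: old and new must agree
# on representative inputs, so any test failure later can only come from
# the separate behavior-changing PR.
[1.0, 19.99, 250.0].each do |amt|
  raise "refactor changed behavior!" unless payout_before(amt) == payout_after(amt)
end
```

A later PR that, say, changes the rate or rounding is then a pure behavior change, and a surprising test result points at exactly one diff.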

Speaker 6 (25:42):
Yeah, so I think you're hitting on one thing
that's really important, right. So one of the things
that I think really makes you
a strong engineer of sorts, right, is not necessarily,
like, to avoid mistakes, because, shoot, for whatever reason, they
keep happening to me. Right? It's, are you good at

(26:03):
cleaning up?

Speaker 5 (26:03):
Right?

Speaker 6 (26:04):
Are you good at figuring out that something's not right,
something smells funny, whatever it is? Can you go back
and, you know, sweep up your mess? You know, either
push it across the finish line because you're close enough,
or say, actually, this is the wrong path, we should
back up, go to the last intersection, and go down
the other way. So I mean, it's definitely... yeah. I

(26:26):
mean, I can speak to experiences too where I was
just like, man, I really missed all the flags,
and then I kept missing more flags and just kept
going, and why are we surprised that we were
in a bad place? Because I ignored everything. So
I hear you, man, and kudos for recognizing
it at some point and doing something

(26:47):
about it.

Speaker 2 (26:48):
Yeah, so what did we do about it? Unfortunately...
well, not unfortunately. It was
a good learning experience. But obviously my happy place is
deep in our legacy Rails application, finding all the pathways there.
But I had to really push myself out
of my comfort zone. I had to pick up Kotlin. Luckily,
Kotlin is, like, a super friendly language, I think, to

(27:11):
get into, especially for folks who are coming from Ruby,
because I think that there's kind of an
emphasis on syntax, on, like, syntactic sugar, on making
the code actually readable and look nice, whereas that's
not always the case with all JVM languages. So in
that sense, it was kind of friendly. Also,

(27:31):
at Ibotta, we were able to organize
kind of a learning group. So there were lots of
folks who were new to Ibotta or new to Kotlin
who were kind of trying to solve these similar problems.
So we made a study group, we found cool online coursework,
we held each other accountable for making sure that we
were making progress on those things. And I started writing

(27:52):
Kotlin in the microservice to try to, like you said,
push it across the line. Like, we need to get this
across the line, and then we can kind of circle
back and deal with some of the underlying problems. And
that's important, because as engineers, at the end
of the day, we're trying to deliver value to the
businesses that we work for, and not just,
like, trying out a new technology, or, like, optimizing what technology,

(28:14):
what technological solution, we're implementing.

Speaker 6 (28:19):
So, just to clarify what you're saying: you kept
Akka, you just pushed it across
the finish line despite all your problems?

Speaker 5 (28:31):
Yeah, that's okay.

Speaker 2 (28:33):
So we had problems with it, and we decided,
let's get this thing out. We're like, we're close enough. Even
with the problems, even with our lack of understanding, we're
close enough that a total rewrite right now would
put us back a long way and upset the business.
So let's push it across the line, and then we

(28:53):
can circle back and figure out what to do. So
that was that. Delivering that value, even though it kind
of felt yucky, even though we knew that there were
things that were going to need to change over the
long term, bought us some time and brought us some
credibility in the organization.

Speaker 1 (29:10):
And I want to circle back to the point you
made about moving too quickly, or too many changes happening
at one time. And I think that's something that a
lot of developers might start to experience soon with the
whole removal of jQuery from Bootstrap 5. So no,
not my jQuery, no.

Speaker 5 (29:35):
Sorry.

Speaker 1 (29:35):
So the idea is that Bootstrap 5 no longer has
a jQuery dependency. So you might plan to upgrade your
Rails application's CSS framework from Bootstrap 4 to Bootstrap 5,
and you might say, like, hey, while we're making
this change, why don't we go ahead and rewrite a

(29:56):
lot of our JavaScript that was also jQuery dependent. And
so I would say, instead of doing that, do one thing.
Either first remove all your jQuery dependencies within your application
and just have jQuery be a dependency of Bootstrap, and
then in another iteration, upgrade your Bootstrap version, removing

(30:18):
jQuery entirely. Or do vice versa, where you do your
Bootstrap upgrade first and then you do your jQuery
removal from your application as a dependency. But trying to
do both side by side is too big of a task,
you know, for one person or one team
to do right away. I would just handle one thing
at a time and move slower. You're going to say, okay,

(30:41):
what broke this? Was it the new Bootstrap framework

Speaker 5 (30:44):
That broke this?

Speaker 1 (30:45):
Or was it our jQuery rewrite? So that way you're
gonna be able to identify a lot more problems quicker
before they are reported to you by the customer.
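(For readers following along at home: the sequencing Dave describes might look something like this in a Rails app's Gemfile. This is only a sketch; the gem names and version constraints are illustrative, so check your own lockfile and Bootstrap's migration notes.)

```ruby
# Iteration 1, its own PR, verified on its own: application code stops
# using jQuery; Bootstrap four still needs it, so it stays in the bundle.
gem "bootstrap", "~> 4.6"     # still expects jQuery internally
gem "jquery-rails"            # kept only for Bootstrap's sake now

# Iteration 2, a later PR, after iteration 1 has shipped and been verified:
# gem "bootstrap", "~> 5.0"   # jQuery dependency gone entirely
# ...and jquery-rails can then be deleted along with it.
```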

Speaker 2 (30:55):
Absolutely. I think one of the most experienced, one of the best
engineers I've worked with, his name is Justin Hart. He
did the first commit on the
monolith that I work on, and every time I
work with him, that is always my experience, like,
we're making this change, here's what we expect, and we're
going to do it, and then we're going to verify

(31:16):
it and then we're gonna move on. And I'm always like, oh,
why don't we do this, this and this, and he's like, no, no, no,
just this one thing. And it's illuminating, right, it's illuminating.
I think it's helped me as a developer in the
projects that I'm doing now, realizing like, Okay, this thing
that I that I want to do, it's actually like
four different things, and I'm going to do the
first thing first, verify it, and then move on

(31:39):
from there.

Speaker 4 (31:39):
If you or all your loved ones are affected by the
removal of jQuery from Bootstrap, why not check
out the excellent course from devchat.tv, the
You Don't Know JavaScript Yet thirty day challenge, available with
the link in the episode show notes.

Speaker 5 (31:55):
We weren't scheduled for an advertisement there.

Speaker 4 (32:01):
Just trying to just trying to just trying to keep
a ship afloat.

Speaker 6 (32:04):
Man, come on, uh, I was gonna so this goes
along for me with.

Speaker 4 (32:10):
No no, no, no no no no no no no no no.
We have to do jQuery now because this is really
affecting me. I mean, this is this is terrible news.
I literally can't write JavaScript without putting a dollar sign
in front of everything.

Speaker 5 (32:23):
You wanted to have a.

Speaker 6 (32:24):
dollar sign? You just need to assign something to
the dollar sign. That's all.

Speaker 4 (32:28):
One of the reasons I started using maybe not jQuery
was because it has a little dollar sign in front
of the data structure, and that kind of really makes
me feel at home. Plus, you know, if it
feels like money, you know, when I type the dollar
sign in, I feel like, yeah, this is going to
make me a millionaire.

Speaker 5 (32:43):
Oh, I'm sorry.

Speaker 1 (32:44):
So I guess while we're on JavaScript, I think another
premature optimization, or rather, I like to call them premature
de-optimizations, is creating a new Ruby on Rails application with React,
the dash dash webpack equals react flag. Just throwing that in
there for the obligatory bash on React.

Speaker 5 (33:03):
I do like Stimulus.

Speaker 6 (33:04):
I do feel like I do feel like we can
we can let React go now.

Speaker 2 (33:08):
Yeah, I mean, I don't know. Have you all checked
out Basecamp's email app, Hey? Hey.com?

Speaker 4 (33:15):
Mm hm, oh yeah, yeah, what do you think?

Speaker 2 (33:19):
Yeah? So I've been, I've been using it, and obviously
I'm a Basecamp fanboy, but I think it's, it's
pretty amazing what, what they're able to do with just
HTML over the wire for the most part, and not,
not relying on some of the frameworks that have come
out recently. I also think it's super interesting, like, the twenty
twenty Ruby on Rails community survey, if you look, the one

(33:41):
put out by Planet Argon. If you look at,
like, what JavaScript libraries people are using, so I was
expecting that number one would be React and maybe number
two would be, I don't know, Vue, or like Ember
maybe. But number one on there is jQuery, and
it's like we're all in this, in this jQuery world,
and it's not bad. I love jQuery.

Speaker 6 (34:02):
So, so for all of you who, along with Luke,
are mourning jQuery, I'm telling you, Stimulus. Go check
it out.

Speaker 4 (34:10):
I looked at it, I don't understand it.

Speaker 5 (34:12):
I don't know, that's fair. I thought I
found it pretty easy.

Speaker 6 (34:17):
I thought it gave me everything that I felt like
I was getting from jQuery before, which is: a
thing happened on the page, and because that thing happened...
I mean, Jesse, talk about event-based architecture here. Come on,
this is exactly what JavaScript is, right, and jQuery especially, right?
Stimulus is exactly that: event happens, do something else because

(34:39):
this thing happened, right. Just saying, if you love that
event-based architecture style, Stimulus is that. It's just way
prettier, and I found it a lot easier to use,
and because I don't like dollar signs, I was happy
about it.

Speaker 1 (34:55):
Yeah. And Luke, if you want a bunch of different
Stimulus JS tutorials, check out Drifting Ruby, man. I
have a whole bunch on there.

Speaker 4 (35:02):
And it's a big plug episode. Everyone quickly plugged their thing.
Do you know what, Actually, Dave, I have already checked
that out and I still don't understand it, so I
need to check it out again. It's definitely me, I
tell you, I tell you what the problem is, right.
I've been leaning on j Query for so long and
I'm talking about ten years, right, And it's not just

(35:26):
that I've got my whole arsenal of weird front end
stuff that I can pull in, and replacing that big
long list of handy widgets I know will solve this problem
is what I'm lacking. So yeah, basic DOM stuff,
fine, you know, ES6 all the way.

(35:46):
But if I want to do something weird, if I
want to do something fancy, the whole point of Stimulus
is it doesn't have weird fancy stuff. It's clean.

Speaker 6 (35:56):
So, so now that we've decided that jQuery is dying,
and hopefully our mourning period is almost over... it
is dead.

Speaker 4 (36:05):
I know it's dead. I'm just finding it hard to cope.

Speaker 6 (36:08):
Okay, fair So, Dave, you earlier had me a little
bit on this train and I kind of wanted to
go back to it or whatever talking about like changing
multiple things because there's so many places where we can
find ourselves tricked or just like seduced into it right
where it just seems like the right thing to do.

(36:28):
So, so the thing that I like to
tell people, because it made total sense to me when
I first heard it, was, you know, Kent Beck's like,
first you make the change easy, then you go and
make the easy change, right. And the important thing about that,
right, is there are steps here, right. Like, yeah, dude,
engineering is all about taking those, like, tiny steps and

(36:50):
like you said, testing, right. And we all know what
happens when we change a bunch of things at once,
and then we all do it anyway, and then, yeah,
that's how we go down the road of mistakes.
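(A toy sketch of that Kent Beck line, with invented names: the refactor lands as its own verified step, and only afterward does the actual change happen.)

```ruby
# Step 1, its own commit, verified on its own: make the change easy.
# Suppose the old version was a growing if/elsif chain; refactoring it
# into a data table changes no behavior, which tests can confirm.
DISCOUNTS = {
  "student" => 0.10,
  "senior"  => 0.15,
}.freeze

def discount_for(customer_kind)
  # Unknown kinds get no discount, same as the old else branch.
  DISCOUNTS.fetch(customer_kind, 0.0)
end

# Step 2, a later commit: make the easy change. Adding a "veteran"
# discount is now a one-line diff to DISCOUNTS; nothing else moves.
```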

Speaker 5 (37:03):
Engineering is this.

Speaker 6 (37:04):
It requires self discipline and whenever we don't have self
discipline or whenever we you know, just break it. Because
for human beings that stuff happens.

Speaker 5 (37:12):
Yeah.

Speaker 2 (37:13):
I almost feel like I go into, like, a meditative
state almost. It's kind of like the flow state, kind of.
And there are times where it's like I pick up
a story and I say, okay, yeah, this is easy.
I can just kind of, like, half think about it
and do it. And when that doesn't work, which it
almost never does, I have to, like, get into this

(37:34):
like, very focused, I'm-making-this-one-change, verifying state.
It's almost like the cartoon Avatar going into the
Avatar state, if you guys know that amazing cartoon.

Speaker 5 (37:45):
Yes, I'm familiar with it, Luke, I.

Speaker 2 (37:48):
Highly recommend it.

Speaker 4 (37:49):
Well, it's a cool Avatar and.

Speaker 2 (37:50):
The Last Airbender. I think it came out like ten years.

Speaker 4 (37:53):
Ago, fifteen years ago. The guy with the face,
the face tattoo.

Speaker 5 (37:56):
He's got an arrow, yeah, exactly, you know exactly.

Speaker 4 (37:59):
What right, I will check it out.

Speaker 5 (38:02):
I've seen that art too, he looks like he's really angry. Exactly.

Speaker 4 (38:06):
He's got no hair, right, yeah, bald head. Yeah that's
the guy. Yeah, all right, all right, I'm a huge
anime watcher too, so that's a dangerous gap in
my knowledge.

Speaker 6 (38:18):
Yes, So anyway, it's like you go into the state
just oh Jesse, you were you were saying, you go
into the state when when the first.

Speaker 2 (38:27):
And unfortunately for me, maybe it's because of my age
or whatever, but getting into that really focused state, actually,
that costs something, right? It takes like a little bit out of you.
You can't maintain that state forever. There's like a mana pool,
almost, of how long, for me, I
can be in that extreme focused state. And I think

(38:48):
that when there are times where I look at a problem,
I'm like... I almost, like, ask myself, do I
need to be in this highly focused state
to get this thing accomplished? And more often than not
the answer is yes, but my initial answer is no,
and so I end up wasting time by not doing
things as systematically as they need to get done. Am
I alone on this? Nobody else, when they,

(39:12):
when they code?

Speaker 4 (39:15):
I find that, Yeah, people talk about this flow state
and I was.

Speaker 2 (39:19):
I was.

Speaker 4 (39:20):
I was not a believer for a while, until I
got something really hard to work on, you know. And
it's those problems where you're at the kind of limit of
what you can do, where you're thinking, can I
actually write this? That is when I find you get
this kind of periods of intense concentration, and obviously you
don't want to be, you know, at the edge of

(39:41):
your ability all the time. You want to kind of
line up the nice easy jobs, or things tend to
go disastrously wrong. But one thing I have found is
that, you know, when you're doing your really hard problems
and you're like, can we do it, you know,
is it possible? You always have to come back
and redo it. Once you prove that it's possible, then

(40:01):
you have to hit it again, and then you come
up with the one. So all of the stuff I've done
in that kind of state of intense concentration tends to get
thrown away, but it moves you forward down the
road.

Speaker 6 (40:16):
I always call those my naive implementations, and it's totally
cool to write really terrible code during a naive implementation.

Speaker 5 (40:24):
That's not the point of it.

Speaker 6 (40:25):
The point is to get from having nothing to having
a working thing. The important thing is that after your
naive implementation, you decide, you know, decide whether it's worth
refactoring, when you're going to refactor, all that kind of stuff.

Speaker 2 (40:41):
Yeah, I love that idea of a naive implementation. And
I think the Akka microservice was our naive implementation.
We didn't consider it at scale at all. We didn't
consider the makeup of the team changing. And as a result,
when we went back to fix the naive implementation, right
when we wanted to get rid of the double loops

(41:02):
or whatever, we ended up moving to Java. So from
Kotlin to Java and from Akka to Camel. And the
reason that those tools made sense is because they were
very common in our company, and we didn't have
to reinvent the wheel. We weren't seeing errors for the

(41:23):
first time in the entire company. There was sort of
a knowledge base to draw from. So the current state
is that we have a Java, Camel, Spring microservice
talking to the Rails monolith. It's very stable, it's performing
super well now. And yeah, I guess
a question I have is, like, how can we speed

(41:44):
up the process of getting from our naive solution to
the more learned solution.

Speaker 6 (41:51):
I think, in my opinion, if I had an answer
for that, I would be I would be a super
wealthy person because in my experience, usually what happens is
when you get to the point that you're calling something
a naive solution, right, it's usually because you recognize that
there's a problem with it, right. And one of the

(42:12):
things that's going on, I feel like is that you're
sort of you're kind of burning yourself out on the problem.

Speaker 5 (42:18):
At that point when you when.

Speaker 6 (42:19):
You sort of have this realization, you're also at the
point that you're kind of burning yourself out in this problem.
So if you as an organization don't have the resources
to sort of like swap out people, you know, bring
somebody in it's fresh things like that, right, or or
kind of go back to a huddle and you know,
somehow become re energized. Like all the things that we

(42:41):
can do are almost all like social interactions, not engineering interactions, right.
And so if you have a culture where you can
kind of deal with that, I feel like people can
kind of turn around and refactor and make reasonable choices
or whatever. But if you don't, like I feel like
it's really hard. It's really hard for you, as somebody

(43:04):
who just burned yourself out pushing something across the finish
line that was really hard, to then turn back around
and be like, I'm throwing it out and I'm going
to redo this really hard problem again, right like. That's
just a really hard thing to do.

Speaker 2 (43:16):
I think it's interesting that you say that, because at
the point that we made that decision, the problem didn't
seem hard anymore. At the point that we were like,
you know, we can just do this in Java and Camel,
we had wrapped our heads around the problem enough that
we really felt like it wasn't hard anymore. And maybe
that's what it takes.

Speaker 6 (43:35):
You also said a really important word there, which is
we right. So one of the things that you did
as a company to recover from this problem is you
went back to this huddle and you said, hey, I
made some mistakes, and then your team said that's okay, Jesse,
we as a team are taking ownership for this, right like,

(43:56):
and we as a team are going to fix this
right like that. That's what a lot of that communicates
right here. And when you do that, like, that's one
of those re energizing moments or you know things, and
then those decisions become a lot easier.

Speaker 4 (44:09):
In my experience, it breaks my heart every time, every,
every time I have a git commit with, kind of, more
lines removed than added. Oh man, those are the
proudest, right? I know some people
like to take the code out and they're like, oh,
I've knocked out, you know, loads of that repeated code.
I just think, oh, why couldn't I get it right

(44:30):
the first time?

Speaker 2 (44:31):
That's interesting. The person who taught me how to code,
one of the first pieces of advice that he gave
me was that you should write a lot of code
and throw a lot of code out, and that's how
you'll get better at coding.

Speaker 4 (44:45):
That is very, very sensible advice, very sensible advice, and
by that standard, I've gotten amazing at coding over the years.
Now, the context here is a big mistake. Just
to be clear, this big mistake has had a happy ending.
So this isn't, this isn't the kind of mistake where I
got fired. This is a big mistake we pulled through
and it all worked out, am I right?

Speaker 2 (45:06):
Thankfully? Yeah? So I think so we did a couple
of things. I think that helped us. So the first is,
as John said, we were open about these things. We
didn't try to hide that things weren't going as well
as we had wanted them to. And I think that
Ibotta has a pretty strong culture, in the sense that
we're not trying to throw people under the bus. In engineering,

(45:28):
if something crashes, it's not who's going to get fired,
it's like, okay, how do we learn from this? This
is a mistake that cost us some money. How
do we make sure that that money is actually teaching
us something. So that was part of it. And then
I think also we did a good job of communicating,
like to external stakeholders. We communicated to the finance team,

(45:49):
who were kind of one of the main main consumers
of the data that we were producing, and you know,
really went through in detail: here's where we're at, here's
the timeline, like, updating them. We were checking in with
them all the time and just keeping expectations in line.
I think really helped us out. So even though we
delivered a little bit later than I think we thought

(46:11):
we would at the onset of the project, because we
were able to communicate that we were not fired. And yeah,
I mean not only that we've hired more people, we're
still hiring. And if you're thinking about getting into the
mobile coupon space, there's a ton of really cool problems

(46:32):
even if you're not passionate about mobile coupons, and you
might get to talk to me in the interview process,
which will be fun.

Speaker 4 (46:38):
Speaking of interviews, I see a smooth transition there. Speaking
of interviews, I understand that you have a project which
you call MINASWAN interviewing. That is... I have no
idea what MINASWAN stands for. Well, I see a lot
of people saying it in the Ruby community, but I
just do not... it's one of these weird in-jokes
I think they have in the Ruby community.

Speaker 1 (47:00):
It's so nice. I know, I know, I know, dry humor, Luke,
I never pick up on your sarcasm.

Speaker 5 (47:09):
Good lord, I'm just over here, like, like, dying at everything.

Speaker 4 (47:14):
That's the What was it, Dave?

Speaker 1 (47:16):
Sorry, were you serious? Yeah, go on. Matz is nice,
and so we are nice.

Speaker 5 (47:21):
We Yeah.

Speaker 4 (47:21):
What's the best thing about the Ruby community? It's a, it's
a really, it's a really great language, and it's a
really strong community, really great events, even though obviously the
events this year have been a bit difficult. So how
are you carrying that culture, that tradition, over into the
interviewing process? What's your gambit?

Speaker 2 (47:43):
Yeah, so maybe, maybe everybody has experienced, or a lot
of folks have had an experience in, let's just say,
a not-MINASWAN interview, an interview that's maybe
a little more hostile than we'd like. And I think,
I think a lot of us have experienced kind of
broken interviews where it feels more like the person on

(48:04):
the other side of the table is trying to prove
how much smarter they are or how much better they
are coding than I am. And that's not nice. Another
thing that's not nice is asking someone to do an
inordinate amount of work outside of work that's not paid
in the form of like a take home project. So

(48:25):
I've done take home projects that have taken me in
entire weekends, multiple days, and that's uncompensated work, and that
that can bias your process against people who have outside
of work commitments like families and I just you know,
who don't want to be working all the time. So
I thought it would make sense to kind of take

(48:47):
the best part of the Ruby community. This idea that
Matt's is nice, and so we are nice and apply
it to interviewing. Let's like actually be nice to the
people that we potentially could be working with. And you know,
the pandemic has been terrible in so many ways, but
it did offer us this opportunity to kind of dramatically

(49:08):
rethink what our interview was.

Speaker 4 (49:10):
Going to look like.

Speaker 2 (49:11):
Because we're not coming into the office. Everything has to
be remote. And basically, our HR team and our leadership
were like, how how can we do this? We've only
been accustomed to bringing people in, asking them super tricky
things that they have to whiteboard. How can we translate
this to a remote interview? And this is what I proposed,

(49:31):
And this is like what we landed on, which is
an interview not meant to trick the interviewee. It's an
interview meant to simulate what the first couple of days
of work is going to look like. And it's supposed
to give give us as an organization, a sense of
how much we would enjoy this person as a colleague,

(49:51):
how successful they'll be. And the message that we're always
trying to send is not, hey, I'm so much smarter
than you because I understand recursion, because I understand how
to do whatever, this type of model, this type of
data structure. The message is, you're going to be successful
on our team, and you're going to like working with us,
and we're going to like working with you. So the

(50:13):
way that we do that is basically by giving the
person a sample of code from our domain, and it's
highly simplified, and we ask them to
just read the code and we say, what is this doing?
What do you like about this code? What do you
not like about this code? And it's not a bug-finding
adventure, and we're not asking them to find where

(50:34):
where an error is going to be secretly raised, or
why a test is failing. That's, that's kind of beyond,
like, you can't really do that in a twenty or
thirty minute conversation. We want to hear how this person
would approach an alien code base, which is what their
first task is going to be on the job. Then
we present them with some data, you know, some kind
of some award events from our system, and we ask

(50:56):
them to manipulate that data with a pretty simple
algorithm. We even tell them what the algorithm is,
and we ask them to code it. And we say specifically,
we don't care about the answer here. The answer is
not interesting. We just told you what the answer is.
We actually want to see what it looks like when
you code. What's your approach? Are you systematic, right?

(51:17):
Are you making guesses about what should happen and checking yourself?
Those are the things that we're looking for. We're not
looking to see, you know, do you
know this random algorithm from your computer science education that
you'll never use as a, as a web developer at
Ibotta? So those are, those are the big pieces.
And I'm stoked because I got invited to talk at

(51:38):
RubyConf, which is coming up in November, where I'm
going to be kind of outlining this. You all got
a preview of the content there, exclusive

Speaker 5 (51:46):
To Ruby Rogues. Awesome.
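(To make that interview format concrete for readers of the transcript: here is a hypothetical stand-in for the kind of exercise Jesse describes, in Ruby. The event shape, the field names, and the aggregation rule are all invented for illustration, not Ibotta's actual interview material.)

```ruby
# The candidate is given the data and told the algorithm up front:
# "total the award amounts per user." The interesting part is watching
# how systematically they get there, not the answer itself.
def total_awards_by_user(events)
  events.group_by { |e| e[:user_id] }
        .transform_values { |user_events| user_events.sum { |e| e[:amount] } }
end

events = [
  { user_id: 1, amount: 0.50 },
  { user_id: 2, amount: 1.25 },
  { user_id: 1, amount: 0.75 },
]
total_awards_by_user(events) # => {1=>1.25, 2=>1.25}
```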

Speaker 6 (51:48):
I'm definitely curious as you guys implement this, like when
you kind of come back and say, well this works
and this didn't work, or or this is like this
is how we sort of had to come on with
a way to you know, because this is a subjective thing.
This is this is how we came up with a
way to to make this more objective than it originally was,

(52:10):
you know, and that helped us. Like, definitely interested in
seeing this. I mean, all of the
pain points that you're talking about, right, all this homework
that we have, all these, you know, coding challenges that
we do. I feel like,
as a community, we talk about how
they don't work, but we don't have alternatives yet,

(52:30):
and so people continue to use them regardless because they're like, well,
I don't know what to do, and the people who
are lost, you know, just run back to where they
came from a lot of times, right, It's like the squirrel.
It's like in the middle of the street and the
car's coming and it's like, shoot, I gotta go all
the way back. So I kind of feel like we
do a lot of that. Still, Yeah, there have been
there have been various things like talked about. I remember

(52:51):
I think I've heard various people from thoughtbot on various
podcasts, like, talk about the fact that, like, they,
they have you come and spend, like, I don't remember if
it was like a day or whatever, half day, but
like a lot of time with them pairing, right. I
know that Basecamp does that as well,
or something similar.

(53:12):
They'll, they'll actually... I remember hearing they had an article
about, they did some homework or something recently. They
pay for the homework, though. Yeah, that's okay. Yeah, I
mean there's there's people that are like exploring around these edges,
but we don't really have like a good way forward,
I feel like yet.

Speaker 5 (53:27):
Still, so I'm super interested to see how this turns out.

Speaker 2 (53:31):
Yeah, I mean so far, you know, obviously hiring has
been slower than typical for, for Ibotta, but we are
still hiring, and so far, the feedback from candidates has
been really interesting, because, you know, we'll get unsolicited emails
about how much people like the process and how they
hope that this is the direction that the industry goes in.

(53:53):
A lot of folks who are interviewing with us are
interviewing at other places as well. And I see this
as, like, a competitive, competitive advantage for us, right? Does
this buy us, like, ten thousand dollars in salary, or
five thousand dollars in salary, or something like that? Like,
does this make our company more? You know, once you
get to a certain point, there's, like, diminishing returns on
all these different levers that a company can offer a developer.

(54:17):
And I think that this is maybe an unexplored lever,
or a lever where there's a ton of room to gain.
And when someone goes through our process and comes out
feeling like, hey, I'm gonna kick butt there, the people
there are super nice, they didn't make me feel like
an idiot, and they contrast that with other interview
experiences that they've recently had, hopefully that will play in

(54:37):
our favor.

Speaker 4 (54:37):
That is certainly not the current approach to coding interviews.
I've been hearing recently a million more people talking about
how to make coding interviews harder. If you go
on something like Hacker News, you know, they discuss
which framework they need to insert under the fingernails
of potential applicants. And I hear if you go to

(55:00):
Facebook now and apply for a job, then you have
to bring along your PhD in computer science. They then
burn it in front of you, mix it into old coffee,
and make you drink it as part of the interview process.
So it's really I think it's really something that the
tech community needs. We have been doing a few episodes
about trying to make community more inclusive, trying to bring

(55:21):
in people, trying to keep people in, and that sounds
like a really great idea for talk. You know, why
don't we be be nicer to people, try and get
the most out of them during interviews and stead of
subjecting them to torture.

Speaker 2 (55:35):
I love that image of shoving a framework under someone's fingernails,
Luke. Right. The thing that I noticed, and I've done
a lot of interviews. I have done, I would say,
easily two hundred interviews over the past two years. We
did a ton of hiring, so it's crazy. And the

(55:57):
thing that I noticed is that when we were asking
people to whiteboard, right, to answer these kinds of tougher
algorithm questions, I didn't, I didn't always see a
one-to-one correlation between the people who crush that
kind of question and the people who I really enjoy
working with, in either direction. Right, there are false positives and

(56:18):
false negatives, and I just found there to be way
too much noise in that type of diagnostic tool for
me to really make a strong decision that I felt
confident with that proved out over time, and I'm hoping
that this is a remedy for that.

Speaker 6 (56:33):
Yeah, I mean you always have to check to make
sure that whatever test you're giving, right, like is that
you have to decide what you're testing for and you
have to say, is the question that I'm asking actually
testing for the thing that I'm testing for? And too
often we jump to this conclusion that, oh, yeah, this
sort of this sort of is related. Therefore it'll let

(56:54):
me know when really you're just you're making a subjective
judgment call again and calling it objective.

Speaker 5 (57:00):
So yep, it is.

Speaker 6 (57:01):
It is a thing that's very painful, and maybe maybe
we should we should focus a little bit on this
in the future and have a more in depth chat
about this. I feel like that would be an interesting subject.

Speaker 2 (57:14):
Yeah, there was a RailsConf talk last year, I believe.
The guy's name is Eric, I'll throw it in the
chat in a second. He's from Test Double, and
Test Double is, like, a Rails consultancy
based in Columbus, I believe. And he talked about their
hiring process, more in the sense of, our goal

(57:36):
is to build a process that lets candidates show off
their strengths, as opposed to helping us find their weaknesses.
And that really informed our process at Ibotta.
We're trying to find people's strengths and trying to figure
out where they're going to be able to make the
most impact in our company.

Speaker 5 (57:52):
Yeah, I have.

Speaker 6 (57:53):
I have gone, for the past, like, I don't know,
four or five RailsConfs, I've gone to basically every
single talk that was sort of, like, related to the
subject. I do, I care about it quite a bit,
and I just kind of hope that we get to
a better place. Yeah, like I said earlier, my
feelings on this I think are basically what I said
earlier that I think that a lot of people are

(58:15):
thinking about this, and I just don't think there's a
clear answer yet. But I'm super interested in, like, everything
that people are trying to experiment with. That's how
I think we're going to get a little better.

Speaker 1 (58:26):
Yeah. Well, hey, it looks like we're coming up on
the hour, so we need to move things along. Jesse.
If people want to get in touch with you, where
should they go online?

Speaker 2 (58:36):
And look, I would say you can find me on Twitter.
I tweet from Planet Efficacy, So if you can, if
you can spell that you can find me and you'll
find a lot of interesting movie content, political outrage, and
Vermont progressive rock on that Twitter stream.

Speaker 1 (58:56):
Awesome. Well, let's go ahead and move over into Look
you want to kick.

Speaker 5 (59:01):
Us off, I would.

Speaker 4 (59:03):
My first pick is a version of Windows ten. A
bit of a strange thing to pick on a podcast,
but this is called Windows ten Ameliorated Edition. I found
it featured on a popular YouTube channel not long ago.
And this is a version of Windows ten with the

(59:25):
spyware taken out, which also incredibly boosts the responsiveness
and performance, because it's not doing any async network calls
back home in the background, if you know what I mean.
This is the first operating system I've ever found where
to download it. I had to get a bit torrent

(59:47):
link from a Telegram channel. So if you want some
excitement in your operating systems, check out Windows ten Ameliorated Edition.
Never seen anything like it.

Speaker 6 (01:00:00):
Just just wanted to confirm something here really quick. Look,
is this a legal version of Windows ten.

Speaker 4 (01:00:05):
It's got quite a big section on legality on the website. Okay,
the website does make it clear that you do need
to have a Windows ten license in order to legally
use the Windows ten Ameliorated Edition, but I
mean, come on, BitTorrented through a Telegram channel. Wow,

(01:00:26):
that's, that's quite... anyway, it's a cool
little project. There's lots in the faq about their rationale
behind it, and it does fly when it runs, it's
really quick.

Speaker 1 (01:00:39):
I heard there were some issues with it with not
being able to do certain things because it can't call
home to Microsoft even some like simple rudimentary things that
you would think that would be possible but just kind
of breaks it.

Speaker 4 (01:00:53):
Yeah, I mean, you could call it Windows ten lobotomized.
I mean, there's barely anything there at all.

Speaker 5 (01:00:58):
But it's it's.

Speaker 4 (01:00:59):
Oh well, it's a lot of fun.

Speaker 5 (01:01:01):
It flies doing nothing awesome.

Speaker 1 (01:01:04):
Well, John, you want to do some picks?

Speaker 6 (01:01:06):
Yeah, so I have a different pick this week too.
You may... by the time this comes out, I
have no idea if people are even gonna still be
playing this game, but I have been playing and streaming
and absolutely having a blast playing.

Speaker 5 (01:01:21):
I guess that's redundant. Among Us.

Speaker 6 (01:01:23):
So if you're not familiar with it: if you're familiar
with games like Werewolf or Mafia, or games of that
nature that people have compared this to, where you
sort of have townsfolk, and then you have people
pretending to be townsfolk who are trying to kill
everybody off, and your goal as a

(01:01:45):
townsfolk is to try and find the, well, in this game,
the impostors among you. This is set in space. But
yeah, it's an absolute blast. I'm really not sure what
to say about it other than it's a great social game.
It's pretty awesome, so I highly recommend checking it out.

(01:02:07):
I definitely have been doing a lot of streaming of
it lately, so yeah.

Speaker 1 (01:02:11):
I'll jump in with a few picks. First pick: bamboo flooring.
I love bamboo. Instead of the crummy old
chair mat, the thin plastic one I was
using for a long time, I finally went to Home
Depot and got some bamboo flooring, and I glued
it to a piece of plywood. I now use

(01:02:33):
that as my floor mat on my carpeted floor, so
my chair can just slide around on it. It's really
cool, and it's been a life-changing upgrade here,
so it's pretty awesome. The second thing I
pick is the Elgato Key Light. I got those
for my streaming setup, and they make a huge difference.

(01:02:54):
So you won't be able to tell on the podcast,
but this is with my lights on, and this is
with them off. It makes a really big difference
in quality. Having proper lighting for any
kind of video work is an absolute must.

Speaker 4 (01:03:08):
Can you do that again, Dave? Just
so everyone can see on the podcast. Yeah, I've got
to say, that is quite striking. Is that the same
company that did the shortcut board?

Speaker 1 (01:03:17):
Yes, the Stream Deck.

Speaker 4 (01:03:19):
Yeah, that's a pretty cool thing as well.

Speaker 1 (01:03:21):
Big Elgato fan. And Jesse, do you want to
jump in with some picks?

Speaker 2 (01:03:25):
Yeah, I've got a couple of picks in different categories. So
first is a talk that I watched recently that maybe
you all have seen already, but I recommend revisiting it:
Sandi Metz's RailsConf 2019 keynote, which
is called Lucky You. I think it's especially timely
as we approach election season and are thinking about

(01:03:49):
equality and issues of inequality in our country. So I
highly recommend that talk. It's amazing, as all Sandi Metz
talks tend to be. Then, when the pandemic started and
we started working remote, I made a small investment in
my work-from-home setup that I highly, highly recommend.
I went out and got an ErgoDox split keyboard,

(01:04:12):
having had some wrist pain on the keyboard I was using.
I know keyboards can be a whole other podcast,
but check out ErgoDox. They make a really, really
good product that's worth the price, and I've been a
huge fan of it since getting it. And then my
final pick, there's a book that just came out. It's
a guilty pleasure. It's called The Trouble with Peace by

(01:04:34):
Joe Abercrombie, a British gentleman, and it is in the
grimdark genre of fantasy literature. I highly recommend it.

Speaker 1 (01:04:43):
Awesome. Well, thank you for those picks, and just be
sure to post them into our chat section here so
we can include them in the show notes. Well, Jesse,
thank you for coming on today. It was a lot
of fun. I love these kinds of talks, where we
can humble ourselves and talk about past mistakes
and what we learned from them, so thanks again.

Speaker 4 (01:05:03):
This was awesome, really interesting.

Speaker 2 (01:05:05):
Long-time listener, first-time guest, and
this was awesome. I appreciate it.

Speaker 5 (01:05:11):
Yeah, thank you for coming on.

Speaker 1 (01:05:12):
Well, that's all for this episode everyone, Thanks for listening.

Speaker 5 (01:05:16):
Take care, everybody.