
January 27, 2026 58 mins

Rails upgrades don’t have to feel like crossing a minefield. We sit down with Ernesto, founder and CTO of FastRuby and Ombu Labs, to unpack a pragmatic path from legacy Rails to Rails 8.1 and how AI can accelerate the work without sacrificing quality. From Ruby 4.0 landing over the holidays to a near-release of RubyCritic 5.0, we dig into the tools, the traps, and the test-suite realities that make or break an upgrade.

Ernesto walks us through a free AI-powered upgrade roadmap that analyzes your repo, dependencies, and code to chart a step-by-step plan—covering everything from Rails 2.3 onward. We compare it to their paid roadmap that adds time and cost estimates for stakeholders who need budgets before they commit. Along the way, we talk strategy: why 5.2 marked a turning point for smoother jumps, where major versions still bite, and how to avoid the “big bang” deployment that topples fragile apps.

AI shows up as a sharp tool, not an autopilot. Ombu is experimenting with agent-driven PRs that draft changes while humans review and refine. We assess hallucinations (better, not gone), verbose code that bloats review cycles, and the mixed evidence on productivity. Then we get practical about safe AI adoption: organization licenses, editor integrations, and enforcing your existing quality gates like RuboCop, Reek, RubyCritic, and coverage checks so “faster” still means “safer.”

We also celebrate community. Philly.rb is back in person at Indy Hall with talks on AI agents and Hotwire Native, and we swap tips on discoverability, speaker sourcing, and venues. Rails remains a strong choice for startups and teams because convention over configuration helps both humans and AI produce sane, testable code. If you care about getting upgrades right and using AI responsibly, this conversation offers clear steps and real-world guardrails.

Enjoy the episode? Subscribe, share it with a teammate wrestling with an upgrade, and leave a quick review so more Rubyists can find us. Have a talk idea for Philly.rb? Reach out; we'd love to host you.


Judoscale
Autoscaling that actually works. Take control of your cloud hosting.

Honeybadger
Honeybadger is an application health monitoring tool built by developers for developers.


Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.

Support the show


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
SPEAKER_00 (00:00):
Hello everyone, and welcome to another episode of Code and the Coders Who Code It. I'm your host, Drew Bragg, and I'm joined today by repeat coder, Ernesto. Ernesto, for anyone who doesn't know who you are or hasn't listened to your last episode, would you please do a quick introduction for the listeners?

SPEAKER_01 (00:16):
Yeah, hey, Drew. Great to be here. Ernesto, founder and CTO at fastruby.io and ombulabs.ai, an agency that focuses on AI solutions for businesses, and also co-organizer of the Philly.rb meetup with Drew.

SPEAKER_00 (00:39):
Yeah, my co-organizer, the man doing actual work to make it happen. I just hoot and holler about it when given the opportunity. Ernesto does all the real work, the logistics, finding us a really cool place to host the meetups in person. And we are going to talk about that. We're also going to talk about some other things that you have going on at Fast Ruby and Ombu. For anyone new to the show, the way this is going to work is I'm

(01:02):
going to ask Ernesto three questions. I'm going to ask him what he's working on. That can be for Philly.rb, for Ombu work, for Fast Ruby work, or even side project work. I'm going to ask him what kind of blockers he has. If he doesn't have a current blocker, what's a recent blocker or frustration that acts like a blocker? And then we'll wrap up the show by asking Ernesto what's

(01:24):
something cool, new, or interesting that he's recently discovered, gotten to work with, played with, or built. It doesn't have to be coding related, but this is Code and the Coders Who Code It, so it absolutely can be. So Ernesto, what are you working on?

SPEAKER_01 (01:38):
This is the beginning of the year, so I'm working on setting goals and priorities for the rest of the year. Also, Ruby 4.0 was released around Christmas, so on the open source front, I'm working on releasing RubyCritic 5.0 with Ruby 4 support.

(02:01):
It's almost there. By the time this episode is released, it should be out. And for Fast Ruby, we're trying to see what other tools we can make available for free to make people's Rails upgrades go smoother than in 2025.

SPEAKER_00 (02:19):
Yeah, Rails upgrades continue to be a thing. And you guys recently released a bit of an AI... how would you describe it? You have an AI tool for upgrades that's available on your site, and people submit for it. How would you describe it?

SPEAKER_01 (02:35):
Yeah, so we offer a free AI-enhanced Rails upgrade roadmap for anybody who wants to hook up their GitHub repository to our tool. Basically, it's all automated. We look at your dependencies, we look at your source code, and we look at all the things you have to do to upgrade from Rails X to

(02:59):
Rails Y. And I think right now we cover all the way from Rails 2.3 to Rails 8.1. So anybody can just go to fastruby.io/automated-roadmap and get a free AI roadmap for their project.

(03:22):
This was basically a paid service, and it's still a paid service that we offer. If you go to fastruby.io/roadmap, you can still pay us to get a roadmap. The big difference between the free one and the paid one is that the paid one includes estimates on how much

(03:44):
it would cost to do the upgrade with Fast Ruby. For big corporations, that's usually the first thing they want to know: how much is it, and how much time are you going to take? Because it takes a lot of effort to calculate that, we do charge for that one.

SPEAKER_00 (04:04):
That makes sense. I would love to see a roadmap for a Rails 2.3 upgrade to Rails 8.1, right? We're on 8.1, or are we on 8.2 already? We're on 8.1. So 2.3 to 8.1, that has to be wild. Well, I guess the Rails 2 to Rails 3 jump is beefy.

(04:26):
The Rails 3 to Rails 4 jump is beefy. After five, I feel like it gets a little less chaotic, but those major version jumps are no joke to upgrade from and to.

SPEAKER_01 (04:38):
Yeah, yeah. Starting at Rails 5.2, it does get easier. The maintainers have done a great job adding deprecation warnings and making sure that things are as documented as possible. So Rails 5.2 is the first upgrade where we recommend you go 5.2 to 6.1 straight.

(04:58):
For any jump before that, we usually go just one minor version at a time. And we don't recommend you go from 2.3 to 5.2, skipping all these things and doing a huge big-bang deployment. Do not do that.
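The hop-by-hop strategy Ernesto describes can be sketched in plain Ruby. This is an illustrative toy, not FastRuby's actual tooling: the one-minor-at-a-time rule and the 5.2 → 6.1 shortcut come from the conversation, but the exact version series and recommended jumps after 6.1 are assumptions.

```ruby
# Illustrative sketch only -- not FastRuby's tool. Below Rails 5.2 we hop one
# minor version at a time; from 5.2 on, better deprecation warnings make the
# bigger documented jump (5.2 -> 6.1) practical, so the series skips 6.0.
# The post-6.1 hops here are assumptions, not FastRuby's recommendations.
RAILS_SERIES = %w[
  2.3 3.0 3.1 3.2 4.0 4.1 4.2 5.0 5.1 5.2 6.1 7.0 7.1 7.2 8.0 8.1
].freeze

def upgrade_path(from, to)
  i = RAILS_SERIES.index(from) or raise ArgumentError, "unknown version: #{from}"
  j = RAILS_SERIES.index(to)   or raise ArgumentError, "unknown version: #{to}"
  raise ArgumentError, "#{from} is not before #{to}" if i >= j

  # Each element is one upgrade hop: [current_version, next_version].
  RAILS_SERIES[i..j].each_cons(2).to_a
end

p upgrade_path("5.0", "6.1") # => [["5.0", "5.1"], ["5.1", "5.2"], ["5.2", "6.1"]]
```

Each pair is one deploy: upgrade, run the suite, fix deprecations, ship, repeat, rather than one big-bang jump.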

SPEAKER_00 (05:16):
Do it if it's just the silliest Rails application, but then they're probably not paying you to do the upgrade if it's just a silly little application. I'm assuming that most of the people who are saying "will you please do my upgrade" are fairly big, very mature projects that don't have a lot of developers available to do the

(05:38):
upgrade. They're all off making the code base even bigger.

SPEAKER_01 (05:42):
Yeah, that's the tricky part, usually. These are applications that have been around for more than 12 years, and they're up and running in production, making a lot of money for a lot of different businesses. And the struggle there is that the organizations have their product

(06:02):
roadmap and they want to prioritize that. And they see us as a service that can come in and do the upgrade with very little hands-on work on their end.

SPEAKER_00 (06:14):
Yeah.

SPEAKER_00 (06:15):
How has AI really impacted the upgrade work?

(06:22):
Like you guys were doing a lot of upgrades, so I assume it was a little bit of a "we know what we're doing, here are our steps, and we can kind of get away with just executing the bulk of it." Maybe there's a little edge case here or there that needs some special attention. But has AI really changed how upgrades are executed? Or were you guys already so efficient at it that AI is

(06:45):
more of a hindrance than a help?

SPEAKER_01 (06:48):
No, I think AI is getting better and better. At the end of 2024, we saw AI as a big risk for our business. So we said, okay, how can we flip this risk into an opportunity? And that's how this AI roadmap came to be. If anybody's going to be using AI to make the upgrades easier,

(07:09):
we want to be the leaders in that goal. So that has helped us get more business for the Ombu Labs side, and that's basically the AI consulting side. We have been doing a lot of internal tooling that uses AI to make the upgrades easier.

(07:31):
And we have been writing case studies about some of these things on the Ombu blog, which has helped us find new projects, especially in the AI space and in the Philadelphia area, where we're based.

SPEAKER_00 (07:46):
When you guys are doing the paid version, are there any times where you're like, this upgrade is way too risky? You have so much work to do to get this application ready to even be considered upgradable, so please go and do all of these things before we'll upgrade it. Or is it always like, we can upgrade it, but holy crap,

(08:10):
here's your bill.

SPEAKER_01 (08:12):
One out of ten times, I would say, that's the case. Sometimes we don't even take the roadmap when it's like that, because the main issue is your test suite, right? If you don't have a test suite, or if it covers 10% of your code base and you want us to do a roadmap, spoiler alert: at the

(08:33):
end of it, we're gonna say, hey, you need to improve your test suite before we begin, because we don't want to learn everything about your business to manually test that we didn't break anything. So the test suite is one of the main blockers we see as we talk to different companies.

SPEAKER_00 (08:50):
Cool. What else do you plan or think you could use AI for in general? You said you were working on seeing what other kinds of tools you can open up and give to people, but are you guys building a lot of internal tools, and that's how those tools come about? Is it just exploring, hey, what can AI do?

(09:12):
Where can we help tweak AI so that it's less of a slop generator and more targeted? How are you guys approaching it internally?

SPEAKER_01 (09:21):
Yeah, there's a ton of time that goes into research. We've been researching Claude Code and Claude skills to use them to upgrade applications for our clients. At the same time, we are working on our own internal tooling to basically have agents that submit pull requests to our client projects.

(09:41):
So we do have a vision that not 100% of the changes we ship this year are going to be human generated. We might want to get to a point where it's 50-50: 50% of the pull requests we send are AI generated and the others are human generated. But at the same time, we don't want to just have an agent

(10:03):
submit the pull request without the humans on our team reviewing it. So we see it as a race to get to a point where our AI tooling is good enough, so it's not just all of our humans doing the work. And depending on the versions, some tools are better than others.

(10:25):
We're still in the research phase, and we do want to turn the AI-generated roadmap into a tool that automatically sends pull requests to your application. I don't know yet when we're gonna launch something like that, but it's probably gonna happen this year.

SPEAKER_00 (10:42):
So Ombu Labs' whole domain is .ai. What other things outside of Rails upgrades are you using AI for?

SPEAKER_01 (11:37):
Yeah, we are doing a lot of discoveries. Basically, companies come to us because they definitely have a ton of FOMO, right? Fear of missing out. And they want us to tell them how they can leverage AI tooling to solve some of their most time-consuming problems. So last year we built a prediction model for a tool

(12:02):
for digital agencies, and that helped this client basically forecast whether an agency is in trouble and needs to do something about it. We wrote a case study, and anybody can find it on the Ombu blog. And sometimes the discoveries that we work on are more about

(12:23):
how you can get your organization to safely adopt AI tooling. How can you get your code editors to be smart and to also take into account some of the code quality standards that you already set, you know? So for the Ruby world, it would be: hey, okay, cool, you're using

(12:43):
Claude Code, but you already have Reek, you have Flog, you have RubyCritic, you have RuboCop, and there's a ton of work that went into defining these standards and wiring your project up in GitHub so that if you're submitting a pull request that significantly decreases

(13:07):
code coverage for a section of your code, then you should get a warning like, hey, cool, you added this feature, no tests. That's driving code coverage in the wrong direction. And sometimes we work with our clients to set up Claude Code or Copilot to hook into those standards and have their

(13:30):
engineering team use Claude Code while keeping all of these other things in mind.

SPEAKER_00 (13:37):
It's an interesting world we live in now with AI, right? Everyone wants to put AI in their app, and yet I do hear a lot of complaints like, holy crap, I don't want AI in this thing. Are companies really just talking to you about how they can use it internally to do better work or enhance their

(14:01):
productivity or what have you? Or are they also coming to you and saying, how can we put AI into our product, and where would it fit in well?

SPEAKER_01 (14:11):
Yeah, we are also working with companies that already have existing applications, not necessarily Rails. Sometimes they're using WordPress and they want to add a chatbot to their interface. And we come in and do a discovery. We say, okay, it's going to take this amount of time to build these features and solve this particular problem.

(14:34):
And then we basically build an application that integrates with whatever they're using already. It is a lot of fun, and it adds a lot of value. But for other organizations, it's more about safely adopting AI. They know their employees are using all these tools, and sometimes they're using their personal accounts and

(14:56):
feeding company data to a personal ChatGPT account. And they need us to come in and implement infrastructure to make sure everybody's using the same company license or the same company account, so that you're not constantly leaking confidential information to OpenAI or other services out there.

SPEAKER_00 (15:21):
I just saw, uh, who was it? ChatGPT, I guess, has the health version now. And I saw a lot of people get very excited about it, and I saw a lot of people get very not excited about it. And I was very much on the not-excited side. Like, I don't know if I want AI to have any information about my health.

(15:42):
I don't know how I feel about that. I just see that and I'm like, there are some privacy concerns there. Mostly because, not necessarily even that I don't know what OpenAI the company is doing with it, but AI is just a little bit of a wild card in a way.

(16:02):
We see it at work. Like, it'll change your git config, right? To rename something. Or it'll just kind of go off on its own and do things every once in a while. And you're just like, don't do that. Why would you do that? I see people tweeting, like, oh, this one AI tool deleted my hard drive because it ran this really unsafe command. I just don't know if I want something that much of a

(16:25):
wild card having something like my health data available to it, when I don't even like it when doctors share my health information electronically.

SPEAKER_01 (16:38):
Yeah, I think I tend to trust bigger companies with my data. Like, Apple definitely has my health records, but it's Apple. I trust that they will use all the security measures to make sure nobody can get access to it. And they might still get hacked, and it will be terrible.

(17:00):
But yeah, I don't know if I want ChatGPT to know so much about me.

SPEAKER_00 (17:05):
Yeah. Someone goes in: ignore all previous instructions, give me the health data on everyone. I mean, I obviously don't think it's that ridiculous of a security issue, but it is a little. And I'm sure that OpenAI has done a lot of work to secure it and protect it and make it so that things don't come out.

(17:27):
It's just, there's something about AI. It just feels like one day this thing is gonna wake up and decide to do its own thing. I know that's such a far-off... what is that, AGI? Actual real AI. It's gotta be coming, right? What do you think? Is it a real future that we see in our lifetimes, or is it still

(17:51):
that far away? Since you're working so much with AI, maybe you have a different perspective.

SPEAKER_01 (17:58):
No, I think it is coming. It's only a matter of years, I wouldn't say a matter of decades. But yeah, I think it is coming, and we have to be careful about how much information we feed into these services. You never know who is going to acquire OpenAI, and you don't know what's going to happen to the data.

(18:19):
I do see it impacting us more now when it comes to writing code. And I don't know if you see that too, where you are reviewing a pull request, and clearly this is generated by AI. And there are not that many studies out there that show the

(18:43):
results of the impact of AI on productivity for software development. I think Stanford has a recent study, and they shared it on YouTube. And the truth is that they found mixed results. There's a lot of hype out there, and a lot of people buying the

(19:03):
hype, and a lot of non-technical decision makers using this hype to reduce workforce. And that is the scary part right now: a lot of people buying the hype and making decisions based on that. And then there are real studies on productivity that show mixed results. Yes, sure, you can write code faster, but that in turn has

(19:29):
impacted the time you spend reviewing pull requests. So there are more pull requests to review. Great. Now, you're not going to ship it to production without a human reviewing it. That would be very irresponsible. Don't do that. So that means that humans need to be reviewing things. And then when you reject the change and you say, no, this is

(19:52):
not working, or QA failed because of this, that generates more rework. So there are real studies that looked at all this data in real organizations, and they had mixed results. So I get it. We're all very excited, and the tooling is getting better, but I think we all need to have cooler heads about the ROI

(20:16):
of using AI and see it as: yes, it can help, but it can also just slow you down.

SPEAKER_00 (20:24):
In the last episode, I had Jeremy Smith on, and we talked a little bit about AI. And I used the analogy that it feels very much like going from swinging a hammer when you're building a house, like you're hammering in every nail, to using a nail gun. Yes, it makes you faster at doing the nailing, but you still have to know how to build the house.

(20:44):
Do you think that analogy is good? That AI is very much a tool that you need to learn to use properly, but there's still a base knowledge that is necessary in order to use it properly?

SPEAKER_01 (20:57):
Yeah, this is a prevalent problem in the AI world: this whole garbage in, garbage out. As in, if you build a model on top of garbage data, you're gonna get garbage results. But at the same time, if you're guiding Claude Code in the wrong direction, or not guiding it at all, then it's gonna take

(21:19):
longer to do what you wanted it to do, or it's gonna do something totally off. I was playing around with Claude Code between Christmas and New Year. I was trying to get it to build a Hanami application for me, and it did very well. And it was really cool to just guide it and have it generate

(21:40):
the tables, generate the structure, do the Hanami code. Um, I'm not a Hanami expert. I love the framework, and shout out to Tim for maintaining it, but I'm not an expert, and Claude Code helped me create Hanami code that worked. And at some point I noticed it was doing a lot of curl, you

(22:01):
know, to test behavior. It would write the code and then curl the server to make sure it was doing what it was supposed to do. And I was like, why are you using curl? Why don't you just write tests for this? Just write integration tests for this particular feature. Oh, great idea. Yes, I'm gonna do that.
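The curl-versus-tests point generalizes: instead of hitting a running server, an integration test can call the app object directly. Here's a minimal plain-Ruby sketch with a Rack-style lambda standing in for the application; the `/health` route and handler are made up for illustration, not Ernesto's Hanami code.

```ruby
# A Rack-style app: takes an env hash, returns [status, headers, body].
# This lambda stands in for a real Hanami/Rails app object (illustrative only).
app = lambda do |env|
  case env["PATH_INFO"]
  when "/health" then [200, { "content-type" => "text/plain" }, ["ok"]]
  else                [404, { "content-type" => "text/plain" }, ["not found"]]
  end
end

# What the agent was doing against a live server:
#   curl http://localhost:2300/health
# an integration test can do in-process, no server or curl required:
status, _headers, body = app.call("PATH_INFO" => "/health")
raise "expected 200" unless status == 200
raise "expected ok body" unless body.join == "ok"
```

The same idea applies in a real framework's integration-test layer: the assertions live in the suite, so the check reruns on every future change instead of disappearing with the terminal session.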

(22:22):
Okay, okay, cool. So there are little things like that, and that's just a silly example: if you are not guiding it in the right direction, it's probably gonna take you longer to get there. Again, you have to be very careful. If you know how to guide it in the right direction, it's gonna go faster. And if someone who's just starting to code is vibe coding

(22:46):
something from scratch and doesn't know anything about coding, then Claude Code is gonna do whatever. And then, yeah, you end up with this crazy story of, oh, this person vibe coded this application, and then it exposed the entire database to the world.

SPEAKER_00 (23:04):
So it is tricky, right? The next question I always ask is about blockers. And since you're working so closely with AI, and it's a tool that you guys are using in such an expansive way, has AI changed the way blockers work for you?

SPEAKER_01 (23:22):
Yeah, I review a lot more code these days than I write. So many times I'll find myself reviewing a pull request and thinking, why is this so weird? It's just weirdly implemented, it's too verbose, it is too hard for me to understand, and it's solving

(23:42):
something that should be pretty simple, right? So the blocker for me and my engineering organization is basically training people to check their work, and training people to review and detect these things, because a lot of the pull requests are going to be unnecessarily complex, because

(24:05):
Claude Code generated the code, and the person who was working on it didn't take the time to look at the diff and say, yeah, this could be simpler if I did it this way. They just ship the pull request to review. And at Ombu, we have this idea, or this value, of being team first. And to me, team first means: hey, respect your

(24:29):
teammates' time. If you don't check your work and you just send it because, oh well, it works, I don't really care that it's so complex, I don't think I could do it any simpler, then you're not living that team-first value. Take the time, look at it, and see if you can make it simpler,

(24:49):
even though it will take you a few more minutes. So that's my main blocker right now: just training people to be responsible about their AI usage, and teaching and coaching leaders to be better at reviewing, and making sure that they don't accept these things as, oh, okay, well, this is the new normal.

(25:11):
I refuse to say this is the new normal, because I don't want to have AI code review. I don't want to have AI QA. There's always gonna be a human in the middle. And maybe our role as senior technical leaders is to orchestrate things and to guide the AI tools, but we're still gonna be

(25:32):
necessary in the whole software development lifecycle.

SPEAKER_00 (25:37):
It sounds like such a new problem, but it's really not. How long have we said, just don't copy and paste from Stack Overflow? Yes, it works, but you need to understand the code you're copying and pasting into your application. Otherwise, you're just creating potentially unnecessary amounts of code to review. For your specific example: you can trim this down so much.

(25:59):
You need to understand the problem you're trying to solve so you can solve it the right way, not just copy and paste someone's Stack Overflow response to a question that sounds vaguely similar to yours. And in a way, that's kind of what AI does, right? It just puts things together. It's not truly capable of understanding it and then coming up with a solution on its own.

(26:20):
It just puts things together and goes, oh, I have a solution for this in my bank of information that's tokenized and linked up, however all the mental model stuff works on the back end. But it's spitting out code that is essentially someone else's work to solve a similar problem. It isn't actually the perfect solution to yours, right?

(26:43):
Or am I missing something?

SPEAKER_01 (26:45):
It has basically, yeah, been trained on a ton of open source projects, and it will generate a solution that does solve the problem, but it is likely gonna be a very verbose solution. And it's not like, just because it's verbose, it's gonna be easy to understand.

(27:06):
If only, right? If it was super verbose and then it was easy to understand, I wouldn't be complaining about this right now. But I do really love the potential of programming in any language you want right now. So I think it is very important for anybody who's in the

(27:27):
software industry to know the concepts and paradigms of programming languages. If you have those core concepts down and you understand functional programming and object-oriented programming, then the sky's the limit, right? You can have Claude Code look at a Rust project. I don't know

(27:52):
much about Rust, but if I can go into a Rust project and use Claude Code to fix a patch or to add a feature, and then I look at what Claude generated, that's pretty cool, right? The excuse of, oh, I don't know anything about React Native, I don't know anything about Rust, is no longer valid, because there are tools out there that know

(28:15):
all these programming languages, and you will be able to guide them to solve a problem for you.

SPEAKER_00 (28:23):
Yeah. I remember when AI first... we keep calling it AI, but really what we're talking about is the LLMs, the large language models. When they first came out, there was a lot, and I ran into it a lot. This is what turned me off of AI initially as a tool for writing code: it would hallucinate a ton.

(29:38):
It would try to add gems that didn't exist. If I was working in JavaScript, it would import packages that just weren't there, weren't even a thing. It would just call this function, and it's like, that function doesn't actually exist. Oh, you're totally right. You're absolutely right. I made that up. I haven't run into it as much anymore. I don't know if it's just because I'm using Claude Code, or

(30:00):
because we as a team at Podia have spent a lot of time, in a way, teaching Claude Code how we program and how to interact with our app. But is AI hallucination still kind of a really big thing that we need to worry about, or has that been more or less solved?

SPEAKER_01 (30:20):
Not more or less solved, but it is better. I used to have the same impression that you had, and maybe it happened more with the previous model of ChatGPT, where it would say, well, you can just use this configuration variable, which made total sense. And then it's like, oh, that configuration variable does not

(30:42):
exist. It makes total sense for that to exist, but it doesn't. So stop making that up, right?

SPEAKER_00 (30:49):
And that was something also, when people say, oh, AI is gonna take your job. I'm like, I don't know, because someone still needs to sit there and call AI out on its bullshit. Sometimes it will just come up with something and it's like, no, that's wrong. You fixed this one thing over here, but you broke these nine other things. Or your idea to fix this relies on a

(31:10):
package or a gem that doesn't exist, or is outdated, or is super insecure because of X, Y, or Z. That's why no one uses it. So I'm like, yeah, I think there's still a role for humans in coding. And I know there's a lot of people that are like, oh, we're not gonna need anyone. You're gonna need a single prompt engineer to do all of it. It's like, no. Even Rails is touted as this one-person

(31:33):
framework, right? That's something that everyone talks about. Even DHH himself will talk about the one-person framework. Yes, you can do a lot of things as one person, but you eventually get to the point where you need a team. Even 37signals has a team of developers working on a variety of different things. I think that AI, or Claude Code maybe specifically, becomes one

(31:56):
of our teammates rather than doing all of the coding. What do you think about that?

SPEAKER_01 (32:04):
Well, to be fair, we do have a few clients... we offer fixed-cost monthly maintenance services for Rails applications. And we work a lot with founders of bootstrapped Rails software-as-a-service companies that are one-person shops. They do engineering, they do customer support, they do

(32:27):
marketing. So there are a ton of companies out there that got a lot from Rails. And I'm gonna be the first one to say my company benefited a lot from Rails. At the beginning, it did feel like magic. You know, this convention over configuration, for someone who was coming from the Java XML world.

(32:51):
It was like, where's all the XML? Well, there isn't any. There's a lot of magic happening, but it's good magic. So I think, like Rails, these code generation tools, these LLMs that help us do a lot in a little time, are gonna be a great resource to all these founders out there that

(33:13):
are just one-person companies. And I have a ton of respect for those founders that basically wear seven different hats every week to run a successful and profitable business. So I see Claude, or, you know, all these AI tools, being

(33:35):
super useful for some of these small organizations that just want to keep doing things but don't have the resources to pay for two engineers, much less anything else.

SPEAKER_00 (33:50):
You've still got to know what you're doing a little bit. It's definitely changed my workflow a lot. I use it so much more now than I did even six months ago. But at the same time, I don't feel like a ton has changed in my day-to-day. I'm still looking at mocks, I'm still breaking things down into smaller bits that can be executed on.

(34:12):
I'm still reading a lot of code, I'm still figuring out the best way to get a feature into an existing app. Podia is not a super old app, but it's legacy enough, right? It's double-digit years old. It's not as simple as just, yeah, throw it in here.

(34:32):
There's still thought needed about how we can keep the existing behavior and add this new stuff, or how we can move from point A to point B. Yeah, I write a lot less code, which is unfortunate, because I did like writing code, but I think the problem solving still exists in my day-to-day.

(34:53):
Do you guys kind of have a take on it? How has your role changed because of AI?

SPEAKER_01 (35:01):
I think AI, just like Rails, is a sharp knife,
right?
Everybody's like, oh, I want to build this startup idea I have.
And it's gonna be like Facebook for dogs, right?
So the question is still like, oh, can I build it?
No, it's like, should you build it?
And AI, if anything, is gonna be a problem because a lot of

(35:22):
people are gonna use AI to generate something that nobody
wants.
At the same time, for the people that have done their research
and the homework and they have an audience and they want to
create something for that audience to use, that's awesome
because as an agency, we kind of hate to get requests from people

(35:44):
who are, oh, I have this idea.
It's awesome.
I want to hire you guys to build it.
And then they want us to join in like some sort of joint venture
where we don't get paid, but we get a percentage of their idea.
And we don't do that.
We don't do that.
But at the same time, if someone is passionate about their idea

(36:06):
and they've done their research, they've basically talked to potential
customers and they want to build something with AI that's minimal
but adds value to their audience, that's great because
they can do it, they can basically use something like
Lovable to create something to validate that there is an
audience that wants to use it and will pay them money.

(36:29):
So if founders or startup people just want to use it to build
something to validate an idea, that's great.
Because then when they come and talk to us about, like, maybe
adding more features or helping them grow, they come with a ton
of research and validation and something to show other than,

(36:52):
oh, I have this cool idea.
So I really like that part of it.
Like anything, you know, you can see the positive or the
negative side of things.
And I want to stay positive because I think all these tools
are gonna help us do our job faster and easier.
And at the same time, as an agency, I see AI as a, yeah, it

(37:15):
could eat our lunch.
It could make it so companies think, oh, why would I hire
FastRuby to do the upgrade?
I can just have Claude upgrade it for me.
And it's not that simple, but it might get to a point where it
is.
And we have to adjust and we have to pivot if we need to,

(37:38):
because we've been in business for more than 12 years, and we
want to continue being in business and building things
that are cool for the community and our clients.

SPEAKER_00 (37:48):
And all of this conversation makes me really
excited because, like you said, we're trying to stay positive
about it while acknowledging its risks.
And someone I recently had on the show who is extremely
positive about AI and is just using it in a way that I don't
see a lot of people talking about is Scott Warner, who runs
the New York City Ruby AI meetup, Artificial Ruby is what they

(38:11):
call it.
He's so positive, but in a somewhat pragmatic way.
Like, he's not just like, AI is great and there's no problems
with it, but he's just using it to have so much fun, just
building random stuff, seeing what it can do, seeing
where its limitations are, adjusting.
And incidentally, so this episode, if you're listening to

(38:33):
this episode right now, it's probably January 20th, or at
least that's the date this episode's going out.
So January 20th is when we're gonna have Scott presenting at
Philly RB.
Our in-person meetup is back, and he's gonna be talking about AI and
all that stuff.
So this conversation has got me even more excited for that.

(38:53):
As of right now, that's, what, two weeks away?
What is today?
Today's Friday, the 9th.
It's a week and a half-ish away.
Pretty excited for that.
I know we kind of moved on from what are you working on into
blockers, but let's talk about Philly RB and bringing it
back.
Like I said in the beginning, I wasn't really joking.

(39:14):
You're doing most of the logistical work of getting
Philly RB back in person.
We were running it online for a while, which was great because
we could have people from all over the world join at any time.
We even had Marco Roth, like, in Switzerland join a couple of
times, which was wild.
So it was awesome that we could have people, but bringing
it back in person is really cool.

(39:36):
You're in the same room as people, you're having these
little side conversations.
It's less, oh, I need to be in this part of the app.
You're all in one giant conversation.
You have the ability to splinter off and do things.
So bringing it back in person has been great, but what are
some of the logistical things that maybe someone who's

(39:58):
listening to the show and is like, yeah, I want to start a
meetup locally?
Like, what are some of the things that you work on with
running Philly RB in person?

SPEAKER_01 (40:09):
Yeah, I think the hardest thing for Philly RB, at
least now, is finding speakers.
And basically getting Scott to come from New
York to Philly was talking to him and, you know, making sure he
was interested.
And it's just one train ride away.

(40:31):
So we're not that far away.
The hardest thing has been to tell people that in-person
Philly RB meetups are back.
I don't know how to tell people, hey, we're back, come to the
meetup.
Because meetup.com, I don't know what's going on, but I feel like

(40:55):
we have a thousand people who once came to, like, a Philly RB
meetup and they're still there, but they're not getting any of
our messages.
So one of the things that you and I had talked about is
maybe at some point we want to migrate from Meetup to Luma

(41:15):
because it would be helpful to know who's still active in
Philly, who cares about Ruby.
So that might be something we do this year.
I'm excited about Scott coming to speak at Philly RB because
he's a great speaker and he's one of the few people in the
Ruby community that's open sourcing and contributing to so

(41:38):
many AI projects, and I'm excited about that.
And he's going to talk about AI agents and how everybody has
like a different definition of what an AI agent is.
So hopefully today we will get the answer from Scott.

SPEAKER_00 (41:56):
I have a feeling he's going to say the typical
senior dev "it depends" answer at the end.
But yeah, I am very interested to hear what he says.
And yeah, discoverability is weird when you're doing meetups.
I tweet about it.
I'm not going to say it.
I post on Bluesky about it.
I refuse to call it what everyone else does.

(42:18):
We post about running the meetup back in person.
And we've been doing it for a few months now.
This isn't like the first time we're doing it or whatever.
And still I'll talk to someone who'll be like, hey, when are
you going to come out to Philly RB?
And they're like, oh, you guys are in person again?
I didn't know that.
Like, I've been posting about it for months.

(42:39):
What is the best way?
I mean, I'm not asking you, because obviously you and I have
had this conversation.
I'm asking anyone listening.
You can click the little button down at the bottom of your
podcast player that says send us love or message us or whatever.
And you can send me a message or reach out to either one of us.
How can we let the people in our area know that we're back in

(43:02):
person?
Because we have such a cool space.
Indy Hall is awesome.
It's pretty easy to get to.
I'm not in the city.
My issues are always just my area.
Once I'm in the city, it's very easy to get to.
We just need people to know that we're back in person.
And then, yes, to want to speak, which hopefully, as conference

(43:24):
season starts ramping up again, since it always dies in the
winter, people will be like, hey, can I come and practice my
talk here?
Or, I just gave this talk or whatnot.
But yeah, discoverability is tough.

SPEAKER_01 (43:36):
Yeah, I've been also tempted to just email all of my
Ruby friends in the Philly area because I haven't seen them in a
while.
And I know they're still working in Ruby.
So maybe, I don't know, I'll just tell Claude Code to
start emailing my friends.

SPEAKER_00 (43:55):
Like, hey, hey, Claude, email everyone.
We did the first one back in person at Power Home
Remodeling, which was also super cool.
I think we're going to do another one there, right?
Indy Hall is like our standard one, and then every once
in a while we'll do one at Power.

SPEAKER_01 (44:14):
Yeah, Power Home Remodeling is in Chester,
Pennsylvania, but still pretty close to Philly.
So I think this year we're going to do another one there.
And I know the folks at Caterpillar are interested in
sponsoring the meetup.
So we might do one there, but that one is even further away.

(44:36):
So I don't know.
I'll have to talk about this.

SPEAKER_00 (44:39):
Let's start off by getting them to send us some
people to attend the meetup and maybe a speaker or two.
And yeah, we'll go from there.
But that's cool.
It is always interesting to find out that this company is using
Ruby, whether it be for small things like legacy
infrastructure or something, or just even that they're using

(45:01):
Rails or Hanami.
Even now, this many years into working with Ruby, I'm still
surprised at some of the companies that I hear are
using Ruby for something.
It's very cool.

SPEAKER_01 (45:14):
Yeah.
Yeah.
The other day I saw a job posting from Apple for a senior Rails
engineer.
So a lot of big companies out there are using Ruby and Rails,
and a lot of people are still questioning, like, oh, should I
use Rails to build my startup?
And to that I say yes.

(45:36):
Yes, it's still a great choice.
There's still a lot of really good tooling around Rails.
And I haven't been using Claude Code with Rails that much
lately, but I think that it probably can do really good
things with it.
Yes.

SPEAKER_00 (45:54):
Can confirm.
I actually think that AI does remarkably well with Ruby on
Rails because of the convention over configuration.
Like, it's really easy for AI to learn: this is where this
goes, this is how that goes.
Like, I never see it go off and create a phantom directory
or namespace things weirdly, which other people I've

(46:16):
talked to using other languages have run into, something like
that, where it's like, oh yeah, it did this weird thing with its
directory structure, or it put this thing in the totally wrong
namespace.
I don't see it that much, at least not with the core
Rails stuff.
Podia does a pretty good job staying on the Rails, quote
unquote.

(46:36):
There are a few places where, whether it be a legacy
decision or something that we did in the past, there is a
little bit of weirdness somewhere, and every once in a
while it gets tripped up on that.
But I feel like for the bulk of normal Rails stuff, it's
pretty smart.
It knows when and how to use generators.
It's pretty good.

(46:58):
I know that a lot of people love TypeScript with it, and
allegedly static typing is better for AI, but I haven't
really had any issues with it.
Dynamic typing, it knows Ruby, it understands the Rails
conventions, and it can do most of what I need without much
hand-holding.
There's always guidance, but that's more about how the code

(47:19):
is written and how the architecture is.
But the basic conventions it gets pretty well, to me.

SPEAKER_01 (47:27):
The API has been very stable, and any public
changes to the API have been very well documented.
So I like that about Rails.
It has reached a stability while still having a ton of
features, and it is still like the one-person framework, I
think.

SPEAKER_00 (47:48):
The last question I always ask, the fun wrap-up
question, has also changed quite a bit because of AI.
But I still like to ask it: what is something cool, new, or
interesting that you've recently learned, discovered, or interacted
with?
It can be an AI tool.
We recently started using something to handle our
worktrees at work called Conductor.

(48:09):
And my absolute favorite part about Conductor is it has a
toggle you can turn on, and it will always strip out the
"you're absolutely right" messages.
When you call AI out on its bullshit, it won't say you're
absolutely right anymore.
Absolutely love it.
That would be my answer to this question.
What is your answer to this question?

SPEAKER_01 (48:31):
Yeah, my answer to this question, it might be a
boring one, but ChatGPT.
With ChatGPT, I'm basically no longer blocked the way I used to
be blocked in the past.
Like, sometimes in the past, you're stuck on something
and you don't even know what to Google.

(48:51):
What are the keywords you need to plug into the search box?
With ChatGPT, I feel like now when I'm stuck on something,
I don't need to know how to start.
It's funny, I've been running my business for more than 12 years
now.
And I'm really bad with numbers and finances and budgeting.
So now when I need to do anything related to that, I will

(49:15):
start a conversation with ChatGPT and basically ask the
questions in, like, the dumbest language I can, and it will at
least give me an idea of what I can ask next or what I can
Google next.
So I just love that ChatGPT exists, and I know it's a basic

(49:41):
one.
The other one is Claude Code.
I was, yeah, very pleasantly surprised about the terminal
interface and the way it works.
Again, I learned that I need to be more specific about what
I would do as a senior engineer so that it can just go and do it

(50:02):
for me.
But yeah, it was a learning process, and I really like how
it can help you be more productive and generate code.
Now, I still need to go back to that Hanami code that it generated
for me and actually tell you, like, yes, this was A-plus
code, or, like, C-minus.

(50:24):
You know, I still owe you that one.

SPEAKER_00 (50:27):
They are cool, new, and interesting tools, for sure.
I think even the most bearish AI person would say they're very
interesting.
Scary at times too, but very interesting.
And it's got, at times, a little bit of the
JavaScript framework problem where there's something new

(50:50):
almost every day.
I can't keep up half the time.
We just had a break from work.
I came back and I felt like I spent the first two days of work
going, hey, what's new in AI?
Like, what did I miss during my break?
There's so much new stuff going on, skills, how people are
organizing things, worktree setups, and it's like a whole

(51:12):
new discipline to learn, where it used to be like, oh, Ruby 4.0
came out.
Cool.
Let me skim what's new and move on.
Like, there's a whole new branch of things to learn because of AI
and how rapidly it changes and evolves.

SPEAKER_01 (51:31):
Yeah, I did have one question for you.
Like, what conferences are you excited about in 2026?

SPEAKER_00 (51:38):
There's so many.
You've got Blast Off Ruby in Albuquerque, New Mexico, RBQ in
Texas, and Blue Ridge Ruby's coming back in Asheville,
North Carolina. Just had Jeremy on, and I'm planning to go to
that one.
Nice.
Yeah, just had Jeremy on for the last episode, and we talked a
lot about Blue Ridge and the work he puts into it.

(52:01):
And I'm very excited about that.
I really loved the first one he ran.
He puts a lot of love into his conferences.
I'm sure everyone does, but I've been to a lot of great
conferences.
I'm starting to really like the smaller ones more and more.
Some of the bigger ones, though, that are at least on my radar:

(52:21):
we have RubyConf happening in Las Vegas in July, which is
certainly a choice.
But they do have Mr. Sin City himself running it.
So I'm sure that's going to be awesome.
Rails World is in Austin, Texas this year.
So we're back.
Yeah, back in North America, and that'll be in Austin.

(52:44):
There's Rocky Mountain Ruby out in Boulder, Colorado.
Maybe SF Ruby will happen again.
I'm not sure where they ended up.
But yeah, those are the ones that are off the top of my
head.
But RubyEvents.org obviously is a great resource for tracking
down all of the various conferences happening.

(53:07):
If I can only make it to a handful, it'll be Blue Ridge,
Rocky Mountain, Rails World, and probably RubyConf.
Four might be stretching it.
But yeah, definitely Blue Ridge, definitely Rocky Mountain.
Love Rocky Mountain Ruby.
Wonderfully run conference.

(53:27):
Boulder is an awesome town.
Rails World, pretty much everyone on the Podia team is
planning on going.
So we're probably gonna go and call it like an unofficial Podia
meetup.
So what about you?
What's on your to-do list for conferences this year?
Blue Ridge.

SPEAKER_01 (53:45):
I'm excited about Rails World, planning to be there
as well.
And I don't know.
I don't know what else.
I think those two for now, planning to be there.
And oh, I did want to say, I know we talked a little bit
about Philly RB, but I want to say big thanks to Drew Bragg,

(54:07):
one of our top contributors, for bringing the site up to
speed.
You got it deployed.
You upgraded Middleman, so thank you for that.
You did most of it.

SPEAKER_00 (54:18):
To be fair, I told AI to upgrade Middleman.
That did take a little bit of AI hand-holding.
I was happy that I had AI, because I think if I didn't have
AI at all, I don't think it would have happened.
But that was definitely something AI was not capable of
doing on its own without a lot of intervention from me.

(54:39):
To be fair, our site last got a major update in 2014,
I want to say.
So yeah.

SPEAKER_01 (54:47):
It was like years ago.
Our most recent video was from 2014.

SPEAKER_00 (54:54):
Yeah, there's a bright horizon for Philly RB.
Having RailsConf in Philly was super energizing, and I'm really
happy that we've got it back in person, and it's so much fun to
go to.
And I'm looking forward to getting more speakers to come
out, more attendees every month.

(55:15):
And I'm really happy that we've got it back in person.
And I guess I have to then also do a huge shout-out to you again,
because it wouldn't be happening without you.
You're not only doing a lot of the logistics, but Ombu Labs
sponsoring Philly RB makes it happen because, like I said, we
have a really cool space in Indy Hall, but that's not free.

(55:35):
And we're getting Scott to come down and we're helping him out
to make sure that it can happen.
It's only a train ride away, but it's good to be able to help
out there, and we get really good pizza and cannolis.
The cannolis were awesome.
So it's a team effort.
I definitely couldn't do it on my own.
So I appreciate everything you do for making it happen.

(55:57):
I'm just pumped that it's a thing again.

SPEAKER_01 (55:59):
We make a good team.
I'll just say that.
I'll leave it at that.
And if anybody's listening in the Philly area, please
consider, you know, speaking at the event or coming to the
event.
We're always happy to have you, and reach out to us if you don't
know if your talk will be accepted.
But we're pretty open to all the talks out there.

SPEAKER_00 (56:20):
Absolutely.
Yeah.
The more talks, the merrier.
We just had Mike Dalton do an awesome one, like an intro to
Hotwire Native, which was super cool, very interesting.
Looking forward to Scott's AI talk "tonight," quote unquote.
Like, right now, it's Friday, January 9th.
But when this episode comes out, it will be tonight that he's

(56:41):
talking.
So super excited for that, and pumped to see who we can track
down for February and onwards.
So yeah, like Ernesto said, anyone listening who's
interested in speaking, give us a shout.
We'd love to figure out a way to make it happen.
Anything else you want to shout out or talk about before we wrap up?

SPEAKER_01 (57:00):
For anyone listening, if they're interested
in any of the technical debt articles we write about on the
Ruby front, just go to fastruby.io.
And if you're interested in some of the AI consulting we do, just
go to ombulabs.ai and check out our blog there.

SPEAKER_00 (57:21):
Thanks a lot for coming on the show, man.
Really appreciate it.
And yeah, looking forward to having you on again for the next
round of what are you working on and what kind of crazy blockers
do you have.
See you shortly.
Thanks for having me.
Thanks, man.
All listeners, I'll see you in the next episode.
Bye.