Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Hey, folks, Welcome back to another episode of the Ruby
Rogues podcast. This week, I'm your host, Charles Max Wood,
and we are talking to Mike McQuaid.
Speaker 2 (00:14):
Mike, you came on before.
Speaker 1 (00:15):
We talked about homebrew and building Homebrew and work brew.
Speaker 2 (00:19):
And all the stuff you were doing at the time.
Speaker 1 (00:21):
Is there anything else you want people to know about
by way of introduction or yeah?
Speaker 3 (00:25):
Not really, all the same old stuff. Like I'm not
kind of doing as much stuff on Workbrew nowadays.
Speaker 4 (00:31):
I'm doing even more stuff on Homebrew nowadays. So yeah,
all good stuff. Nothing really has changed.
Speaker 2 (00:36):
Good deal.
Speaker 1 (00:37):
I use Workbrew, or not Workbrew, I use Homebrew
all the time. So yeah, it's pretty handy. It runs
on Linux too, I think. I think I wound up
using it there somewhere.
Speaker 3 (00:47):
Yeah, yeah, it runs on Linux, and I guess because
it's Linux, we've still got a few
tweaks there as well, like on WSL on Windows as well,
so I've ended up playing around with it on my
Windows gaming machine, which until recently had been unsullied by Homebrew.
Speaker 2 (01:02):
Unsullied. I love it. Yeah. We talked quite a
Speaker 1 (01:05):
Bit about Homebrew and how it works last time, and
I thought it'd be interesting because I find myself sometimes
fiddling with like writing scripts or things like that on
my machine and then thinking, you know, it
wouldn't be too bad to kind of parlay this or
this set of scripts into like a CLI. And so
(01:26):
I was just curious, I don't even know where to
start with that. But since you've kind of built a
CLI that a lot of people use, I know you
had other contributors, but yeah, let's say that I'm sitting
on some stuff that I think belongs in the CLI.
Speaker 2 (01:40):
Where would you even start with something like that? Yeah?
Speaker 3 (01:42):
I think that's a really good question, because I think
a lot of this depends on, I guess, like
a lot of stuff in tech, right, like what
does your scaling story look like? Right, to be one
of those boring people: like if it's just something
that you're writing
Speaker 4 (01:58):
For yourself to run it once and throw it away.
Speaker 3 (02:01):
Then you know, thinking about like the language you use,
like you just want to reach for whatever is the
most familiar to you, or I guess, personally, nowadays,
like whatever you think ChatGPT can one-shot a
reasonable approximation of
Speaker 4 (02:17):
What you want it to do, and then you can
tweak it before you run it.
Speaker 2 (02:20):
But yeah, so I do that way more than I care
to admit.
Speaker 4 (02:23):
Oh for sure.
Speaker 3 (02:24):
I mean it's great because I feel like there's a
lot of stuff around that where there's like scripts that
previously we just wouldn't have written. And then now just
being like, hey, I want to pull the state from
the GitHub API, and previously being like, oh, that's possible,
but do I really want to spend like two hours on this,
and now it's like, well, I'll spend twenty seconds
asking ChatGPT and look at the output, and if
(02:44):
it's wrong, then it doesn't matter. But yeah, so
I guess that's the first thing. So for
me personally, I tend to reach for like the smallest, dirtiest,
filthiest scripts in Bash, because I'm pretty comfortable in Bash.
Bash has like a huge amount of gotchas though,
so as soon as it becomes even slightly load-bearing,
(03:06):
even for you, you probably want to go to like
a, no offense Bash authors, like a real language, and
at that point, you know, stuff's pretty interchangeable. Like,
we're on a Ruby podcast, so Ruby is really
good for, like, yeah, standalone single-file CLI scripts. That's
what I would tend to kind of go for straight away.
(03:27):
If you're a Python person, Python's pretty good for that
type of stuff as well. I think, like, you know,
if people don't have familiarity with those, I'd be surprised
if they listen to a Ruby podcast. But then, you know,
I'm sure you can probably do equivalent stuff in C#
and various
Speaker 4 (03:44):
Other, right, more obscure languages.
Speaker 3 (03:46):
But like, I guess where I would tend to
go next is just, okay, how can
I put this in a single script? And you can do
some kind of neat stuff with that. Most people are familiar
with Bundler for installing dependencies, but then most people are
also like, okay, with Bundler I have to do
bundle install and bundle exec and
Speaker 4 (04:04):
All this type of stuff.
Speaker 3 (04:06):
But actually you can do some cool stuff with Bundler
where you can actually define a Gemfile inside the
script itself, so the first thing it does is sort
out all its dependencies and install them, so that can
be, again, quite a nice way of extending some little
basic scripts.
Speaker 1 (04:23):
Yeah, I just want to back up on that for
a minute, because I ran into that somewhere I can't
remember where, but yeah, it's just its own little syntax
in there, isn't it.
Speaker 2 (04:32):
What exactly do you have to do in order to.
Speaker 3 (04:34):
Yeah, I can't remember the exact syntax off the top of my head,
but yeah, basically, like, I mean, that's the nice thing
about the state of Bundler and Ruby in twenty twenty
five is if you're installing an even vaguely modern Ruby,
it's going to come with a vaguely modern RubyGems
and Bundler out of the box, and then essentially everything
you can do through the CLI you can also do
through the Ruby APIs instead. And that means that essentially
(04:57):
every input that you would previously have passed as a parameter,
you can pass in as a string, or, like, your
Gemfile, instead of reading it from disk, you can
like have that be a heredoc in your
Ruby script or whatever it might be.
Speaker 4 (05:11):
Yeah, Like so that's a nice way of getting there.
Speaker 3 (05:14):
So like, what I will tend to do is
have like a little bit at the top of maybe
one of my scripts that's like, hey, if I need
this particular gem, then install this gem the first
time it's run, and then I don't need to like
carry along the script and a Gemfile and whatever.
I can just run it wherever I need to and
however I need to.
Speaker 1 (05:31):
Yeah, I did find it. You require bundler/inline
and then you do gemfile do, and then you
basically put all your Gemfile stuff in there.
Speaker 2 (05:41):
So you do your source which is probably RubyGems.
Speaker 1 (05:44):
Dot org, and then you gem whatever, and yeah, you
just use the domain-specific language from Bundler
for the Gemfile. Yeah, and off it goes.
Speaker 2 (05:54):
It's pretty cool.
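(For reference, the bundler/inline pattern being described looks roughly like this; the gem name is just a placeholder:)

require "bundler/inline"

# Passing true asks Bundler to install any missing gems the first time the script runs.
gemfile(true) do
  source "https://rubygems.org"
  gem "nokogiri" # placeholder dependency; swap in whatever the script actually needs
end

# Gems declared above are installed and required automatically from here on.
puts Nokogiri::VERSION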
Speaker 3 (05:55):
So then when your script's getting a little bit more,
I guess, load-bearing, as we were saying as well,
like that might be when you want to be like, hey,
I want to have some option parsing, and like
the most basic way of doing that is just, you know,
you've got the ARGV array and you can go and
Speaker 4 (06:12):
Say, okay, what's the first parameter, second parameter. But if
you want to handle flags.
Speaker 3 (06:16):
And switches and being passed in different orders, there's,
again part of the Ruby standard library, the
OptionParser library, which is pretty good,
pretty easy to use. If you're still in Bash land, there's
like sort of a Bash equivalent, but in my mind,
even by the time I start getting towards like multiple
argument parsing, it's like, that's again a nice sign that
I probably want to go into Ruby, and yeah, use
(06:38):
that kind of richer library. And there's also various gems
that provide different DSLs for whatever your preferences are in
that area.
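(A minimal OptionParser sketch along the lines being described; the command name and flags are just illustrative:)

require "optparse"

options = { verbose: false }

OptionParser.new do |parser|
  parser.banner = "Usage: transcribey [options] FILE"
  parser.on("-m", "--model NAME", "Model to use") { |name| options[:model] = name }
  parser.on("-v", "--[no-]verbose", "Print progress while running") { |flag| options[:verbose] = flag }
end.parse! # parse! consumes recognised flags, leaving positional arguments in ARGV

file = ARGV.fetch(0) { abort "usage: transcribey [options] FILE" }
puts "Would transcribe #{file} with #{options.inspect}" if options[:verbose]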
Speaker 1 (06:45):
Yeah, that makes sense. I know a lot of people
also use Optimist. It used to be called Trollop.
Speaker 2 (06:50):
Yep.
Speaker 1 (06:51):
If you've been around for a while, and it's like, Optimist,
it's the same thing.
Speaker 2 (06:55):
Yeah, so do you have a preference?
Speaker 4 (06:58):
It's funny because I tend to.
Speaker 3 (07:02):
I tend to use just
the most basic, like I'm going to pull stuff out
of ARGV until it hurts. I think it's also
because I've been around in Homebrew and Ruby and whatever
like long enough that I'm aware of all of
the kind of edge cases that you get there, and like
(07:23):
if you delete a random part of ARGV, or
if you dup it, or if you check for inclusion,
like what
Speaker 4 (07:29):
The various gotchas are.
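(A couple of the ARGV gotchas being alluded to, sketched out; the flag name is just an example:)

# ARGV is a plain, mutable Array of the arguments passed to the script.
verbose = ARGV.include?("--verbose")

# Deleting a flag mutates ARGV in place and returns the deleted value, or nil if absent.
ARGV.delete("--verbose")

# Gotcha: Kernel#gets reads from the files named in ARGV (via ARGF), not stdin,
# so any unconsumed "--flag" left in ARGV makes a bare gets raise Errno::ENOENT.
line = $stdin.gets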
Speaker 3 (07:30):
So like I wouldn't necessarily recommend other people do what
I do, but yeah, I guess I don't have
a particular library that I prefer for this stuff
in Homebrew, because for a very long time, Homebrew kind
of couldn't really, by design, use any Ruby libraries without fully
Speaker 4 (07:51):
Vendoring them all, which was a big pain for us.
Speaker 3 (07:53):
Like Homebrew basically built its own argument parser and stuff
like that. Even today, Homebrew is a little bit,
I wouldn't say suspicious of other libraries, but it's like, we care.
I guess this is a nice segue
to the next thing you might start caring about with
a CLI, which is, like, in Homebrew nowadays, we really care
about, like, the
Speaker 4 (08:12):
Run time and almost like how long is.
Speaker 3 (08:14):
It going to take to essentially, you know, do the
thing that you want it to do. So like, how
long until you can print a Hello World
once you have all your libraries loaded and everything like that.
And I think that's the type of thing where like
if you have a script that you run once a
month and it takes thirty seconds to run, you probably
don't care if it takes five seconds or thirty seconds
(08:35):
or a minute, right. But if you have
a script that you're running in your shell prompt, say,
then your shell prompt's not
getting printed until the script completes.
Speaker 4 (08:46):
All of a sudden, you maybe don't want to require
that library.
Speaker 3 (08:50):
You maybe want to do it by hand so
that it boots faster, or you may well
be getting towards the point, which Homebrew is
Speaker 4 (08:58):
Not quite able to do.
Speaker 3 (08:59):
But some of the other CLIs I've built more recently, I guess,
like at Workbrew and at GitHub or whatever, it's like, okay,
you maybe want to reach for another language at this point.
So Ruby is a really good language. There's been a lot
of performance optimization work that's gone into Ruby in the
last five or ten years, but a lot of that
comes from the Shopifys and the GitHubs, who are not
(09:21):
focused on how can I make a CLI run as
fast as I possibly can. They're focused on, like,
how can I have a long-lived Ruby process, realistically
running probably Rails or Sinatra or whatever, which can serve
as many requests
Speaker 4 (09:33):
As possible as quickly as possible.
Speaker 3 (09:35):
Right. So that has more performance implications around like memory
management and garbage collection and JITs and stuff like that,
and things which reward the longer execution of the application,
which is the exact opposite of what a CLI does. Unfortunately,
some things like RuboCop have kind of got around
that by having, I forget what they call it, but
(09:56):
there's basically a mode where you can essentially have
RuboCop sitting in the background and you end up sending
requests to an already running process. I guess we could
maybe take that approach with Homebrew one day, but
we've not yet. So instead we're generally concerned about like,
how can you shave that time down?
How can you require as little as possible and push
some logic into maybe Bash, in Homebrew's case. But
(10:17):
when you're writing like a proper CLI, that's,
you know, that looks like maybe nowadays probably the best
choices would be something like Rust or Go, depending
on your preferences.
Speaker 4 (10:26):
Like Go, for example, is.
Speaker 3 (10:29):
If you're used to Ruby, it's a very sad language,
I think, to write, because you cannot be
Speaker 2 (10:34):
Expressive. I'm in Go right now.
Speaker 4 (10:36):
So yeah, so how has your experience been with that?
Speaker 3 (10:39):
About like going from the delights of Ruby land to
Go land with this stuff?
Speaker 2 (10:44):
Yeah?
Speaker 1 (10:45):
Yeah, I don't love it. I mean I'm just barely
getting into it. They use it at work, so yeah,
being a backend engineer encompasses Ruby and Go. Yeah, and mostly
I get away with not doing a lot of Go,
but yeah, they kind of expect and need you to.
Speaker 3 (11:03):
So yep, yeah. I feel like there's
definitely been several schools of languages where the underlying methodology
is like, stop people from doing stupid stuff, and Ruby
feels like it's almost as far in the other extreme
as you can go, where it's like, you can, I mean,
(11:23):
I'm sure you've reviewed some absolutely heinous Ruby code over
the years where people have metaprogrammed to an absurd Rube
Goldberg level where it's impossible to figure out what's going on,
but that also allows you to build incredibly powerful
things like Rails and all of this stuff, right? And in Go,
Speaker 4 (11:42):
I'm sure you have the same Charles. I just end
up feeling like.
Speaker 3 (11:45):
You know, this like twenty line switch statement could be
one line of coding Ruby, but you know you don't
let me have the you know, you've taken away the
keys to this particular car, So I guess I have
to write the twenty line switch tament.
Speaker 1 (11:57):
Yeah, I just have to say, I mean to a
certain extent like control flow and things like that.
Speaker 2 (12:03):
It's all the same.
Speaker 1 (12:04):
You know, there's some funky stuff with like variable
assignment and functions and things like that. But yeah, it
doesn't light up my soul like Ruby does.
Speaker 4 (12:13):
Yeah.
Speaker 3 (12:14):
Yeah. Like Go feels in many ways to me like
a sort of much less verbose Java. Yeah, like I
wrote a bunch of Java back in the day when
it was peak Java Enterprise Beans, and you know, all
this excessive boilerplate, and there's a lot less
of the boilerplate in Go.
Speaker 4 (12:32):
But it still feels like there's a non-zero amount. But yeah,
so I'd say
Speaker 3 (12:35):
I would still consider reaching for Go for a CLI
if you really care about that.
Speaker 4 (12:40):
Like execution performance.
Speaker 3 (12:42):
So for example, I guess when I was at Workbrew
previously, I was working on the CLI there, which
wrapped around the Homebrew CLI.
Speaker 4 (12:53):
So we wanted to provide as little overhead as possible.
Speaker 3 (12:55):
And the amount of stuff you can do in zero
point one of a second on modern hardware in Go
compared to Ruby is, like, fairly spectacular. Like in Ruby
you're really trying to make sure you avoid like any
IO or significant computation or anything like that.
Speaker 4 (13:14):
Whereas Go, just
Speaker 3 (13:16):
Because of the virtues of it being like a natively
compiled language.
Speaker 4 (13:20):
You can just do an awful lot of stuff much faster.
Speaker 3 (13:23):
And the multithreading story as well, is like,
because there's not the global interpreter lock, or I
forget what the Ruby name is, but whatever the Ruby equivalent is. Yeah,
you can afford to just lean into the
concurrency a lot harder.
Speaker 2 (13:36):
Yeah, that makes sense.
Speaker 1 (13:37):
So let's say that I want to build a CLI
and I'm trying to think of like a super fun example.
Let's say that I'm gonna, you know, I'm gonna
get an OpenAI API key and I'm gonna, you know,
I'm gonna make it talk to whatever, right, so you know,
maybe it'll talk to multiple services there. Right, So I'm
doing stuff maybe with the podcasts, which is kind of
(13:59):
what I'm imagining, right, So I hand.
Speaker 2 (14:02):
Off my.
Speaker 1 (14:04):
Audio or video to Whisper, and I
get a transcript back, right, and then maybe I hand
the transcript off to the LLM and it does some stuff.
A lot of this stuff, I mean, you would put
on the back of like a Rails app or anything
else too, right. But you know, I want a command-line
interface just to make it super easy for me
to kind of navigate through some of this stuff.
Speaker 2 (14:26):
Where would you recommend I start?
Speaker 3 (14:27):
Yeah, I mean, like for me, with stuff like that,
in some ways that's a kind of general question of,
like, how do you build software, right? And I'm
sure everyone has different ways, but like the way I've
always liked to do it is essentially just try and
write the... well, I'll give my pre-AI answer first.
(14:49):
My pre-AI answer is essentially just write the filthiest, hackiest code,
assuming that no one will ever read it, to just
get the core thing done.
Speaker 4 (14:58):
So I guess in your case, it's like, okay, I
want I.
Speaker 1 (15:00):
Would write a script and I would point it there. I'm
not going to do any argument parsing.
Speaker 3 (15:04):
I'm just going to assume an argument has been passed.
I'm going to pull out the first argument. I'm gonna
get the file, maybe the MP3 file or whatever,
from the podcast on disk. I'm then going to load
that into memory and then pass that off
to OpenAI's
Speaker 4 (15:17):
transcription API.
Speaker 3 (15:19):
I'm then going to pull down the result of that
as, like, JSON or whatever. Then I'm going to immediately
take that, not store it anywhere, stick that into OpenAI's
LLM API, do something with that, bring that down
and then print the output, right, and essentially build it
in such a way that there's no error handling. If
anything is in any way not the happy path, then
(15:40):
it will fail with a completely incomprehensible whatever message.
Speaker 4 (15:42):
Right.
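(As a rough illustration of that kind of throwaway happy-path script: endpoint paths, field names, and model names follow the OpenAI docs at the time of writing and should be double-checked, and there is deliberately no error handling:)

#!/usr/bin/env ruby
require "net/http"
require "json"
require "uri"

api_key = ENV.fetch("OPENAI_API_KEY")
audio_path = ARGV.fetch(0) # assume the first argument is the MP3 on disk

# Step 1: send the audio to the transcription endpoint as a multipart form.
uri = URI("https://api.openai.com/v1/audio/transcriptions")
request = Net::HTTP::Post.new(uri, "Authorization" => "Bearer #{api_key}")
request.set_form([["model", "whisper-1"], ["file", File.open(audio_path)]], "multipart/form-data")
response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }
transcript = JSON.parse(response.body).fetch("text")

# Step 2: hand the transcript straight to the chat endpoint and print the result.
uri = URI("https://api.openai.com/v1/chat/completions")
request = Net::HTTP::Post.new(uri, "Authorization" => "Bearer #{api_key}", "Content-Type" => "application/json")
request.body = {
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Summarise this podcast transcript:\n\n#{transcript}" }],
}.to_json
response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }
puts JSON.parse(response.body).dig("choices", 0, "message", "content")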
Speaker 3 (15:43):
So, like, generally that's the way I like to feel
out how to do things initially. And like I think
sometimes with like CLIs and stuff like that, it can
be tempting to be like, oh, I'm going to add
the options first, you know, because if you're almost
like thinking from like top to bottom, that might be
how you would think about it. But actually, like that's
probably not really the thing you care the most about.
(16:04):
You're trying to figure out is this thing even viable
or useful or whatever. And every minute you spend fiddling
with options is a minute you're not figuring that.
Speaker 2 (16:14):
Out. Right, right, not doing the core thing. That makes sense.
Speaker 1 (16:18):
And then, so one thing that I'm looking at is,
so then do you just build the CLI around it
so that you can say, transcribe this file, yeah, or export it?
Speaker 3 (16:29):
So again, with this type of stuff, like I
tend to go for, like, keep everything in one file
until it starts falling over, right. And again, to start
with, I would not even necessarily use any methods
until I'm thinking about making things kind of DRY and,
you know, I'm having to call the same thing multiple
times or whatever. Right, So I would just tend to
(16:50):
do that until it feels like it's getting too much.
I don't know what that is, whether that's you know,
a couple hundred lines, five hundred lines, one thousand lines whatever, right,
and then start splitting up. You said, if you had
commands like say you're passing like a transcribed command to
do this, or like a read out loud command or
you know, start a chat command or whatever, then that
(17:11):
might be when I'm like, okay, well, logically there's like
three different things this is doing. Probably the code between
them is not actually that similar.
Speaker 4 (17:21):
So maybe these could all split out into.
Speaker 3 (17:22):
Separate files that are stored a bit separately and included
separately and whatever. And again that's one of the pros
and cons of kind of Ruby and Go and stuff
like that, because in Ruby, okay, you've got those different files,
but now, once you've split into multiple files, like,
if someone else is like, oh, cool script, how can
(17:44):
I use it? Then it goes from being like here's
literally like the copy and pasted script contents to being like,
you need to clone this repo.
Speaker 2 (17:51):
And here's how you run this.
Speaker 3 (17:53):
Yeah, exactly. So like, okay, you could make it
a gem and then they could gem install it. But,
like, I'm showing my Homebrew maintainer bias here, but
I personally really like Ruby gems, and the equivalent in
other languages, for, like, I want a library, I'm writing Ruby,
I want a library for Ruby. Right, I kind of
resent it when I'm like, I want to install a
(18:16):
tool that just happens to be written in Ruby. I don't
actually care what language this tool is written in, right,
and I have to use npm or gem or pip or
whatever to install some random CLI. Like for me,
I would always either reach for Homebrew or, if it's
just something my mate wrote, ideally my mate just, like,
sends me the script and I have like a dumping
ground repo and I just, like, copy and paste that file
(18:37):
into my repo, and then I can run it or
modify it as I choose in future.
Speaker 1 (18:41):
Right. So yeah, you're talking around
one of the things that I was going to ask, right,
because if it's just one file, it's like, okay, stick
this in your path, make it executable, bam, you're done, right.
But if it's multiple files, yeah, I was trying to
figure out, okay, how do you package that?
Speaker 2 (18:58):
And I guess, yeah, you could use something like
Homebrew or... yeah.
Speaker 1 (19:01):
I agree with you on that. I mean, if it's
a Ruby-specific tool like Rake, I don't really have
a problem with it being in RubyGems. But yeah, like,
I've used other CLIs, I think it's like
the Heroku CLI or the Stripe CLI, I can't remember,
but one or two of those, it was like, yeah,
I had to go npm install it, and I'm going, yeah,
but this is not a Node tool, yep. And that's, yeah,
(19:24):
I agree with you there. It's like, no, this
belongs in Homebrew or apt or yum or, you know,
whatever your package manager is for your operating system,
where it's going to pull it in and say, hey,
this is a utility you're going to use for other things. Right,
it's an application essentially that runs on top of your OS.
Speaker 4 (19:47):
I just, yeah, a Ruby podcast feels like a safe
space to admit this.
Speaker 3 (19:49):
But yeah, like, my one is npm, where
it's like, ideally I have no Node.js on
my system at any given time, right. So when
something says, like, oh, the way to do this
next step, like step one, npm install X, it's, well,
actually there was a step zero, which was install
Speaker 4 (20:06):
Node.js. And given that this thing
Speaker 3 (20:09):
That you're asking me to do is nothing to do
with JavaScript, like, I don't see why I have to
do that, right. So often I'm like, okay, I then
go looking and I'm
Speaker 4 (20:17):
Like, oh, is this already in Homebrew?
Speaker 3 (20:19):
And if not, maybe I package it in Homebrew for my
own benefit and other people in future, or whatever.
Speaker 4 (20:24):
But yeah, I've always felt that that's that.
Speaker 3 (20:28):
Yeah, I think you've brought
up a bigger question there about almost like distribution, and
I think that's a bit more kind of messy
and complicated, because it ends up being like, well, what is
Speaker 4 (20:39):
The distribution method?
Speaker 3 (20:40):
And Homebrew, somewhat infamously, I think it might be one
of the first, maybe even the first, to do the whole,
like, curl-to-Bash install script that everyone much derides,
but, like, you know what, if you're installing something, exactly, it works.
But also, like, if you're trying to install something that's
kind of non-trivial, like Homebrew, that involves cloning a repo, and, you know, again,
(21:02):
it's like the npm install. Like, we could say, okay,
well, step one, git clone, and it's, well, I don't
have Git on my machine. It's like, okay, well, you
need to install Git. How do I do that? Well,
whoops.
Speaker 1 (21:14):
Yeah, I guess that's my issue, right, is I generally
have Node and npm on my machine. Right, I'm working in Rails,
you know, I need some kind of JavaScript runtime,
and so it's like, okay, this feels weird, but I'll
do it. But yeah, if it's a pip install, because
I never touch Python, or you know, if it
requires Go or something like that.
Speaker 2 (21:34):
Now these days I'm working on Go.
Speaker 1 (21:37):
But yeah, it's like, why do I have to install
a programming language in order to get my tool when
it doesn't really have anything to do with the programming language.
Speaker 2 (21:46):
And so yeah, just getting into you know that step.
Speaker 1 (21:51):
I guess some of these come, like, self-packaged, or
when you do the install, then, you know, behind
the scenes it'll install either a standalone version of the
language or give you some way of building it on
your machine, or sometimes it's just an executable that's pre-built
that, you know, has reasonable assumptions about what's
already there. Yep, and so that works. But yeah,
(22:15):
so let's say that I am building a CLI. I
write it in Ruby, you know. Yeah, I mean I
can tell my friend to go git clone it, right,
if they're a programmer, and you know, and then it's like, okay,
here's the command to run it. But yeah, let's say
that it's something that's getting wider adoption and it needs
that distribution right where it's.
Speaker 2 (22:36):
Hey, you know you're going to go run this on
your own machine.
Speaker 1 (22:39):
Right, Maybe it's a fancy CLI that cleans up large
files on my hard drive or something like that. Right,
it's some other utility that does some of these other things.
You know, it's helpful, and I don't need a UI
for it, or at least not a complicated UI, big
graphical UI.
Speaker 2 (22:55):
Yeah, how do I begin to package that up?
Speaker 4 (22:58):
Yeah?
Speaker 3 (22:58):
So again, probably unsurprisingly, my natural response to that is,
like, Homebrew is pretty good at, yeah, distributing software, right. So
with Homebrew you can kind of stick it in a
repo and then call, like, brew create --ruby, point
that at the repo, and then Homebrew will try its
best to figure things out. But then, you know, if
(23:20):
it can't, you can fiddle with that and Homebrew can
help you with that. And, you know, if this is
a thing that's in pretty widespread use and you're packaging someone
else's software or whatever, then you could submit it to
the main repository and it'll be accepted fairly quickly,
and then everyone can benefit from that. If not, we have
this thing called taps, which are like third-party repositories,
basically, that you can run a command, brew tap-new,
(23:43):
which basically will create a new.
Speaker 4 (23:45):
Tap for you.
Speaker 3 (23:45):
You can then push that up to GitHub as
a repo, and we spit out some nice CI workflows,
so you can use those free public repo GitHub Actions minutes
to put them to use, and then you could use
that as your distribution method if you want instead, and
then that could be a nice way of you know,
regardless of what language you.
Speaker 4 (24:03):
Write it in, if you're doing Ruby or Python.
Speaker 3 (24:05):
Or Go or whatever. Then you can distribute your software
that way pretty easily.
Speaker 1 (24:09):
So if there's any kind of setup or build step
or anything like that, is there some version of like
an install script or something that it runs or yeah.
Speaker 3 (24:17):
Yeah. So there's Homebrew's Ruby formula, which is
kind of like our DSL for, like, building and installing software.
Basically there's, yeah, there's like an install method, and basically,
much like in normal Ruby, you can run
a bunch of system commands in there. Like we try
and do a reasonable guess of, like, hey, if it's
a Ruby project, you probably want to run like bundle
(24:38):
install and then you know, stick a Ruby file in
a bin directory somewhere and that probably gets you most
of the way there. But yeah, it's basically just
nice little DSLs that can kind of help you
get that software onto your machine. And I think
one of the nice things about Homebrew is it's definitely,
of most of the packaging systems I've ever interacted with,
(24:59):
and from what I hear from people, it's one of
the easiest to just get started with, like, an existing
piece of software, because essentially you're just like, okay, run
some commands, which you can try yourself in Bash, and
then move stuff from one place to another, and then
that's you.
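(A rough sketch of what such a formula might look like for a hypothetical Ruby tool; the name, URL, and checksum are placeholders, and brew create generates most of this scaffolding for you:)

class Transcribey < Formula
  desc "Hypothetical CLI that transcribes podcast audio"
  homepage "https://github.com/example/transcribey"
  url "https://github.com/example/transcribey/archive/refs/tags/v0.1.0.tar.gz"
  sha256 "REPLACE_WITH_THE_REAL_TARBALL_CHECKSUM"
  license "MIT"

  depends_on "ruby"

  def install
    # Keep the whole project under the formula's private prefix...
    libexec.install Dir["*"]
    # ...and expose the entry point on PATH, making sure it runs with Homebrew's Ruby.
    (bin/"transcribey").write_env_script libexec/"transcribey",
      PATH: "#{Formula["ruby"].opt_bin}:$PATH"
  end

  test do
    # Assumes the hypothetical tool prints usage text when given --help.
    assert_match "Usage", shell_output("#{bin}/transcribey --help")
  end
end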
Speaker 1 (25:14):
So I mean, in the case of Ruby, Homebrew already
runs Ruby. But let's say that I reach for something
like Go or Rust, or maybe I'm using Python, right,
and so I can't assume that the person whose machine
it's going to get installed on has, you know, the
language or language tools that I need in order to
(25:34):
build and run it. How do you manage that kind
of a thing.
Speaker 4 (25:38):
Yeah, so in Homebrew, we just have, again with the DSL,
a one-liner,
Speaker 3 (25:43):
depends_on, where you can basically say, okay, I need
Ruby, or this particular version of Ruby, or whatever it
may be.
Speaker 2 (25:49):
And it installs all that globally?
Speaker 3 (25:51):
Yeah, yeah, so it'll install that, but Homebrew keeps track
of, okay, did you ask me to install... Let's
just say we have our OpenAI transcription thing,
call it transcribey or whatever, right, and you might install
it with brew install transcribey. So that would install Ruby
automatically, because that's what it needs. But then in a
(26:12):
year you're like, ah, you know, Ruby sucks or whatever.
Speaker 4 (26:15):
I'm going to rewrite this in Go, and then you switch your
switch your.
Speaker 3 (26:18):
Packaging to Go, then Homebrew knows when you ran Bruins
School Truscribbe you weren't saying, hey, can I have Ruby
and transcribing. You were just I just care about truanscriby.
I don't care what trubscribey is written in.
Speaker 4 (26:30):
Right, So then if it.
Speaker 3 (26:31):
Changes in the future to Go, then Homebrew that knows like,
oh well you never asked for Ruby.
Speaker 4 (26:35):
We don't need Ruby anymore.
Speaker 3 (26:36):
Let's get rid of Ruby, right. And again, the fun
thing with Go, as an example, is, like, and this
is where it gets slightly different, I won't go into
the details unless you're really interested, of, like, if
you're building it yourself versus if Homebrew builds it for you.
But if Homebrew builds it for you, then the nice
thing about Go is, because Go spits out binaries,
you don't even need to install Go on your system, right,
(26:59):
you can probably use the binary that we have built
for you, so then, unlike with Ruby,
Speaker 4 (27:03):
You don't need Go.
Speaker 3 (27:03):
We need to instill GO when we're building it for
you in our service, But on your system you don't
need GO unless you're building a right from source as
we would.
Speaker 2 (27:12):
Say, right, similar with Rust, Yeah, exactly.
Speaker 1 (27:16):
And it seems like with the approach that we're taking
with Ruby or Python or something that's not compiled that way,
or Node as another example, right, if it's running on
any of those, the install Node or install Python or
install Ruby approach works fine on Linux or Mac or
in the, uh, WSL, yeah, WSL, whatever it is. I
(27:38):
am not a Windows person. I help my wife with Windows,
but she's not using the Linux emulator. So, but yeah,
that works fine there. And then if you need
to package it up for something like Windows, then
you're looking at another tool that's going to essentially package
your language. Right, it's going to put your Ruby VM
(27:59):
in the executable so that you can run it.
Speaker 3 (28:02):
Yeah, so that's when it starts to get a bit
more messy, and different languages are optimized for that. So again,
like, going right back to our original, like, I'm writing
a CLI, I think one of the first
questions I would probably have is, like, what are the
chances that you're going to want to ship this
to Windows users?
Speaker 4 (28:20):
Right?
Speaker 3 (28:20):
And unless the chance is definitely not, then Ruby
and Python, and even more so things like Bash, are,
you know, very easy to get started with across, sorry, across
Linux and macOS from the outset. But then
when you have, like, a Windows story, that's when it
starts to get a bit more messy. Like, Windows has, again,
(28:41):
like, plenty of package managers and stuff like that, but
your average Windows user is not going to be familiar
with those, and even a lot of Windows developers don't
necessarily use package management as widely as it is used
in the Linux and macOS worlds. So that's when
I would say, like, the story that Go, for example,
(29:02):
can do for you is... I'm less clued up on
Rust in this way, so excuse me, any
Rustaceans who are listening, that I might not know quite
as much about what Rust people do, but certainly I know,
with Go, the nice thing is you can just build
a single binary which you can then have run on
a Windows machine, and that will work as expected.
Speaker 2 (29:21):
Right.
Speaker 3 (29:21):
But the other nice thing with Go is the
cross-compilation story is very easy. So
even if you only have access to a Mac machine,
you can build the binary on your Mac for all
macOS versions essentially, and Linux and Windows. And even if
you don't have access to those operating systems, you can
have a reasonable degree of confidence that, unless you're doing
(29:44):
kind of wacky OS-specific stuff, that binary will just
run fine on all three OSes. And like, that's
a pretty appealing thing if you're building a CLI that
you want to be used cross-platform.
Speaker 1 (29:54):
Yeah, that makes sense. I'm a little curious, getting into
running it on Mac versus Linux, like, it seems
like for the most part things just generally work the
same way. Are there meaningful differences that you need to
be aware of in certain cases?
Speaker 4 (30:12):
Yes?
Speaker 3 (30:13):
I think most of the time you don't need to
think too hard about things, and I think it's
often relatively obvious when you're sort of starting to get
a little bit more detailed. So, for example, one difference
would be, you know, we're gonna really nerd out on
some nice OS, well, like, high-level OS internals here.
(30:34):
But you know, like, macOS and Linux run different kernels, right,
there's the Linux kernel and the macOS Darwin kernel,
and then there's the stuff in user space as well.
So, because, yeah, like, as you mentioned, like, macOS
came from, like, a BSD kernel, and most of
the user space applications are BSD, and on Linux, most
(30:56):
of the user space applications.
Speaker 4 (30:58):
Might be the GNU ones.
Speaker 3 (31:01):
And Apple won't do so because of GPLv3, which
is a whole other long and boring conversation. But yeah,
so basically, like, if you're calling out to various, say,
like, even grep, right, grep has arguments that will work
on macOS and they will not work on Linux in
the same way, and vice versa. So you might, if
(31:23):
you're shelling out to grep, you might need to have
like conditional logic based on the OS you're on. Or, alternatively,
instead of shelling out to grep, you just say, okay,
I'm gonna instead, like, get all the lines and I'm
going to iterate over the lines in Ruby and do
things that way too. So I think with, like,
writing Ruby itself, like Ruby and Python are both generally
(31:47):
written in such a way that, like, pretty much everything
you would ever try and do is going to be
cross-platform by default, right. It's just various little gotchas,
like shelling out, or, again, I can't remember if this is
still the case in twenty twenty five, I'm pretty sure
it is, but, like, by default, Linux generally ships,
or at least usually ships, with a case-sensitive file system,
(32:09):
and macOS ships with a case-insensitive file system. So
if you're doing all your development on Mac and you
have, like, some file that you read in from your
repository or whatever, and it's, you know, attribute with an
uppercase A versus attribute with a lowercase A,
that will work just fine on macOS if you're inconsistent
Speaker 4 (32:27):
Between the repo data on disk and how you've written
it in your source code.
Speaker 3 (32:32):
But then on Linux that might blow up, right. So
these are the types of little things that you need
to care about when you're doing cross-platform development.
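(A tiny illustration of that filesystem gotcha; the file name is made up:)

# Suppose the file is committed to the repo as "attributes.yml".
# On macOS's default case-insensitive filesystem both of these succeed;
# on a typical case-sensitive Linux filesystem the second raises Errno::ENOENT.
File.read("attributes.yml")
File.read("Attributes.yml")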
Speaker 2 (32:41):
Gotcha.
Speaker 1 (32:41):
So yeah, I'm just trying to think through the rest
of it. I'm imagining that. Yeah, if I split things
up into files, what it's going to do is it's
going to put all of my files into one place,
put the executable in the path somewhere, and then the
executable is going to be aware of where the things
are and just have that in the load path, essentially.
Exactly, exactly.
Speaker 2 (33:05):
Now, with testing.
Speaker 1 (33:08):
Are there specific things that you do when you test
them or do you just kind of test the internal
logic and assume that your flags and stuff work.
Speaker 3 (33:15):
Yeah, so that's a good question as well, because I think,
particularly if you've never really written a CLI before, you would
probably think, particularly once you get to like the multiple
files level, of like, okay, I'm going to write a
bunch of unit tests for the internals and make sure
that they work.
Speaker 4 (33:33):
My experience as a, you know, longtime
Speaker 3 (33:35):
CLI author is that, like, actually, that is going to cause
you a lot of pain unless you have at least
some tests that are actually shelling out and running the
script externally, right. And, I guess relatively recently in Homebrew,
which in my mind is probably like ten years ago,
but that's recent as far as I'm concerned.
Speaker 4 (33:57):
Yeah, So we used to have.
Speaker 3 (33:58):
A lot of tests that were just, you know, unit tests,
and then we would just find that, like, really core
functionality would break even though all the internal
APIs were working. You know, it's the tale as old as time
with, like, unit tests versus integration tests, right, of,
like, you can have all of your internal APIs working
perfectly and all beautifully tested, but then when you fit
(34:21):
them all together, things explode. So as a result, like,
I've always found with CLIs in particular, like, really taking
that sort of outside-in testing approach is really nice,
to ensure that you're like, okay, so say, again, we're
talking about this OpenAI CLI type thing. Like, if I
was writing a CI pipeline to test that, I would
(34:43):
probably give GitHub Actions my, like, either my own or a test
OpenAI key, and I would have it in CI,
like, actually trying to run a transcribe command on maybe
like a single word or some very short amount of
input data, and actually making sure that as close to
(35:03):
a real use of that application as possible is actually being tested
and working as expected, because all the kind of clever,
like all the cleverer unit tests you might have will
just otherwise fall over when you need them the most.
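(A sketch of that outside-in style as an RSpec test shelling out to a hypothetical transcribey script; the fixture path and key handling are assumptions:)

require "open3"

RSpec.describe "transcribey CLI" do
  it "transcribes a tiny fixture end to end" do
    skip "set OPENAI_API_KEY to run this integration test" unless ENV["OPENAI_API_KEY"]

    # Run the real executable exactly as a user would, rather than calling internals directly.
    stdout, stderr, status = Open3.capture3("ruby", "transcribey", "spec/fixtures/hello.mp3")

    expect(status).to be_success, stderr
    expect(stdout.downcase).to include("hello")
  end
end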
Speaker 2 (35:16):
Yeah, I like it, it makes sense. I'm gonna... oh, go ahead.
Speaker 3 (35:21):
I was just gonna say, so another thing you might find,
I guess, as I've been talking about, like, CLI performance,
because it's sort of front of mind for me with this stuff.
Speaker 4 (35:28):
So there's a guy.
Speaker 3 (35:29):
Who's written loads of really helpful little CLIs in Rust.
I forget his name off the top of my head,
but we can find him, Google him, later or whatever.
So he's got a really nice tool he's built called
hyperfine, which is a CLI benchmarking tool. Basically it
does a bunch of other things, but that's what I've
found it most useful for. So what that does is
(35:50):
it takes this sort of outside-in approach, and you
can say like, hey, I want you to run this
command this many times and then tell me like what
the performance characteristics are, or I want you to run
this command and this command and tell me the difference
between the two. So quite often when I'm doing like
Homebrew performance work, the way I will measure the performance
differences is like that command could include like checking out
(36:14):
a branch, or, you should specify that as,
like, a preparatory command or whatever. So I'll say, okay, check
out this branch, run this Homebrew command, then check out
that branch, run this Homebrew command, and then it just
has a nice little display of almost like, well, when
you run this command, it's one point four times faster
on average, and that varies by, like, two percent, like,
(36:34):
across the iterations, and it will, I think it runs it
ten times by default, but you could say
run this a thousand times or two times or whatever,
and there's like as you might expect for kind of
a beautiful, like, proper CLI
Speaker 4 (36:49):
nerd CLI like this.
Speaker 3 (36:50):
You know, there's like so many different options to twiddle
every possible knob to get it to work exactly how
you want it to. But yeah, but if you're ever
in a world where you're like, I really care about
the performance implications of my CLI, like, that is
the best tool I've ever used, and works basically exactly
how I want it to, because it does that outside-in
testing, right, of, like, actually running the actual process
(37:14):
and then measuring the actual time that it runs for.
Speaker 4 (37:17):
So big shout out to hyperfine.
Speaker 2 (37:19):
Yeah, I found it on GitHub.
Speaker 1 (37:22):
It's David Peter, or sharkdp is his GitHub handle.
And yeah, looks like a cool tool. Because
Speaker 2 (37:31):
I wanted to switch gears a little bit.
Speaker 4 (37:33):
Yeah please, So I was.
Speaker 1 (37:35):
I wound up on your blog when I was getting
ready for the episode, and you have an article on
here and we've wound up talking about AI, and it
seems like it's still the hot topic. It's funny because
over the last i don't know, three four or five months,
I've really started to use AI quite a bit. Sometimes
I use it essentially to write most of the code,
(37:57):
and then I have people, you know, get concerned
about whether AI is going to take people's jobs and
things like that, which you address in your article here,
it says open source maintainers thrive in the LLM era.
You know, you address some of that, and you talk
about how people are using LLMs and things like that.
I'm just wondering, like, what's your experience using LLM tools
like Cursor or Copilot. And then the other question I
(38:20):
have is, and you kind of address it here,
is there any hope for the people that are freaked
out that AI is going to take their programming job?
Speaker 3 (38:27):
Yeah, so I guess I'll make sure I get to
both of those. But yeah, I guess, experience-wise,
I've been probably using these sorts of tools in some
sort of capacity since the very early days. So, like,
back when Copilot was still an internal alpha at
(38:48):
GitHub, based on, I
Speaker 4 (38:50):
think, GPT-2 to start with.
Speaker 3 (38:52):
Yeah, and OpenAI had publicly not really done anything
except released a bunch of papers. Yeah, like, I was
asked to test it internally, essentially, and, like, I was
known for being the type of person who was good
if you wanted honest opinions of the things I was
testing, because, regardless of the political consequences, if you asked
me to test something and I thought it was
trash, I'd try to be nice about it, but, like,
(39:14):
I would not tell you that it was great. I
would tell you that it was not great. But yeah,
So I remember trying this and being like... because,
I'm sure for people, it's almost hard to even remember,
But like if you put yourself back in whatever this
was like twenty twenty one or something like that, Like
you know, if someone had described what you're able to
(39:35):
do now with Copilot or Cursor and they said, I
built a thing that does this, you would be like,
your eyes would roll out of your head and then you'd
be like, okay, like, yeah, whatever. Like, I've seen a
lot of autocomplete, like, right, it always sucks. It just
gets in the way. It's annoying, it's not useful. Like,
I write Ruby now, the autocomplete in Ruby sucks.
I don't really care and I don't really miss it
(39:55):
anymore because I've got used to it and whatever. This
is just yeah, nothing that I care about, right, and
I tried it, and, like, my almost immediate reaction was,
like, even when it was way more limited, was like, wow,
this is not like radically transformative yet, this
is not incredibly workflow-changing yet, but this is really good.
Speaker 4 (40:16):
And this is a lot better than I expected.
Speaker 3 (40:18):
And ironically, you could build a pretty damn good autocomplete
without any specific knowledge of Ruby as a language, but
just using this kind of LLM technology. And I said
to the team, I was like, yeah, I assumed this
would be complete garbage and it's actually good, well done,
Like can I keep using this please? So yeah, So
I guess I went from that to using Copilot.
Speaker 1 (40:41):
I just want to chime in that lines up with
my early experience with a lot of it, where it
was a little beyond autocomplete, and it didn't
always give me what I wanted. Like, sometimes it was
like, you are assuming that I am doing something wildly,
drastically different from what I'm doing, but it
was right often enough to where it was like, okay,
(41:01):
I definitely want this turned on.
Speaker 3 (41:03):
And I think that's the thing like in the early
days particularly, and I think maybe some people who tried
stuff out too early over-indexed on this. I
think it was very personal in terms of, like,
what's the hit rate? Because, I mean, I find this
with software engineers in general, right? There are some software
engineers I know who will completely genuinely, without any sense
(41:27):
of irony, will express, well, this only solves the customer
problem ninety-nine percent of the time, therefore it's pointless,
one percent of the time, why would you ever do that?
And it's, well, actually, for most people, if this makes
things one hundred times faster ninety-nine percent of the time,
which I'm not saying AI does, but, like, in some
software cases, things do, that's probably an enormous amount
(41:50):
of business value for a lot of people, right, Right,
But we have, and I don't think it's necessarily
a bad thing, but we tend to have a streak
as a, you know, we're literally the people of zeros
and ones, right. So it's like, we want things to
be perfect, and when things aren't perfect, it's easy to
kind of fall into the, like, well, why did I bother, like,
you know, what's the point? Yeah, And I definitely think
(42:11):
with the early LLMs, like the early Copilot, it's like, the
autocompletion was, you know, if that's wrong fifty
Speaker 4 (42:18):
Percent of the time.
Speaker 3 (42:19):
There's a lot of people who just find it really
really annoying, Whereas I'm the type.
Speaker 2 (42:23):
Of person, yeah, but it sped me way up.
Speaker 3 (42:26):
Exactly. Whereas I think I'm lazy enough that I'm
like, if fifty percent of the time I can just,
like, I'm a reasonably fast typer, but I was never
one of those, like, hyper-fast, I've-switched-to-Dvorak-because-it-gets-me-an-extra-ten-percent, whatever,
like Vim keyboard warrior types. So for me, like, I'm like, oh, yeah,
you know, like half as much typing some of the
(42:47):
time is well worth it for me, right, and I
feel able
Speaker 4 (42:52):
To ignore the visual noise of it always.
Speaker 3 (42:55):
Trying to prompt me to do the wrong thing, right,
And some people weigh that differently, and I get it, right.
But yeah, I guess I would say, if you're one
of those people who hated that in, you know,
maybe even six to twelve months ago, like you should
check back in because things are changing, yeah very fast.
Speaker 2 (43:11):
Right.
Speaker 1 (43:12):
Yeah, So kind of getting into the question that I'm asking,
I'm enjoying this, right, I'm not saying you didn't get
to the point, of course. What I'm saying is so
like this morning, right or last night, I think it
was last night I started it, and then this morning
I finished it. But you know, I'm expanding my podcast
hosting system, which also does courses and summits and coaching
(43:34):
and podcasts and screencasts, right, because that's all stuff I
want to offer off of the back of the podcasts.
Speaker 2 (43:40):
Right.
Speaker 1 (43:41):
I was like, okay, well, I've got this multi-tenant
setup going, and so I need people to be able
to sign up, right, and actually pay me to use
the system. And so I said, hey, I need a
signup form. And I mean, it wrote all
the views in Rails, it wrote all the management and
the admin system, it wrote, you know, and it
basically worked. I mean I had to tweak a couple
(44:02):
of things, because, you know, it was like, okay, you
know, you
Speaker 2 (44:05):
Got this wrong.
Speaker 1 (44:06):
And then when I asked it to do something else, it
changed it back and I had to go in and say, no,
actually you did it wrong the first time.
Speaker 2 (44:12):
But I mean it it.
Speaker 1 (44:15):
Would have taken me a day or so to figure out,
you know, to actually build it out on my own, right,
go in and put everything together. It's all pretty basic stuff,
but it I mean, it just wrote it in like
twenty minutes, right, and then and then the tweaking took
me probably another forty five minutes, and so I can
see people looking.
Speaker 2 (44:35):
At it and going, this thing is going to take
my job.
Speaker 3 (44:39):
Yeah, And I understand why people think that, and I guess, like,
to me, it would be when someone with your level
of experience, Charles, is able to do that and there
is zero tweaking, and like, I've done this one hundred
times in the last two weeks and never at any point
have I had to change a single character, like, that
(45:01):
would be when I would be more concerned about being like, well,
you know, maybe I still think the job of being
a software engineer wouldn't go away in that case, because
I think we are, even outside of the programming, right,
I was cooking some sweet potato in the air fryer yesterday, right,
and my wife and I both use ChatGPT, and I
(45:23):
transcribed it using voice, and she heard the way in
which I speak to ChatGPT and the amount of context
I gave it about like here's some facts about my
particular sweet potato and how I like it, here's some
facts about my air fryer and whatever.
Speaker 4 (45:37):
And she was like, wow, like you you said a
lot of.
Speaker 3 (45:39):
Stuff there, whereas I would just be like cook sweet
potato in air fryer, right, and the proplet she would
get back would be pretty poor, And the prop that
I get back is amazing, Right.
Speaker 4 (45:50):
So I guess some.
Speaker 3 (45:51):
People talk about like prompt engineering or context engineering or whatever,
but like, I think there's more to it than that,
and I think there's a reason why the experts in
prompt engineering or context engineering don't just happen to be
like international tennis players or philosophers or whatever. Like there's
mostly software engineers, or at least software adjacent people, because like,
(46:13):
this is still talking to computers, and computers still like
to be talked to in a like LM's like to
be talked to in a different way to you know,
your CPU.
Speaker 4 (46:22):
But it's still sort of an art and a craft
in that way.
Speaker 3 (46:24):
And, like, I can definitely imagine almost the worst-case
scenario, where you're gonna be writing a lot less
code by hand than you ever were before, and for
some people that really bums them out and freaks them out.
But I still think that job of like I take
customer requirements and turn them into software, that's still a job, right,
And that's still a hard job, and it's still a
(46:46):
job that someone like yourself, with a lot of experience,
is able to work with LLMs dramatically more effectively
than a random person who started coding two months ago.
Speaker 2 (46:56):
Right, Yeah, I agree.
Speaker 1 (46:58):
There are a couple of things you kind of talked
through that I want to touch on. I mean, one
of them is, yeah, you figure out essentially what
context you have to give it, right, so you give
it all of the information it needs so it can
break it down and go, okay, this
is what I'm working with, and then, yeah, it'll give
(47:19):
you a much better response. One other thing that I
found that just kind of comes with experience with the
LLMs is that, and this came up today, like, I
forgot to tell it I'm not using Devise on my
Rails app, and so, because I
told it, I was like, you need to send an
email confirmation when people sign up for the system, right,
not just a welcome email, because I need to confirm
(47:41):
that their email works. And so it turned around and
it said, I see that you're using Devise confirmable, and
I immediately had to get in and stop it and
say, I am not using Devise, right, and then
it did the right thing, and so
Speaker 2 (47:55):
You know, I had to remember that.
Speaker 1 (47:56):
And so some of that with the prompt engineering, Yeah,
you figure out what context it needs in order to
make the right decision, and then you give it to it.
And a lot of it, too, is just being thorough
right so that it doesn't make any assumptions. I know,
we're kind of answerpromorphizing the LM and it's just a
predictive language model, but you know, yeah, it doesn't make
any assumptions on what you want. And so yeah, some
(48:20):
of it's going to just boil down to, yeah, I'm
the human that's giving it a proper prompt that gives
me better outputs. But the other thing is is that, yeah,
I still find that I have to tweak it, or
at least I have to verify it right. Even if
I didn't have to touch it, I have to verify
that it does the thing right. I have to spin
up my app and go and click through it or
(48:40):
write or if it's a CLI, you know, I have
to run it a couple of different ways to make
sure that it's still doing what I want.
Speaker 2 (48:46):
And so there's that too.
Speaker 1 (48:48):
And then the last thing that I'm going to just
point out is like, I'm not seeing companies laying people
off saying, well, you know, we're using the LLMs to
enhance the capabilities of our programmers and so we don't
need half of them anymore.
Speaker 2 (49:03):
We're not seeing that yet.
Speaker 1 (49:03):
What we're seeing is we're seeing companies go, we can
get ten times the work done or you know, five
times the work done, or double the work done, by
having the engineers that we have use the tools.
Speaker 2 (49:14):
And I think that just accelerates.
Speaker 1 (49:17):
And I think any company that's shortsighted enough to turn
around and go, we're going to lay off half of
our technical workforce, their competitors are going to come in
and eat them alive.
Speaker 2 (49:27):
Yeah.
Speaker 3 (49:28):
Yeah, I think some people speculate that's the case,
some companies hint towards that being the case, but I
don't think it is. I think what you are seeing, and again,
I think if we're talking about things that you can
be freaked out about, is like, there's definitely been a
big drop in hiring for juniors, right, like a measurable
decrease in junior engineering positions. And I think there is
(49:49):
this outstanding question of like, again, you and I have
touched upon this already, right. Like I say in that post,
one of the reasons why I talk about, like, why
I think open source maintainers are well placed
for the LLM era, right, is that conversation you mentioned before,
when you were talking about building something, in an open source project
you could have equally been describing, almost identically, some
Speaker 4 (50:11):
Contributor you've never worked with before.
Speaker 3 (50:13):
Right, It's like, oh, well, you know what, I opened
an issue and then overnight some random person I never
met before appeared.
Speaker 4 (50:19):
They made a PR, like, it was mostly
Speaker 3 (50:21):
All right, I had to make some tweaks, and then
I merged it, right? Like, and I actually think the
trust and verification process looks very similar, right for someone
like yourself with a lot of experience, someone like me
with a lot of experience, particularly if you have a
lot of experience with one code base you've worked with
for a long period of time, like you're able to
provide probably very in depth review of a relatively large
(50:44):
amount of code in a very short amount of time
in a way that makes these tools valuable to you.
Right if you were you were unable to do that
either because you lack the experience, you lack the contact
with the codebase, you lack the program and buility yourself.
Speaker 4 (50:57):
Then all of a sudden you end up in.
Speaker 3 (50:59):
This precarious position where it's like, okay, either I go a hundred miles an hour and, if this stuff breaks, I won't know how to fix it, and I won't even really know how to describe to the LLM what's gone wrong beyond just copy-pasting error messages, and when that fails, I'm just shit out of luck, right. Or I slow down and I say, okay, I'm just.
Speaker 4 (51:23):
Going to learn software the same way everyone learned software before LLMs.
Speaker 3 (51:26):
Right. And then the people that do that have to kind of have a bit of the FOMO, or even potentially their management chain being like, hey, all the seniors are like twenty-five or fifty percent more productive now, why aren't you? And I'm glad I'm not responsible for social engineering or whatever, because I think that's a
(51:46):
really hard question of what are we going to do as an industry, because we can't just say we don't hire juniors anymore, because that's not going to work. But at the same time, I don't think we have a really good answer for what the juniors should definitely.
Speaker 4 (52:01):
Do to skill up, right.
Speaker 1 (52:03):
Yeah, it's hard. Yeah, I completely agree with you. I think, kind of reading between the lines on your blog post, it seems like part of the answer is having juniors, or people who are at that level, doing open source with you, right, because then you don't have your boss or your company or the deadlines or things like that where people
(52:26):
are going, why are you taking so long? And then the other thing is that if you do take the extra minute, even if you're still generating the code with AI, to then ask the LLM, okay, explain to me.
Speaker 2 (52:38):
What you just did.
Speaker 1 (52:39):
Yep, right, you can start to get context for what
it did and why it did it. Now it may
not be giving you the best practice, but it's giving
you at least some context for what it's working with,
and in some cases it is.
Speaker 2 (52:54):
Going to be best practice or at least common practice.
Speaker 3 (52:57):
And you're seeing that even with LLMs now, right, like the kind of study mode in ChatGPT and things like that, which I've played with a little bit myself when I've been trying to learn new things where I don't want to over-lean on the LLM, and it's kind of trying to do a bit of Socratic method stuff and all that sort of thing. But even then, I think it's a great point about
(53:18):
the open source side, where, you know, there's a bunch of talented Homebrew maintainers I know who in some cases haven't had a tech job yet, are still in college or relatively recently out of college, and their code review abilities, for typical expectations of a new engineer, are completely off the charts. Some of them are dramatically better at code review than I am, having done it,
(53:40):
you know, probably ten times as.
Speaker 4 (53:42):
Long as they have, but they've had a lot of
experience doing this.
Speaker 3 (53:46):
And I think that might be part of the junior story. We've sort of had a little bit of an attitude, I think, for a while, at least in some company cultures I've seen, where it's like, well, okay, we'll get everyone to review a bit of code, but really the more senior people are going to be responsible for doing the more in-depth, hardcore code review of the gnarly stuff, right.
(54:11):
And that might be the culture that needs to change, where, actually, rather than saying to the juniors, hey, you just spend ninety-five percent of your time coding, don't worry about this code review stuff too much, you'll figure it out as you go along, maybe we need to flip it around and be like, actually, you should spend ninety-five percent of your time doing code review and don't worry about this coding stuff too much. If you read enough of other people's code, you're going
(54:33):
to figure that out, right. And maybe that's where we go. And certainly that would be an attitude that would make you better at working with LLMs, for sure.
Speaker 1 (54:43):
Right. You brought up code reviews, and that kind of reminded me of another thing, since we're talking about AI and things like that. So yesterday I put in a PR at work, and I asked some folks to review it, and I got on this morning and it still hadn't been reviewed. Whenever people get in and they see the message, I'm sure it'll get done. Anyway, when I checked it, GitHub asked me if I wanted Copilot to review my PR.
Speaker 2 (55:05):
So, how do you feel about that?
Speaker 3 (55:07):
Yeah, so at Workbrew, and now at Homebrew and some companies I've done contracting with, I've enabled it. In fact, I don't know if I've actually done this on Homebrew, but basically, whenever I can, I enable Copilot review of PRs by default, and I think it's good. It's sort
(55:28):
of, it has definitely caught a bunch of stuff that humans have not caught on my PRs. Again, the false positive rate is pretty high, but if what you're concerned about is, hey, I want to make sure that someone is going to catch my typos.
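For anyone wanting to try the same thing, here is a minimal sketch of requesting a Copilot review on a pull request from the command line. This is not something described in the episode: the PR number and the reviewer handle are illustrative assumptions, and it only works if Copilot code review is enabled for your repository or organization.

```bash
# Minimal sketch: ask Copilot to review an existing pull request.
# Assumes the GitHub CLI (gh) is installed and authenticated, and that
# Copilot code review is enabled for this repository. The PR number and
# the reviewer handle below are illustrative assumptions.
gh pr edit 123 --add-reviewer "copilot-pull-request-reviewer[bot]"
```

To get the review-every-PR-by-default behaviour described above, the repository owner would typically turn on automatic Copilot review in the repository or organization settings instead of requesting it per PR.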
Again, like another blog post I wrote a long time ago,
(55:49):
and at this time, you know, this was whatever, twenty eighteen or something, so we weren't even talking about AI and LLMs, but I think it still kind of holds up: I wrote about what I called robot pedantry, human empathy, basically. And what I compared in that post was, on the one side, tools like RuboCop, which you can set to be hyper-strict, right, really, really pedantic, and people
(56:12):
can actually be fairly tolerant of that. I actually quite like it when it's dialed up to eleven like that. But if you, as a human, give the same code review feedback as a RuboCop dialed up to eleven.
Speaker 4 (56:23):
All your coworkers will hate you.
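As a rough illustration of the dialed-up-to-eleven idea, here is a minimal sketch of running RuboCop in a deliberately pedantic way; the flags are standard RuboCop options, but how strict to go is obviously a team choice and is not something prescribed in the episode.

```bash
# Minimal sketch: a deliberately pedantic RuboCop run, e.g. in CI.
# Assumes RuboCop is in the project's Gemfile. Fails on any offense,
# down to minor style/convention-level ones, and links each offense
# to the relevant style guide entry.
bundle exec rubocop --fail-level convention --display-style-guide
```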
Speaker 3 (56:25):
Yeah, often even if you have agreed on these coding guidelines as a group or whatever. There's something about that level of pedantry, like if someone typos something ten times, going through and every single time being like typo, typo, typo, typo, typo, even if you use the GitHub suggestion so it's one click for them to fix it. People don't like that. They really hate it when humans do that to them. But there's an
(56:47):
expectation that robots will do that. So that's one thing I found useful on that side. But then again, like I said in that post, part of why I wrote it at the time is there'd been more and more of a move towards automation and stuff like that, which I thought was really healthy, in CI and things like that, but you were starting to see people being like, oh, the first time someone makes a pull request on my repo,
(57:08):
we should congratulate that person, so let's write a bot to congratulate them. And I remember talking to a bunch of people and they were saying, yeah, it just feels completely meaningless when a bot says well done, good job, you are a rockstar. It's like I'm in some empty room by myself and, you know, over the Tannoy comes a robotic voice saying you are great.
Speaker 4 (57:29):
You know.
Speaker 3 (57:29):
It's like, whereas if a human says to you, hey, well done, right.
Speaker 4 (57:34):
Like, I'm pumped you're here, that actually has an impact.
Speaker 3 (57:37):
And I think this is the same thing with kind of code review and Copilot and these tools and whatever now. Copilot can give you a review where it points out your typos more effectively than a human can, but Copilot can't say, wow, that was really cool, you solved this in half the lines of code
Speaker 4 (57:55):
I thought was possible. Like, this is a really elegant solution. Great job, right.
Speaker 3 (58:00):
And that's, again, much like the stuff about humans and jobs and whatever, right. Most of us are probably not going to be okay with spending forty hours a week exclusively communicating with an LLM. We want to deal with humans who say nice things that they actually mean, that are not in the prompt, right, and that does have more of an impact.
(58:22):
I feel the same way about code review, where these are complementary tools, and the human code review and the Copilot review are sort of doing different things, but I'm glad that they're both there, and I wouldn't use one to entirely replace the other.
Speaker 1 (58:33):
So I guess the other question that I was asking
is what AI tools are you using these days?
Speaker 3 (58:38):
Yeah. So for me, I'm mainly using Cursor as my kind of in-editor code completion thing, and their agent mode kind of sporadically, but again, I use it in the in-editor fashion. I still use ChatGPT mainly for stuff where I'm just asking it things where I'm probably going to write the code myself, or with a bit of help, but
(59:00):
I basically use that instead of Google. And then the thing that has been interesting to me recently is, all my kind of AI-forward friends were like, oh, you need to use Claude Code, Claude Code, so I signed up to play with Claude Code.
Speaker 4 (59:16):
I was like, Okay, this is quite good. I can
see the appeal.
Speaker 3 (59:18):
But, just because I'm an open source maintainer, there's this GitHub agent PR thing now. If you have Copilot for your organization or whatever, and I'm not quite sure how the pricing and the enabling works, but it works on Homebrew essentially.
Speaker 4 (59:34):
So what you can do is.
Speaker 3 (59:35):
You can just assign an issue to Copilot and then it will just open a PR that's like, okay, I've tried to solve this issue for you, right. And it's good that we're having this conversation today because it's fairly timely: in the last week, Homebrew had, I think, eight open bugs on the main package manager repo that I review everything on, and I was like,
(59:55):
I'm just going to assign all of them to Copilot and see what happens. And it's similar to the thing you described earlier, right, where in a couple of cases it was literally spot on first time.
Speaker 4 (01:00:08):
In most of the cases it required me to do
the last.
Speaker 3 (01:00:12):
Like, ten percent of the code, right. And then in a couple of cases it was like twenty-five percent of the way there, pretty broken, and I had to have quite a bit of back and forth. I ended up kind of rewriting half of it myself. But even then, it sort of solved the empty page problem for me. And I think it's just something about me as an open source maintainer that it feels good and nice to just be like: assign issue, review PR, check out PR
(01:00:37):
to finish it off, merge PR, and that's the flow, instead of this thing that lives in my terminal that I need to check and whatever.
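For reference, a minimal sketch of that assign, review, check out, merge loop using the GitHub CLI, assuming the Copilot coding agent is enabled for the repository and can be assigned like a regular user; the issue and PR numbers and the assignee handle are illustrative assumptions, not details from the episode.

```bash
# Minimal sketch of the flow described above, using the GitHub CLI.
# Assumes gh is authenticated and the Copilot coding agent is enabled
# for this repo; numbers and the assignee handle are illustrative.

# 1. Assign an open issue to the coding agent so it opens a PR.
gh issue edit 4242 --add-assignee "@copilot"

# 2. When its PR appears: read it, check it out locally to finish the
#    last bit by hand, then merge it.
gh pr view 4243 --comments
gh pr checkout 4243
gh pr merge 4243 --squash
```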
Speaker 1 (01:00:45):
Yeah, I like that you brought up the empty page situation. That's a really good way of putting it, right, because I'll sit there going, okay, how do I need to put this? And then I kind of have an idea of how I want it to go, and so I start kind of fiddling with it, right. And instead, I can just prompt the AI: this is what I'm building. And sometimes I'll ask, you know,
(01:01:07):
how should I start, and sometimes I'll just tell it to go for it.
Speaker 2 (01:01:11):
Yeah, I'm sure.
Speaker 3 (01:01:12):
And I found this as well, like sometimes even when it's completely wrong.
Speaker 4 (01:01:17):
Yeah, you probably heard.
Speaker 3 (01:01:18):
That, like Cunningham's law, or whoever said this ages ago: the quickest way to get a Stack Overflow answer is to answer your question with the wrong answer.
Speaker 4 (01:01:29):
Yeah, and then everyone will jump in and be like, no, that's wrong. Let me tell you how it actually is.
Speaker 3 (01:01:34):
Sometimes I feel the same way with the LLM stuff, where, yeah, sometimes it goes off and it does something completely fucking terrible.
Speaker 4 (01:01:40):
And I'm like, no, that's not how you do it.
You obviously do it like this, and I'm like, oh, yeah,
I figured out how to do it.
Speaker 2 (01:01:45):
Well.
Speaker 1 (01:01:45):
What's funny is that there were a couple of things that I was working on, because I'm using Cursor in agent mode. The main reason is that my employer provides us with credits on Copilot in VS Code, which is what I'd kind of gotten used to, but I don't want to use work's Copilot credits on my personal stuff, and so
(01:02:06):
I use Cursor for my personal stuff and Copilot for their stuff, because I'm using the same GitHub account for both. And so, anyway, there have been a couple of times where I've told it to do something and it sits there and it flails, right. It's like, yeah, I'm going to do this. Oh wait, I think I have a better approach, so I'm going to delete everything I just did and do it this way. And it'll do like two or three things, and I'll finally just step
(01:02:27):
in and say, I've been watching you, and actually I just want you to do it this way, and it'll just turn around and do that. Of course, it tells me I'm smart for saying that, right. And again, it's like, you're a machine, you know, but do it my way. And so just because it gets it wrong doesn't mean that it can't get it right if you give it just a little more direction.
Speaker 3 (01:02:47):
And to just speculate wildly for a minute, I also have a hypothesis that maybe the reason why junior hiring could be down is that the people who were previously responsible for mentoring those juniors are now essentially mentoring their LLMs, because, again, that flow you described earlier about
(01:03:07):
Rails and, oh, it thought I was using Devise and I wasn't, like, ten years ago that could have described some new talented junior person that you hired, right: oh wow, they saved me so much time, and I had to do the last little bit to help them over the line and whatever. I think the flow, and also the stuff you like and the stuff that's really frustrating and exhausting about it, is
(01:03:30):
kind of the same. And I personally couldn't imagine a day where I spend half my day supervising agentic AI workflows and then the other half doing the same thing with junior folks who are struggling in the same way, or the worst of both worlds, which we're seeing in open source now, which is someone who is junior who
(01:03:50):
is trying to use an AI agent to create a pull request. And obviously you can't really prove this, but I had a PR today where, each round of review, they were responding, and then they would paste something into an issue comment with literal \n characters in there rather than newlines, which makes it look like it was automated. And then I would ask.
Speaker 4 (01:04:11):
I'd, like, give a bunch of comments.
Speaker 3 (01:04:12):
But then one of the comments would be like, hey, like,
you've added two extra options to the CLI.
Speaker 4 (01:04:18):
Which one of those are you using? Do you need the functionality for both, or just for one?
Speaker 3 (01:04:23):
Because if you only need it for one, we should just add the functionality for that one. And they would just completely ignore these questions and just turn it into more code. And I had that AI feeling, which I'm sure you're familiar with, Charles, of just, like, more code every time.
Speaker 4 (01:04:34):
Every time.
Speaker 3 (01:04:35):
The solution is we're just going to add more code, right. And it got to the point where I was like, I'm just speaking to someone's Claude Code here. If I wanted that, I would be using Claude Code.
Speaker 4 (01:04:46):
Myself, so I couldn't prove it.
Speaker 3 (01:04:47):
But I just closed the PR, because I was like, sorry, it feels like I'm speaking to an AI, not a human. And I thought to myself, I'll probably just assign this issue to Copilot later and see how it does, and it's probably going to be easier than what we were doing.
Speaker 2 (01:05:00):
Oh gotcha.
Speaker 1 (01:05:01):
All right, Well, I've got to start wrapping up, so
let's go ahead and do some picks and then we'll
call it good. Before we do that, if people want
to connect with you, if they have questions about Homebrew,
or it sounds like you will help people out for
hire or at least you used to, so if there's a
place for people to hire you.
Speaker 2 (01:05:18):
I'd love to have that too.
Speaker 1 (01:05:19):
But yeah, how do people connect with you on any
of those levels?
Speaker 2 (01:05:23):
Yeah?
Speaker 3 (01:05:23):
Sure. The best place for me is my website at mikemcquaid dot com.
Speaker 4 (01:05:27):
If you go there, it's got links to all my social media.
Speaker 3 (01:05:29):
It's got my email, it's got talks and articles and all that type of stuff, anything you might want to know about me.
Speaker 4 (01:05:35):
That's basically the best kickoff point.
Speaker 1 (01:05:37):
That's awesome. All right, well, let's go ahead and do some picks. I will get us started. Boy, it's been
a while since I've done this. I'm trying to think
of what games I've been playing lately. Here's one that
I'll do. So, I don't know if you do board
games much, but yeah, I do a little, not huge amounts.
Speaker 2 (01:05:55):
Have you ever played D&D?
Speaker 4 (01:05:56):
Again, like a couple of times. I've never been a DM or anything.
Speaker 1 (01:06:00):
Okay. So there's a game out there called Betrayal at House on the Hill, and I think that there's a book or a movie or something related to it.
Speaker 2 (01:06:12):
Anyway.
Speaker 1 (01:06:12):
Essentially, you take your Scooby-Doo gang into a haunted house and you go explore the house until the haunt starts, and then, yeah, one of you becomes the traitor, and so then it's the traitor against the explorers until one of you wins, right. And so it
(01:06:33):
gives you, you know, there are two books. There's the Traitor's Tome and there's the Explorer's Manual, and the Explorer's Manual explains to the explorers how they win, and the Traitor's Tome explains to the traitor how they win. And the reason there are two books is because there are some things that the traitor has to know that the explorers don't need to know, or it would help
(01:06:55):
them win if they knew. Anyway, so it does that. Well,
there's a version of this called Betrayal at Baldur's Gate. Baldur's Gate, there's a video game, yeah, Baldur's Gate, but Baldur's Gate is a D&D location, and so if you ever look at anything that's Baldur's Gate, you'll notice it's Wizards of the Coast, which is the
(01:07:15):
company that does D&D. And so, anyway, Betrayal at Baldur's Gate is Betrayal at House on the Hill with the D&D elements from Baldur's Gate. And so instead of exploring rooms, you're exploring areas. You have the city, like the streets, you have indoor areas,
(01:07:38):
and then you've got the catacombs underneath the city, and so you can kind of explore wherever you want. There are some differences, but yeah, the rest of the elements are the same. And so if you're kind of into that, it doesn't have strong story elements, it really is just, you know, an explore and then fight off the monsters game. But it's fun,
(01:08:01):
and yeah, I played it with my guys' group a couple of weeks ago. BoardGameGeek weights it at, I think, a two point six, so, you know, for your casual gamers, it's mildly complicated, but not so complicated that folks who just like, you know, a casual board game among friends wouldn't just pick it up. So I'm going to
(01:08:22):
pick that. It came out in twenty seventeen. I don't know if it's still in print, that's the only caveat I have on that.
And then on the other picks, yeah, I've really gotten
into using the agent mode on Cursor, and so I'm
going to pick that as well. Incidentally, I was just
kind of goofing around and I thought, Okay, well, how
(01:08:45):
much work would it be to get it to essentially
vibe code a React Native app, a podcast subscription app.
Speaker 2 (01:08:54):
And I have to say that it did pretty good.
Speaker 1 (01:08:55):
It doesn't have all the features I want in it yet,
but it's good enough to where I'm starting to think
about whether or not I actually want to release the
sucker onto the app stores. And, you know, I'm not a React Native developer by any means, though, again, that's another technology I want to pick up because we use it at work. So anyway, I'm
(01:09:17):
gonna pick that as well. And then within React Native,
I'm using Expo.
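For anyone curious about the starting point, here is a minimal sketch of bootstrapping that kind of React Native project with Expo; the app name is illustrative, and nothing beyond the standard Expo commands is implied.

```bash
# Minimal sketch: scaffold and run a new Expo-based React Native app.
# The app name is illustrative; the generated project is what you would
# then point an AI agent (or yourself) at to build out features.
npx create-expo-app@latest podcast-app
cd podcast-app
npx expo start   # starts the dev server for Expo Go or a simulator
```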
Speaker 2 (01:09:21):
Yeah. Pretty cool. But yeah, Mike, what picks do you have?
Speaker 3 (01:09:26):
Yeah, so I guess my game-related pick would probably be Avowed. That was recommended to me by, I don't know if he's been on this pod before, but certainly a legend of the Ruby community, Justin Searls. Oh yeah. It's made by Obsidian. It's a kind of Bethesda-esque, story-driven RPG, and it's just,
(01:09:51):
I don't know, if you liked all those old school RPGs like Oblivion and Skyrim and whatever, it's got that vibe, but with lots of nice quality of life things. Like, if a character ever uses a bit of jargon, you can just press a button and a glossary pops up that defines the specific bit of jargon they used, and I'm like, yeah, I want this in every RPG now. And, you know,
(01:10:13):
if you're a mage, you have a wand that casts stuff and it doesn't require mana, and, you know, lots of little things that have annoyed me in other games. So yeah, I'm enjoying just exploring that. I imagine it's one of those games I'll probably go a hundred percent on, just because the world is interesting enough.
Speaker 2 (01:10:30):
Yeah, what's it called again?
Speaker 3 (01:10:31):
It's called Avowed, A-V-O-W-E-D. And then, I guess if you're not a gamer, TV-wise we've been watching Tokyo Vice, which is pretty interesting, about this guy who's a journalist, and kind of lots of stuff with organized crime in Japan. I forget what network it's
(01:10:52):
on or whatever, but that's been good. There are only two seasons of that. We're about halfway through the second season. Enjoying that a lot.
Speaker 4 (01:10:57):
And then I guess in computer land.
Speaker 3 (01:11:01):
I guess, yeah, I mentioned it already, but I would probably say the Copilot agent mode, where you can assign it an issue and it will then create a PR. There's a bunch of rough edges there, but there's something about the workflow that feels pretty different to running agents locally, and it has a different trust model, in a good way, in that you can
(01:11:22):
let it just run arbitrary commands and it's running in some random Docker container on GitHub's infrastructure that you don't care about, compared to your local machine.
Speaker 4 (01:11:30):
But yeah, if you're someone who lives in PRs all of the time.
Speaker 3 (01:11:33):
Then it might be worth seeing if that's an agent flow that works for you, more than some terminal app or some window in your IDE or whatever.
Speaker 2 (01:11:41):
Cool. Well, thanks for coming. This was fun.
Speaker 4 (01:11:44):
Yeah, thanks for having me, always a fun time.
Speaker 1 (01:11:47):
Yeah, I'm gonna go ahead and wrap us up. Until next time, folks, Max out.