Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Cool Zone Media.
Speaker 2 (00:05):
Hello and welcome to Better Offline. I'm your host, Ed
Zitron. As ever, buy our merchandise.
Speaker 3 (00:09):
Subscribe to the newsletter.
Speaker 4 (00:10):
It's all in the episode notes.
Speaker 2 (00:24):
And today I'm joined by programmer Colton Voege, who wrote
an excellent piece about a month ago called "No, AI
Is Not Making Engineers 10x as Productive."
Speaker 3 (00:33):
Colton, thanks for joining me.
Speaker 1 (00:34):
Thank you for having me.
Speaker 2 (00:36):
So tell me a little bit about your day-to-day
work. What do you do for a living?
Speaker 1 (00:41):
Yeah, so I'm a software engineer, and I work specifically
in what's called web application development. That's building rich
applications that work in the web browser. So think of
something like Amazon or Google Drive, where you can do
a lot of interactive features within the application, within a browser.
Speaker 2 (01:01):
Kind of like the foundation of most cloud software.
Speaker 1 (01:04):
Yeah. I would say that if not the majority
of engineers work in web application development, then probably the
plurality do right now.
Speaker 2 (01:12):
And a large amount of how people interact with
software now is in this way.
Speaker 1 (01:17):
Yeah, exactly. And that's why most developers probably
work in this field now.
Speaker 2 (01:25):
And you'd think something like that would be perfect for
AI coding.
Speaker 3 (01:29):
It is.
Speaker 1 (01:30):
It is definitely the most applicable place for AI coding
of all of them. There's sort of a thing where
people are like, you know, "Oh, LLMs don't really work
well for my language," and they'll be talking about something
like Rust, which is more of a high-performance language rather
than a web language.
Speaker 3 (01:51):
What is a high-performance language in this case?
Speaker 1 (01:54):
Just something that's designed to work on weak hardware,
or at extremely high speed. So, for example, video
games aren't built in Rust; they're usually built in a
language called C++, but they're built in a high-performance
language, because you're trying to max out as much as
you can.
Speaker 2 (02:15):
You're taking advantage of the high-performance compute you have.
Speaker 1 (02:18):
Yeah, whereas with a website like you know, like Amazon
or something like that, you're really not pushing the limit
in terms of interactivity, and so more modest languages are fine.
Speaker 3 (02:30):
Got it?
Speaker 2 (02:31):
So you laid it out really nicely in the piece,
so you might have to repeat a few things, I imagine,
but run me through why AI coding tools can't actually
make you a 10x engineer, or a ten-times-better engineer.
Speaker 1 (02:46):
Yeah, so basically, AI is almost shockingly good at
generating code, and it's almost shockingly good at generating
code that runs, and it can answer a lot of
questions that are difficult, or at least annoying. Annoying
is probably the better word.
(03:09):
It's really good at dealing with things that are annoying.
So in coding, we have this concept called boilerplate, and
it's just code that you have to kind of rewrite
a lot. As an engineer, ideally you don't have to
write a lot of boilerplate; ideally you automate things and
abstract things, so you don't need to do that very often.
But it's sort of a requirement of the job. And
(03:31):
it's really good at writing that, because it's kind of
like low-intent, high-volume code, a quantity-over-quality type thing. So
it's really good at stuff like that. The problem is
that generating code really isn't the hard part of being
a software engineer. It's
(03:52):
one of the things that really matters, and obviously, if
you go to college to learn how to code, what
you're going to spend most of the time doing is
typing code. But it's really just one thing that you're
doing. In reality, you're doing a lot more thinking about,
okay, how does this work with the systems we already
have? How do I avoid creating what's called tech debt?
Tech debt is basically
(04:14):
like, an easy way of thinking about it is when
you write code that makes writing future code harder. An
example would be: you want to write a piece of
logic that, let's say, computes sales tax. If you're making
an online store, you'd want to compute sales tax based
on where the customer is. You only want to write
that once. You don't want to have two different places
that handle sales tax,
(04:34):
because then, if Illinois changes their sales tax, you
have to change two places instead of one.
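The "write it once" rule Colton describes can be sketched in a few lines of JavaScript. The function names and tax rates here are invented for illustration, not taken from any real store:

```javascript
// Hypothetical illustration of the single-source-of-truth rule:
// salesTax() is the only place that knows each state's rate, and
// every other piece of code reuses it. If Illinois changes its rate,
// exactly one line changes. (Rates are made up for the example.)
const RATES = { IL: 0.0625, CA: 0.0725, OR: 0.0 };

function salesTax(subtotal, state) {
  const rate = RATES[state] ?? 0;
  return subtotal * rate;
}

// The cart total reuses salesTax() rather than duplicating the logic.
function cartTotal(subtotal, state) {
  return subtotal + salesTax(subtotal, state);
}
```

Duplicating that rate table in a second place is exactly the kind of tech debt being described: the code still works today, but every future change has to be made twice.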
Speaker 2 (04:41):
Doesn't the very nature of code also mean
that it naturally creates tech debt?
Speaker 1 (04:46):
You know, you can try to mitigate it as much
as you can, but there's even an idea that all
code is tech debt. And what LLMs... so, kind of
what I mentioned before: LLMs are good at producing
boilerplate. But if you get to the point where boilerplate
is really easy to produce, and you're not constantly thinking,
okay, how do I avoid doing
(05:08):
this in the future, you start... yeah, exactly, you start
generating more and more code. And generating code is kind
of like writing a book: a good writer writes fewer
words rather than more.
Speaker 3 (05:22):
Yeah. Yeah, brevity is the soul of wit.
Speaker 2 (05:24):
Now, it's because what I've been realizing is that there
is this delta between people that actually write software and
people that are excited about AI coding. Because I had
Carl Brown from the Internet of Bugs on, and it
was this thing of, yeah, being a software engineer is
not just writing one hundred lines of code and then
giving the thumbs up to your boss. You have to
(05:45):
interact with various different parts of the organization. Early on
in the piece as well, you had this bit called
"The Math," and I'll link to this obviously in the
episode notes, where it's: any software engineer who has worked
on actual code at an actual company knows that you
have these steps, where it isn't just like, I code
and then I give the thumbs up and I'm done
for the day. You have to go through a reviewer, you have to wait
(06:06):
for them to get back to you. You have to
do context switching as well, change different windows
and do different things. There's a shit ton of just
intermediary work that has nothing to do with actually writing code, right?
Speaker 1 (06:18):
Yeah, absolutely. I mean, there are parts of coding that
are more science, and there are parts of coding that
are more art. In college, it's really common if you're
getting a computer science degree to take communications classes, because
you just have to interact with a lot of people.
So the standard way that a thing gets done in a
(06:40):
software project is: you have somebody called a product manager,
and that person's job is to think about what the
product is as it exists now and what it should
be in the future, what features we should build. Basically,
more than anything, it's: what should we build next? And
then you have designers who are going to decide how
that thing should look. And engineers have to be this
mediating influence, because they're
(07:03):
the only ones in the process who actually know how
hard it is going to be to build something. So
a frequent interaction is, you know, product managers are like,
oh, we should add this, and an engineer has to
step in and be like, no, that's borderline impossible, or,
you know, that would take six months, right?
Speaker 2 (07:23):
LLMs don't fix that problem, though, do they? Do
they actually make it easier to develop products?
Speaker 1 (07:28):
I think there are uses for LLMs in coding, and...
Speaker 2 (07:33):
I'm not even denying that. I'm just like, what do
those uses end up actually creating?
Speaker 1 (07:42):
Basically, what I've found is that I don't really like
using LLMs for most product work, even though they're good
at, you know, say, "Oh, add a button here that
does this." They can be good at that, especially if
you have a codebase that works really well with LLMs.
So some programming languages, some tools, some
(08:03):
things we call libraries in coding, which are basically shared
software: some of those work really well with LLMs, because
the LLMs are just trained on the Internet, on sites
like Stack Overflow and sites like LeetCode and stuff like
that. So if they have a large body of this
code, these tools and these languages, in their training data,
they tend to be
(08:25):
better at writing them. And so, you know, I work
primarily in JavaScript, as do most web application engineers, and
LLMs are quite good at JavaScript, but I still don't
like to use them that much, because they tend to
just not understand context very well.
Speaker 3 (08:43):
What do you mean by context, in practical terms?
Speaker 1 (08:45):
It's kind of just, you know, understanding the existing
resources that you've already built in your code base.
This is like avoiding writing the same sales tax thing
twice. They tend to default to rewriting things, and they
tend to struggle to reuse the same style. An important
thing in coding
(09:07):
is maintaining a consistent style in your code. There's a
whole theory and practice of how your code should look,
the texture of it, what things you should avoid, because
there are bad patterns, there are good patterns, and there's
everything in between. I think it's very similar to
(09:31):
when image generation AI came onto the scene. You had
a lot of people being like, all right, well, graphic
designers are done, they're out of a job, they don't
need to do anything, because I can generate a logo
now and it looks pretty good. Like, I can generate
a logo for my nail salon or whatever, and look,
it looks great. But as soon as you try to
generate a second
(09:51):
logo that looks like the first one, and maybe has
a slight modification or something like that, it's really bad
at it. It's really bad at using the context, and
you're like, okay, now I need a graphic designer. And
as soon as you need... yeah, exactly. And that's very
similar with engineers:
(10:12):
a lot of what you're doing is sort of working
around context and avoiding redoing things and avoiding moving
away from your existing styles.
Speaker 2 (10:23):
And what is the critical nature of styles?
Speaker 3 (10:26):
Is that?
Speaker 2 (10:26):
Because is that so the people looking at your stuff
in the future can say, okay, this is what they
were going for? Is it so that everything doesn't break?
Speaker 1 (10:36):
Yes, exactly. It's about consistency. So, for example, JavaScript,
which is the main language I work in, is almost
infamous for basically allowing you as the developer to do
all sorts of buck wild stuff that you should never
do, coding-wise: a lot of old patterns, a lot of
recycled stuff,
(10:59):
a lot of... it just lets you. It's an extremely
varied language in the things it supports, and so what
you want to do when you're writing in JavaScript is
only use the good parts. You want to strategically avoid
doing a bunch of bad things. And for some of
those bad things, there are tools that will automatically
detect if you're doing them.
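A tiny, hedged illustration of the kind of "bad part" those tools (linters such as ESLint) are typically configured to flag, JavaScript's loose equality operator and its coercion surprises:

```javascript
// One of JavaScript's classic "bad parts": loose equality (==) applies
// type coercion, so unrelated values can compare as equal. Strict
// equality (===) is the corresponding "good part" that style rules
// usually mandate.
const looseEqual = (0 == "");    // "" is coerced to the number 0
const strictEqual = (0 === "");  // different types, never equal
const concatSurprise = [] + {};  // both operands are coerced to strings

console.log(looseEqual, strictEqual, concatSurprise);
```

Consistency rules exist precisely so a reader never has to stop and work out which coercion path a comparison will take.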
Speaker 3 (11:29):
What are these bad things? Is it just the Java?
What's so special about Java that it allows you to
do that?
Speaker 1 (11:35):
Yeah, JavaScript, which is technically different from Java... yeah, no problem. Basically,
the reason JavaScript has all these bad parts is because
it was created in ten days as a toy language
at Netscape, which was originally called Mosaic Communications and was
the precursor to Firefox, you know, the first browser that
really gained traction and kind of started the Internet.
Speaker 2 (11:55):
I'm old enough to have used Mosaic somehow.
Speaker 1 (11:57):
Yeah. So JavaScript was created in like ten days, almost
just as a thing to test, just something to try:
okay, what if we put a coding language into the
browser that people could just ship with their websites? And
in response to that, other companies said, okay, well, we
need to support JavaScript
(12:18):
as well. And so when Internet Explorer came out, they
shipped their own version of JavaScript, which was slightly different.
So you have this language that was kind of built
on these very hacked-together principles, that was then rebuilt with
slightly different principles, and what you have now, thirty years
later, is sort of the conglomeration of all those things
into one language. And
(12:41):
it's improved dramatically in that time. But the nature of
coding languages is there's a lot of backwards compatibility. Like
video games, where you want to be able to build
a computer that can run a game from nineteen ninety-one,
you want the browser to still be able to run
code that was written in nineteen ninety-five or whatever. So
a lot of that stuff still exists, and what they've
(13:03):
done is introduce new patterns that can be better. So
there's sort of an equal amount of avoiding the old
stuff, and then also just consistency. There are times where
there are two good ways to do something, but you
always want to do it in only one way, so
that any time someone reads it, they see that pattern
and they say, okay, I know exactly
(13:24):
what this does.
Speaker 2 (13:25):
Right. And this fundamentally feels like something large language
models would be bad at, because they don't know anything
and they don't create anything unique either.
Speaker 3 (13:35):
They just repeat. Yeah, exactly.
Speaker 1 (13:37):
I mean, they are statistical inference models, and so they
are very good at generating what they think should come
next based on probabilities. When you're training a large language
model, you can try to push it in one way
or another, to say, no, don't do that, do this.
But trying to do that across the broad spectrum of
all things
(13:59):
code is impossible, and so you're going to get things
that default to the way they were done on the
Internet. And like I said, JavaScript is thirty years old;
it's gone through a lot of turbulent times in terms
of patterns, and so on the Internet there are just
vast swaths of terrible JavaScript, and you know...
Speaker 2 (14:22):
So these models are just trained on bad code as
well as good code.
Speaker 1 (14:25):
Yeah, exactly. And, you know, I'm sure they're trying very
hard when they're training the models to filter some of
this stuff out. But trying to do a broad filter
on hundreds of thousands, millions even, of examples spanning
decades... I mean, JavaScript has been around for thirty
years, as have most
(14:45):
programming languages that are in use right now.
Speaker 2 (14:48):
Right, but this particular one seems particularly chaotic in how
it's sprawled.
Speaker 1 (14:54):
Yes, JavaScript is a particularly dirty language. Yeah.
Speaker 2 (14:57):
Right. So what you're describing, and I'm not trying to
put words in your mouth, is that this stuff doesn't
do the stuff that everyone's saying it does. It's not
replacing engineers. It doesn't even seem like it could replace
engineers. In fact, maybe it's fair to say that it
doesn't do software engineering.
Speaker 1 (15:16):
That's almost the perfect way to put it. It does
coding, but it doesn't do software engineering, and software engineering
is kind of this broader practice of everything that comes
together around coding. So, you know, some people really integrate
LLMs deeply into their everyday work, and they do similar
work to what I do. There are
(15:38):
people who are primarily having LLMs write their code, and
they can be good engineers, but they're intervening pretty constantly,
from what I understand. They're having to sort of redirect
it, make sure it stays on patterns, and all that
stuff. And there's just kind of,
(15:58):
you know... this is the reason that I don't really
use LLMs that much: there's just a constant tension between
the lack of context they have and what you want
them to do, and constantly reviewing the code and making
sure it's up to standards, when I know I could
just write it myself and it'll take about as much
time as it would take prompting and re-prompting. It's just,
I get better
(16:21):
code that way, and that's what I care about most.
Speaker 2 (16:23):
So do you think it's kind of a mirage, almost,
the productivity benefits?
Speaker 1 (16:28):
I think it would vary a lot. Broadly, yes, I
think that there are productivity benefits, but these huge upsides,
like, oh my gosh, I'm galaxy-brained right now, I'm doing
so much... I think that is largely a mirage, based
on extrapolation of small wins. I talk about this a
little bit in
(16:50):
my article. One thing that I use LLMs for, well,
not a lot, but sometimes, and that they are really
good at, is: sometimes you're writing code and you're like,
I need to write a thing that I will only
run once and then throw in the trash, or I
need to write a thing that uses this tool that
I don't have the time to learn, but
(17:13):
I'd really like to have this tool, I'd really like
to just use it this once. And so you can
vibe code something and not really understand what the code
does. You can run it once and, you know, validate
the output and make sure it works fine, and I've
saved time there. It might have taken me five hours
to learn how to use this tool properly and
(17:34):
learn how to use it with good standards. It could
even have taken me more, and instead I spent twenty minutes.
Speaker 2 (17:40):
You know, wouldn't that be dangerous, because you don't know
how it works?
Speaker 1 (17:45):
That's exactly why I like to only use it for
these one-time, low-stakes things. So, for example, I wrote some
code the other week where I basically refactored some existing
code. I adjusted how it worked a little bit, and
I realized there was a way that I could break
some existing code with it. But I didn't have an
easy way of
(18:07):
checking across the entire code base. We have tests in
our code base that would catch issues, but there's always
some code that isn't quite up to par with tests.
And so I had this idea that a really simple
language parser could go through and make sure that this
code was right, you know, something that is only like
thirty lines of code. But language parsing is really complicated, and
(18:30):
the tools to do it... there are a lot of
them, and they're very well supported, because language parsing is
a huge deal, but it would take me a lot
of time to learn them. So I just vibe coded
a little tool. I said, hey, find me every case
where I use this function and call it like this
in this entire code base. Or actually, I said,
(18:52):
write a script that will find me that. And I
looked at the code, and I was like, yeah, that
looks right, that looks right. I ran it, and it
said there were no issues in my code base. So
I intentionally created an issue just to make sure it
worked. And it worked, it caught it. So then I
got rid of the intentional issue, and I was like,
okay, this is probably good, and I pushed my code,
and it turned out to be good.
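Colton's actual script isn't shown, but the idea can be sketched roughly like this. Note this sketch uses a regular expression rather than a real parser (real tools such as acorn or @babel/parser understand the language properly), and the function name in the usage example is a made-up stand-in:

```javascript
// Rough sketch of a throwaway call-site finder: scan source text for
// every call to a given function and report where each call is and
// what arguments it was made with. A regex like this misses nested
// parentheses, comments, and strings; a real language parser would not,
// which is exactly why this only suits one-off, low-stakes checks.
function findCallSites(source, fnName) {
  const pattern = new RegExp(`\\b${fnName}\\s*\\(([^)]*)\\)`, "g");
  const hits = [];
  let match;
  while ((match = pattern.exec(source)) !== null) {
    hits.push({ index: match.index, args: match[1].trim() });
  }
  return hits;
}
```

Run over a codebase, something like `findCallSites(fileText, "computeTax")` (a hypothetical function name) lists every call site so you can eyeball the ones with the argument pattern you're worried about.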
Speaker 3 (19:11):
But that's you know, that also seems low stakes.
Speaker 1 (19:15):
Exactly. And it's, I would say, something that wouldn't scale.
If I really needed to start doing language parsing constantly,
like I was doing it daily at my job, I
simply would have to learn it. Like you said, how
do you know that there aren't any issues? I would
have to learn the tools, right? So the time that
I saved was by avoiding learning for this one thing.
(19:39):
But eventually, like if you're going to make something your
full time job, you have to learn it because you
can't fully trust the output.
Speaker 3 (19:46):
Because it isn't your job, like you were just kind
of mimicking.
Speaker 1 (19:51):
Yeah, and you know, LLMs make mistakes, in occasionally extremely
catastrophic ways. There's a thing called slop squatting. Have you
heard of slop squatting?
Speaker 3 (20:04):
No, but please tell me. I love this term so
much already.
Speaker 1 (20:07):
Yeah. So basically, you might have heard of domain squatting.
Speaker 2 (20:12):
So this is where... I think I know what this
is, and I'm very excited to hear more.
Speaker 1 (20:16):
Yeah. So, you know, this was a thing where, in
nineteen ninety-one, you're like, okay, I think the Internet's gonna
be big, so I'm gonna grab Nike dot com and
just hold it. That's squatting a domain. So slop squatting
is basically where... the LLMs, you know, they're statistical inference
machines.
(20:36):
They don't they don't actually understand uh, what they're doing.
And so you know, sometimes they will import software.
Speaker 2 (20:45):
They will... oh, is this when it looks on GitHub
for something that doesn't exist?
Speaker 1 (20:52):
Yes. And it will just add code to your project
that's like, import this thing, and it will install it
and say, okay, this is the library that you want
to use. And it's not right: it's either misspelled or,
for example, you would be looking for a library called
left-pad, with a dash, and it would import something called
(21:13):
left pad with no dash. And what people realized is
that the LLMs are statistical machines, so they frequently do
the same thing a lot; they make the same mistakes
a lot. So you could grab that library, left pad
with no dash, and put code in there that works,
(21:36):
that does the thing that the library is supposed to
do, but also retrieves all of the secrets that are
in your environment, or looks for crypto wallets, or production
database passwords and stuff like that.
Speaker 2 (21:51):
And if you're someone that can't read code or can't
read that kind of code, you would have no idea
this is happening.
Speaker 1 (21:57):
You'd have no idea. And even if you're somebody who
does read code really well, if you look at that
and it says it's importing left pad, you're like, that
sounds right, that looks right. Unless you are really familiar
with this library, and even if you are familiar, you
might just gloss right over it. So it's really, really
dangerous. And this is the
(22:17):
type of thing where, you know, maybe the LLM is
making you even twice as productive, but that doesn't mean
much if there's a chance you could destroy your entire
company with a catastrophic security breach, you know, leak your
whole database to a hacker.
Speaker 3 (22:37):
And so, yeah, not good. Is that becoming a problem?
Speaker 1 (22:43):
I haven't heard much of it happening in the wild,
but it's just one of those things that is bound
to happen, because, again, these are just statistical models. They
don't have the ability to really reason about the actual
nature of the things that they're doing. They can try,
you know, they can make a sub-LLM call and ask
the other LLM, like,
(23:03):
does this look right? But then you're, you know, burning
more and more...
Speaker 2 (23:07):
And also, at some point you are trusting the
statistical model to measure a statistical model's ability to do
its job.
Speaker 1 (23:13):
Yes, exactly. And it kind of devolves. Anybody who's used
an LLM for coding knows that the deeper you go
into a single prompt, the more back and forth, the
larger the context window, the more garbage it gets. And
as you have things working off of other LLM input,
which is
(23:35):
effectively what you're doing in a large context window, the
LLM is sort of reprocessing the text that it generated
and that you've added to it, and it steadily gets
worse the later you get in the context window. So
basically, all of these mitigations... I mean, they've made
surprising progress on making sure things like this don't happen,
like just
(23:58):
raw hallucinations, like, I think this library exists. They happen
a lot less now than they used to, but they're
just always...
Speaker 3 (24:06):
Gonna be a risk, and they're also always gonna be there.
Speaker 2 (24:09):
Yeah, it's not really something you could fix, unless
we invent new maths.
Speaker 1 (24:15):
Yeah, I mean, at least with the way we approach
AI right now, which is based purely on language as
tokens, and it can't really fundamentally understand things outside of,
you know, word probabilities.
Speaker 3 (24:44):
So where is this pressure coming from?
Speaker 2 (24:46):
Because it feels like it's everywhere, and you've got people
like Paul Graham who are wanking on about, oh, I
met a guy who writes ten thousand lines of code.
Speaker 3 (24:55):
I think he said, what is it?
Speaker 2 (24:57):
Are we just finding out how many people don't know
how code works?
Speaker 1 (25:00):
I think a lot of that, yes. Like I said,
it's exactly the same as when the first image generation
models came out and people were like, oh great, we
don't need graphic designers anymore, and then, oh great, we
don't need customer support chat anymore, because they fundamentally don't
understand what those roles do. They think graphic designer and
they think image generator.
(25:23):
But a graphic designer is a human being that's dealing
with different stakeholders, dealing with people saying, no, no, no,
the logo can't have that, or, yes, the logo...
Speaker 3 (25:33):
Must get bumped.
Speaker 1 (25:35):
Yeah, yeah, exactly. And they're dealing with different requirements, and
then they need to make variations, which is something that
LLMs are not always very good at, or I should
say generative AI is not always very good at. And
so the strange part of it is, like I said,
you get these brief bursts where you're like, oh
(25:56):
my god, I just saved so much time, and you
extrapolate that. So some of it, I think, is engineers,
actual engineers who know how to code, who see these
things and think, I did this today, I must have
been so much more productive as a result. You know,
I saw this one thing happen. But they don't actually
measure in depth what they produced, and whether it was
more than what
(26:19):
they would have produced. There have been some studies to
measure that, and they haven't looked particularly good for LLMs.
If you actually compare people using AI versus not using
AI, the results don't always look particularly great for AI.
And one thing that's very common in those studies is
that people overestimate their performance.
Speaker 2 (26:41):
So I think that might be a problem across the
board. I also think my grander theory with all of
this is that a lot of people don't know what
work is.
Speaker 3 (26:51):
Like a lot of these.
Speaker 2 (26:51):
Investors and managers, and even people in the media, don't
seem to actually know what jobs are and how jobs
work, and they think that things like coding are just,
like I said earlier, you walk into work, write ten
thousand lines of code, and walk home. But now I
can write twenty billion lines of code, because that's all
my job is.
Speaker 1 (27:10):
Yeah, absolutely. I mean, this is something I talk about
in my article: there's always this degree of separation, and
the people who are talking the most about AI coding
aren't really coders, and they're not really providing detailed
reproduction steps.
Speaker 4 (27:27):
You know.
Speaker 1 (27:27):
I know engineers who love using AI, like, every single
day, and they use it for all of their projects.
And I know really good ones who do that. And
if I asked them, hey, how could I be more
like you? How could I be a better coder? One
of the last things they'll say is, you know, start
using AI more, because
(27:48):
it's really just a tool to accomplish part of their
job. And so, yeah, I think there are probably plenty
of genuine people who, you know, they've never written a
line of code in their life, they pull up Lovable,
they say, generate me an app that does this, and
they legitimately are like, oh my gosh, it actually
(28:09):
worked, I can code. And, you know, they just naturally
don't realize that there is so much more to this
actual practice. They're just not in tune with the way
that coding actually works.
Speaker 2 (28:29):
Yeah, it's a little bit sad as well, because it
really feels like a lot of this is just the
point you've made about like the image generators.
Speaker 3 (28:37):
It's just this immediate moment of wow, look what this
could do.
Speaker 2 (28:40):
Imagine what it could do next. And then you look
at what it could do next, and it can't. Like,
it looks like it can generate code, but it can't
actually generate software.
Speaker 3 (28:50):
It doesn't seem to do the steps.
Speaker 2 (28:51):
that make software functional and scale, because there are these
tendrils from software into the infrastructure, to make sure it
can be shown in different places, or to make sure
that it actually functions
Speaker 3 (29:05):
on a day-to-day basis. Yeah, and it's like, yeah.
Speaker 1 (29:10):
It's a pretty simple curve. You start out and it
generates so much code that's pretty correct, really fast, if
you start a completely bare project from just a Lovable
prompt or something like that. But as you go on,
the curve kind of flattens and it goes
(29:31):
down and it becomes less and less productive. And I
think eventually, pure vibe-coded projects usually hit a pretty big
wall, because you're just introducing so much code, and it's
not consistent, it's not using shared tools. So eventually you
just end up with what they call spaghetti code: code that
(29:52):
is so interwoven and difficult to understand that you can't
actually see what's going on with it.
Speaker 2 (29:59):
Yeah, it's like context is the whole problem. Not to
say LLM-generated writing is dog shit, but I think it's
worse with code, because code is functional in a way
that I don't think writing has to be. Writing conveys
meaning, but good writing is usually more than just, I am
(30:19):
entering this writing into someone's brain. So something happens in
the way that code is, but writing it they share
the same problem, which is great writing has contextual awareness,
It builds, it connects, there is an argument or there
is an evocation from it. In the case with software,
it appears to be. If you don't know every reason
that everything was done and fully understand the reasons that
(30:40):
were previous and the reasons that are happening right now,
you kind of will fuck something up naturally, even if
you do know how to code, if you just don't
read any of the notes, if you if if it's
not clear why things were done, things will break anyway, right.
Speaker 1 (30:55):
Yeah, exactly. And great writing, you know, it knows.
A great writer knows
when they need to explain something and when they don't need
to explain something. They know, you know, I'm
writing for a trade publication, I don't need to explain
how friction works; or I'm writing, you know, a public
press release, okay, I do need to explain how friction works,
or whatever. So these are the things that the
(31:18):
LLMs are, yeah, exactly,
just very not good at. You know, they can
generate stuff that looks good, but
the more you try to build on top of it,
the more it'll end up restating. Like, you know,
LLMs are really good at writing the classic school-level
five-paragraph essay. But everybody who actually writes anything
(31:42):
at all knows that the five-paragraph essays that
you wrote in high school are terrible, and
nobody wants to read something structured as, you know,
premise, three argument paragraphs, conclusion. That's really
bad, unconvincing writing. And it's the same with,
you know, LLMs are really good at making toys and
(32:03):
quick things that are fun or,
you know, maybe a little bit useful in certain situations,
but really bad at writing, you know,
book-level type things. Like, you know,
AIs are horrible at writing books, because they restate things
and they lose track of what they're talking about.
Speaker 2 (32:20):
And the more stuff it creates, right, the more it
looks over, the more confused it gets.
Speaker 1 (32:26):
Exactly. It's really, really similar. I think that coding
has really just followed the same trajectory as all
of these other things, where we're like: oh, we don't
need copywriters anymore; oh, we don't need designers anymore;
we don't need graphic designers; we don't need this, that,
and the other; we don't need lawyers anymore, we have
an LLM for a lawyer. And you just
(32:47):
very quickly realize that these jobs aren't, you
know, dumb factory, pulling-a-lever type jobs. They're about interaction.
Speaker 2 (32:59):
And that's an insult to factory labor. That is very
hard work. But it's not like a repetitive action that
is always the same thing.
Speaker 1 (33:07):
No, not in that sense anyway. Yeah, and like, a
factory worker, you know, is doing a lot more than
just pulling levers.
Speaker 2 (33:15):
Yeah, of course, but it's not just hitting a button.
But I think people condense coding to this thing.
Speaker 1 (33:22):
Yeah. And it's very similar to robotics in factories, too,
where, you know, the promise is like: oh, you know,
we'll just have a robot do this thing that a
human does. But the human is doing a lot more
than just, you know, pulling the lever. They're observing
the process, they're making sure that things are not getting
broken or getting gummed up and stuff like that. So
there's just limits to what machines can do
(33:43):
when they're not actually intelligent. And that's just what it
comes down to.
Speaker 2 (33:47):
Do you buy any of these companies, like Google,
writing thirty percent of their code with AI?
Speaker 1 (33:57):
The thing about those numbers is that they are
so easy, maybe not to game, but... First of all,
I don't know anyone who's actually measuring this in
a really effective way, because the thing about your
coding editor is that, a lot of people,
I don't like to use the AI autocompletion, but some
(34:17):
people do. You know, there are pieces where, you know,
you'll start writing some boilerplate and it'll pop
up what it thinks you mean to write,
and you'll just hit the tab key on your
keyboard and it'll just finish the line of code
that you were writing. And, yeah, exactly,
very often what it produces is wrong,
(34:38):
but it's like seventy-five percent right. So you're like, oh,
I can just save these keystrokes, and
I can just fix what it did wrong.
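[An illustration of that "seventy-five percent right" tab completion: a made-up example, not from the episode. The suggested body compiles and looks plausible, but the engineer still has to spot and hand-fix the logic it got wrong.]

```typescript
// Hypothetical example of AI tab-completion on web-app boilerplate.
interface User {
  id: number;
  name: string;
  isActive: boolean;
}

// You type the signature, the assistant suggests a body, you hit Tab.
function activeUserNames(users: User[]): string[] {
  // Suggested completion -- plausible, but it forgot the filter:
  //   return users.map(u => u.name);
  // The fix you then make by hand:
  return users.filter(u => u.isActive).map(u => u.name);
}

const team: User[] = [
  { id: 1, name: "Ada", isActive: true },
  { id: 2, name: "Bob", isActive: false },
];
console.log(activeUserNames(team)); // ["Ada"]
```

[The saved keystrokes are real, but so is the review-and-repair step, which is why counting accepted completions as "AI-written code" overstates what the model did.]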
Speaker 2 (34:47):
Oh God, it's like saying auto correct wrote parts of
your book.
Speaker 1 (34:52):
Yeah, it really is. And, you know, there are times
where an AI does fully write the code
for a feature. So people use, you know, tools where
they can prompt an LLM with
what they want the feature to be, or what they
(35:12):
want the bug fix to be, and it will go
and write all the code, and it will make
the, sort of, we call it a merge request, but
that's the request for new code that then gets reviewed.
It will go all that way. But the thing is,
you know, it might have written the code, but somebody
took time away from, you know, their normal job, which
(35:36):
would just be writing code, to write a really good prompt,
to make sure it didn't screw it up, and then
to reinteract with it. And so, you know, did it write
the code? Yes. But did it do the task? Not really,
because it needed somebody else to do some support work
to make it even possible for it to do it.
And that person needed to be
technical, and needed to be able to say, like, oh,
(35:58):
you need to look in this part of the code base.
And so you end up just getting the same type
of actual work from the actual specialist who knows
how to code, the same amount of work. They're just
doing something slightly different.
Speaker 2 (36:15):
Colton, it's been such a pleasure having you here. Where
can people find you?
Speaker 1 (36:19):
For sure. My blog is colton dot dev, C O
L T O N dot dev. I don't post that
often, because I work full time and I just
post when something really gets to me. But, you know,
I might say some things here and there.
Speaker 3 (36:35):
And you have your excellent blog post that I brought
you on for.
Speaker 1 (36:38):
Yeah, yeah, that's my most recent one. You
can feel free to check it out. I'm sure the
link to that will be in the description. But yeah,
it was great being here.
Speaker 5 (36:45):
Thanks so much, Thank you for listening to Better Offline.
Speaker 6 (36:56):
The editor and composer of the Better Offline theme song
is Mattosowski. You can check out more of his music
and audio projects at mattosowski dot com, M A T
T O S O W S K I dot com. You
can email me at easy at betteroffline dot com,
or visit betteroffline dot com to find more podcast
links and, of course, my newsletter. I also really recommend
(37:17):
you go to chat dot wheresyoured dot at to
visit the discord, and go to r slash
Speaker 5 (37:21):
Better Offline to check out our reddit. Thank you so
much for listening. Better Offline is a production of Cool
Zone Media. For more from Cool Zone Media,
Speaker 4 (37:30):
Visit our website, coolzonemedia dot com, or check us
out on the iHeartRadio app, Apple Podcasts, or wherever you
get your podcasts.