Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Cool Zone Media. Hello and welcome to Better Offline. I'm your
host, Ed Zitron. Today, I'm joined by Carl Brown,
(00:22):
a veteran software engineering host of the excellent YouTube channel
Internet of Bugs. Carl, thank you for joining me.
Speaker 2 (00:28):
Thanks for having me.
Speaker 1 (00:29):
So I'm going to start with an easy one. What
is a software developer? Like, what actually is that?
Speaker 3 (00:35):
So?
Speaker 4 (00:35):
Basically what we do is we take ideas about problems
that people want to solve, generally, and we write software.
We write code that gives the computer instructions on how to make
the computer do the thing it needs to do to
(00:57):
solve the problem the person wants to solve.
Speaker 1 (00:59):
Right.
Speaker 4 (01:01):
Game programming is a little bit different, but most
software development is basically that.
Speaker 1 (01:06):
And this is another quite silly question, but necessary. How
much of that is actually writing code?
Speaker 4 (01:14):
It depends on how good the people that
are asking for stuff are. As a general rule, I
would say maybe between ten percent and twenty five percent.
Speaker 1 (01:25):
Okay, so it's really only ten to twenty-five. Even
if we say thirty percent of the job, which is
more than you said, that means the majority of this
job is not actually writing
Speaker 2 (01:33):
Code, right.
Speaker 4 (01:35):
Now, that's largely true for folks that are farther up the chain, right. So,
if you're fresh out of school, and you're new to
the job, and you don't understand
how to manage requirements or any of that kind of stuff yet,
someone's going to basically hand you a thing to do,
and in that kind of case, you're going to be
spending a lot more time writing code than that. But
for me, you know, it's far, far more talking
(01:58):
to people and stuff than actually coding.
Speaker 1 (02:01):
Right. The reason I asked that, and the reason we're
doing this as well, is there have been a
lot of stories around, like, LLMs replacing coders, LLMs
replacing engineers, claiming that junior software engineers will be a
thing of the past due to LLMs. How much validity
is there in that?
Speaker 4 (02:17):
Well, when it comes to the really, really, really
fresh out of school kids, right,
Speaker 2 (02:23):
that you have to basically break everything down
Speaker 4 (02:25):
and hand them little chunks of work, an LLM
can kind of do that, although the kid will get
better over time and the LLM is pretty much fixed,
right. But past that, it doesn't do a good
job of being able to do any kind of long
term thinking, and that's largely the job, right, I mean
(02:46):
this is not a set of, you know,
I come in today, I do a thing today, I
come in tomorrow having no understanding of what happened yesterday,
and do another self contained thing and so on and
so forth.
Speaker 2 (02:58):
Right, that's not the job.
Speaker 4 (02:59):
The job is a long sequence of building up on things,
day after day after day after day until we get
to the point where the whole thing together works, And
that's what it's supposed to do.
Speaker 1 (03:09):
So I think, and one of the
reasons I had you on as well, is that really
there are so many of these stories claiming that,
like, the software engineer's job is gone, that these companies will
be writing all of their code with AI, and it
doesn't even seem like that is possible. In one of your videos,
you did a really good thing around, like, the twenty
to thirty percent claim, I'll link to this in the notes,
(03:30):
the twenty to thirty percent of code behind Meta, and
I think Google it was, that is written by AI. Now, again,
how much validity is there to that?
Speaker 4 (03:39):
Well, I mean, one of the quotes was
something to the effect of: thirty percent of the code
is suggestions that were given by autocomplete that a human accepted, right,
which could be as much as, you know, the thing said,
oh wait, you spelled this wrong.
Speaker 2 (03:55):
Let me give you a suggestion about how to spell
it correctly.
Speaker 4 (03:57):
Right, right. I mean, how much of the actual text
that you write is, you know, corrected by
a spell checker?
Speaker 1 (04:05):
Right?
Speaker 4 (04:05):
If all that counts as AI, then what percentage of
your stuff is written by AI?
Speaker 3 (04:09):
Right?
Speaker 1 (04:09):
Well, in my case, absolutely nothing. But that's just
me. No, I get your point, and, without being a coder
myself, it's something I've really noticed across these stories,
where people just kind of blindly push them out and
they say, oh, twenty to thirty percent of the
code is written by AI, but there's no verifying this. And
(04:29):
also it feels like it might create a bigger problem,
which is, say we accept this idea, even though I don't,
and it sounds like a pretty spurious one, kind of
silly to do so. At some point, isn't code not
just a series of things that you write to make
a program work? It's connected to a bazillion other things,
which, if you don't know why it was written because
(04:51):
you had it generated, is that not a huge problem?
Speaker 2 (04:55):
Yes?
Speaker 4 (04:56):
But worse, what we're finding when code gets generated
is that basically you end up doing the same thing
in a bunch of different places, but in each one
of those different places.
Speaker 2 (05:07):
You do it a different way.
Speaker 1 (05:08):
Can you give me an example.
Speaker 4 (05:10):
So, for example, when you need to go fetch a
thing from a server, right? Well, over here in this
code you fetch a thing from a server. Over here
in the code you fetch a different thing from the server. Normally,
you'd be able to use the same block of code
to do that, so that if there's a mistake in it,
you can change it once and it's fixed everywhere, right?
But the way the LLMs work is you say, hey,
I want to fetch a thing from the server, and
it says cool, and it writes a whole thing
Speaker 2 (05:31):
for you that may or may not work the same
way as the previous one.
Speaker 4 (05:33):
Right, And so now you find, okay, under some circumstances,
we're having a problem fetching things from the server. I
don't know which one of these twelve implementations that go
fetch from the server is the one that's actually causing
the problem.
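A minimal sketch of the duplication Carl is describing, with hypothetical names and URLs (nothing here is from a real codebase): two generated blocks that both fetch from a server, but with different timeout and error behavior, so a bug can hide in either one.

```python
# A sketch of the problem described above (hypothetical names and URLs).
# Two LLM-generated blocks that both "fetch a thing from the server",
# but handle failure differently, so when fetching breaks under some
# circumstances, you don't know which implementation is at fault.
import json
import urllib.request

def fetch_user(user_id):
    # Generated in one place: 5-second timeout, raises on any failure.
    url = f"https://example.com/users/{user_id}"
    with urllib.request.urlopen(url, timeout=5) as resp:
        return json.load(resp)

def fetch_order(order_id):
    # Generated elsewhere: no timeout, silently swallows every error.
    url = f"https://example.com/orders/{order_id}"
    try:
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)
    except Exception:
        return None
```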
Speaker 1 (05:45):
Right. Well, isn't there a security issue
with having large language models? Like, wouldn't all the code
be quite similar, or at least more similar, depending on
if everyone's using Claude or everyone's using, well, GitHub Copilot.
Speaker 2 (05:58):
I guess that's Claude now. No, not really.
Speaker 4 (06:01):
It basically kind of picks a random number at the
beginning and goes, okay. I think of it
kind of like you deal a deck of cards, right?
Whichever card gets turned over first, that's the
beginning of the autocomplete that it starts. And so, depending
on which example it's, I don't want to say thinking of,
but, and I'm drastically oversimplifying, depending on which example is represented by that card,
(06:22):
it's going to go down one path or another.
Speaker 1 (06:24):
Right. And so what are these large language
model coding tools actually good for? Because I get a
lot of people who respond by saying, this is
proof that AI is a big deal, and I'm just
kind of like, I'm not even looking for a particular answer,
just, truly, what's useful about them?
Speaker 4 (06:40):
So they are decent when you know what you
want, and what you want is a fairly simple, self
contained thing, and you know how to tell whether or
not the self contained thing does what you want.
It can type it faster than you can.
Basically, yes, it's like autocomplete,
(07:01):
if you know exactly what you want. Yeah, I mean
so I use it a lot because I program in
a bunch of different programming languages a lot, right on
different projects at the same time or on the same
day or the same week, And it's really easy for
me to go, Okay, wait, which language am I in? Right?
Speaker 3 (07:16):
Now?
Speaker 2 (07:16):
Okay, how do I do this in this language?
Speaker 1 (07:18):
Right?
Speaker 2 (07:18):
So it's kind of, you can
Speaker 1 (07:19):
actually understand the generation when it comes
Speaker 4 (07:21):
to it. Yeah, it's like, I know what kind of loop
I want, but I don't remember the syntax for this
particular language, or I don't want to look it up. So I use
it kind of like a Google Translate kind of thing
to go from one programming language to another sometimes.
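The kind of "Google Translate" use he means, sketched under the assumption that you remember the shape of the loop but not this language's syntax; the comments show the same loop in other languages for contrast.

```python
# The same "count by twos" loop, which an LLM can translate between languages:
#
#   // C:        for (int i = 0; i < 10; i += 2) { printf("%d\n", i); }
#   -- Haskell:  mapM_ print [0, 2 .. 8]
#
# Python's spelling of the same idea:
for i in range(0, 10, 2):
    print(i)
```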
Speaker 1 (07:32):
But you wouldn't trust it to build a full software package.
Speaker 2 (07:36):
Oh not at all?
Speaker 1 (07:37):
Why not?
Speaker 2 (07:38):
Well it wouldn't work to start with.
Speaker 1 (07:40):
Why wouldn't it work?
Speaker 4 (07:42):
Well, I mean, so I've done some experimentation on that
where I've taken fairly complicated
Speaker 2 (07:50):
Challenges.
Speaker 4 (07:51):
Challenges are intended for programmers to basically get better at
their craft and that kind of thing. And I've run
AI through them, told it, you know, step by step, okay,
the challenge says, this is your next step.
Speaker 3 (08:01):
Do this.
Speaker 2 (08:01):
The challenge says, this is your next step.
Speaker 4 (08:03):
Do this. On really simple challenges in programming languages like
Python, that it's got lots and lots of
examples for, it does okay. Past the point where you're
in the really simple kind of language things, they
sometimes get to the point where they can't even
create anything that builds at all.
Speaker 1 (08:20):
Huh. Why is that? Why do so many engineers swear
by it, then?
Speaker 4 (08:27):
Honestly, I'm not sure to what extent the engineers are
swearing by it. I've talked to a lot of folks
who are like, you know, my group, at this
big bank, a friend of mine, my group is
getting Copilot jammed down our throats whether we want it
or not. And the executives are all really excited about it,
and none of us are interested.
Speaker 1 (08:47):
So it's executive-pushed. I've personally had this theory that
it's all about what the bosses want to see rather
than what it can even do.
Speaker 2 (08:58):
Sorry, there's a lot of wish fulfillment.
Speaker 4 (09:03):
There's a lot of, like, we want to not have
to deal with these programmers anymore, so we would rather
deal with the AI thing, and we're just gonna hope
that the AI thing is going to be, you know,
just as good as the programmers, or close to
just as good as the programmers, and not nearly as annoying.
Speaker 1 (09:16):
Seems like a definitional well maybe that's not the right word.
Seems like the difference between a software engineer and a
software developer almost because it's not just about flopping code out,
it's about making sure the code does stuff.
Speaker 2 (09:28):
Yeah, I mean, those terms get mashed together, yeah.
I mean.
Speaker 4 (09:34):
So part of the problem is that, like, I
live in Texas, and in Texas you're not allowed to
call yourself an engineer unless you've passed the engineering exam, right?
So I literally can't call myself a
software engineer legally in Texas, as I understand it. I'm
not a lawyer, but that's my understanding. So it's like
the terms get all confused.
Speaker 1 (09:51):
Right. So, somewhat related, what is it that people misunderstand
about the job, then?
Speaker 4 (09:55):
Well, I mean, one of them is what
you said earlier, which is that a very small percentage
of the job is actually slinging code. A lot of
it is basically trying to figure out what it is
the code should do based on what you've been told
about the problem, you know, the solution to the problem
that you're trying to solve.
Speaker 2 (10:13):
Another thing is that.
Speaker 4 (10:16):
A lot of the problem with the job is that
every little decision builds up over time, and at some
point a bug is going to happen. They're inevitable, and
when that happens, basically there's this process where what you
need to do, if you're being competent, is roll back
(10:36):
through the series of decisions, figure out what caused that bug,
and then figure out what other bugs are likely to
have been caused by that same set of decisions, and
then fix not just the bug that's
been reported, but the bugs that might have also been
caused by the same problem, right? And that kind of
long term thinking is not a thing I've ever seen
(10:58):
an LLM exhibit at all. I talk about it like,
LLMs or generative AI are good at solving riddles, but
actual software development is more like solving a murder.
Speaker 1 (11:08):
Yes, you said that in that wonderful video. Yeah.
And it almost feels as if we are building towards
an actual calamity of sorts, maybe not an immediate one,
maybe it'll be kind of sectioned off into areas. Because
you've got a new generation of young people coming into
software engineering, or what have you, learning to use AI
(11:29):
tools rather than, and your videos definitely talk about this as well,
actually learning how to develop software and make sure it works,
and make sure that it has the infrastructural side in line,
and also that you're building it with the long term
thinking of, someone else might need to understand how this works.
And they're not learning that. So you've just got a
generation kind of pumping the internet and organizations with
(11:50):
sloppier code.
Speaker 4 (11:52):
Yes, although, I mean, one of the problems we're having
at the moment is that the hiring process for really
junior engineers is actually pretty broken, and
a lot of people are not hiring people that are
fresh out of school, because they're expecting that
Speaker 2 (12:09):
basically, a senior or a mid-level developer, with
Speaker 4 (12:12):
The benefit of AI, with the benefit of AI that's
in air quotes, will be able to do the work
of that person plus a couple of fresh outs that
they normally would have hired, but they're not hiring at
the moment. There's some statistics about how the people that
are fresh out of school these days are historically underemployed
(12:33):
relative to the general population, at least in the US
where I live.
Speaker 1 (12:37):
It also feels like there's no intention behind the code,
like, if you're just generating it, you don't
really know why you made any of it. You could say,
I chose these lines. But at some point,
if you have large amounts of software developers using it,
however large, or the young people in an organization using
it to generate their code, they're neither learning to write
(12:58):
better code, nor are they learning the job, they're just learning
how to fill in blocks within the job.
Speaker 2 (13:05):
Yeah, I mean.
Speaker 4 (13:05):
The trick is that those of us that have
spent a whole lot of time debugging software, right, like,
finding the problems and digging into them and trying
to figure out what's going on, that kind of stuff,
it's going to be really hard for younger folks to
get hired into those jobs so that they have time
(13:25):
to build the experience to be able to do that.
And I'm afraid we're going to end up with basically
an older generation or generations retiring and a newer generation
that hasn't had the experience of doing that kind of debugging.
And then it's going to be a real mess, especially
since, from what I can tell, the code that the
AIs generate is a lot buggier, and buggier in weirder,
(13:48):
like, randomish kind of ways. Stuff just kind of comes
out of nowhere in a way that, I don't, I mean,
I've debugged code from people that don't speak the same
languages as I do, you know, all that kind of
stuff. AI code is different. It's just like, okay, why
would anyone want to put that block there? That doesn't
have anything to do with what we're trying to do
at the moment.
Speaker 1 (14:07):
And why is that? Is it just because it's probabilistic?
Speaker 2 (14:10):
I guess so.
Speaker 4 (14:11):
I mean, it's hard to say why. I mean, the
idea of why an LLM does what it does is
kind of, you know, anybody's guess.
Speaker 1 (14:21):
Yeah, it's just I keep thinking of the word calamity
because you sent me these studies as well about how
they found like a downward pressure on the quality of
code on GitHub. Would you mind walking me through what
that means?
Speaker 4 (14:35):
Yeah. So, there have
been a couple of studies, but what that particular
study found is that what they call code churn
has gone up. And code churn is basically when you,
like, add a line of code, you
push it into test or to production, and then
in a short period of time, like I don't remember
exactly what the definition was, like in a month or
(14:57):
two months, that line of code changes, right. So basically
what that means is that the line of code that
got created, somebody decided after it got put in, oh wait, no,
that doesn't work right, we're not happy with that.
We're going to change it to be something else, right, right,
And the percentage of lines, or the number of lines,
that get changed fairly quickly after they get submitted
(15:21):
has gone way up since the introduction of
GitHub Copilot. And this is across, like, most of
the giant, you know, millions of lines of code
on GitHub.
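A rough sketch of the churn idea, assuming a plain git repository; this is an illustration of the metric, not the cited study's actual methodology, and it approximates at the file level rather than per line.

```python
# A rough, file-level approximation of "code churn": the fraction of changes
# that touch a file again within two weeks of its previous change.
import subprocess
from datetime import datetime, timedelta

def file_touches(repo):
    """Yield (commit_time, filename) for every file change, oldest first."""
    out = subprocess.run(
        ["git", "-C", repo, "log", "--reverse", "--numstat", "--format=%ct"],
        capture_output=True, text=True, check=True,
    ).stdout
    when = None
    for line in out.splitlines():
        if line.strip().isdigit():      # a commit timestamp line
            when = datetime.fromtimestamp(int(line))
        elif "\t" in line:              # a numstat line: added, deleted, path
            _, _, path = line.split("\t", 2)
            yield when, path

def churn_rate(repo, window=timedelta(days=14)):
    last_seen, churned, total = {}, 0, 0
    for when, path in file_touches(repo):
        if path in last_seen and when - last_seen[path] <= window:
            churned += 1
        total += 1
        last_seen[path] = when
    return churned / total if total else 0.0

print(churn_rate("."))  # e.g. 0.31 means ~31% of changes were quickly re-changed
```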
Speaker 1 (15:33):
For a simpleton me, why does it being changed? Why
is changing it so often bad?
Speaker 2 (15:39):
Well, I mean, if you do
it right the first time, you can move on to
the next thing.
Speaker 3 (15:45):
Ah.
Speaker 1 (15:46):
Right.
Speaker 4 (15:46):
It's like, you know, if you're writing a document,
and you put the document in there, and
you're in Google Docs and
you're, like, tracking changes, and it's like, okay, this sentence
has changed seventeen times.
Speaker 2 (15:58):
Obviously the person isn't happy with that, right.
Speaker 1 (16:01):
So the generated code isn't good, right, and so people
see it and need to change it.
Speaker 2 (16:05):
That's the presumption, yes.
Speaker 1 (16:08):
So, it also said the code quality itself. Is
that the only way
they measured it? Or were there other things
as well?
Speaker 2 (16:18):
So, they measured that, they measured,
Speaker 1 (16:23):
uh, like, moved code.
Speaker 2 (16:26):
Yeah, the moved code.
Speaker 4 (16:27):
The thing I was talking about earlier, where
you've got a bunch of different places in the code
that all try to do the
same function, but they do it
Speaker 2 (16:39):
in different ways. Normally, what would happen is,
Speaker 4 (16:43):
You do a thing here, right, and then at some
point in the future you need to do that thing
again in a different place. And so what you do
is you would move that original block that does the
thing someplace else, and then you would call that block
from both places because it already works, right, And then
that way, you know, however you get that
stuff from the server, you're fetching it the same way.
But with this thing, basically, instead of doing that, you've
(17:06):
got copy-paste. Okay, we put another one here,
and we put another one here, let me put another
one here, and it's a maintenance nightmare.
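The refactor Carl is contrasting with copy-paste, as a tiny sketch with hypothetical names: the fetch logic is moved into one shared block and called from both places, so a fix lands once and applies everywhere.

```python
# The "move the block and call it from both places" version (hypothetical
# names), as opposed to pasting a second copy of the fetch logic:
# fix fetch_json once, and both callers benefit.
import json
import urllib.request

def fetch_json(path, timeout=5):
    # The one shared block that knows how to fetch from the server.
    with urllib.request.urlopen(f"https://example.com/{path}", timeout=timeout) as resp:
        return json.load(resp)

def load_user(user_id):
    return fetch_json(f"users/{user_id}")    # caller one

def load_order(order_id):
    return fetch_json(f"orders/{order_id}")  # caller two
```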
Speaker 1 (17:23):
So, for the audience as well, how does
a software developer actually use GitHub? Like, really simple stuff,
I realize, but I think it's important for people.
It just occurred to me, like, this may be something
that most listeners don't know, which is good to cover, I
think.
Speaker 2 (17:36):
Yeah.
Speaker 4 (17:36):
So what we do is we basically make changes
to code. We get to the point where we, the developer,
are happy with the way it's set up on our machine,
and then we do what's called a push, and
we basically submit all that code
up to GitHub, and then, theoretically, you know, there
can be automatic processes that kick in that, like, check
(17:57):
that code for particular things and run tests on it and
that kind of stuff. And then at some point we
have a thing called a pull request, which is basically
a thing that says, Okay, I would like this to
go into production now, or more or less, I would
like this to get promoted into the next phase now,
and then someone theoretically will look at it and go okay,
that's fine, and then click the yes button or say, hey,
you forgot about this, go look at this or that
(18:17):
kind of thing, right? And the pull request is kind
of the unit of work.
Speaker 1 (18:23):
So with GitHub, you almost use it like an
organizational code dump, centralizing all the code. Sorry, just for
the non-coders as well. And I think
that the LLM industry has done a really good
job of dancing around these terms and selling them to
people like me. Well, they weren't selling, they didn't work
(18:44):
on me, I am too stupid. But it's where they've
just, like, been like, okay, yeah, well, lots of people
use Copilot, that's good, and this is good because
software is coding. But it kind of feels like, I
don't know, all of this is taking the one thing,
like, one major part out of software development, and ruining it.
(19:04):
And I don't even mean coding. I mean the
intentionality behind software design and infrastructure and maintenance. Like,
it seems like they're removing intention in multiple parts.
Speaker 4 (19:19):
So the way I would say it is when they
talk about the AI being able to do the work
of a programmer, what they're doing is they're devaluing all
of the stuff that's not just hacking code, right, And
so what they're saying is that basically the job of
a developer is just, you know, typing, right?
And that all of the work that we do to
(19:41):
understand what the problem actually is and how it needs
to work, and you know what other problems are likely
to show up when we try to do that, and
how to avoid those things as we go and that
kind of thing, all that work is basically not important.
Speaker 1 (19:56):
And, I mean, two words which would probably annoy you,
I feel like vibe coding is the other part of this.
So, correct me if I'm wrong, vibe coding is just typing stuff into
an LLM and software comes out and hopefully it works.
Speaker 2 (20:13):
Yeah. Vibe coding is basically when you intentionally try it,
Speaker 4 (20:18):
well, I don't know about intentionally, but basically you make
a point of not digging into the code and looking
at what the LLM is doing, and you basically say, okay,
I would like something that does X right. I would
like a game where I fly airplanes around a city
or something right, And then you get what it spits out,
and then you say, you know, okay, let me try it. Okay, well,
(20:38):
can we have more airplanes? And okay, can we have
some balloons with you know, signs on them now? And
can we do this kind of thing? And then you
don't think about what the side effects are, you don't
think about what things could go wrong, you don't think
about error conditions, that kind of stuff. And you just
hope that whatever you look at has the
right vibe, and that, you know, if it
looks like kind of what you wanted, that probably it's
(21:01):
going to be fine, or hopefully it's going to be fine.
Speaker 1 (21:03):
How do you feel about vibe coding?
Speaker 3 (21:05):
So I do it.
Speaker 4 (21:06):
sometimes. Vibe coding is great for a thing that you're going
to do once and then throw away.
Speaker 1 (21:12):
Yeah.
Speaker 4 (21:12):
Right. So if it's like, you know, okay, I want
to do a thing, I want to
translate this thing, you know, I want to make
this table go into this format over here, or that
kind of thing. You do it, you get the output
you want, you throw the code away. No big deal, right,
like a prototype almost, yeah, basically, And so you know,
we call them spikes or tracer bullets. Sometimes it's like
a you know, let me get a thing that works
(21:33):
at all, right, and then let me see what I
can learn from that to move into my big maintainable project.
But for anything that's like, you know, this thing needs
to run for a while, this thing needs to not
get hacked.
Speaker 2 (21:46):
This thing needs to you know, not crash. It's a
really bad idea.
Speaker 1 (21:50):
Yeah. And at some point, I feel like, someone building
a product that they don't really understand the workings
of, it's kind of almost identical to generating a story
with ChatGPT, except kind of more complex and more
prone to errors.
Speaker 3 (22:05):
Yeah.
Speaker 4 (22:06):
And the other thing is that there's an adversarial component, right,
so people will intentionally try to go hack that thing
that's sitting on the internet, right, Yeah, in a way
that they don't intentionally try to go mess with the
story that you wrote, right, right, And so even if
it works all by itself, that doesn't mean it's going
to work when somebody starts pounding on it intentionally trying
(22:27):
to break it. And if they can break it, then
that's a whole other set of problems that you now have.
Speaker 1 (22:33):
It feels like quality assurance is just never part of it.
Oh no, are they claiming they're going to do quality assurance
with large language models yet? They must.
Speaker 4 (22:41):
Some people are, yeah. I mean, to be honest, a lot
of companies have just been getting rid of quality assurance
over the years.
Speaker 1 (22:49):
Right.
Speaker 4 (22:50):
Really, when I worked at IBM, we didn't have quality
assurance at all. They would, no, seriously, they would do this.
I was in IBM's Cloud group, and they would do
these, what do they call them, uh, hackathon kind
of things. They didn't call it that.
Speaker 2 (23:03):
I don't know what they called it.
Speaker 4 (23:03):
But basically everybody in all the other development groups would
get together and basically bang on the code that was
about to get released from some other group to try
to see if they could break it. Right, But they
didn't have dedicated testers anymore, because, I guess,
they decided they weren't worth the money.
Speaker 2 (23:19):
I don't know, but we had some issues because of that.
Speaker 1 (23:22):
When did that move happen?
Speaker 4 (23:25):
I don't know. So I was at
IBM in, like, twenty seventeen, twenty eighteen, right, so it
would have been sometime prior to that.
Speaker 2 (23:32):
When I got there, they didn't have any QA folks.
Speaker 1 (23:35):
Really just feels like it's a management problem as well.
It's management cutting people.
Speaker 2 (23:40):
I would think.
Speaker 1 (23:41):
So it's a real shame as well. And forgive
me if I'm forgetting exactly where, but you've mentioned as well
that there is, like, compound scar tissue from AI generated code,
a larger problem of lots of this code being generated
with AI.
Speaker 2 (23:57):
Well, that's my expectation, right? Yeah, just a potential worry, right?
Speaker 4 (24:03):
Right, so that the more of this we get and
the more issues that we have, the more stuff we're
gonna have to dig out of, right? And what I'm
honestly envisioning at some point, I don't know
how long it will take,
Speaker 2 (24:14):
The crypto bubble took way longer to pop than I expected.
Speaker 4 (24:17):
So I don't know how long it's going to be
before this one does, but I'm expecting that there's going
to be this big push to try to clean up
a bunch of this crap here in a few years,
once people realize that a lot of the code that's
being written and generated right now has all of
these vulnerabilities that nobody's bothering to check for at the moment.
Speaker 1 (24:35):
Right. And those vulnerabilities, again, in a non-technical way, I read
that it was, like, they call upon things on GitHub
that don't exist, so bad actors create something that resembles
what it's pulling from.
Speaker 4 (24:46):
So that's a more specific kind of one.
I mean, there are a lot of things. I mean,
there have been computer viruses since the eighties, right,
you know, the Morris worm and that kind of stuff.
And basically, there are known ways that,
you have to write code in a particular way in
order for it to be secure, right? And even
(25:07):
then, sometimes people come up with novel ways of making
something not secure.
Speaker 1 (25:11):
How do you have to write it to make
it secure? If it's possible to explain.
Speaker 4 (25:15):
Well, I mean, there's a big, long list of rules, right.
I mean, one thing you can do is you can
use languages that are what they call safer. But still
you have to make sure that any input that you
get from the network, you're really really careful to make
sure that it doesn't get to overwrite parts of your
program that actually execute things. You have to make sure
that it doesn't have the opportunity to be able to
(25:37):
write to places on your disk that it shouldn't be
able to write to. You have to be able to
make sure that it doesn't have access to read data
that it shouldn't be able to read, you know, all
that kind of stuff. And when those things don't happen,
you end up with, you know, so-and-so got hacked.
You know, turns out that somebody, we think maybe China,
is reading the email of the you know, people in Congress.
(26:01):
You get another letter in the mail that says your
social Security number has been you know, leaked by you know,
some credit checking firm or something like that.
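A minimal sketch of one rule on that list, never letting network input choose where you write on disk; the directory and function names are hypothetical.

```python
# One of those rules, sketched: untrusted input must not be able to write
# to places on disk it shouldn't (path traversal). Hypothetical upload handler.
from pathlib import Path

UPLOAD_DIR = Path("/srv/app/uploads")

def save_upload(filename: str, data: bytes) -> Path:
    # Resolve the requested path and refuse anything that escapes UPLOAD_DIR,
    # e.g. a request for "../../etc/passwd". (Path.is_relative_to: Python 3.9+.)
    target = (UPLOAD_DIR / filename).resolve()
    if not target.is_relative_to(UPLOAD_DIR.resolve()):
        raise ValueError(f"refusing to write outside upload dir: {filename!r}")
    target.write_bytes(data)
    return target
```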
Speaker 1 (26:10):
Even, like, I think it was, what, the big
Target data breach from a while back was through
the HVAC system. It's just, except
now, and that was with humans writing the
code, right? Imagine if we didn't know. Oh god, it
really does feel like the young people are going, like, actually, no,
(26:31):
I take it back. You were talking about agile the
other day. I'm going to ask you to explain that
in a second. But it's like, it sounds like for
almost decades, management's been gnawing
away at the sides of building good software and building
good software culture.
Speaker 4 (26:48):
Yes. I mean, there's an argument that says we never
got it right in the first place. But, I mean,
if you think about it, software has been
a thing for, what, fifty years, sixty years, seventy years, right?
Compare that to, like, construction engineering or bridge
building or that kind of stuff, right? We're still, you know,
relatively speaking, in our infancy as
an industry. It's been
(27:12):
a constant evolution, and a lot of times the things
that we did to solve a
problem we had ended up causing other problems.
Speaker 1 (27:21):
Right.
Speaker 4 (27:22):
So, going back to agile. In the long long ago, right,
we used to manage software projects the same way we
managed, like, you know, bridge building and building projects,
you know, construction projects. And it turns out that when
you're going to build a bridge, you know beforehand what
you need to build a bridge to do. When you're
building software, a lot of times people are changing their
(27:43):
minds as you go. Right, and you build a thing
and you show it to them and they're like, oh,
why don't we put this over here, and why don't
we change this? And that kind of thing right right,
because you don't have the same kind of constraints physical
constraints that you do when you're trying to build a bridge.
And so we got into this problem where you would create
these project plans about how you were going to
build this thing, and you would never be anywhere close
to on time because things would change the whole time.
(28:04):
And so they created this thing called the agile methodology.
I'm drastically simplifying, there were steps in the middle, but basically,
this agile thing is where, instead of saying, okay,
this is what the whole project's going to look like,
we're going to be done in
six months, and then having things change along the way, we
basically block off a thing called a sprint. It's a
week or two or a month, maybe it depends. And
(28:25):
then you know, everybody picks their own sprint length and
then you go, okay, I'm only going to talk about
what's going to happen in the next sprint or two, right,
And then you get to the end of that two
weeks and you go, okay, cool, this is what we
got done. What do we want to do next? And
then okay, that's what we got done, and what do
we want to do next? And that kind of thing,
and that way, as you go, you have the opportunity
to change things. You have an opportunity to roll changes
(28:46):
into the process, that kind of thing. Right. The problem
with that is, kind of the same way that dates
always ran out in waterfall land, projects can
go way, way longer than they were expected to at
the beginning because everybody's focused on just two weeks at
a time, and you never kind of take a big
step back like you ought to and go, okay, wait,
you know we were supposed to be done, you know,
(29:07):
two months ago.
Speaker 2 (29:09):
When are we going to wrap this up?
Speaker 1 (29:11):
Right. And how has that led to things getting worse?
Is it just that software development culture has
been focused on the short term, perpetually?
Speaker 2 (29:21):
The short term is part of it.
Speaker 4 (29:23):
Part of it is there are you know, unscrupulous developers
out there that basically want to extend the length of
the project so they can get more money out of.
Speaker 2 (29:33):
it, right, right. That's always the case. But the
other thing is that,
Speaker 4 (29:42):
a lot of times, you end up with a real
lack of, like, long term planning and long term understanding,
right, because everybody's, you
Speaker 2 (29:48):
Know, some kind of thing. You know, companies are only
worried about what happens next quarter.
Speaker 1 (29:52):
Right.
Speaker 4 (29:52):
If you're only worried about what's going to happen the
next week or the next four weeks, the things that
you look at, you know, tend not to
have the longer term implications that sometimes you need, right?
And there are times you get close to the end
and you're like, oh, you know, we didn't think about
this problem. Yeah.
Speaker 1 (30:12):
And also, if you're in a two or three week thing,
you're probably not thinking about even what you did last sprint.
Speaker 2 (30:18):
Maybe the last one, but not, like, two or three
sprints ago.
Speaker 1 (30:24):
Is this a problem throughout organizations of all sizes? Is
this a consultancy problem. Is it everywhere?
Speaker 2 (30:31):
It's most places. There are some places
that aren't,
Speaker 4 (30:38):
usually in startups, where we're a lot more ad hoc and
we're a lot more, you know, focused on trying to
get things done. Basically, the idea is,
the larger you get as an organization, and the more
money you're throwing at it, and the more management control
(30:59):
you want, the more of this overhead you put in place,
and the more complicated things get, just as a
management structure kind of thing.
Speaker 1 (31:07):
And in the big, so this is something you'd
see in, like, a Google and an Amazon as well?
Speaker 2 (31:11):
Oh absolutely so.
Speaker 1 (31:13):
Do you think it has the same organizational effects?
Speaker 2 (31:16):
Largely, yes.
Speaker 4 (31:21):
So those organizations, well, those organizations
historically have tended to be,
Speaker 2 (31:31):
before the recent, like, enshittification wave,
Speaker 4 (31:34):
I'm assuming I can swear on this, yeah, yeah,
those organizations have historically been fairly more engineering driven, which
means that you typically have people higher in the organization
that are technical and have been programmers and who understand
(31:55):
some of the implications, and so they tend, at
least they try, to run interference with management, and
to try to, you know, make sure everybody's on the
same page and that kind of stuff. A lot,
not all, but a lot of problems can
get lessened if you have people in the organization
that are at higher level whose job is not to
(32:16):
manage people, but whose job is basically to keep track
and coordinate between different groups that are doing different technical
things right, to.
Speaker 1 (32:22):
Make sure people aren't building the same thing I'm guessing,
or are building the right thing in the right way.
Speaker 4 (32:27):
Yeah, and how what this group is building is
going to impact what this group is building at some
point in the future. And making sure that when you
get to the point where those two things need to
talk to each other, they're both aware enough of what
the other one is doing that the two things hook
together correctly.
Speaker 1 (32:42):
Yes. So, based on my analysis at these companies, that's
definitely gone out the window. I mean, even with LLM integrations.
So there was a Johnson and Johnson story that went
out in the Wall Street Journal a couple of weeks ago, where
it was like, they had eight hundred and ninety
generative AI projects, of which, the Pareto principle
wins again, ten to fifteen percent of them were actually useful.
(33:05):
And the thing that stunned me about that, beyond
the fact that it confirmed my biases, which I love, was
the fact that they were running eight hundred and ninety of
the fucking things, and no one was like, should we
have this many? There was no, like, software
engineering culture that was like, hey, are we all chasing
our tails? Is this useless? But it sounds like they
were all focused on their little boxes.
Speaker 2 (33:28):
Yeah, I mean so the other thing.
Speaker 4 (33:29):
So understand that, again, greatly oversimplifying, a lot of the
new stuff that's happened with large language models
and generative AI, people didn't expect, right? It was kind
of a surprise when you threw a whole bunch more
data at a large language model and it started spitting
out text the way it did. There was
(33:52):
no, like, mathematical reason to expect it to be able
to be as good at generating autocomplete
Speaker 2 (33:57):
Stuff as it is.
Speaker 4 (33:58):
It was, right, And so there's this belief that if
we did the thing and we unexpectedly got more than
we asked for, if we do more of the thing,
maybe we'll unexpectedly get more of what we wanted, right?
That hasn't seemed to really pan out the last couple
of years, from what I can see. But the, we
(34:19):
don't really understand enough about this to know whether it's
going to work, so we might as well throw spaghetti
at the wall and see if it sticks, because it might,
kind of mentality is kind of pervasive at the moment,
and everybody's, there's a lot of FOMO. There's a lot
of like, you know, well, our competitors are probably doing this,
and so we don't want to get left behind. It
kind of reminds me of the rumors that they talked
(34:42):
about back in the eighties when the CIA was doing
all this psychic research, because supposedly the Russians were doing
psychic research and it was all complete crap, but both
sides were convinced that the other side was making some progress,
and so everybody was dumping a ton of money into it.
Speaker 1 (34:55):
LLM Kultrum, exactly. Yes, the title of the episode. So, okay,
(35:15):
Kultrum aside, is this something you're seeing in software development, though?
Because I know I've seen that in management, or they're
just going to, like, shove the shit in there. This
seems like it's an important thing, right? Or, is this,
are you seeing it within software development?
Speaker 2 (35:29):
So I am seeing it within
Speaker 4 (35:32):
Software planning, right, So when managers are sitting down and saying, Okay,
we need to build this new thing, we need to
create a new group, we need to split this group apart,
we need to decide what our headcount is going to
be for next year, there's a lot of okay, and
what do we think the AI is going to do
next year?
Speaker 2 (35:46):
And how many headcount do we think that's going to
save us?
Speaker 4 (35:48):
in that kind of thing, right. There are some companies,
Duolingo is one, Klarna is one, oh, sorry, BP,
the former British Petroleum, what, last year had a
thing where they said they were cutting seventy percent of
their contract software developers.
Speaker 1 (36:04):
And in most of these they've kind of rolled them
back as well.
Speaker 2 (36:07):
And I don't think Duolingo has yet.
Speaker 1 (36:10):
This is just me breaking news to you, they did,
like, a day ago, really. It's
so funny. It's so funny that this
just keeps happening in exactly the same way. It's like, oh,
what a surprise, human beings do stuff. Yeah, but
it kind of gets back to, I think, what you've
said about everything with LLMs. It's like, you can
teach something to say, yeah, I think the
thing you're looking for is this, but you can't teach
(36:32):
it context. And that's been a point you've made again
and again, Like it seems the job of a software
engineer is highly contextual, unless you're like in the earlier days.
Speaker 2 (36:41):
Yeah, and I liken it
Speaker 4 (36:43):
sometimes to the Memento guy from the Memento movie, right,
who, like, can't form long-term memories. Do
you really want the Memento guy to be the
person that's building the software that makes the seven thirty
seven MAX able to compensate for its control inputs? Yeah.
Speaker 1 (37:00):
Well, the thing is, though, with that argument, they would argue,
and I know that there is a better argument here.
They would argue, well, what if we just give it
everything that's ever happened? What if we just show every
single thing we've ever done in GitHub? Surely then it
would understand.
Speaker 4 (37:16):
So, what I have seen from the papers that
I have read is that LLMs have a basically squishy
middle context problem, kind of the way that you do, right? So,
if somebody gives you a big document to read, or
a big long documentary to watch or something, and then
they ask you questions, what they're going to find is
that you remember a lot more from the beginning of
(37:37):
it and the end of it than you do from
the middle of it, right? And LLMs have the same
kind of problem, right? And the other problem that the
LLMs seem to have is that, when you give them
a whole bunch of instructions, instructions piled on instructions,
piled on instructions, they can either get confused and forget
some of the instructions, or they deadlock, or they just
(37:57):
start going, Okay, I can't satisfy all of these I'm
not even going to bother to satisfy any of them,
or they'll pick one or two. The fact that you
can take a million tokens and you can stick that
in the memory block that the GPU is going
to process doesn't necessarily mean that all
(38:21):
of the tokens in that memory block are actually going
to be treated equally and going
Speaker 2 (38:25):
to be understood, right. In theory, maybe, if you could
Speaker 4 (38:35):
if you could, like, custom train an LLM,
and modify all of its weights based on exactly what
your stuff was, and do that, like, day after day
after day, as things changed, you would theoretically
get better results. I still don't think it would, you know,
understand the context as well, and
(38:57):
that would be ridiculously expensive.
Speaker 1 (39:00):
Yeah, and at that point you could train a person, yes.
Speaker 4 (39:05):
I mean, the person would probably be more annoying.
That's the point, I mean. A lot of
this seems to be really, you know, we don't
like dealing with the prima donna programmer kind of thing,
right? And, you know, it's not just programmers, right?
We also don't want to deal with the prima
donna reporters or the prima donna illustrators, we just want
(39:28):
to get rid
Speaker 1 (39:28):
Of these people. Right. He's annoying. They ask for stuff,
they want money.
Speaker 2 (39:33):
Yeah, and days off, and sick leave, and even, you know, healthcare.
Speaker 1 (39:38):
It's just disgusting. How dare they. It's frustrating as
well, because across software development and everything, but especially with
software developers, it feels just very insulting, because it
doesn't seem like this stuff, actually, here's a better question.
Have you seen much of an improvement with, like, o1
to o3, like these reasoning models? Do you think
(40:00):
the reasoning models change things for the better? If so, how?
Speaker 2 (40:05):
So, a little. They don't make as many stupid mistakes
Speaker 4 (40:13):
is basically what it boils down
to. Going back to your first thing, though, right,
I mean, so, there was a piece, actually a couple
of pieces recently. One of them was about, you know,
tech workers are just like the rest of us, they're miserable.
I'll give you links to these. The other
one was a Cory Doctorow piece that was, like, the
future of Amazon coders is the present of Amazon warehouse
(40:37):
workers, or vice versa. There
has been a lot of deference given to software developers
over time, because, you know, we have been kind of
the engine that's made a lot of the last twenty,
thirty years work, and there's a desire to make that
(40:58):
not so anymore, and to make us just as interchangeable
as everybody else. I guess, you know, from an
economic standpoint, I kind of don't blame them.
Speaker 2 (41:09):
I understand why they're trying to do what they're doing.
Speaker 4 (41:11):
I mean, I don't think that the warehouse
workers should be treated the way the warehouse workers are treated,
you know, much less have everybody else get treated that way.
And it's been a lot worse since the giant layoffs
at Twitter, now X. When that happened, and the thing
didn't crash and completely burn like everybody was, or not everybody,
(41:33):
but a lot of people were expecting it to, the
sentiment became, well, maybe all these software
developers aren't as important as, you know, we've always
thought they were.
Speaker 2 (41:46):
And you know, we will see over time what the
end result of that is. My guess is it's going
to be end up being a mess.
Speaker 4 (41:54):
But, you know, I'm a software developer, right?
It behooves me for it to be a mess, right?
So it might just be my bias that's getting in
the way.
Speaker 1 (42:04):
I actually think that you're right, though, because I
remember back in twenty twenty one and onward, the kind
of post remote work thing. There was the
whole anti remote work push, and there was the whole
quiet quitting and things like that, that's twenty twenty two,
where it's like, software engineers, they just expect to
be treated so well, because of twenty twenty one's, all the
(42:25):
insane hiring, right. You saw tech companies, like, parking software workers.
I think that played into it as well, where all
of these companies who chose to pay these software engineers,
they were the ones that made the offers, got pissed
off that they'd done it. Someone thought we should cut all
labor down to size, and then along comes AI coding. Almost
makes me wonder if most of these venture capitalists talking
(42:46):
about this don't know how to code themselves. Yeah, gotta wonder.
Speaker 2 (42:51):
I don't know many that do. Yeah, I know some
that have at some point.
Speaker 1 (42:56):
But that's the thing, at some point, it's like,
they're not part of modern software development culture, which I
know sounds kind of wanky. But, I mean, just how
an organization builds software feels like something they should know.
But then again, they don't know how to build a
real organization, so who the fuck knows? Yeah.
Speaker 2 (43:16):
Well, I mean, honestly a lot of it.
Speaker 4 (43:20):
I've been in organizations that VCs basically killed, right? Because,
you know, we built a thing, that thing was, you know,
a reasonable business. But VCs don't want a reasonable business.
They want either a hundred-x return or they want
a tax write-off, and they don't want anything in between, right? Yeah.
So I mean what what they're looking for is really
(43:41):
I mean, they're not trying to run a regular business, right,
They're not trying to do the normal process. They're trying
to either you know, hit one out of the park
or throw it away and move on. And so they're
they're the rules for them are different because what they're
trying to accomplish is not what the rest of us.
Speaker 2 (43:57):
Are trying to accomplish. As a general rule.
Speaker 1 (44:00):
The theme of the fucking show. It's just like, it's
just like, you have these people that don't code saying
how coders should code, like Dario Amodei the
other day saying that this year we're going to have
the first one-person software company with a billion dollars in
revenue, or something like that. And I just feel
like there are some people who should not be allowed
(44:20):
to speak as much sometimes, but it's just frustrating and insulting.
But now that you've got me thinking about it,
it does feel like this is an attempt to reset
the labor market finally coming for software developers. And I
don't mean finally in a good way, right.
Speaker 4 (44:33):
I mean, it feels like that. Being
in that industry at the moment,
it really feels like that.
Speaker 1 (44:43):
Is it scary right now?
Speaker 4 (44:45):
Not for me, because I'm old enough to be semi-retired, right?
But I mean, I've been talking to a lot of folks.
I've been interviewing a bunch of folks that
are listeners of my channel, kind of trying
to get a feel for what's going on. And I've
talked to folks that are, you know, like I said,
(45:05):
I talked to some folks that were like, you know,
I work for a big bank, they're cramming Copilot down our
throats whether we want it or not. I've talked to some folks
that are like, every time I sit down with my boss,
I'm thinking that, you know, this is going to be
the day that I'm going to find out that my
group is getting cut the way the other three groups
in the company got cut.
Speaker 2 (45:22):
There's a lot of
Speaker 4 (45:24):
artificial productivity requirement increases, kind of thing, which is,
like, you know, we expect
more tickets closed per, you know, two-week period than
we've had before, because we're giving you
this AI now, so you ought to be more productive,
that kind of thing.
Speaker 1 (45:43):
Would the ticket necessarily be something that you just write
code for, or more than just that?
Speaker 2 (45:48):
Well, so generally it's more than just that. But
generally, the ticket,
Speaker 4 (45:54):
that's kind of the way that we track the work
that we do in a lot of organizations, right, And
some tickets are like, I'm building a new thing, and
those are kind of easier to predict. And some tickets
are this thing isn't behaving right, go figure out where
the bug is. And those are a lot harder to predict,
but they have these things. Agile has this thing called
a velocity graph where basically you see how many tickets
(46:15):
per person get closed over time, and people want to
see the slope of that line change because they're giving
you AI.
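For the curious, the "slope of that line" is just a least-squares fit over tickets closed per sprint; a toy sketch with made-up numbers.

```python
# A toy velocity graph: tickets closed per sprint and the slope of the trend
# line (least-squares fit). The numbers here are invented for illustration.
def velocity_slope(tickets_per_sprint):
    n = len(tickets_per_sprint)
    mean_x = (n - 1) / 2
    mean_y = sum(tickets_per_sprint) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(tickets_per_sprint))
    var = sum((x - mean_x) ** 2 for x in range(n))
    return cov / var  # extra tickets closed per sprint, per sprint

print(velocity_slope([12, 11, 13, 12, 12, 13]))  # 0.2: the line is basically flat
```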
Speaker 1 (46:24):
And I'm guessing the people telling you to change that
don't know what they're talking about.
Speaker 2 (46:28):
That seems to be the case.
Speaker 4 (46:29):
Great. So, I mean, the good news, in theory, right,
I don't know to what extent this is going to happen,
but in theory, if they keep telling people, you know,
that slope of that line should be changing because you
have AI now, over time, if we see the slope
of that line not changing, right, then theoretically it
(46:50):
will be proof that the AI is not providing the
return that people expected. Well, you're not using it right. Well, yes,
there's always that, you're not prompting it right.
Speaker 1 (47:00):
That is basically what I'm hearing. One
of the many reasons I wanted you on is, like, I
want to have people that actually code on to talk
about this stuff, because it's really easy as a layman
myself and for others to just be like, oh but
this does replace coding, right, and it does? It sounds
like it really doesn't.
Speaker 2 (47:20):
Like it can help.
Speaker 1 (47:21):
It can be like a force multiplier to an extent,
but even past the initial steps, it just isn't there.
Speaker 4 (47:28):
Well, I mean, so the best analogy I've always found
to writing code is actually just writing, right? I mean,
you can get ChatGPT to spit out a few
paragraphs for you, right? But, you know, you end up with,
you know, the legal briefs that have the story that's
made up or the you know, just things that aren't
connected to reality or stuff that you know, when people
(47:49):
read them, they're like, I mean, you can tell
the difference with AI slop generated, you know, like the
stupid insert from the Chicago Sun-Times, yeah, the
Philadelphia Inquirer, you know, all the books, all the things
you can do this summer, right, that, like, made up
books and all that kind of, I mean, like, but
even the articles that weren't the ones that were
(48:11):
making up stuff. You read the, you know, this is
what's going to be happening this summer. This is what
the weather's going to be like or whatever. And you're
reading, and you're like, there's no, like, insight here,
there's no thought here, there's, you know, nothing
in here. I get to the end of this,
I've read the whole thing, I understand the whole thing,
but I don't have anything I can walk away
Speaker 1 (48:30):
with. Right. And AI agents aren't coming along to
replace software engineers. So you're not scared of Devin?
Speaker 4 (48:37):
I am not scared of Devin. So, well, actually,
I kind of am. I am scared that Devin is
going to make a mess of things, and then more
things are going to get hacked, and that's going to
end up being worse for everybody.
Speaker 1 (48:48):
On the internet? Right. How would it do that?
Speaker 4 (48:51):
Well, I mean, like we were talking about before, right?
So when you write code that isn't secure, right, and
you write code that, you know, uses an old
version of a library that
has a known bug in it, but you don't bother
to check to see if there's a fix for that bug.
Or you don't use best practices when it comes to
writing code and that kind of thing, or you don't
(49:12):
think about the kinds of maintainability issues that you're
going to have, and you do things like, you ship
out code in an Internet of Things thing,
a light bulb, right, or a Wi-Fi router, that
cannot be patched over the internet, that has a bug
in it, right? And now it's like, that thing is
(49:33):
going to have a bug in it forever, and you're
gonna have to find all the ones on
the earth and turn them off before someone
takes them and
hacks them and uses them to attack somebody else from there.
Speaker 1 (49:44):
I mean, IoT is a huge problem. Oh yeah,
and the cheap ones have, like, the spyware stuff and
crypto mining in them.
Speaker 4 (49:52):
But yeah, the ones that
have, like, really nasty vulnerabilities, and they have no way
of being updated once they leave the factory, right, and,
as long as they're out there, they're going
to be a problem, literally, for everybody on the internet.
Speaker 1 (50:06):
Jesus. Well, to wrap us up, what can
a new engineer, someone new to software development, what can
they learn right now? You've kind of done a video
on this, but I think it's a good place to
wrap us up. What can they start learning to actually
get ahead, to actually prepare for all of this?
Speaker 4 (50:23):
That's a really good question. So these days
you can't really be an engineer, you
can't get hired as an engineer, without some ability to
talk about doing prompts and using
some kind of AI code editor or that kind of thing.
It's just an expectation of the job now. Whether it
(50:45):
should be or not is a different thing. I mean,
like I said before, there are situations where you tell
it what you want and it will type faster than
you possibly can, so that's not necessarily bad.
You need to understand... okay, I'll get back to
something else: you need to figure out
basically how to test the thing, right. So how
(51:07):
do you make sure that the code it spits
out does what you meant it to do? And what
I'm expecting is that we're going to spend more time
thinking about testing, and more about trying
to find exceptions and that kind of thing, than we
have in the past, because the code that's actually being
generated is going to be less likely to be quality
than it was in the past, right? The problem is
(51:29):
it has become the case in the programming
industry that the things you need to do to get
through the interview, to get hired, have very little resemblance
to the things that you actually do on the job,
that you need to do to do a good job. And
so that's a whole different thing. We could probably have a
whole other podcast episode just about the interviewing problem.
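Picking up the earlier point about testing what the model spits out, here is a minimal sketch, assuming Python and pytest-style tests; the slugify function stands in for hypothetical AI-generated code and is not from the episode:

```python
import re

def slugify(title: str) -> str:
    """Stand-in for code an AI assistant generated for us."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# Pin down what we actually meant the code to do, including the
# edge cases a generator is most likely to fumble.
def test_basic():
    assert slugify("Hello, World!") == "hello-world"

def test_edge_cases():
    assert slugify("") == ""               # empty input
    assert slugify("!!!") == ""            # punctuation only
    assert slugify("  Already-ok ") == "already-ok"
```

The point is less these specific assertions than the habit: every behavior you care about gets a check the generated code must pass before you trust it.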
Speaker 2 (51:48):
But the main thing right now... so right now,
the whole hiring thing.
Speaker 4 (51:55):
And this isn't, I don't think, true for just programmers,
but it's especially true for programmers: it's all, you know,
bots that customize your resume and write a custom
cover letter and then submit
the thing to the bot that's screening the resumes,
right? And getting to the point where
you can actually talk to a human is a nightmare
(52:16):
right now. So the whole hiring system is kind of broken,
so actually getting to the point where you
can get hired is a nightmare at the moment. But
the thing that you can do is figure out what
kinds of things AI is good at.
And one of the things that AI is pretty good
at is things that don't matter as much, right? So,
(52:37):
like, you know, AI can pick the layout of a
site, potentially, right? And you could have it pick two
or three of them, and you can basically do what's
called an A/B test, and you can randomly assign
people to them. You can figure out which one of
them performs better, and you can throw the rest
Speaker 2 (52:50):
Of them away.
Speaker 1 (52:51):
And even then at some point you will probably want
the design customized.
Speaker 2 (52:55):
Yeah, I mean, but...
Speaker 4 (52:58):
I think there will be a lot of things where
people can kind of get something that's kind of good
enough to get started, right. And I think that
to some extent this is going to be kind of
a boon for the industry in the longer term, where
somebody who can't program right now, but who has some
idea of kind of what they want, can do like
(53:19):
a vibe coding thing. They can validate that the market
that they want to try to attack exists, right, and
that people want to use the kind of thing that
they built, and then they can bring in somebody to
actually build it, right, you know what I mean. And
those kinds of things wouldn't necessarily have been able to
happen in the complete absence of AI.
Speaker 2 (53:39):
So it's not, I don't think, completely useless.
Speaker 4 (53:41):
And there are times when, as a developer,
there are things that we're not good at, like, you know,
writing marketing copy and that kind of stuff, that if
we're trying to do a project for ourselves,
a lot of that stuff we can just outsource to
the AI, because it's not the kind of thing that makes
the project actually break or get hacked,
right? So it's kind of like there's this
(54:02):
concept where you need to keep the things that are
part of your competitive advantage in house, and everything else
you can kind of outsource to somebody else. The kinds
of things you can outsource to somebody else are the
kinds of things that you could potentially throw an
AI at, because they're
Speaker 1 (54:14):
Not. Even then, it's like, it doesn't seem like
that's a ton of things right now, or will
Speaker 2 (54:20):
Be. Again, so it's basically two things.
Speaker 4 (54:25):
It's things where the quality of the thing doesn't
really matter, right, and every business has
those kinds of things, right?
And they're the kinds of things where you can define
a metric that you can test the AI against and
let it try over and over and over
again until it gets to the point where
it's good enough.
Speaker 1 (54:43):
Yeah.
Speaker 4 (54:44):
Right. So if your metric is more people click on
this button than the button before, right, then you can
have the AI create a whole bunch of different ways
to skin that button, right, and then you can say, okay,
the one that tested best is the one we're
going to keep. That's the thing you can throw an
AI at, right, because you've got a well-defined way
of checking. No telling how long it's going to take,
(55:05):
but you have a well-defined way of checking to
see if it's working right or not.
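A minimal sketch of that metric-driven loop, assuming Python; the variant names and the click metric are invented for illustration:

```python
import random

# Hypothetical AI-generated button designs under test.
variants = ["button_a", "button_b", "button_c"]
shown = {v: 0 for v in variants}
clicked = {v: 0 for v in variants}

def assign_visitor() -> str:
    """Randomly bucket each visitor into one variant (the A/B/n test)."""
    return random.choice(variants)

def record(variant: str, did_click: bool) -> None:
    """Log an impression and whether it converted."""
    shown[variant] += 1
    clicked[variant] += int(did_click)

def winner() -> str:
    """After enough traffic, keep whichever variant converts best."""
    return max(variants, key=lambda v: clicked[v] / shown[v] if shown[v] else 0.0)
```

This is exactly the shape of problem being described: the loop can run as long as it needs to, because the click-through metric gives a well-defined check on whether a variant is working.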
Speaker 1 (55:09):
So yeah, I mean, for years I've had the theory
that this industry was a twenty to twenty five billion
dollar total addressable market pretending to be a trillion dollar one.
And everything you're saying really is, it's like you're
describing things like platform as a service, like, yeah,
things that you use in tandem with very real people
and intentional ideas.
Speaker 4 (55:31):
Yeah, I don't see a world in which
this is a "we replace all the humans" thing, you know,
the whole "this is going to
displace eighty percent of the white collar workers in the world."
I just... the only people
that are really going to be replaced anytime soon are
people that either weren't doing a great job to start with,
(55:54):
or people whose bosses don't understand what they were doing
well enough to know that what they were
doing mattered. And my guess is that there's going to
be regret at that point, and that at some point
they're gonna have to bring those people back.
Speaker 1 (56:08):
Well, Carl, this has been such a wonderful conversation. Where
can people find you?
Speaker 4 (56:14):
Internet of Bugs on YouTube is probably the
easiest place to find me, and then there are links
on that channel that point at other things.
Speaker 1 (56:21):
And you've been listening to me, Ed Zitron; you've been
listening to Better Offline. Thank you everyone for listening, and yeah,
we'll catch you next week.
Speaker 2 (56:35):
Thank you for listening to Better Offline.
Speaker 1 (56:37):
The editor and composer of the Better Offline theme song
is Matt Osowski. You can check out more of his music
and audio projects at mattosowski dot com, M A T
T O S O W S K I dot com. You
can email me at ez at betteroffline dot com
or visit betteroffline dot com to find more podcast
links and, of course, my newsletter. I also really recommend
(56:59):
you go to chat dot wheresyoured dot at to
visit the Discord, and go to r slash Better Offline
to check out our Reddit. Thank you so much for listening.
Speaker 3 (57:08):
Better Offline is a production of Cool Zone Media. For
more from Cool Zone Media, visit our website coolzonemedia dot com,
or check us out on the iHeartRadio app, Apple Podcasts,
or wherever you get your podcasts.