Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:08):
Welcome to the Angular Plus Show, where app developers of
all kinds share their insights and experiences. Let's get started.
Speaker 2 (00:21):
Welcome back, everyone, to another episode of the Angular Plus Show,
where we talk about Angular and all things tangential and
adjacent and sometimes just completely unrelated. It really depends on
the guest. Today, we're gonna be talking about something that
is relatively unrelated, but also more related because of the
focus that the Angular team has been putting on this
(00:45):
topic a lot more recently. We might get into some
of our stuff, we might not. The topic isn't necessarily
about what the Angular team's been working on, just tangential
to that kind of stuff. So before we get into
the topic and our guest, though, I'm joined by my
two co-hosts. Brooke, how are you?
Speaker 3 (01:03):
I am great! How are you?
Speaker 2 (01:05):
I'm good. I'm stoked. I haven't seen Jason—oh, I
guess I just spoiled our guest, Jason Warner. I haven't
seen Jason since ng-conf. Yeah, I'm excited.
I'm excited for this episode.
Speaker 3 (01:20):
Be love.
Speaker 4 (01:22):
Yeah, I'm stoked to be here. I'm stoked to be here.
I'm excited to talk about AI burnout.
Speaker 2 (01:27):
Hey, you spoiled the topic of the episode.
Speaker 3 (01:30):
We've got spoilers from the top.
Speaker 2 (01:36):
Brian's always snuffing out my candle, always the candle stuff.
Speaker 4 (01:42):
Should we actually start over?
Speaker 2 (01:44):
No, I think we're okay. Editors are going to be like, oh,
these guys, I tell you. At least Yahn's not here
swearing his mouth off. So I gotta fill that
role today. Anyways. Our guest, Jason Warner. Some of
you may know him from the streams that he does
(02:05):
from the community, stuff like that. Jason, how's it going?
Speaker 3 (02:08):
I'm doing really well. It's awesome to be here, really
looking forward to it. Good.
Speaker 2 (02:14):
Are you burnt out?
Speaker 3 (02:15):
Am I burnt out? From AI?
Speaker 5 (02:18):
Uh?
Speaker 4 (02:19):
Man?
Speaker 3 (02:20):
That's a loaded question, isn't it?
Speaker 2 (02:23):
I feel like there's, like, one and a half answers, right: yes,
or I'm getting there.
Speaker 3 (02:29):
Yeah, right.
Speaker 6 (02:30):
Well.
Speaker 3 (02:31):
The interesting thing is, when AI first came out,
I don't think most of us saw it as a
threat to our jobs. It wasn't as good as it
is now. I think coding is a solved problem, kind
of like, you know, Stockfish has solved chess. Most of
the good AIs out there have
(02:53):
solved coding, and so a lot of us in our
industry are looking at, you know, how do we survive?
And that's where I think a lot of the burnout
comes from. So, yeah, philosophically, I get asked this question
a lot, and that's fair.
Speaker 2 (03:10):
I don't disagree with you. That being said, there's a
huge "but" coming. That being said, after spending the last,
like, three weeks orchestrating and babysitting and handholding AI agents,
spinning off dozens of tasks a day and, yeah, hundreds
of pull requests automatically—I'm not currently worried for my position,
(03:35):
right, right. Well, yeah—what I am is burned out from it;
I am burnt out from it, for sure.
Speaker 3 (03:42):
What I'll tell you is, I look at AI
coding very similar to when the AI image generation first
came out and the AI video generation first came out.
I was huge into it, spent a bunch of time
learning how to do it. After a while, you learned
that no matter how well you prompted the image, it
was always going to generate what it wanted to generate, right,
(04:06):
and you're looking for random seeds and things like that.
That's the way I look at coding. And when I
say coding's solved, what I mean is: if you
define your entire job as being a coder, your entire
job is writing code, your job is maybe not gone,
but it's on the endangered species list, and
(04:29):
I mean, AI coding is just getting better and better.
Speaker 2 (04:33):
But that's not—that's not all our job, right?
Like, if you define—
Speaker 3 (04:40):
Your job as writing code, you've probably been obsolete for
the past five to ten years anyway. Your job is
not to just go in and produce tons of code.
Your job is to understand the business, translate business into,
you know, whatever you're doing for, you know, whatever
organization you're working for. Your job is to understand what's
(05:01):
going on and translate that into, you know, whatever—whether
it's code, whether it's documentation, whether it's architecture, whatever it is—
that's your job. Your job is to provide value. Coding is
a very, very small percentage of, you know, what you should
do if you want to be valuable.
Speaker 2 (05:24):
Yeah, there's still the human element there too, right? Okay,
quote-unquote "AIs"—and they're not—again, this is, we
don't need to get deep into this, but they're not
actually AIs, right? They're language models, right? Like, this
is where the AIs, like, are still falling short, even
though, like, the quote-unquote thought leaders in this space,
like the big honchos of all the big ones, are like,
(05:45):
you know, this is where they want to go. But,
like, they're still lacking that human element, and they're
lacking that element of creativity and problem solving that we
as developers, we as software engineers, we as technical leaders,
we as whatever, still have to be able to bridge
to successfully deliver the thing we're being paid
Speaker 3 (06:08):
For. Right, right. And, I mean, to go along with that,
you make a very good point. So when we talk
about AIs, they aren't really artificial intelligence. They just process
a ton of text.
Speaker 2 (06:22):
Right.
Speaker 3 (06:23):
Where I find AI helping me the most is enhancing
my job: giving it a code base and saying, hey,
I'm looking for these hot spots, can you find potential
hotspots for me? And having it go through the codebase
and give me spots where I can look. That's super helpful.
And I don't want to call out any specific AIs
for this, but I use a specific
(06:46):
AI that's really, really good at Angular, and you guys
can probably guess what it is, but I've got it
set up with my GitHub account, because when I'm doing
solo projects, one of the things that I tend to
find is I tend to double down on my
weaknesses, and they tend to amplify when I'm working
(07:09):
solo. Having an AI that can come in and be like, hey, idiot,
you used $any in a template, here's some better ideas—
it's like, okay, thank you, right? So there's some benefit there.
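A minimal sketch of the kind of template fix being described here, with a made-up component — the names, fields, and template are illustrative, not from Jason's actual project:

```ts
// Hypothetical Angular component showing the $any() escape hatch an AI
// review might flag, and a typed alternative.
import { Component } from '@angular/core';

interface User {
  name: string;
  nickname?: string;
}

@Component({
  selector: 'app-user-card',
  standalone: true,
  // Before: `{{ $any(user).nickname }}` — $any() turns off template
  // type checking, so typos and missing fields slip through silently.
  // After: model the optional field so the compiler can verify it.
  template: `<span>{{ user.nickname ?? user.name }}</span>`,
})
export class UserCardComponent {
  user: User = { name: 'Ada' };
}
```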
Where I start to get the burnout is, you know,
when I'm working my job and the emails I'm
(07:30):
getting are obviously written by AI. Yeah. Or I'm reviewing
code and they've left their prompt in the comments, and
the AI code isn't really that great.
Speaker 2 (07:43):
Yeah, it's like, oh, come on, right?
Speaker 3 (07:48):
It's the slop, right? I was just reading about
a guy who started a Fiverr career that's now turned
into a six-figure job for him, and he just
says he's going to clean up the messes that AI makes.
And it's a quickly growing industry. That's where I think
we're starting to see burnout, and where we're starting to
(08:10):
see businesses pull back on what we're doing with AI.
It used to be, if you mentioned AI,
everybody's like, oh, that's really, really cool. Now when you
go into these business meetings, people are like, well, wait,
what benefit does it provide? You know, there's new metrics
(08:32):
coming out showing that, hey, maybe for developers who are coding
with AI, depending on their skill level, AI may actually
slow you down, right? These LLMs may actually slow you down.
And that's where I think we're going to start seeing
more of the burnout show up.
Speaker 6 (08:51):
That's actually exactly what I wanted to bring up, because you've
already mentioned, like, solo coding and then more, like, enterprise coding.
Speaker 2 (08:58):
And that's where I'm seeing
Speaker 6 (08:59):
A lot of burnout is that there seems to be
a huge gap really between the engineering teams and the
executive leadership, like when it comes to that AI adoption, right,
like the expectations, the communication. That's where I'm seeing the
burnout come from is these investors and the backers who
(09:22):
are saying, you have to use AI, and you have
to show us how you're using it. But the developers
aren't necessarily agreeing that it needs to be used
as much as they're being expected to use it. So,
you know, how do you kind of work through that,
and how do you help the investors and the backers
see that this isn't really to our best advantage here?
Speaker 3 (09:44):
Well, to go along with that, my past two jobs
have been in fintech. Fintech is highly regulated. We go
through all of these different regulatory, you know, commissions; we
go through all of these different—and it depends on where
you operate, right? Like, European laws are much different than US.
(10:07):
And even in the US, like, California is so different
from the rest of the country, right? And there's just
so many different laws and regulations, and LLMs aren't there.
And, you know, sometimes the code that they're grabbing off of,
you know, whatever data set they were trained on, is
actively working against your regulations. And I'm sure you're probably
(10:30):
going to see that also in, like, the insurance industries,
the medical industries. And that's an excellent question, Brooke. Like,
you know, you've got backers who are like, AI, AI, AI,
and then you've got the big AIs, right, who still
aren't financially solvent. They're relying a lot on their backers,
and these backers want to get return on their investment,
(10:53):
so they're out there, you know, AI, AI, AI,
and you have to step back. I mean, MCP
is a huge thing right now, right? And Brian,
I'm sure, has a lot to say about MCPs,
but I believe that the S in MCP
stands for security. Oh—uh, yeah, yeah, right. Yeah, we're
(11:21):
struggling with that right now, as we
look at integrating AIs. We would love to use MCPs,
but because of security restrictions, and because, you know, we're
a highly regulated institution, and because we deal with so
much money, with MCPs we've got to be very,
(11:45):
very careful what we do. And that's what I
see from, like, enterprises: we would
love to play with the new toys, but the new
toys maybe aren't ready for us. Well, and that's kind of,
I feel like, the
Speaker 2 (12:02):
core issue at play here. This is kind of
like, we're getting into, like, elements of a potential bubble
as well, but that's not what we're talking about here.
But, like, the issue is that the current
hype, or what we're being told about what they're capable
of and what they abide by, etcetera, etcetera, etcetera, right—
(12:23):
these companies are still fighting in court over whether they used
copyrighted material, right? That's where they're at, and
they're the ones telling us that, like, this is going
to solve all your problems. The delta is just so
big from, like, what they can actually do. And, like,
I just spent the last three weeks orchestrating these
AI agents, and, like, they've gotten a
(12:45):
lot better in the last twelve months. But still,
like, I am burnt the hell out from handholding these things, right?
And, like, so, like, the delta between, like, what we're
being told they're capable of doing and, like, what they're
actually capable of doing is just, like, so big. Or what rules
they abide by—or, you know, we're seeing all these
news articles come out about, like, models just working
(13:09):
around the security restrictions in place. Like, oh, no, it
shouldn't have told that person to go do that horrible thing,
like, oh, that was our bad. And it's like—right, no,
that can't be your bad. That's not remotely okay.
Who's on the hook for your shareholders, right? Who's
on the hook for, you know, for people
(13:30):
who lost their personal data? Yeah. And all of the
contracts are written in such a way that ultimately it's
the engineer who needs to be making
Speaker 3 (13:41):
Sure the code is okay, that they accept the code.
So uh yeah, I I feel you there that you
want to turn them loose, but at the same time
you have to babysit them. And the way I describe
it when when people ask me, you know, how should
(14:01):
how should we be using AI? If you're using AI
to generate code, I look at AI as, like, that
super eager junior developer who just wants to throw a
whole bunch of stuff that it's really excited about onto
the screen. And your job is to be like, no, right?
(14:23):
Good idea, good concept, but let's pull it back a
little bit and think about the bigger picture. And
that's the way that I tend to work with AI
generating code. When we get into agentic stuff and agentic flows,
I don't really use them for code. I use them
more for analytics. And so, like, I have an agentic
(14:48):
flow with my streams, where I just throw my
YouTube streams, you know, my YouTube replays, into an agent,
and I just say, hey, you know, what did I
tell my community I would do? What are places where I
could do better? And it goes through and analyzes everything,
and it gives me good feedback on how to improve
my streaming. I do the same thing, you know, with work,
(15:12):
I've learned that, you know, taking notes at my level
is vastly important for what I do. So I feed
my notes into an agentic flow, and, you know, I
ask it, hey, rate my week. How am I doing
on a scale of one to ten? Where are places
(15:33):
that I can communicate better? Where are places that I
can do this, right? And based on my notes, it
starts feeding things back. Or, I just produced this document—
evaluate this document against my notes, or against my chat,
or against whatever. That's where I see a lot of
value: AIs, or LLMs, are so good at just
(15:53):
analyzing tons of text that I can't keep in my
brain and giving me places to look.
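As a rough illustration of the "rate my week" flow, here is a minimal sketch assuming the OpenAI Node SDK; the model name, prompt, and notes file are illustrative stand-ins, not what Jason actually runs:

```ts
// Feed a week of notes to an LLM and ask for a rating plus feedback.
import { readFile } from 'node:fs/promises';
import OpenAI from 'openai';

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function rateMyWeek(notesPath: string): Promise<string> {
  const notes = await readFile(notesPath, 'utf8');
  const response = await client.chat.completions.create({
    model: 'gpt-4o-mini', // illustrative model choice
    messages: [
      {
        role: 'system',
        content:
          'You review weekly work notes. Rate the week from 1 to 10 and ' +
          'list specific places where communication could improve.',
      },
      { role: 'user', content: notes },
    ],
  });
  return response.choices[0].message.content ?? '';
}

rateMyWeek('./notes/this-week.md').then(console.log);
```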
Speaker 2 (16:00):
Yeah, and—like, absolutely, I use it for
a lot of that kind of stuff as well. The
one place that I have found it to be crazy
helpful with coding, specifically, is very deterministic tasks. So, like,
this task I've been working on in the past, like,
three weeks, where we've split up this big migration, going from Jest and,
(16:26):
like, inline GraphQL queries and mutations to Vitest and
GraphQL Codegen. Okay—thousands of tests, like, sixty-two different
test suites across different applications and stuff like that, right? Like,
there's, I don't even know, hundreds of files, kind of
thing, to change. But, like, eighty-five percent of it
(16:47):
is literally the exact same process. Yeah: here's the before,
here's the after, you need to add this file, and
then run this command, and then change the file to
be this format. Eighty-five percent of them, literally just that, right?
So I'm able to spin off, like, dozens of agents
to do, like, two or three files at a time, and
then I go fix up the edge cases. That's the
(17:09):
best place I've found for AI to do the coding
side of it.
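To make the before/after concrete, here is a sketch of the kind of mechanical migration being described — the operation, types, and helper below are invented stand-ins, not the actual codebase:

```ts
// After the migration, the suite runs on Vitest and the operation is
// typed by GraphQL Codegen instead of living as an inline gql string:
//   Before: const GET_USER = gql`query GetUser($id: ID!) { ... }`;
import { describe, expect, it } from 'vitest';

// Shapes approximating what codegen would emit (normally imported
// from ./generated/graphql rather than declared by hand):
type GetUserQuery = { user: { id: string; name: string } | null };
type GetUserQueryVariables = { id: string };

// Stand-in for whatever GraphQL client the real test suite uses.
async function execute(
  variables: GetUserQueryVariables,
): Promise<GetUserQuery> {
  return { user: { id: variables.id, name: 'Ada' } };
}

describe('GetUser', () => {
  it('returns the selected fields with checked types', async () => {
    const data = await execute({ id: '1' });
    expect(data.user?.name).toBe('Ada');
  });
});
```

The repetitive eighty-five percent is exactly what makes this scriptable for an agent: same file move, same command, same reshaping every time.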
Speaker 3 (17:15):
Absolutely.
Speaker 2 (17:15):
The other side—you know, analyzing things, comparing text—unbelievable
at that kind of thing, right? We create so much, like,
textual content, as companies and as developers and as people,
that, like, there's just so many sources of information literally everywhere, right?
(17:36):
So, like, vectorize it, ask an AI to compare all that—
it's fantastic. But then there is the other side, where
it's like, okay, you got this monotonous task that would
take someone, you know, four months to do. Okay, we'll
just, like, iterate on it and pull your hair out
building a good prompt for, like, four or five days, right,
(17:56):
and then reduce the task down to, like, two weeks
from, like, four months, by spinning it up, right? Like, that's the
best place I've seen it, for sure.
Speaker 6 (18:04):
Absolutely. That kind of brings up another question for me, though,
because there is this emphasis on speed, right? I think
that's why a lot of, especially, like, management and leadership,
they really want to, like, generate that code quickly. And,
like you're pointing out, Jay, there is that advantage. But
how do we help our teams—like, when, you know,
(18:27):
we're on our teams, we've got our sprints—like, how
do we really encourage our engineers, and companies, I guess,
to, like, stay focused on purposeful architecture, or that complex
problem solving, or, like, innovating and creativity, without asking AI,
what do you think I should do here? What's a
(18:47):
cool way to—
Speaker 3 (18:47):
You know?
Speaker 6 (18:48):
Like, it may give you some ideas, but I don't
think that we should forget that we have that same capability.
So how do we encourage that? How do we not
lose those, you know, those skills that we really do
need?
Speaker 3 (19:05):
Man, that is such an insightful question. I love that question.
One of the things that I've kind of added to
my profile is this notion of being, like, a code philosopher.
And the reason I did that is, I'm beginning to
(19:27):
discover—and especially as I've been streaming more and more,
and I'm realizing why more streamers do reactionary content and
stuff like that—most people who come to my streams
probably know how to do what I'm showing better than
I know how to do it. They're there for the entertainment content,
(19:48):
but they're also there for the insights that I can
provide into how do you level up as an engineer,
as a developer, as a coder. Your question's very insightful,
and my answer has been, lately: we need to think
(20:09):
about the cost of our decisions. And most engineers, to
your point, Brooke, live in this two-week period, right,
where every sprint has to be completed in two weeks,
and if it carries over, that's a black mark against you,
and so they're very buried in these two-week timeframes.
(20:30):
A lot of the decisions we're asking them to make
are going to affect things down the road—two, three, four,
or five years—and that's where we need to back up.
And as we're making decisions, and as we're evaluating what
the, you know, what the LLM is generating, as we're
evaluating the packages that we're bringing into the project, as
(20:53):
we're evaluating the patterns that we're producing and we want
to encourage, the thought needs to be: what am I doing?
You know, what's the cost of this? And it
becomes a bigger question the more you level up in
your career. As a junior developer, the cost is often
(21:15):
only to yourself. As you get into the senior developer role,
the cost now becomes maybe your team, or maybe two
or three developers, right? As you get into, you know,
like, a team lead or a staff position, now that
decision could have an effect on a lot of people's careers.
(21:37):
And then as you move up into architect, director, senior
staff level—well, now your decisions have the ability to
affect companies. And that's where, you know, the
philosophy comes in. If I ask my team to do this,
am I taking them too far off the beaten path?
(22:00):
Am I digging us into too deep of a hole—that
if my engineers want to move on to another company,
if they want to move to another team within the
same organization, have my decisions locked them into what I
think is cool? And that becomes, you know, a big consideration.
and that that's the same lens that I like to
(22:21):
take with with you know, the code that AI generates.
When I look at the code that AI generates, am
I locking my team into AI generated code because we
can't understand what the AI is doing? Am I taking
my team too far off the beaten path by using
an AI only to generate my code?
Speaker 2 (22:41):
You know?
Speaker 3 (22:42):
Those are the philosophical things that I think, as
leadership, we need to be taking very seriously. But even
all the way down to yourself, even if you're a
junior coder—I mean, I don't even want to say "even."
I mean, if you are a junior coder and all
you do is affect yourself, you still need to be
making those decisions, because you're in charge of your career.
(23:04):
If you want to become a senior, you need to
be making those decisions that are going to lead you
to be a senior, and owning that code—and what
does that cost look like in the future? Those are
big decisions.
Speaker 2 (23:18):
I think the one thing I'd add on to that, too,
that I've seen within my team, is—like, going back
to Brooke's question around, like, how do you, you know,
solve this or handle that—
Speaker 3 (23:27):
Right?
Speaker 2 (23:27):
What I've seen is, that can come down to,
like, company culture a bit, too. Where, like—I'll take
my team, for example, because I work with them every day:
they really care about the quality of what we're building, right?
They want to build the thing, have it launch, and
have our customers love it. They want
(23:49):
to see people say, like, damn, that's awesome, like, good
job, guys. Like, yeah—maybe those people saying that,
especially most of our customers, like, they're not super
technical, so, like, most people don't actually understand what it
took for us to deliver that solution, but they're impressed
by it. And my team, really, they have so much
(24:10):
ownership over the product, so much ownership over the systems
that they're building, and the quality they'll put in. They want
to do the best that they possibly can to build the
best solution. They hate cutting corners, they hate making compromises, right?
And if they're not fully bought into the idea that
(24:32):
these outputs can be trusted—they're good. They have a
healthy amount of skepticism, right? And, like, there is obviously—
that skepticism, it's a range, right?
There's people that, like—we're talking about, like, crazy hype,
you're like, okay, let's just calm down a little bit there.
(24:52):
I don't think we're at that point. And then
you have, like, the CTOs that are, I don't know
if they're just, like, clickbaiting or, like, engagement farming
on LinkedIn or whatever, but they're like, this is the
worst thing ever, and, like, I'm never letting my team
use any of these tools, ever. And you're like, okay,
I'm just gonna say it: I don't think we're at
(25:13):
the zero on the scale with this technology. Sure, we're
not at a hundred, but we're very far away from
zero in terms of, like, helpfulness here, right? So you
get those two sides. So a healthy amount of skepticism
is good. So it comes down to, like—yeah, if you
just, like, want to ship stuff and, like, not really
care about the quality and the output and, like, understanding
what's going on, by all means vibe code to your
(25:34):
heart's content. When you start caring and feeling that ownership
and wanting to deliver a high-quality thing, you aren't
going to want to rely on these LLMs. No, we're just—
we're not even remotely at that stage right now. How
are you encouraging it?
Speaker 6 (25:52):
What are you saying to your devs to help build
in that culture of caring—
Speaker 2 (25:59):
Caring about the product, or caring about using LLMs?
Speaker 6 (26:02):
About—yeah, about, like: you take that ownership of,
even if I've used AI, I still needed to take
that responsibility to double-check that this is good, solid code.
Speaker 2 (26:13):
So, that's it: we've built it up over years. I
have a really small team. It's me and three other devs.
We're all mostly full stack; some of us focus on
one side of the stack a little bit more, but
everybody can do everything at Trellis. And, like,
some of these guys, like, I hired effectively out
of university—like, two of them I hired effectively out
(26:35):
of university, right? Funny enough, all four of us actually
have the exact same degree from the exact same university.
So that's kind of funny. But, yeah, a little bit, yeah—
University of British Columbia, Okanagan campus. Really nepotistic. But I've
given these members of my team these projects. I'm
(26:57):
saying, you're owning this thing. Like, this is your project:
you're gonna go architect, design, and build this entire
feature for us, that might take three or four months.
You do that, you're gonna feel a sense of ownership, absolutely, right?
And especially after you've seen that you can do it,
and you've seen the results—like, how the leadership
(27:20):
thinks about it, and what the customers are saying about
it, and, like, all that kind of stuff, right? Like,
you start building this, like: I want to build this
product as best as I possibly can, and I feel
ownership over all of this work that I put
in there. And, like, it probably helps a little
bit that our mission is, like, we're trying to make
charities better, effectively, right? And so, like, our mission is,
(27:43):
like, impact, social-impact focused, right? So we want to
build these, like, good, high-quality solutions for people that
are solving the world's hardest problems.
Speaker 3 (27:52):
Yeah.
Speaker 2 (27:53):
Right. So, like, having a culture where, like—yeah, maybe
they would be considered juniors when I hired them, but,
like, they've proved that they can go build these big
systems completely by themselves, and now they have this ownership
over them, right? So now they have a healthy amount
of skepticism, where, like: I don't want to ruin this
thing that I have, that, like, I know is
(28:15):
of high quality, by hopping on the bandwagon too soon,
kind of thing, right? Yeah. In terms of, like, how
we're building it in—like, I'm not going to force
my team to use AI. I'm not going to force
them to do anything. What I will do is play
the Jedi mind tricks, by showing them how it can
(28:37):
be helpful, so that I incept the fact that it's
helpful and beneficial for them, without telling them they need
to use it. So I've spent a lot of my
time at Trellis building tooling, building systems, writing prompts,
showing examples. Like, I'm gonna go and show them this
massive migration project I worked on here. I'll be like, I
(28:58):
probably cut the time down to ten percent of what
it would have been if we had done this ourselves, right?
If you prove the benefits of a tool, AI or otherwise—
if you prove the benefits of something to someone, then
they're going to believe it and want to use it.
Speaker 6 (29:13):
Yeah, I totally love that, though, because what you're highlighting
to me, anyway—tell me if I'm wrong, Jason—is
that if we're going to use these tools, if we're
going to let AI really be part of our process,
you cannot forget that it's not all about the technology.
You have to bring in that human element. And I
(29:34):
think what you're explaining, Jay, is that you're doing just
that. You're still giving your team members the autonomy to
go and do things their own way, without forcing them.
You're trusting them and their ability to be the professionals
that they were hired to be. And so they then
use that choice to follow your example, not feeling forced,
(30:00):
and they go learn how it can benefit them in
their own way. And then I think you're going to
get results ten times better than if you were shoving
it down
Speaker 2 (30:10):
Their throats and forcing it, requiring it, all of that.
Speaker 6 (30:14):
But I do think that it's just such an interesting
balance there, where you can't just emphasize technology, technology, technology.
I think in anything, you really do have to remember
that human side to it, and that people need to
be treated with that respect and that trust, like you said.
Speaker 5 (30:34):
Good morning! You know that moment when
your coffee hasn't kicked in yet, but your Slack is
already blowing up with, "Hey, did you hear about that
new framework that just dropped?"
Speaker 3 (30:46):
Yeah, me too.
Speaker 5 (30:48):
That's why I created the Weekly Dev Brew, the newsletter
that catches you up on all the web dev chaos
while you're still on your first cup.
Speaker 3 (30:56):
Oh look, another Angular
Speaker 5 (30:57):
feature was just released. And what's this? TypeScript's
Speaker 3 (31:01):
Doing something again?
Speaker 5 (31:05):
I also look through the pull requests and changelog drama,
so you don't have to. Five minutes with my newsletter
on Wednesday morning, and you'll be the most informed person
in your standup. Oh—ah, that's better. The Weekly
Dev Brew: because your brain deserves a gentle onboarding to the
week's tech madness. Sign up at weeklybrew.dev and
(31:26):
get your dose of dev news with your morning caffeine.
No hype, no clickbait, just the updates that actually matter.
Your Wednesday-morning self will thank you.
Speaker 2 (31:35):
And, like, you can even compare this to—instead
of, like, employee to employer, you can look at
this from, like, employer to your customer. There's a
thing called the adoption curve, right? There's the
early adopters, there's the laggards, and everybody in between, right?
It's the same thing internally within a company. All three
(31:56):
of my devs are going to adopt tools at different rates,
different times, so having a single policy across my team
is only going to cause problems. And, like, to me,
that just seems logical—that I'm not going to, like,
force someone that may be closer to the laggard side
than the early adopter side to do something, because I'm
(32:17):
just going to alienate them, and then I'm not gonna
get the already-good output I was getting from them prior.
Their prior good output is going to drop, because
I'm forcing them to do something along the curve that
they're not at.
Speaker 3 (32:32):
What's interesting is, as your teams grow, and as
there are more and more people, that curve
starts to become more and more defined, right?
What you'll find is, people's values tend to drive where
they sit on that curve. And what do I mean
by that? Well, I worked with a guy who
(32:54):
worked his tech job because he was building a
van, and the van was what he valued. And so
you knew that you would get excellent work out of
him during his tech job, but once his job was over,
he was done. The other extreme are the people
who are just constantly out there, you know, researching the
(33:16):
latest framework, researching the latest language, researching the latest AI innovation, right?
The values are different. As leadership, it's important that we
recognize who values what, and allow them to align with
their values. So you give ownership to the people who
value ownership, and they're going to pull the rest of
(33:40):
the people along, right? They're your early adopters that are
going to help you show the value. And if there
isn't value, then, you know, you also need to be
willing to, you know, cut and move on. But, like,
from a moral standpoint—allowing people, and you said this
really well, Jay—allowing people to be where they
(34:02):
are in their journey helps them produce better than if
we get behind them and just push as hard as
we can. Because the more people we try to push,
the harder that is and the slower it goes. It's
so funny—
Speaker 2 (34:15):
Guys don't like being pushed, and that's what leads to
the burnout, right, Like, when you push them, that's what
creates that burnout.
Speaker 6 (34:22):
So then, on that thought—like, what do
you think are some of those warning signs that we,
as team members, as developers, or as tech leads or managers—like,
what are those warning signs that we should be looking
for, to help us know that we are about to
hit some burnout here? Like, have you found anything? Maybe not
(34:44):
so much yet, because it's still pretty new. But what
are your thoughts on that?
Speaker 3 (34:48):
Man. Over the course of my career, I've gone
through various periods of burnout and, you know, desire,
and stuff like that. One of the
biggest key indicators of burnout, for me personally, is when
I start going, "I don't care." You know, when
something that I used to care about happens and I
(35:11):
just have apathy. That's a big sign for me personally,
that there's apathy, and I start to look for that.
You know, as I move into leadership, I start to
look for that in meetings. A meeting where we go
and we have, like, a knock-down, drag-out
argument—I hear you, right? Yeah, maybe it's
(35:32):
not comfortable, but there was passion in that meeting, and
people cared. As soon as you get into a meeting
and you say something that's potentially controversial and you get
blank stares—maybe you need to either reevaluate your messaging,
reevaluate who's in the meeting, or maybe you're burning out
(35:52):
your teams. And that's a big key indicator for me: apathy.
Once I start seeing apathy, I need to look and
dig deeper and go, wait, what's the cause of this,
and how do we help alleviate it?
Speaker 2 (36:07):
Well, it probably means they've just resigned themselves to the
status quo, right? And you're like, I don't want my
team feeling resigned. They're not always going to stick around forever
if they're feeling resigned; it doesn't help them to stick around.
Speaker 3 (36:19):
I like working with them.
Speaker 2 (36:20):
I actually don't want to replace them with AIs, because
I like them as people. Right, exactly.
Speaker 4 (36:27):
I think—it's like, first of all, I've loved this conversation.
I've just been listening and just really enjoying it. I
hope the listener is in the same boat. I really
think that, like, part of the thing
that maybe I just wanted to inject in the conversation
here a little bit is, like: let's not forget the
learnings of the past as we, like, bring them into
the present and hopefully the future. Because, you know, we've
(36:47):
been through this—whether it's humanity or whatever it is—we've
had, like, technological revolutions or breakthroughs. So whatever
you want to call this—you could put
the "revolution" or whatever word on it, or "breakthrough,"
whatever you want to call it. It's been a long
slog for those working in the ML industry, so for them,
perhaps it's not so much of a breakthrough. But, you know,
I think there's a lot of analogies you could look at.
(37:08):
I think one of the analogies that's close to me,
anyways, is just, like, aircraft pilots, right? I mean, so,
like, for the longest time, you know, pilots prided themselves
on, perhaps, like, being able to fly by, like, stick
and wire, and there's a lot of skill there, there's
a lot of love there. There's still a lot of
people that do that, and that's amazing, and it's, like,
(37:28):
a craft. But then automation comes in, and things like autopilot,
things like computerized systems, things like, you know,
digital avionics—and these kinds of things
challenge the industry in many ways, and probably will continue
to challenge that industry for years to come, especially in
this regard with ML and AI—hopefully for the goal
(37:48):
of improving quality and safety. I'm sure there'll be some
stumbles and some backsteps along the way, and I think
we're seeing that here too.
And I think the analogy plays out pretty well. It
breaks down in some regards, but I think it plays
out pretty well. And then I think we're going to
continue to see jobs shift and change. And I'm not
trying to, like, dismiss any sort of burnout—like, I
(38:10):
totally get it. Like, sometimes it's like, I can't even
keep up with Hacker News. I can't keep up with
all the different tools and the new startups that come out.
And so I think the other analogy
that I think plays well here too—and I just
thought about this while you guys were talking, so
challenge me here; if you're just like, "nah, dawg, that doesn't work,"
I'm good with it, because I
(38:32):
don't want to, like, take it and run with it
too far if it's a bad one. But I do
think there's also an analogy here in terms of TypeScript.
So think about, like, you know, ten years ago, I
was writing just JavaScript, dude—like, plugging holes, getting
runtime exceptions in my browser: "property of undefined does not
exist," or whatever, "property 'foo' does not exist on undefined."
(38:54):
I'm like, ah, duh, I didn't refactor this other file
over here. I forgot to do that.
Speaker 3 (38:58):
You know.
Speaker 4 (38:59):
Hopefully that was—whatever it is, right. And so
we had, like, a deficiency, perhaps, or there
was, like, an opportunity to, like, improve quality
and improve the applications that we're shipping, at the end
of the day, to our end users. And so I
think the same thing is maybe kind of true. And, like,
you know, if you're familiar with, like, Gartner's hype cycle,
(39:20):
with, like, the hype and the trough of disillusionment and, like,
all of that—like, I think, like, TypeScript played out
pretty well in that regard. And again, call me out
if you're like, nah, dawg, like, I don't know what
you're talking about. But, like, I remember when TypeScript first
came out—oh gosh, I don't know, twenty fourteen, fifteen,
I don't know, whatever, ten years ago, okay. And, like,
(39:42):
there were people that were just like, this is gonna
solve everything, right—everything's typed—like, I'm preaching. I
just mean, that was me, and that's fine, that's fine, right?
And we talked about the adoption curve and all of that,
and I think you guys are on point
in that regard. But, like, there's people that were just like, oh, dude,
I just renamed everything—my entire, my
whole thing—I'm going all in on this, baby. And,
(40:04):
you know, then they ran into issues, and they're like,
oh shit, I need Google Maps, and there's no types
for Google Maps. So now, I just got to, like—
I'm neutered here, I can't do this, right? Or
I start writing my own types, or I'm like, oh,
that's wrong. Or, like—and there's this, like, churn that
happens during this process, of just, like: it's great, it
has promise, we're not sure.
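For anyone who missed that era, "writing your own types" usually meant a hand-rolled ambient declaration file; a sketch, with a made-up library (the real pain point mentioned was Google Maps):

```ts
// legacy-maps.d.ts — a hand-written declaration for an untyped library.
// 'legacy-maps' and its API are invented for illustration.
declare module 'legacy-maps' {
  export interface LatLng {
    lat: number;
    lng: number;
  }
  export function createMap(el: HTMLElement, center: LatLng): void;
}
```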
And then you have the
people also, during the TypeScript—again, go back, like,
(40:26):
you know, rewind the tape, like, whatever, eight, ten years ago—
that were just like, F TypeScript, TypeScript can go eat
a bag of, beep, whatever, right? And they're just like,
I'm never writing TypeScript, ever, ever.
Speaker 6 (40:36):
I hate it.
Speaker 4 (40:36):
I don't need types. And then, like, eventually we coalesced into—
like, I'm not saying, like, TypeScript won. What
I'm saying is, like, the industry coalesced around it
and said, okay, well, like,
how do we take this tool, and how do
we make the tool better? How do we get
better results from the tool? How do we better understand
the tool? And, like, how do we end up, like,
(40:57):
taking this and reusing it to our advantage? And I
think the same plays out, too, in the pilot analogy.
Because, I mean, like, if you look at the aviation industry,
by and large, like, safety is, like, incredible. Like, the fact
that you get on a plane, which is, like, balancing
on a toothpick at, like, thirty-five thousand feet
above the earth at five hundred miles an hour, is pretty
crazy bonkers, right, and it's sweet, it's amazing what they're
(41:19):
able to do. But, like, there's still situations where, you know,
the pilot's coming in in, like, what's called, like,
IFR conditions, in, like, low-level fog, and, like,
that dude is trained to, like, fly the stick, you
know what I mean? Like, he knows how to do this.
Or there's an engine failure on takeoff—like, this happens,
believe it or not. Like, you have dual engines; they
still take off, they still rotate, they still take off,
(41:40):
because they have to. It's do that, or, like, worse
things happen. And so, like, as engineers, like, at the
end of the day, like, we are problem solvers, critical thinkers, right?
And so we got to take the same thing. And
we're also—like, resource management is such a huge thing, right?
And so, Jay, you've talked about this; Jason, you've talked
about this; Brooke, you definitely talked about it. Like, how
do we use this and, like, not either overdo it
(42:02):
or underdo it? How do we find this thing? I
just think we're in the midst of that. And so,
like, I don't know—like, we've got to anchor ourselves
to, like: first of all, this is not, like, the
first time, like, humans have gone through this, and, like,
hopefully, like, we'll figure things out. I imagine
people are gonna get burned, and I hate to
think that—like, there are people that are just gonna be like,
(42:24):
I hate my job, I'm leaving the software industry. And
then people that are gonna be like, I'm all in
on AI, and they're gonna dump tons of their life
savings into it, mortgage their house, and go bankrupt, and,
like, that's too bad, too. But somewhere in the middle,
hopefully we're gonna land, and we're gonna be able to,
like, get on a plane and know that it's safe,
and we're gonna, like, open up a project, and it's
TypeScript, and we're not going to be like, oh,
(42:46):
I don't know what to do here. Like, this is good,
I'm happy with where we've settled. And I hope that
AI gets there—and really, what we're talking about is
just large language models and code generation, anyways. I feel
like—I'll get off my soapbox. That was, like—I
haven't said anything for forty minutes, and I was just,
like, really thinking, and, like—I love the entire
conversation, and I just want to say that, like, people
(43:09):
that are listening: the hype is real. I think
the anxiety that that can induce is wild. And
I think that there's—somewhere in my mind, anyways,
there's a future where we're able to use the tools
in a really meaningful way, we recognize that they are
still artificial and they're not natural intelligence, and then we
embrace that system design and that human element that you
(43:30):
guys have talked about. And hopefully we're able to use,
like, LLMs and models in a
way that, like, makes our jobs better, and ultimately—for me, anyways,
this is where I get excited—is, like, making apps
better. Because, at the end of the day—like,
and you hit on this, Jay, like, you talked about,
like, your customers: they don't know how it was built, right?
And it could be the shittiest code in the world,
(43:52):
you could vibe code and ship something, sure. Yeah,
but, like, at the end of the day, like, let's
build things that, like, users and, like, people really love
to use. And I think that we have an opportunity
with the technology to improve the type of apps
that we're building. Like, let's, like, get rid of, like,
shitty complex forms that are just, like, a pain in
the ass to fill out. Like, yeah, that can
(44:14):
be blown away. Navigation and routing can be changed and rethought.
I think a lot of the ways in which users interact
with technology can be changed.
Speaker 2 (44:24):
If I have one request for our listeners:
please don't be that one guy—that one very well-known
guy that we all know and see post online
about TypeScript and how it's the worst thing ever, that
is full-on on JavaScript, that runs a big company.
Please don't be them about AI-slash-LLMs. Cool?
Speaker 4 (44:47):
Does the TypeScript thing—
Speaker 3 (44:48):
Does it play out well?
Speaker 6 (44:48):
I don't know.
Speaker 4 (44:49):
Is it a good—yeah. Yeah, it feels like we
went through this. I think as developers, like, we just
did this. I know it's, like, ten years ago, but
that's me, I guess.
Speaker 2 (44:58):
There's still people that are like, you know, this is
the worst, I'm never touching this, I've removed all of
it from my codebase, from my entire multimillion-dollar company.
Like, that's an insane thing to do.
Speaker 4 (45:08):
And I suspect you could do the same thing with LLMs.
You can say, nobody can use an LLM to generate code.
And I know we don't talk a lot about this,
but this is where my passion is: nobody can use
an LLM in the app itself, I don't want any data
going to the LLM, I don't want—none, zero, you know
Speaker 2 (45:21):
What I mean. And you could have that.
Speaker 4 (45:24):
You could be that person. You can do that if
you want. I think there's pros and cons to it.
And so there's, somewhere, some sort of, like,
intelligent choice of, like: okay, let's be really rational about this,
let's critically think about what resources we want to use
at the time. And that's something that, like—when you
get in a plane like that, the people
that are in the front, like, they're super trained
on that, to know when to, like, override the system,
(45:47):
when to do this, when to take control—when to, like—
they go through processes constantly, to know, like, how
to analyze all the information that's available to them and
then make the right choice, hopefully at the right time,
and make it so you get to your destination
on time. And so we want to do the same
thing as software engineers, so our users can get what
(46:07):
they need at the right time—whether they're trying to
buy something, or trying to solve, you know, whatever, a
crisis, or trying to figure something out, or whatever it is.
Speaker 3 (46:15):
And so, yeah—well, I think you guys
are touching on something, and you made me think about—
so, I've been developing this theory for a while, and
it's one that I've just recently started talking about.
Speaker 7 (46:30):
But I call it the theory of the fief, and it's
really—F-I-E-F, fief. Okay, yeah. And, yeah,
Speaker 3 (46:40):
it comes from—so, I like to play games. I
don't get to play games as much as I used to,
but I enjoy it. Crusader Kings, right, is this game
about owning, you know, everything from a county all the
way up to a duchy, to a kingdom, to an empire.
And, you know, back in the olden times,
(47:01):
when leadership wanted to recognize you as a knight
or whatever, you were given a fief—you were put
in charge of something. Throughout my career, I've watched this
happen over and over again: when you put somebody
in charge of something, they start building walls around that
something, to protect it, right? They start protecting their fief,
(47:21):
and they start making allies to help protect their fief,
and it tends to play out in organizations over and
over again. What we're talking about here is very similar
to that, right? Like, the guy who removes TypeScript right
now is protecting their JavaScript fief, right? And it's just—
it's the way we are as humans. We're very tribal.
(47:44):
We like our tribes, and we want to protect what's ours.
So there's another analogy from history, Brian, that you made
me think of as you were talking about the pilots,
and it's the analogy of the buggy whip industry.
Speaker 4 (47:58):
I am not familiar. You're gonna have to—
Speaker 3 (48:02):
The buggy whip. So, as automobiles were becoming more and
more popular in the United States, the buggy whip makers saw
the writing on the wall. Buggy whips were used
to drive the stagecoaches and everything, and they were
a massive industry in the United States. And they saw
the writing on the wall, and so they were trying
(48:22):
to protect themselves. So they went to Congress and they
lobbied Congress, and they set like ridiculous speed limits on
automobiles that you know, if an automobile went over three
miles per hour, it was illegal.
Speaker 6 (48:33):
And right.
Speaker 3 (48:35):
I don't want to be the buggy whip of my industry.
I want to be prudent in my decisions, but I
never want to be the guy that's like, oh, AI sucks, right,
oh, TypeScript sucks. I don't want to be the person
who's trying to protect my own fief. I may do
it subconsciously, because that's just the way humans are, but
(48:58):
I never want to be that person who is lobbying
Congress to keep stagecoaches around when automobiles are the
obvious advantage, right? And obviously automobiles won out. But if
you go back and look at the history of the
United States, there was a point in time where the
buggy whip makers thought that they could beat automobiles by legislating
them to be worse than they were. And that's one
of the things that frustrates me about LLMs and AIs
of the things that frustrates me about lms and AIS
right now. And I don't want to get too much
into political stuff, but LLMs are in the news right now—
specifically, certain LLMs—because of advice that has been given
to teens. I'm very concerned about the direction that this
(49:43):
could take our country—and not even just our country,
but just AI and LLMs in general, right, because there's
a global audience here for the Angular Plus Show. And
I don't want to get into politics, but one of
the things that I think we need to do—because
it is easy to get burnt out, and it is
(50:03):
easy to not care—but when we see things happening
in our industry, and even in the purview of AI,
that we potentially think could be dangerous, that the decisions
being made could lead to dangerous outcomes, we need to
be careful, right? Certain governments are trying to force
(50:27):
LLMs to log everything. Wherever you come down
on that, you should be participating in that discussion, right?
Neutering, or turning LLMs brain-dead—wherever you come down
on that, you should be participating in those discussions.
And I think that's an important part of this because
(50:48):
what I really, really worry about—I'm a big fan
of, like, the cyberpunk dystopia, right, like, that whole genre
of sci-fi—but I really worry that, as we
build these bigger and bigger organizations—you know, we've got
Google, who just recently purchased nuclear reactors to be able
(51:09):
to run their AI—the divide between the haves and
the have-nots on LLMs and AIs could really segregate
us globally into these communities. Where—you know, one of
the things that I really enjoy,
(51:30):
and I forget who said it: when you add
a fine to a law, you're really just saying that
the poor people can't do it. And I look at it the
same way with, like, these LLMs and stuff like that.
When you say that these LLMs can't do it, what
you're really saying is that, you know, the people
who can't afford to do it can't do it, but
(51:50):
everybody else at the top can still do it, and
so we create this divide. And, I don't know—philosophically
and morally, I think it's to our benefit to try
and help AI make the world a better place, and
not push AI into a have-and-have-not type situation.
Speaker 6 (52:14):
I think what I'm pulling out of this overall, from
Brian's comments, Jay, Jason, is kind of two things.
Like, I don't know any time in history when that
black-and-white thinking has ever served anyone, really, right?
Like, "do not use this" and "use this only"?
Speaker 2 (52:34):
Like that's never served anyone ever.
Speaker 6 (52:37):
So to me, what I'm really getting out of this
is like there really has to be that balance, and
you know, like obviously being thoughtful, being purposeful about what
we're doing, why we're doing it, who we're doing it for.
Speaker 3 (52:53):
Are we doing it for our own, like, ego, to make
Speaker 6 (52:57):
us look good? Or are we doing it truly for
the customer, for the good of the company, for the
good of our coworkers? So I think, you know, just
slowing down, being thoughtful about what you're doing—but it
really just comes down to that balance, like with anything.
So I don't know, I just say, like don't push it,
don't force it down anyone's throats. But exactly what Jay
(53:20):
was saying, it's got to be that encouragement—encouragement through
example, through "this is how I'm using it; maybe you
could use it that way too; do you have any
ideas yourself?" Always turning that conversation back, and letting it
be a two-way discussion rather than "this is it."
Speaker 2 (53:39):
I don't know.
Speaker 3 (53:39):
That's kind of what I'm getting out of this.
Speaker 4 (53:41):
Yeah. And I really like Jason's comment. Like, I mean,
there's something healthy about engineering culture where you can challenge
assumptions and have conversations and maybe get a little heated—
hopefully in a safe environment, though—where people can express
their opinions and be heard, and also can say, you know,
just because—maybe I'm in the minority,
and I was able to speak what I believe to
(54:04):
be the best thing forward, the best path forward, although
it wasn't chosen. But I accept that, and I'm
good to move on with the team, and, you know,
they heard me and they understand me, and I hopefully
still had an influence in the outcome. Yeah.
And so I think that's what we're
talking about: there's a healthy engineering
culture where—we are definitely somewhere on the
(54:27):
hype cycle of AI, and recognizing that—I mean, oh yeah,
I'm not sure, we might be over—like, I don't know,
GPT-5, maybe. Mike and I were talking about this
the other day. Like, it's possible GPT-5 put us,
like, right on the other side of, like, the hype cycle,
at the top of the peak of the hype cycle curve.
I don't know, we'll see. Or the stock market will
crash, and then it'll be very clear that we're heading
down into the trough of disillusionment. But, yeah, I just—
(54:49):
uh, yeah—I guess the biggest thing is,
like, there's no way that any one of us—I
don't think, although some people online maybe pretend to do this—but,
like, there's no way you could keep up with
all this stuff. I don't know, like, maybe
I'm just giving myself an out, but, like, I have a
three-year-old. I love spending time with her. Like,
I get to play and do, like, silly things, and,
like, bubbles, and whatever, right? And, like, I'm just not—
(55:12):
I'm not reading AI news all the time, and I
don't want to, and I can't, because that's just overwhelming.
And so I think there's—yeah, there's just, like—maybe
just, like, let yourself off the hook. You don't have
to know about every AI tool on the planet. So
your buddy's like, oh, you got to do Claude all day, baby—
maybe not. Maybe you don't have to, you know what
I mean? Maybe you really like whatever you're using, Co-
(55:34):
pilot or maybe whatever it is that you kind of
bring into your—like, be open, listen to other people,
and, like, especially people on your team that are like, hey,
this is what I did. Like, I love the idea,
Jay, that you said. Like, you know, you did—you
just did a big project. Hopefully you saved a ton
of time, and I bet you did. You built this
really cool thing. So, you know, take it inside,
share it, you know what I mean? Like, have fun
with it. You know, hang out with friends and show
(55:56):
them what you're doing. And that's super cool, that's super
fun to do.
Speaker 3 (56:00):
And so we just got to like let it go.
Speaker 4 (56:03):
Like, we're in a hype cycle. There's no way
that we can keep up with all the stuff that's happening,
you know. Unless, yeah, if you're an investor—yeah,
like, you're a VC—that's like, go for it, baby,
read all day, and, like, that's your job, you know
what I mean. But I do have to—like, like
Brooke said, like, you know, the sprint's gonna end,
I gotta get this PR in, you know what
(56:24):
I mean. And so it's like, how do I use
a tool—like, whatever it is, Codex or Claude—to
maybe get me part of the way, and then I
have to review the code, and that's not quite right,
or tweak this thing, or get it to the, you know,
high quality that I want, before I feel safe and
feel comfortable putting that PR up, and hopefully putting
it in front of some eyes, some
human reviewers—which we haven't talked about, interestingly enough, on this.
(56:46):
So I think there's also a whole discussion around who's
reviewing your code these days. I've noticed that I myself
have been like, oh—like, Copilot reviewed it and it's
good, click the button, squash and merge, baby, it's green.
And then you're like, how did this get in the code-
Speaker 2 (57:04):
Base?
Speaker 4 (57:05):
Yep. Yeah, you should be rethinking the automation of
PR reviews, because I think there's benefits—outside of just,
like, analyzing the code, which is one thing—also just,
like, knowing what your teammates are doing and what's being changed.
It's also really good.
Speaker 3 (57:20):
Yeah—how it fits into the overall organization, right? Yeah, right.
And the code—the code may be correct, but
it may not fit.
Speaker 6 (57:29):
And that's why there's benefit to things like—at our company,
we're doing weekly, like, AI lunch-and-learn type things,
where anybody can volunteer; they can come in and show
how they've been using a certain AI tool.
Speaker 2 (57:47):
Yeah, and then there's like discussion around that.
Speaker 6 (57:49):
But what that also does—because we work in
stream-aligned teams, where different teams kind of own different
parts of the app—is it's creating a way for us
to make sure that we're always kind of bridging those
potential gaps, of, like, they're over there doing this thing
and we're over here doing this thing. But when you
have those regular, like, lunch-and-learns or tech talks,
(58:12):
whatever you want to call them, I just think it
helps to make everybody more aware of what the other
teams are doing, so that there's more shared codebase,
you know, between, or amongst, all the teams.
Speaker 3 (58:26):
Yeah, I agree with that. I think what you're touching
on is—as an introvert, one of the more difficult
things for me is communication. That's the human piece that I
think we've been talking about quite a bit: communicating,
what is our strategy, how do we consistently do this,
what should we be doing. Totally agree with what you said, Brooke.
(58:48):
Good point.
Speaker 2 (58:50):
On that note—I feel like we covered so much stuff in this episode,
so hopefully our thoughts were coherent and there is a
relative thread through everything we were talking about. I don't know,
I'm sure we'll hear about it in the comments, or
on Bluesky, or something like that. Thank you, Jason,
for showing up and talking about AI burnout, the coding philosophy,
(59:16):
everything, all that kind of stuff. It was a
really interesting topic, and I hope our listeners enjoy this
one, for sure. Thank you, Brian; thank you, Brooke, for
co-hosting with me today. Anybody have any final thoughts?
Speaker 4 (59:30):
Well, I would just say—I guess, maybe, just
back to my, like—it's okay. Like, it's okay if,
like, you show up to whatever, to a meetup,
and somebody's like, what, how are you not doing this?
Why are you not using, beep, whatever, XYZ
tool? And you're like, I don't know—like, oh my gosh,
how did I miss this? Like, I feel like I'm,
(59:51):
like, whatever, falling behind—or, I don't know, something like that.
That's just garbage. It's kind of a lie. Don't
believe it, and stay focused on building really great tools.
And I think that there's a lot of opportunity ahead
of us. So don't get too swallowed up by the
hype curve or whatever. Yeah—so well said.
Speaker 2 (01:00:11):
Well, if you want to get in contact with Jason,
his socials will be in the episode description—reach out
to him. Check out his stream, if you haven't checked
out the stream before. He's always talking about stuff.
And I like what you said at the start, Jason,
where you're like, you doubled down on your weaknesses, or
however you phrased it. I think your streams are great
for that, where you're like, I don't know how to
(01:00:32):
do this thing, we're going to figure it out today,
kind of thing. So if you want to learn, check
out Jason's stream. But anyway, thank you, everyone. We'll see
you next time.
Speaker 8 (01:00:42):
Hey, this is Preston. I'm one of the
ng champions writers. In our daily battle to crush
out code, we run into problems, and sometimes those problems
aren't easily solved. ng-conf broadcasts articles and tutorials from
ng champions like myself that help make other developers' lives
just a little bit easier. To access these articles, visit
medium.com/ngconf.
Speaker 1 (01:01:04):
Thank you for listening to the Angular Plus Show, an
ng-conf podcast. We would like to thank our sponsors, the
ng-conf organizers Joe Eames and Aaron Frost, our producer Gene Bourne,
and our podcast editor and engineer Patrick Kayes. You can
find him at spoonfulofmedia.com.