Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:07):
Ah, Warren, you're looking kind of sharp there in that jacket.
You look kind of like a NASCAR driver with all
those corporate logos.
Speaker 2 (00:15):
What's going on?
Speaker 3 (00:15):
Man, it's funny you should mention that, because I have something interesting for today's episode. One of the recurring themes on our podcast, I feel like, is incident management. It's something lots of people want to talk about, and quite a few guests come on to discuss their stressful, traumatic experiences with on-call and whatnot. One of those guests has stepped up and wanted to be
(00:36):
today's sponsor for the episode, and that's PagerDuty. I've actually been a fan of PagerDuty in the past when I've reached for an incident management tool. And what's nice is, compared to the competitors that we've heard from, it's clear that they're actually listening to feedback, unlike, you know, other enterprise companies. Utilizing your internal messaging platform like Slack to interact with incidents, especially for us,
(00:57):
feels like a baseline requirement for communication and collaboration, and they've actually opened up their Slack integration to everyone, not just the customers who have shelled out money for their enterprise plan. So it's really nice to see, compared to competitors where it feels like you need to pay an enterprise tax even if you aren't an enterprise company. I particularly like their automatic channel creation when
(01:17):
there's an incident. You know, if you have lots of incidents, like one every single day, that gets pretty tedious. So, thank you, PagerDuty, for sponsoring this episode.
Speaker 1 (01:25):
Yeah, that's super cool. Definitely thank you to PagerDuty. And I'm a big PagerDuty fan. It's one of those tools where I think there's not a lot in this category. I could be wrong, but when you're trying to do a certain thing, you know, incident response, PagerDuty is just the name that comes to mind.
Speaker 3 (01:42):
It's a case of, you know, people thinking it's really easy to do, and then they go out and do it themselves. And I've been at a bunch of companies where this has been a pattern, trying to do it yourself. And the best part is when you hook up your incident management or on-call reporting, you know, monitoring solution to your production systems and you're running on the same infrastructure, and then you have a
(02:04):
production incident. Well, you know what else is down at that exact same moment? So I can highly recommend not building this yourself. That's my own sort of traumatic experience.
Speaker 1 (02:14):
I think that's a rite of passage, building your own monitoring system. And then you do it and you learn that lesson that you just brought up and go, wow, that was stupid, and then you go to PagerDuty's website and move on with your life.
Speaker 3 (02:27):
So I've shilled enough for the episode. Maybe we'll talk about something interesting now, right?
Speaker 1 (02:34):
Well, yeah, I think today's guest might have some interesting topics.
Speaker 2 (02:39):
Omer, how are you, bud?
Speaker 4 (02:41):
Yes, good! How are you? Thank you for having me here.
Speaker 2 (02:44):
Dude, welcome back.
Speaker 1 (02:44):
I can't believe it. We determined it was three years ago, right, that you were on last time?
Speaker 4 (02:49):
Yeah? Yeah, that's just nuts.
Speaker 1 (02:51):
Yeah. Like, I don't even want to have this conversation with myself about what I've done in those last three years that I also don't remember.
Speaker 3 (02:58):
So the listeners are missing out on that pre-recorded conversation we were having about, you know, how long has it been, actually? Because, Will, you were telling us the last three years may or may not have happened for you.
Speaker 1 (03:12):
Yeah. And I did mention before we started recording that I bucket things into events that happened before nineteen ninety and things that happened between nineteen ninety and yesterday, and I really can't get any more granular than that. So Omer was just here yesterday, I know exactly.
Speaker 2 (03:32):
And you know what's weird.
Speaker 1 (03:34):
I noticed that you and I are wearing the same T-shirt today, and I've got to go back and look at the old recording, because I think we were in the same T-shirts last time you were on as well.
Speaker 4 (03:44):
Here's a fun fact. I only have one kind of T-shirt. I have like thirty of these in different colors. It's the same T-shirt. So that's very possible.
Speaker 1 (03:54):
Yeah, my wardrobe is very much like that as well. My wife bought me one of these T-shirts once and I was like, oh, that's super cool.
Speaker 2 (04:03):
And now that's all I.
Speaker 3 (04:06):
Have. Just that one T-shirt?
Speaker 4 (04:09):
Just the one copy.
Speaker 2 (04:13):
Yeah.
Speaker 1 (04:13):
I mean, Omer's over there flexing, saying that he has them in different colors, multiple shirts. I have just this one.
Speaker 3 (04:21):
Every day is laundry day for you.
Speaker 2 (04:25):
Yeah, sure, yeah, yeah, we'll go with that.
Speaker 1 (04:30):
So we were going to talk about Kubernetes and LLMs, right? Yeah, give us a rundown on that.
Speaker 4 (04:37):
Over the past years things have happened for me as well. I'm now an architect at Zesty, which means that I lead our Kubernetes products. So we're building stuff to help you optimize Kubernetes clusters. And recently it seems like, well, everybody has noticed the one hype that drags the whole
(04:59):
world in its direction. It's all around AI now, and people are starting to focus on Kubernetes and AI at the same time, which is something I never expected. I actually thought the world, specifically for DevOps, was always going towards serverless, towards not worrying about infrastructure, not managing your own stuff, not caring about resources, which in some ways did happen. And
(05:22):
you did see different platforms, both serverless from AWS, Azure, GCP, but also platforms like Vercel, Heroku, Fly.io, things like that, that helped you just deploy your app and move on with your life. But then, I don't know what changed the tide, but it seems like companies are pushing towards Kubernetes. At the last KubeCon,
(05:46):
I think it was in London, they said something between seventy and seventy-five percent of corporates in the world are either already using Kubernetes or migrating to it. That was mind-blowing to me. I never saw that coming. And over time we've started seeing companies naturally go to AI,
(06:07):
either using AI throughout their products, or trying to train LLMs, or actually adopting them after they've been trained but trying to deploy them on their own. These are the kinds of things we can focus on. We can also speak about the unrecorded part that has to do with the AI hype and how people are vibe coding their way to production.
Speaker 3 (06:28):
I think we're definitely going to get there. But you mentioned a lot of interesting things, because I was on the same path as you. Serverless for me started around twenty fourteen, even before Kubernetes was a thing, and those ideas were coming into a lot of companies. And I feel like there's this question of why the better technology didn't become more popular,
(06:49):
the more extreme one: push things aside, focus only on the business value. And what you just said really got me thinking. And I'm sure I'm going to get some angry emails for this. Why did a worse solution become more popular? Why did it get better adopted? And I think it's because there's this natural tendency for humanity to take small steps
(07:13):
rather than giant leaps. And there's actually a core concept for this in mathematics: you've found a local optimum and you're just making small little jumps, and in order to find a larger maximum, you have to make a giant leap. In Japanese it's called kaikaku rather than kaizen, if you're familiar with lean manufacturing terms. And I feel like it's really uncomfortable for people to
(07:34):
throw away everything they have and make a huge leap, and serverless is a huge leap, whereas with Kubernetes, people can just keep doing what they're doing today and delude themselves into thinking they're making a real change.
Speaker 4 (07:47):
Do you feel like it's a step back?
Speaker 3 (07:49):
I don't think it's better than what the Open Container Initiative, Docker containers, could have been. But given the companies that were backing that, which was pretty much just Docker, I think that really answers the question of why it didn't get further.
Speaker 4 (08:03):
My opinion on this is that the one great thing that Kubernetes did is growing an insane community around it, which goes from open source projects under the CNCF to companies either building on top of those open source projects or just pushing themselves into the CNCF, which is incredible. Because from a very raw
(08:26):
product, where you had to do so much just to get to production, you can now, well, you deploy Kubernetes. You could do it on your own, but you probably won't; you use a managed service. And then things like Helm and Kustomize and other things around it can just help you deploy things. And then companies started building operators on top of that, so you can get, you know, Elastic, logging, monitoring, databases, cache instances, you can
(08:50):
put in whatever you want. With an operator, you just helm install something and then you have everything you need. It's not there from the start, but it kind of gives you control over everything. And then people said, okay, but now you have to manage infrastructure, you have to manage nodes, and what's going to happen with autoscaling and things like that. But then you have projects like Karpenter, which AWS
(09:13):
is pushing, but Azure is jumping on that wagon too. So you can kind of have the best of both worlds, right? You keep your control, you don't pay as much, which is debatable, but we can talk about that. It's Kubernetes. It's something that deploys containers. That's it.
Speaker 1 (09:28):
I think there's a nerd aspect to it as well, because it's something that's just fun to totally nerd out on. It's so configurable and so flexible. I know quite a few people who are part of the Kubernetes at Home project, or the Kubernetes Home Lab project, and the amount of money and level of work these people have put
(09:48):
into their home Kubernetes lab, for doing who knows what, you know, telling the refrigerator when it's time to defrost or whatever, and they just get completely passionate about it. And I think that's a big allure of Kubernetes, because in our industry we're people who like to
(10:10):
just nerd out on stuff and tinker with stuff, and with serverless, you don't get that option. You can deploy your container and it works, and if it doesn't work, you can deploy your container again and then it'll work.
Speaker 4 (10:24):
Yeah.
Speaker 3 (10:25):
I mean, I think the whole home lab thing is interesting. You get experience doing something that you like more than whatever your company was doing, because your company was doing something horrific, and you can go down that route. I mean, I do see people who are even running stuff at home who would prefer serverless, but they feel like
(10:45):
it's too much of a burden to convince their organization to make the switch. Fundamentally, it does feel like it has to be a switch, you know, and I think that cognitive burden or political burden is just too much for people to deal with in a lot of ways. What I'm actually interested in is something you said earlier, and that's that you're surprised there's a marrying between AI and Kubernetes. Because,
(11:10):
as Will pointed out, running locally in a home lab, what are you going to run? You're not going to run Docker Swarm, and you're definitely not going to run Nomad after everything that HashiCorp has done. So what are you left with? You're left with, you know, OpenStack or CoreOS; you're talking about spinning up operating systems everywhere. I do feel like Kubernetes is an answer there. But for AI, it also is one
(11:32):
of the few things that I have actually recommended, especially if you need to spin up lots of models or configure the parameters for running those models, for doing inference that's different per user or per customer. Having independent models scales well with different namespaces in Kubernetes, whereas the other container orchestrators don't really have this concept. You just spin up the same container over and over again with
(11:55):
the same parameters, and it's not really fundamentally controllable. So I feel like the model is a little bit different there.
Speaker 4 (12:01):
The beautiful thing about Kubernetes is that it's very extendable. You can basically do whatever you want, right? It's a set of APIs that you get with the base binary, but if you want to build on top of that, we mentioned operators earlier, we're building operators at Zesty. You just make up your own APIs, you deploy them to the cluster, they work. So you can kind of change
(12:22):
whatever you want. You mentioned namespaces; if you wanted, you could build your own notion of namespaces that would fit whatever you're trying to do, and it would just work. You would have to install it alongside other applications, as a service provider or as a standalone project, but anything is possible. This is something I really appreciate about Kubernetes. First of all, I totally agree with Will that it's totally something to
(12:44):
nerd out about, just seeing things moving around and having your containers shift around different nodes. It's cool, it's fun to play with. Here's a hot take: it created job security, in a sense, so people are kind of motivated. You mentioned home labs, right, both of you. Who in their right mind would deploy Kubernetes in
(13:05):
their home lab? It's just so many layers of complexity. You could solve it with fifteen other open source projects, but you go to Kubernetes because you already know the beast and you feel like you've tamed it. So now you're going to deploy it everywhere, including your own home, and that's how you want to see things progress. People would literally only answer recruiting
(13:27):
emails that have Kubernetes in the job description, right? So that's part of it, and I think it created this ecosystem: companies are building for Kubernetes, people only want to work with Kubernetes. They've built a FOMO around it. Everybody wants to be there. It somehow keeps being the next cool technology thing that always builds up and progresses, partially for good reason.
(13:50):
You asked what's good to use in your home lab. Well, you can use Kubernetes. There's K3s, which is the smaller sibling, the lighter-weight Kubernetes. There's also, we mentioned Fly and Vercel, all these commercial companies; there are open source alternatives to all of these. Coolify is one that comes to mind. It's an open source Vercel
(14:14):
slash Heroku slash Fly that you can deploy in your home lab, and it integrates with anything. So if you wanted LLMs, for example, you can; it has a list, and you can make up your own plugins, it's just containers. You can choose an LLM, or speak to OpenAI from your application, or put your own LLM there to run next to your application. And it works.
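[To make "make up your own APIs" concrete, here is a minimal sketch of a CustomResourceDefinition, the mechanism operators build on. The group and kind are invented for illustration; nothing here is specific to Zesty's operators.]

```yaml
# Hypothetical CRD: registers a new API type with the cluster. Once
# applied, `kubectl get inferenceservers` works like any built-in resource,
# and an operator can watch these objects and reconcile them.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: inferenceservers.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: inferenceservers
    singular: inferenceserver
    kind: InferenceServer
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              model:
                type: string      # which model this server should run
              replicas:
                type: integer     # desired number of serving pods
```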
Speaker 1 (14:35):
So I'm curious about the LLM aspect of it. You know, using LLMs with Kubernetes, what do you actually gain from that?
Speaker 4 (14:47):
Right. So, I think there are a lot of use cases. The two main ones are, either I'm an AI company, and, you know, who is not an AI company these days?
Speaker 2 (14:56):
So, great: anyone who wants funding.
Speaker 4 (14:58):
Every company's name has changed over the past twelve months to something with AI in it. First of all, there are these companies that either own the LLM or have to train it, right? So they need a fleet of large and powerful machines to train their LLMs. They
(15:19):
need big disks, yada yada. And then the other part of it is that these services are starting to cost a ton of money, especially if that's your core business. Your cloud bill is now the second problem in the organization, not the first. So some companies find that running your own LLM, or,
(15:42):
you know, there's a ton of open source LLMs, you can go to Hugging Face and get whatever you want, maybe lighter ones, smaller LLMs that can work even quicker, maybe tailor-made to whatever you're doing, and then you can run it alongside your application, which means reduced latency and not as high costs. But then this brings back the complexity
(16:02):
story of infrastructure, because you have to right-size things. Speaking about Kubernetes: Kubernetes lets you dictate how much memory and CPU you're going to use. But once that's set, and we'll talk about the latest upgrade of Kubernetes, but once that's set, it's set. That's it. It's there. Over time, you probably want to change these requests and
(16:23):
limits to fit whatever the application is doing. Either it's consuming too much, or it requested too much and it's not actually utilizing everything, or it needs more and now there's no more memory to serve it, so you want to change these things. There are a few ways to do that, but it's really, really hard to do automatically. That's one aspect of things. The other is that these LLMs are, well, just that: large
(16:47):
language models. They consume a lot of disk, and with disk space it's exactly the same. You can create a PVC on Kubernetes, but once it's there, it's really hard to change. You can extend it sometimes, if you're working with the right cloud provider, the right CSI driver. And I'm not sure if you're aware, Kubernetes
(17:07):
one point thirty-three was just released a week or two ago, and you can now go to your requests on a specific pod and change them, and you don't have to restart the pod. They can just change, and it will inflate inside the node to whatever it needs to consume, which is a really, really cool change, and that helps us a lot. You don't have to restart pods anymore if you want to scale up the resources that you consume. So that's a really big
(17:30):
release, and it's only been out for a week or two.
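[Two short sketches of the mechanics described above, with invented names. The first shows a container opting in to in-place CPU resize, the feature Omer highlights from Kubernetes 1.33, along with the kubectl resize subresource used to change a live pod; the second shows the StorageClass flag that makes PVC expansion possible at all, assuming the CSI driver supports it.]

```yaml
# Sketch 1: a pod whose CPU can be resized without a restart.
apiVersion: v1
kind: Pod
metadata:
  name: llm-worker                 # illustrative name
spec:
  containers:
  - name: server
    image: example.com/llm-server:latest   # placeholder image
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired   # apply CPU changes in place
    resources:
      requests:
        cpu: "1"
        memory: 4Gi
# Later, bump CPU on the running pod via the resize subresource:
#   kubectl patch pod llm-worker --subresource resize --patch \
#     '{"spec":{"containers":[{"name":"server","resources":{"requests":{"cpu":"2"}}}]}}'
---
# Sketch 2: PVC expansion only works when the StorageClass allows it
# (and the CSI driver supports it), which is the caveat mentioned above.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable-ssd             # illustrative name
provisioner: ebs.csi.aws.com       # example CSI driver
allowVolumeExpansion: true
parameters:
  type: gp3
```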
Speaker 3 (17:33):
I mean, I hear things like that and I think, wait, why wasn't it doing that all along? That seems like a required, fundamental piece of the infrastructure. And you mentioned earlier that the APIs have been figured out, and you build a lot of things around the APIs to make adjustments. So my question is: do you actually like the APIs that Kubernetes provides? Because if you're building operators all the time to sort of adjust and change the abstraction layer
(17:55):
you interact with, your container scheduler or your orchestrator, then I feel like you're getting closer to exactly the promise that serverless has been offering all along.
Speaker 4 (18:07):
That's a philosophical question right there. Oh, definitely, that's what you're trying to do, right? You're trying to abstract things by building them yourself, because you don't want to mess with infra, but you want to control it, which is kind of the conflict we all have, because a lot of times we need the control. Right? We
(18:28):
as a company have to have access to the nodes, so we can't work with serverless. We literally have to make changes on Linux. By the way, that recent change, why can't you change a running container, why can't you change its resources, had to be developed in order for us to get that functionality. So we're always trying to make changes so that it's easier to
(18:50):
deploy things, production is more stable, our lives are easier. But we still want the overall control, and there's still the ability to make changes, new APIs and new operators and, you know, changing this beast however we want to sell new products.
Speaker 3 (19:06):
I'm gonna keep going there. Like, how many companies actually need this level of control?
Speaker 4 (19:10):
It depends what you call this level. I mean, not everyone needs access to the nodes, but some do, in a way. You mentioned the scheduler earlier. By the way, big thing now in Kubernetes, I don't know if you know: you can't really access the scheduler, which is just another open source component within Kubernetes. You cannot use
(19:31):
it in most cloud providers as it was intended. Meaning, when you're using Kubernetes through a cloud provider, again AWS, Azure, GCP, or any other flavor, the scheduler is part of the control plane; you cannot access it. You can use it: if you instantiate a new pod, the scheduler will take that pod and schedule it wherever there's availability. But if you want to extend
(19:54):
the scheduler, which is something you can do in Kubernetes, for example, you can plug in your own extender and create custom scheduling logic, you can't do it. So that's another place where Kubernetes was built to serve everyone, so that they can do anything, but then the cloud providers take bits of it and say, okay, no, no, no, that's serverless
(20:15):
now, you're not touching that, we take care of that. Which is okay, until it's not. I don't know. It's the ever-going conflict of who manages what and who has access to it. By the way, on AWS you can run, you know, Fargate; probably other cloud providers have the same. But with Fargate you can run Kubernetes without nodes. I think to some extent it works. But
(20:38):
what happens when you do need the access? I don't know. I think most companies actually don't use it.
Speaker 3 (20:44):
Oh, I mean, so Fargate is sort of a special part of AWS that gives you the serverless aspects of container management without having to go deep into understanding the complexities of node management or scheduling. And, I'm going to say it was recent, but it may have been three years ago already, that you couldn't
(21:05):
actually run Fargate effectively with EKS on AWS, due to some of the limitations that came along with it. Like, you could do it, but then for whatever reason you wouldn't be able to get internet access, or IAM wasn't working correctly, like, you know, permissions didn't really work out of the box for accessing other services. And so, you know, people obviously were trying to do that, but it was sort of a joke from that standpoint:
(21:26):
if you want Kubernetes, you're only going to get the hard mode version of it. I think now some of those, or most of them, have been fixed, so there are, you know, very few excuses left not to use it. Although the canonical reason not to use Fargate or serverless has been access to, say, GPUs. If you are building models or doing any sort
(21:47):
of video rendering, et cetera, et cetera, you aren't going to be able to use Fargate. I want to say, it's been a while since I looked at it, but it used to be the case that you need to actually get virtual machines that have access to GPUs, or GPU-optimized machines, in order to run your cluster.
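[To make the GPU caveat concrete: on regular GPU-backed nodes, a workload asks for a GPU through the device-plugin resource below; Fargate has no equivalent, which is why model-building and rendering workloads still need real GPU instances. The pod name and image are illustrative.]

```yaml
# Hypothetical GPU workload: requests one NVIDIA GPU via the device plugin.
# It can only schedule onto nodes that expose nvidia.com/gpu, i.e. GPU VMs.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-inference              # illustrative name
spec:
  containers:
  - name: model
    image: example.com/llm-server:latest   # placeholder image
    resources:
      limits:
        nvidia.com/gpu: 1          # whole GPUs only; no fractions by default
```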
Speaker 4 (22:04):
Yeah, that's a great point. But if you don't have that requirement, I would urge you to test Fargate with what it was built for originally: ECS, which is the AWS alternative to Kubernetes. It works great, it's really a great orchestrator if you don't have anything complex, if you don't need operators, things like that. You are going to be married to AWS, naturally, but you get everything out
(22:27):
of the box, just as you would expect with any other platform. Right? You get autoscaling and routing and the firewalls, and naturally CloudWatch, which it connects to, for every monitoring need you have. Again, you're going to have to pay something for AWS to run all of that, but usually it's going to be cheaper than running Kubernetes. It's something I really like doing. Running ECS with
(22:50):
Fargate is great.
Speaker 3 (22:52):
Yeah, no, it's one of the best things ever. I'm surprised that more companies don't find opportunities to utilize it. I usually start out the conversation with that: you need to prove why you can't use that as a solution before you decide to just hop over to EKS, or to running Kubernetes on top of your own EC2.
Speaker 4 (23:12):
And we talked about the community around Kubernetes, right? There was an ever-going rumor, I think there still is, that AWS is going to ditch ECS in favor of EKS; it's been going for like a decade, really. And every time you ask them, they would officially, unofficially, tell you: no, we're building it, it's core business. I think Netflix was running a large part of their stuff on it at one point;
(23:35):
it's not going anywhere. But there still is a rumor. Somebody manages to keep that rumor going and, you know, convince people to ditch whatever they're using and go over to Kubernetes.
Speaker 1 (23:48):
It was probably started by the Amazon EKS team.
Speaker 4 (23:54):
Kudos to them.
Speaker 2 (23:55):
Right. Cool.
Speaker 1 (23:58):
So back to your point earlier, Warren, about who needs this level of control, I think it's like one of those right time,
Speaker 2 (24:07):
Right place things.
Speaker 1 (24:09):
You know, you hit a certain level of scale where, if you're going to manage your costs, you do need that level of granularity, or at least visibility into it. And with a lot of the serverless-type providers, you know, you just get this huge bill, because it provisioned whatever
(24:29):
you told it to, and then your finance team is like, hey dude, you need to figure something out here. And that's when you start wanting to get more control over it. But I think it's not something that most people initially need. It's something that you find you need after you've got everything else up and running, you know?
Speaker 3 (24:49):
I mean, it's interesting you bring that up, because I always like to be the model contrarian here. We don't use Kubernetes, and not only do we try to embrace serverless wherever possible, we actually try to use edge workers. So in CloudFront that's Lambda@Edge or CloudFront Functions, and in Cloudflare it's Workers. I don't actually
(25:10):
know if Azure and GCP have something; the fact that I've never heard of it encourages me to say, yeah, they don't have something, but I'm not going to be caught recorded saying that, you know, on the record for sure. Which is really interesting. And honestly, even with that, even if you multiply our compute costs two or three times over,
(25:31):
it's still nowhere near the top of the biggest cost concerns in our organization, or even in the cloud.
Speaker 4 (25:36):
I have a question for you. Maybe that will be a segue to the other point we talked about earlier, with vibe coding and how everything is going around AI. It feels, at least from doom-scrolling LinkedIn, that everyone's building something, right? Everyone can now spend a weekend and build an MVP, or even a product, and push that to production. And I wonder whether they're
(26:01):
starting something, starting a narrative, where people on the internet can just build their own products, mainly with AI. By vibe coding, or not vibe coding, whatever, you can pretty quickly get to something working and then deploy that to one of the platforms we mentioned earlier, which are mostly completely serverless, right? You pay a monthly subscription based on how much you use, and you don't have to worry about anything. So do you think that will change the tide a little bit,
(26:24):
in how many organizations are using Kubernetes? Most of the people we know, I think, are not actually running businesses on their own. They're part of a team in a company, a large company, that uses one of the cloud platforms and Kubernetes. Do you think that will shift something? Because so many people that are building products and trying to build their own businesses
(26:44):
are using so much serverless, because they don't code, and they don't know how to manage infra; or at least that's not their core business and that's not what they want to deal with.
Speaker 3 (26:54):
I mean, I think what I heard there is: technically, when we use vibe coding, the result is actually a serverless solution. So everyone who uses LLMs to vibe code, or to generate solutions using AI in any way, is actually saying Kubernetes is wrong and serverless is the right answer. Yeah, I mean, sounds good. I like
(27:15):
that argument.
Speaker 4 (27:17):
I mean, you're left with that option only, right, because it generates code. I think most people who use it don't actually tell it: here's the architecture we're going to use, this is how you're going to separate things. It doesn't work like that. They describe the business logic, or how they want things to look and work, and that's all they care about.
(27:38):
They don't care about how it's built, or whether it's the most efficient solution ever. They just care about something that does the business logic they care about, and that's accessible to other people on the internet, right? Which basically means serverless.
Speaker 3 (27:51):
Like you said. So, I think from my research and what I've read through, like the DORA reports, and we actually interviewed a whole bunch of product managers, what we did find is that quality goes down, but the throughput on delivering solutions goes up. Speed of development, in a way, when you're using vibe coding or LLMs in any form.
(28:12):
So I think the question is, you know, are you willing to make the trade-off of less quality for delivering a solution faster? And the ones that were most effective in this mode were the ones that could basically give an LLM a spec of their solution, so not just the architecture, but literally how it's supposed to work, and those interactions follow from what product managers do in
(28:34):
some way. Now, if product managers today are giving specs to their development teams on what they should be building, I mean, they're probably not doing a very good job, because I don't know any humans that like taking, you know, hard-coded specifications, unless you're a consulting or contracting company doing software development, doing value-based work, getting paid by the spec. But most teams are not.
(28:56):
So you transition those abilities to working with LLMs, and they're very effective at churning stuff out. The same goes for deep, challenging hard tech: if you're an engineer and you're working with some specification released by a standards body, converting that into something actually working using an LLM is way more effective, because it is very much consume this
(29:16):
data and transform it. Transformations are very effective. So I have a little bit of my own hot take here, which is that my theory is that the more engineers nerd out about a topic, the less value it offers the organization.
Speaker 4 (29:33):
I tend to agree.
Speaker 2 (29:35):
Yeah, that one's going to be hard to argue against.
Speaker 4 (29:39):
I wanted to ask you something. You started by saying, based on what you measured, that quality goes down while throughput goes up. And what I hear between the lines, and correct me if I'm wrong here, is that we've moved the problem to our future selves. Right? Because it's out there. There you
(30:00):
go, it's in production, you can see it and use it. The functionality is there. However, the moment things need to scale, or, you know, stability: if quality goes down, it's not as stable, there are more bugs to fix, and these things tend to grow exponentially. So don't you feel that's just pushing the problem either elsewhere or into the future?
Speaker 3 (30:19):
Yeah, and in a critical way. I think this is one of the paths that will cause that. Like, the downfall of humanity doesn't come from, you know, robotic AI terminators that are attacking us. It comes from very subtle things that we've already accepted. I was reading some paper, and I don't remember what it was, I may have a link later, but it said:
(30:39):
we often compare human capabilities, like how well we do versus how well LLMs can do, when what we should be comparing is our weaknesses versus their strengths. It's a very worrisome perspective: what problems are we causing for ourselves that LLMs are, you know, falling
(31:00):
into and are going to cause us problems in the future? So yeah, for sure, it's a huge issue, in a way. However, I think this goes into the perspective of: what do you actually need in your company? If you can sacrifice quality in some way and deliver your product because your end users don't care about it, then yeah, for sure, increasing throughput on delivery, increasing your delivery rate, is a thing that you should do. But
(31:21):
if you care about performance and reliability and architecture, you know, something that my company cares about, and I think a lot of companies secretly care about this, if you look longer term, you can't be using LLMs in this way and be long-term effective. It's going to be a critical problem for your company, sooner rather than later.
Speaker 4 (31:40):
I think it goes even beyond that. What I've seen in one of my projects, there are two or three more developers, but it's basically just me, and I've been using these tools a lot; it's Cursor now, but I've used a bunch of them. Sometimes it would build the feature, and the feature works, and then when I code review it, it's removed a bunch of other lines totally irrelevant to what
(32:01):
it was trying to do, and I could not figure out why or how. And this made me think: if I deploy this to production, my throughput goes up, right? If it's a Jira ticket, that Jira ticket is now done. I've finished my task, I can move on. But if I don't have automated QA or the right CI pipeline, nobody knows about this thing until someone needs
(32:23):
this feature a month from now. Which again begs the question: throughput going up does not mean that everything is done correctly. And if people don't actually code review what's going on, is it real, in a way? And that made me think that maybe developers are kind of moving away from being the ones that write most of the code to being the ones that have to review most
(32:44):
of the code.
Speaker 3 (32:45):
So we know that that's going to be a failure too, though, because in order to effectively review stuff, you need to have the whole context of what's going on. And as a human, I already forget things, you know; I'm sure everyone forgets things in a solution that has millions and millions of lines of code, and especially code that you wrote yourself. There are tons of jokes out there like,
(33:06):
who is the idiot that programmed this?
Speaker 4 (33:08):
Oh? Oh that was me?
Speaker 3 (33:09):
Actually, so now that idiot is going to be an LLM that also produced ten times, or one hundred times, as much code that you've never seen before. And so it's not a realistic solution to expect people to actually review that. They're gonna, you know, "looks good to me" and approve it. And the counter
(33:30):
argument has been, for a while: oh well, they'll just also create automated tests, and you'll review those for the business cases and validate your solution against them. However, the problem is that the context window is fixed size. The input tokens into every LLM will never go to infinity, will never be able to contain all of the relevant information that is
(33:53):
necessary, because it's just a computational model. Even if it gets bigger and bigger, you still have to provide it that context. Maybe you hope that providing it all of the source code on npm, if you're using JavaScript, and also your source code, and also your Jira tickets, hopefully you're not using Jira, you're using Linear or something else, and, you know, your GitHub or GitLab repositories, and
(34:14):
every email that was ever sent in your company, and every Slack or Discord message, or some better chat tool, like, everything the company has ever done, maybe you have enough context there. Maybe you'll get to that point. The interesting thing, though, is that humans aren't computational models. So the value we're providing into the system includes some non-computable black box that is an input to the
(34:38):
software development, to the business development. And if we're taking humans out of the loop there, we're actually by nature removing something that an LLM will never be able to replace.
Speaker 4 (34:51):
That's super interesting. Again, it might be disrespectful to LLMs, but I feel that when I'm working on a project for a few months or a few years, I have a deep sense of familiarity. And like you said, maybe one day this context window grows enough to replace me. But at the moment, it feels like even when things work, and it's not deleting lines
(35:11):
that it shouldn't, they work because it's just created a bunch of additional code on top of what's already there. And maybe it's doing things like this: I have a Redis instance that works for my application, and it just built another layer of interaction with Redis, just because it wanted to extract something, and there's a library specifically for that, for Redis. It just couldn't find it and did something on its own.
Speaker 3 (35:32):
There's a great paper that compares the Linux operating system with, I think it's E. coli, and how the DNA structure represents the source code and how these two things compare to each other. And you find that Linux is sort of this upside-down pyramid, where there are some root modules that are fundamentally critical and used by everything, and
(35:53):
then there are leaf nodes that, you know, depend on composite things that end up depending on the root, an upside-down binary tree, if you will. Whereas E. coli is like a right-side-up pyramid. The most critical functions are highly replicated throughout the DNA, because if one of them becomes corrupted, you don't end up with a single point of failure or a catastrophic failure for the organism.
(36:15):
It can still continue on and, you know, not leak all your customers' data to the internet. I mean, not, you know, just go through apoptosis and die as an organism.
Speaker 4 (36:23):
So you're saying it's a good thing, the way it operates?
Speaker 3 (36:27):
Well, yes and no. I think there's an intentionality behind the evolution, where you can say, well, for reliability, it needs to be this way. But the LLM isn't doing it based on reliability, on preventing its mutations, right? I mean, it's not going in that direction.
Speaker 4 (36:45):
The other thing it doesn't care about is maintainability, right? If you had a feature to build and it just added a bunch of additional code, that might not be all that critical to anyone, not even in terms of resources. Fine, a few more lines of code, just a few more bytes that are stored, especially if you're compiling it. But it's not maintainable, and that begs the question: should it
(37:05):
be? If LLMs are going to take over everything, maybe it doesn't really need to be maintainable. However, something that's not maintainable is not really reviewable, right? If there are a thousand lines of code added for every little feature you develop, just because of how LLMs work, it's not really maintainable, scalable, or reviewable. You just kind of shift
(37:27):
humans out of the process.
Speaker 3 (37:30):
I think there's a huge mistake where we're generating things from LLMs and committing that as the relevant artifact for other humans to review. So, in the case of generating source code, having humans review that; or even writing tests and then reviewing those; or, you know, generating emails or blog posts written by LLMs and outputting that. That's not the value. The output isn't
(37:50):
the value in this case; the value was the prompt. It was the human input here, or however you generated it, flipping a coin or asking the LLM to generate it, it doesn't really matter. You have a prompt; that's the thing which was valuable. It's like when someone says, hey, I used an LLM to completely generate this blog post. I'm like, cancel that. Just tell me what prompt you used to generate the blog post, because then I can do
(38:11):
it myself and interrogate the result. I don't need your blog post. You didn't apply any original thought there. You just copied what someone else created. If you used Claude, you copied what Anthropic thought. If you used ChatGPT, you just copied what OpenAI has for data. So just get rid of all that. And from a source code standpoint, it means committing these prompts and trusting the underlying models
(38:32):
in some way, or doing some sort of model validation separately, and then using the prompts as the mechanism. And so, on every build of your project's solution architecture, you rerun all the prompts against the model, generate a new output, validate it against some historical data from what your users use, for instance, and go from there. And I think that's a much more mature understanding of how LLMs can be effective.
Speaker 4 (38:55):
So I heard someone doing that. But in order for
just throwing out prompts and expecting some results and then
doing something, his prompt is always I'm going to ask
for something. Don't do anything yet, just give me the
plan and build it in the best best practices in mind,
blob about mcp another buzzword we can throw in there,
(39:16):
but if you have the right mcps, you can actually
grab the best practices from whatever you're building and then
give me the plan. Let's talk about it, let's go
every over everything, and then once we're done and I approve,
then you start building, which he says reduces the number
of errors by like fifty percent and he can maintain
the same output but without so many errors.
Speaker 2 (39:38):
I think that's.
Speaker 1 (39:41):
There's a video I was just watching yesterday from Anthropic
mastering cloud code in thirty minutes, and it's from the
guy I don't know his name, the guy who created
the claud cli, and that was that was his fundamental
approach in the talk is first is just have a
(40:02):
chat with the AI about what you're trying to do,
have it, throw out some suggestions on ways to approach it,
and then talk those through and really just having a
much more interactive conversation with it before you ever let
it start doing anything. And then I think to touch
(40:23):
back on something you guys brought up a few minutes ago, like,
I think that's the real role, the long term role
of software engineers with AI. It's not reviewing the code,
and it's not having AI replace you. It's about giving
AI a clear set of instructions and scope so that
(40:44):
when it goes off to do a task that it
doesn't you know, build a completely new library instead of
using the one that's already there. Or one of the
cases I had early on, I asked it to write
some tests and then went and checked the word work
that it did, and it it was trying to like
install Postgress inside of my doctor container so that it
(41:08):
had a database to to use during the tests, And
I'm like, no, no, we're.
Speaker 2 (41:14):
Not doing that.
Speaker 1 (41:14):
How about you just mock the database call? Okay, can we do that? But it comes down to, you know, giving a clear set of instructions. Had I told it that up front, it would have gotten to the result faster. And so I think that's probably the downfall of vibe coding and letting it take on large chunks of work: it's going to
(41:36):
make bad decisions, and then you end up with the problems that we've talked about already, with something that's not maintainable and largely inaccurate. And then, tying back to the original conversation, it's probably going to be a cost hog when you try to run it on some serverless platform.
Speaker 3 (41:57):
I feel like this is the quintessential example of our user experience, right? As a user, don't do the thing you want; instead, you need to be trained to use the tool. And I feel like, I'm in a dystopian perspective at this moment, where it's like we're being trained on how to interact with the robots that we've created,
(42:17):
rather than changing the models in a way that responds to how we individually work. And I mean, I say that and it sounds sort of really ridiculous, but you know, we're now in a way beholden to our AI overlords, who have already decided what's right and wrong, and only if we interact with them in the correct way do we actually get a valid response and what we're
(42:39):
looking for.
Speaker 1 (42:40):
So do you think it's a valid analogy, then, to say: I shouldn't have to learn how to drive a car, I just want to get in it and go?
Speaker 3 (42:48):
Yeah. And I think that has improved cars over time, right? Automatic seat belts, automatic braking, automatic airbags, cruise control. I mean, we're bad at all of these things, so we should provide capabilities and improvements. And you see, maybe, UI products from companies that
(43:09):
do care about the user experience improving in the same way. And I know you meant that as a joke and I took it too seriously.
Speaker 1 (43:14):
No, no, it was a serious question, because I wanted to see how the analogy compared.
Speaker 4 (43:20):
It's actually a great analogy, right? It's somewhere we've used technology to replace people's work, right? Driving cars, driving taxis, driving whatever. If you move that over to robots, then these people need to change their line of work. Which makes me think of my line of work: is it going to be redundant? Is it going to be able to be done by AI solely? You don't
(43:43):
need to review anything, you don't need to do anything; all you have to do is prompt the right things and it works. And I wonder if this is going to happen, and when, and in what way. Because throughout history, every time there was a technological advancement, people thought, okay, that's the end of the world, everyone's going to be out of work, and the opposite happened. But instead of it improving our lives and making us work fewer hours, it's
(44:05):
just the other way around: okay, great, more throughput, more profit; work more, produce more.
Speaker 3 (44:11):
I like that you brought up this example, because I actually feel like it's a counter-example to the argument. If you look at books like Sapiens, which was released not too long ago, we see that the goal of improvement, or automation, has never been to improve individuals' lives.
Speaker 4 (44:27):
That's exactly where I pulled this from.
Speaker 3 (44:28):
By the way, it allows society to support additional humans, even if it means subjugating an even larger portion of those humans, or entities, organisms, to, you know, below the poverty line, or, you know, sacrificing even more of them. So it's not that those technologies exist to make humanity better; it's that humanity exists to be able to make the technologies better,
(44:51):
so that, you know, we can increase our population size. So, you know, there's the question
Speaker 4 (44:56):
Of who's controlling who's controlling who?
Speaker 3 (44:59):
Yeah, and how many humans can be supported on this planet.
Speaker 4 (45:02):
But the argument means that we as humans support the technology, right? We produce technology to improve the technology. That's basically what we're doing; we're the ones reproducing it.
Speaker 3 (45:13):
Deep AI may take away our capability to reproduce in the future. That's where you wanted us to get to, right?
Speaker 4 (45:21):
Right. I heard an interesting argument that said that humans are the sex organs of the machine world. It's going to take over, but we're the reproduction capability.
Speaker 3 (45:38):
I mean, obviously not to be taken literally, right? I don't know.
Speaker 1 (45:42):
No, there's got to be a movie or a book that covers that topic. Like, that's some pure sci-fi gold right there.
Speaker 3 (45:54):
I mean, in a way, it's sort of how viruses work. Like, AI in a way is a virus. Viruses work by getting into your cells and, if they're RNA viruses, replacing your DNA, or inserting into your DNA their own set of DNA sequences for those amino acids, so that your body, your cells,
(46:15):
automatically produce the virus itself.
Speaker 4 (46:17):
And it wants to reproduce and infect others, right? Which is great; everybody's using it. My parents use AI now to ask whatever they want, and it works, and then they tell their friends, and that infects someone else. So it's exactly like a very good virus, to be honest.
Speaker 3 (46:31):
An effective virus. Effective.
Speaker 4 (46:33):
Yes, a virus.
Speaker 3 (46:35):
Viruses are canonically said, by scientists all over the world, to not be alive. So I like this analogy. If humans are the cancer, then the AI is definitely the virus.
Speaker 4 (46:46):
This leaves me with some thoughts.
Speaker 1 (46:51):
Well, bring them on. We've gotta keep going with this episode till we all get canceled.
Speaker 4 (46:57):
I don't know how to bring this back to DevOps and code, but it feels like we're installing viruses.
Speaker 3 (47:02):
And I think it's useful for people to take a deeper look at the technology that they're utilizing, how it's being deployed within their company, and what changes they're making over time to make it more effective, both for them and their future jobs, but also for the long term of the company.
Speaker 1 (47:20):
Wow, well done, Warren, getting us right back on topic.
Speaker 2 (47:26):
That was a pro move.
Speaker 3 (47:28):
Thank you.
Speaker 2 (47:28):
Yeah, and I've got nothing to follow up with.
Speaker 4 (47:31):
I have so many thoughts I need to process.
Speaker 3 (47:35):
What's the most important one for you, I think, would be the question. So, you know, you brought up the topic of not just Kubernetes and how Zesty is utilizing it, but also the impact of building it to support LLMs, both internally and for third-party companies. Have you seen, hands-on, specific challenges other than what we've already talked about?
Speaker 4 (47:55):
Not really. If I'm trying to connect that to both the philosophical aspect of it and the technical parts, it's mainly focused on improving, right? Everything we talked about is improving either the technology or whatever drives the technology. So companies are trying to improve the way they run LLMs, the LLMs themselves, the infrastructure that surrounds them, and honestly their cloud bill at the end
(48:18):
of the month, which has to do with everything. And it's also a funny aspect of AI, because it consumes so much energy, resources not in the form of computers and chips but in the form of energy, right? And on the costs, what other professionals are saying is that it's not maintainable and it's not scalable, and it's going
(48:39):
to hit a wall at some point. And I'm wondering whether past that wall there are going to be more companies trying to run tailor-made, lean LLMs that only serve one purpose, or whether general-purpose solutions are going to be consumed more from the AI companies, much like we're consuming cloud resources from cloud providers and shifting away from our own. So
(49:00):
I don't know exactly where it's going, but it seems to be a very, very expensive resource at the moment. So I don't know.
Speaker 3 (49:10):
You've pulled out the optimistic perspective. There's a good principle on this, and the name eludes me at the moment, but even if you make it cheaper, you end up with this actual contradiction where the result is more usage, not less. And I think the biggest problem is, we're already seeing, we talked about this a little bit in one of the previous episodes, that
(49:33):
companies will just continue to use additional energy, and rather than care about trying to make it cheaper, will keep on trying to figure out how to build more and create more energy. And so this means reopening coal and gas mines. Now, the question could be: do you think there's an opportunity for good here, where companies will start trying to invest in figuring
(49:55):
out how to get fusion reactors, so that we can get a step up in energy creation? Because we're never going to get there with solar or wind or water. I mean, we have seen some situations where, I believe, they're trying to beam energy from outside the atmosphere back down to Earth.
Speaker 4 (50:14):
You know how in Google Flights, when you're searching for a flight, it tells you how much pollution it creates, or whatever; I always forget the measurement. At one point we were trying to do the same, because we're in the business of making infrastructure more efficient and effective, in a way that allows you to reduce both your costs and how much infrastructure
(50:37):
you're using, which you can follow up the chain to AWS not using as much. So maybe we're, you know, supporting the environment by reducing the energy companies use. And at some point we were trying to show you how much you've saved, but also how much you've helped the environment by saving on resources. Which is an interesting analogy; I don't know if it would work, but okay.
Speaker 3 (50:58):
So I'll press with, you know, the contrarian perspective. Here it is: the only way this could be effective is if we paid companies for having a low spend. But compared to what? How do you actually do this? We know carbon credits didn't work for reducing carbon dioxide emissions into the atmosphere, so, I mean, that fundamentally is the problem. But there's actually another
(51:20):
issue here. Say reducing your spend means using comparable technologies; let's say the cloud provider, say AWS, said, you know, here's A and here's B; use B, it'll actually reduce the cost more, this is actually cheaper, not because it's technologically cheaper or requires less energy, but because it's better for the environment. The problem is
(51:40):
that you'll start to see companies pop up that abuse option B and resell it, you know, at a cheaper rate to other companies; like competing with AWS, but increasing the price and then taking a cut of it. There are companies out there that just deal with carbon credit resales. They buy credits to resell them; or, even worse, the worst polluters in the world
(52:02):
just buy tons of credits from these intermediaries, and so it doesn't help at all in any way, and you're just making another company rich in the process, one that's just abusing that gamification.
Speaker 4 (52:13):
No model can fight human nature. Maybe you can with AI.
Speaker 1 (52:17):
No, GCP has that. There are different regions you can choose, and some of them display their carbon emissions, or the carbon offsets that you gain by using resources in that particular region.
Speaker 3 (52:37):
Yeah, I mean, I guess you'd have to be fined for how much you're utilizing. But, you know, from a human perspective, if you get a fine, then you say, oh no, it's okay that I'm doing this; the government is, you know, extracting their reward, their return, for it. So I don't know how you can really think about this in a way that makes sense. Like, maybe there's
(52:58):
some way, I just haven't thought about it enough. But it seems like there's not a lot of good options. Like, how much should it be? How many carbon credits, or how much usage, should I have as a company? Like, one? Five? Is five a lot? I don't know what the appropriate numbers are. I think it's like one kilogram per international flight, I think is
(53:21):
the number. I don't know if that's right.
Speaker 4 (53:23):
I'm just gonna go with that: a kilogram of carbon dioxide per international flight.
Speaker 3 (53:27):
Yeah, I think it's something like that. I don't know
if that's right.
Speaker 4 (53:30):
And what does that even mean? Okay, let's say that's the number. What does it mean in terms of pollution, in terms of the effect on the atmosphere?
Speaker 3 (53:38):
I mean, actually figuring out what the direct effect is is an impossible problem to solve. So, you know, you just take what the pollutant is and you measure it, right? Like the amount of whatever poison you dump into the river. How much of the poison matters for human beings? Well, that's sort of hard to describe. There's, like, a huge problem right now with the forever chemicals. Not the, not, like, the Teflon on your, on your pan, but the
(53:59):
byproducts from making the Teflon on your pan, which are dumped in the water by companies. And how much of that is bad? Well, we can say that, you know, two is worse than one. But how bad is two? How bad is one?
Speaker 1 (54:12):
Like?
Speaker 3 (54:13):
That's, that's a really hard problem to answer. So I don't know what it is for carbon credits. I do know there are a bunch of companies out there that are investing in trying to expose this information and somehow utilize it. And there's lots of countries with grants available to create green products or projects, but they don't usually focus on, like, the carbon credits, because it's such a challenging thing to go off of. So instead
(54:34):
they invest in things that they believe are sustainable, for whatever the definition of sustainable is.
Speaker 2 (54:39):
You know, like Kubernetes.
Speaker 4 (54:43):
This actually always made me think. Everybody hates the cloud platforms, right? I mean, people want to manage their own infrastructure. They usually don't, but they like to hate on the cloud. And I always thought: by using a cloud provider like AWS, GCP, or Azure, is that better for the environment? Solely from
(55:05):
that perspective, is it better to use something central, with a lot of resources that are actually shared resources in a lot of ways, which means it's more efficient at a global level? As opposed to a company, or me, just putting a server rack here, which could naturally consume a lot more than it actually should, because you're buying in order to scale. So companies who
(55:26):
do that probably buy lots more than what they actually need. Is it better for us, again, solely from the perspective of efficiency, utilization, and how it affects the environment, to use a cloud provider than to run your own infra?
Speaker 3 (55:42):
Yeah, I mean, that's sort of difficult. I think there's a couple of different parts to that equation. The first one is: how bad is it, for what you're doing, if you're running on prem? I think that the recycling of electronics waste is, like, one of the biggest waste recycling problems in the world, and it's only getting worse, by an order of magnitude. Doing the waste processing actually consumes a ton
(56:03):
of energy, both human and, like, physical energy, and no one plant can take care of it all. Usually it's like: well, we remove the plastic parts and then ship all the other parts to other people, and then we take out the gold and then ship the rest of it, and then we take out the silver and ship the rest, like, we can't do anything else, and then at the end of the day it's all in the ocean. Yeah,
(56:24):
that's right. So that's sort of a hard answer. I think: how good, both as a company and as an individual, are you at handling your technology waste? You get a new iPhone every year? You know, that's probably bad. Although, you know, here's the flip side. What would you use the money for? Are you taking the money that your company saves by hypothetically
(56:44):
not using the cloud provider and, you know, using it for green purposes? Are the products you're creating making the world better? Well, yes, okay, but it's not that absolute. Really, you have to compare it to the company you gave the money to. So is Amazon, if you're using AWS, taking your money and building green projects to improve the world? Or is what they're doing
(57:06):
worse than what you would have done with the money that you had as a company? So sometimes paying less could be worse for the environment. Other times paying less is better, because now you have more cash to do things in a better way. But that doesn't mean that you will.
Speaker 4 (57:21):
I always like to think that being in the optimization business is reducing waste at the end of the day, regardless: reducing waste at the application level translates to the resource level, translates to energy, and so on. But thinking about it further, that might not always be the case, which is super interesting to me.
Speaker 3 (57:40):
We optimize things because we like to.
Speaker 2 (57:44):
Builds.
Speaker 4 (57:45):
It might very well be a local optimum that doesn't affect the chain, right? Another thought I'll be left with tonight.
Speaker 1 (57:51):
It's busy work with a dopamine hit. Well, I feel like we've thoroughly covered the topic. What do you think, should we do some picks?
Speaker 3 (58:02):
I think I think it's time.
Speaker 1 (58:03):
Omer, yeah, you've been here before. What did you bring for a pick this time?
Speaker 4 (58:08):
Two things. One, if you're watching TV series, I really liked MobLand. Did you hear about that? It's Guy Ritchie, Tom Hardy. It's really cool, MobLand. That's the first one, and it's completely irrelevant. And the other one actually is a little bit relevant. It's a few, I think it's a few,
(58:31):
it's probably one, software developers from Google who, in their spare time, started building kind of an alternative to Git, which is not really an alternative, because they can work together. So I started using it in one of my projects. It's called jj, Jujutsu. It's a really cool open source project that kind of lets you work with change management, but without as much hassle as Git. So it's just
(58:53):
a chain of changes that you can just change at any time, moving through history like it was nothing, where Git makes everything a little bit more complicated. That's it. These are the two.
Speaker 3 (59:06):
It's like editing the object model graph directly, rather than Git, as your interaction mechanism. So I feel like, you know, for people that want to spend more time with their source control revision system but feel better about it, this sounds like the perfect tool.
Speaker 4 (59:24):
Yeah, yeah, it's really nice, and they work together. Again, if you're working on a Git project and everybody else is working on GitHub with Git locally, you can still run jj on your own machine, but then, when you're done, kind of wrap it in a commit and then push it to a different branch, which opens a PR and everything. So you can enjoy both worlds. I really liked it.
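For the curious, here is a minimal sketch of that colocated workflow. These jj commands exist in recent releases, though exact flags can vary by version, and the change description is a made-up example, so treat this as an illustration rather than the guest's exact setup:

    # inside an existing Git clone, add jj alongside Git (colocated mode)
    jj git init --colocate

    # edit files as usual; the working copy is itself a change, snapshotted automatically
    jj describe -m "tweak retry logic"   # attach a message to the current change
    jj new                               # start a new, empty change on top

    # browse the chain of changes; history can be rewritten freely from here
    jj log

    # push the finished change as a Git branch that GitHub can turn into a PR
    jj git push --change @-

Because the repository stays a plain Git repo underneath, teammates on GitHub never need to know jj is in the loop.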
Speaker 2 (59:42):
Right, all right, Warren, what do you got?
Speaker 3 (59:45):
So I have a very controversial pick this time for our listeners. I'm going to say doing surveys. That's going to be my pick. Now hear me out. I'm not talking about, like, doing surveys for companies that pay you money, because those are a waste of time. Although I did start like that, as a person who thought that you could make some money doing that, I
(01:00:07):
never did. What I'll say is that surveys are, like, what I see as my opportunity to change the world in my favor. Doing them and giving feedback means that I can change how these companies are thinking, hopefully, so that they actually listen to it and then make some changes. And so if I withhold my opinions, that means I'm basically saying: I love the world the way it is right now, I don't care
(01:00:29):
if it changes, and it definitely couldn't be better. And I don't think that's true. I like complaining about things. Now, lots of companies, we know, just completely ignore the surveys after they're done. You know, if you've ever taken, like, an eNPS survey asking if you love working for that company, we all know your executive team is completely ignoring whatever you wrote there. I'm sorry to tell you that. But on the flip side, if
(01:00:51):
you fill out the survey for the Adventures in DevOps podcast, I can guarantee you that you'll be entered to win one of the four remaining twenty-dollar AWS credits that we have in store. So you definitely want to do that.
Speaker 4 (01:01:08):
I'll sign up for that. I just have to say, I have a membership at the gym, and every time I come back home, I get an email with a survey, and I filled it out like one, two, three times. Nothing happened. I put in a lot of time, and they didn't even, they didn't even reply. So it feels like they're throwing it away. But I'll definitely do yours.
Speaker 3 (01:01:29):
The only thing worse than doing the survey and not getting a reply, feeling like they threw it away, is doing a survey that's really long, getting to the end, and when you click submit, it says, like, oops.
Speaker 4 (01:01:40):
It crashed, right? There went fifteen minutes.
Speaker 3 (01:01:45):
Yeah, please, if you make a survey, please don't do that.
Speaker 4 (01:01:49):
That's what happens when you have vibe coders building that. Oops.
Speaker 1 (01:01:56):
The other thing I don't like about surveys is when you get the email that says, like, how did we do, and it's, like, a smiley or a frowny emoji. And so you just click the smiley emoji, like, cool, we're done. Oh wait, no, you're asking follow-up questions. Okay, I'll do the follow-up question, and then, then there's another follow-up question, and then it's like...
Speaker 2 (01:02:17):
No, screw you. I'm not doing this. I was trying
to be nice, but now f off.
Speaker 3 (01:02:22):
So, everyone that's listening: if you're building a survey, remember that Will says you can get him with both the email and then one more question after that, and maybe one more if you promised him something. That's, that's the threshold. I mean, there is, there's, like, the sunk cost fallacy.
Speaker 4 (01:02:39):
Right, exactly what I wanted to say.
Speaker 3 (01:02:41):
Yeah, so, you know, you've already committed to submitting your feedback, so do a little more.
Speaker 2 (01:02:46):
Before you get the reward, right? For sure, you got to.
Speaker 1 (01:02:49):
You gotta feel like you're unlocking something in each step
of the survey.
Speaker 3 (01:02:53):
Yeah, it should definitely have that. Like, anyone who builds a survey platform: I should definitely be able to see every survey that I've submitted, all the feedback, so I can be like, I got five more points for filling out this feedback.
Speaker 2 (01:03:03):
I don't know what.
Speaker 4 (01:03:03):
That's absolutely nothing.
Speaker 3 (01:03:05):
I mean Google does it and it makes me feel
good about leaving reviews for restaurants and other places.
Speaker 4 (01:03:10):
You have to, like keeping your garden green on GitHub.
Speaker 2 (01:03:15):
Oh yeah for sure?
Speaker 1 (01:03:16):
Right, yeah, so you just need to tie the survey
to fake Internet points and everybody will be fighting to
fill it out.
Speaker 3 (01:03:22):
Yeah, well, what'd you bring?
Speaker 1 (01:03:25):
My pick has to do with some changes for me.
I have become the engineering manager for a new company
called Katana, and so my pick is going to be Katana.
If you want to go check out the website, it's Katana dot Network.
Speaker 2 (01:03:40):
It is a layer two blockchain.
Speaker 1 (01:03:44):
That specializes as a DeFi platform. So we're like really
rethinking the way that decentralized finance works.
Speaker 2 (01:03:54):
And how to make it.
Speaker 1 (01:03:55):
More financially rewarding, but also with a lower barrier to entry. So if you've ever played with DeFi in the past, you know that you had to go and, like, buy Ethereum, and then find someplace to convert that to wrapped Ethereum, and then find a bridge that would let you swap it into what you were really trying to invest in, and, like, every step of the way, you're
(01:04:18):
going deeper and deeper into the rabbit hole, not really sure if, like, this place that you're interacting with is just fixing to steal everything in your wallet, or if that really is the right path. So we're trying to eliminate all of that.
Speaker 2 (01:04:31):
So well, that's my pick.
Speaker 3 (01:04:32):
That's quite interesting. Honestly, I see you've got a new opportunity there. I know whenever I try to do anything with crypto, I always ask an LLM to decrypt my, my wallet for me.
Speaker 4 (01:04:43):
Now there's a way to use energy efficiently.
Speaker 2 (01:04:52):
Well, I don't think.
Speaker 3 (01:04:53):
We're ready to get started on that in the next podcast. Yeah, I just.
Speaker 2 (01:04:57):
Had ChatGPT remember my seed phrase for me. Cool?
Speaker 1 (01:05:04):
Yeah, so there you go. If you're interested in that, check out Katana dot Network. It's, it's been pretty cool. Like, there's some smart dudes working on it, and I'm excited.
Speaker 3 (01:05:14):
So we'll have a link below the podcast. But is
there like something you're specifically looking for at the moment,
like are you looking to hire? Are you looking for
customers or users? Like what's what's the breakdown?
Speaker 1 (01:05:27):
So I am hiring for a full stack role. So if you're interested in that, hit me up. But other than that, if you're just interested in DeFi, check it out, and I would love to have your feedback, to see what we're getting right and what we're getting wrong.
Speaker 2 (01:05:42):
Yeah, do that survey, right? It's... so, there's no survey?
Speaker 1 (01:05:48):
Just hit me up on X or email and say, dude,
this is cool or dude, this sucks and that's the
end of the survey.
Speaker 3 (01:05:57):
Uh oh, so you're gonna plaster your email all over the internet so people can respond to you?
Speaker 2 (01:06:01):
Well, it's not hard to find.
Speaker 1 (01:06:06):
I mean yeah, like, based on the number of recruiting
emails I get, my email cannot be hard to find.
Speaker 2 (01:06:14):
Cool.
Speaker 1 (01:06:15):
Omer, thanks man, it's been fun having you back on the show.
Speaker 4 (01:06:18):
Yeah, thank you for having me. Good to see you both, Warren.
Speaker 1 (01:06:22):
As always, thank you, we appreciate everything you do here, and we'll see you here next week.