Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:08):
Hello everyone, and welcome back to another episode of Adventures
in DevOps. And I'm joined today again by my co-host,
Amy Knight. How you doing, Amy?
Speaker 2 (00:16):
Pretty good, walking in.
Speaker 1 (00:19):
She likes walking, that's uh.
Speaker 2 (00:22):
I gotta get my steps in.
Speaker 1 (00:25):
I don't have a good strategy for that yet. Yeah,
so today, I have to be completely honest, the topic
is in and around FinOps. But I don't really know
what FinOps is. But if it's anything like DevOps, I'm
guessing it means firing all your business analysts and hiring
some additional software engineers to do a job they don't understand,
and then, after years of failing, reappropriating the name
(00:46):
to mean something else and claiming success. Well, Amy, you're
already smiling, so maybe you know something I don't know.
Speaker 3 (00:52):
I don't know, I'm actually laughing because it's very entertaining.
Speaker 2 (00:54):
And I also.
Speaker 3 (00:57):
I think, like last week, I realized your title on
LinkedIn is tech entertainer.
Speaker 2 (01:03):
So that was a very entertaining intro.
Speaker 1 (01:05):
You did a great job, well, thank you. So we
pulled an expert from the field, chief strategy officer at
CloudBolt, Yasmin Rajabi. And I've noticed you
have a history of product management, and so I'm really
excited to have you on the show.
Speaker 4 (01:21):
Well, thanks for having me. Excited to be here.
Speaker 1 (01:24):
How completely off base were my thoughts about what FinOps is?
Speaker 4 (01:28):
I mean, not so far off base.
Speaker 5 (01:31):
Any time you put, like, an ops inside one
of the words, you're obviously trying to force some things together.
Speaker 4 (01:38):
And it's funny.
Speaker 5 (01:39):
A question I get asked often is like, okay, so
how are FinOps teams and DevOps teams coming together? And
the ops in FinOps is supposed to be for DevOps,
so technically, like, they're supposed to already be together. They're
not supposed to be separate teams. But there's always the
ideal and then the reality of, if you take financial
analysts and you take engineers and you want to bring
(02:00):
them together, they speak different languages, they come from different worlds.
So there are folks that are trying to use software
to bring that together. I'd put us in the mix
of that, but it's still, like, people from completely different
worlds, and trying to get them to think and care
about the same topic.
Speaker 3 (02:18):
Okay, I was going to add too, we also need
to add in, is it GreenOps? So it's like getting
environmentally savvy as well within the realm of FinOps.
Speaker 5 (02:29):
Okay, well, I realized that I need to reduce the waste,
but I want to be able to measure that from,
like, my carbon footprint standpoint. And it was interesting,
I was talking to a FinOps person that is using
those metrics to drive people to care inside their organization,
because the people in the organization are like, okay, well,
it's not my money, I'm not spending it. But when
you actually tie it to the environmental impact, the kind
(02:50):
of human nature part of people helps kind of get
people a little bit more passionate about reducing the waste
inside their organization.
Speaker 3 (02:59):
As far as that, I did kind of laugh
it off before.
Speaker 2 (03:02):
But I saw.
Speaker 3 (03:03):
Something recently about, like, as they're putting more and more
data centers up, like, the water shortage that's happening to
the communities around them.
Speaker 5 (03:10):
Some cities are literally like, their grid cannot handle any
more data centers coming online, and, like, they're
starting to buy in different areas. Like, it'll be interesting
what things look like in ten years.
Speaker 3 (03:21):
Like, I saw these poor families, like, literally,
they can't run their appliances like they could a
couple of years ago.
Speaker 2 (03:26):
Yeah, anyway, Okay, So I.
Speaker 1 (03:28):
Think that's a really interesting topic because we've had a
lot of our previous guests on the show, either they've
been AI focused in the last few months or they've
had a unique insight into data center operations. One of
the ideas that keeps coming up is that it's actually
not a struggle to get energy reservations for the data centers.
But it's interesting that you bring up the water impact
(03:49):
because that's a new thing that I'm not familiar with.
Speaker 5 (03:52):
Where it kind of interplays with water is probably less
of my expertise and background, just the fact that, you know,
you need the water to cool down the data center.
The more power it's using, the more water it needs.
It's just like a cycle.
Speaker 4 (04:04):
Though I would say their.
Speaker 5 (04:06):
Ability to use liquid cooling and the data centers reuses
the water instead of an endless supply of water. The
recycling the water that exists has at least helped make
some of an impact. What's interesting is people are pulling
these metrics out and trying to overlay them on their
usage as well. So, Okay, I'm writing an application I
(04:27):
deploy it. I never think about these things, but as
I can start to pull some of those metrics out
of either my cloud provider as they start to provide them.
I think Microsoft started to pull in like carbon data
and other metrics that you can pull up or if
I'm running my own data center. Then being able to
pull those metrics out then kind of overlays and says, hey,
here's the impact of what you're doing, Like make sure
(04:48):
you're setting these things correctly. Otherwise there's there's a lot
bigger impact than just financial.
Speaker 1 (04:53):
What's a good example of needing to pull in accounting
or financial operations into software?
Speaker 5 (05:00):
So I'll say, most of my background in this
space is on the Kubernetes side, and when you're deploying
a Kubernetes app, you need to set requests and limits,
one, from a reliability standpoint, to make sure you actually
have the resources, but two, typically what people do is
just, like, set them really high so they don't have
to think about it, and then deploy the application. And
(05:22):
it's really hard to understand the impact of that, especially
because that's at the pod level and you pay for
nodes, and you're like, okay, well, I don't really know
how my pods translate into my nodes. So I'm paying
some amount of money, but how do I actually know
the impact? And so being able to pull in both
the billing data and then actually allocate it correctly, because
one of the challenges in Kubernetes is, okay, how are
(05:44):
you going to allocate your costs? Do you just look
across namespaces and say, like, okay, I'm going to
divide it by namespaces and then that's what you get?
But what if you have a larger namespace, or
how do you split out, like, kube-system and
anything on the control plane? Those are some
of the challenges that people have. And what
(06:04):
you need to be doing is pulling in that billing
data to say, here's what it's costing you, here's how
we can split that out if we want to split
it out at the pod level, and then you can
tie that back to the actual nodes that you
pay your bill on.
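To make that concrete for readers: a minimal sketch of the two layers being described, the per-container requests and limits developers set, and one naive way to apportion a node's bill back to the pods scheduled on it. Every name and number below is hypothetical, not anything specific from the show.

```python
# Sketch of the two layers discussed above; all values are invented.
# 1) The per-container knobs, in the shape a Kubernetes pod spec expects:
resources = {
    "requests": {"cpu": "500m", "memory": "512Mi"},  # what the scheduler reserves
    "limits":   {"cpu": "1",    "memory": "1Gi"},    # hard ceiling per container
}

# 2) One naive cost-allocation step: split a node's hourly price across
#    its pods in proportion to CPU requests (you pay for nodes, not pods).
NODE_HOURLY_PRICE = 0.19        # hypothetical instance rate, $/hr
NODE_CPU_MILLICORES = 4000      # node's allocatable CPU

pod_cpu_requests = {"checkout": 1500, "search": 500, "kube-system/coredns": 250}
for pod, millicores in pod_cpu_requests.items():
    share = millicores / NODE_CPU_MILLICORES
    print(f"{pod}: {share:.0%} of node = ${NODE_HOURLY_PRICE * share:.4f}/hr")
```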
Speaker 1 (06:18):
So would it be accurate to say that, historically, maybe
software engineering teams haven't been held accountable for how much
they spend in.
Speaker 5 (06:24):
The cloud? For sure. And often what happens is, the
developers are setting those requests, and then the platform engineering
team are the ones that are like, okay, well, we're
managing the cloud environment, we're managing those bills. They get
asked to do something about it, and it's, I don't know
the app, what do you want me to do? If
I touch something, I'm going to get a phone call,
probably not a phone call, a Slack message, from a
(06:45):
developer of, like, hey, what did you do to my system?
It's down, it's having challenges. You were the last one
to touch it.
Speaker 1 (06:51):
Amy, any experience there?
Speaker 3 (06:53):
Yeah, this is a very real problem. I would say,
like, the requests and the limits also, and I think,
like, if the scale gets large enough, you also need
to understand, like, traffic patterns and scaling appropriately for that,
because, you know, if you're an early stage company, you
can just kind of scale at, like, a, you know,
(07:14):
relative level and stay there. If the app really grows,
which hopefully it does, you're going to be hit with
a, like, very realistic problem. It's going to cost a
lot more money to be running your app at, you know,
nine a.m. versus midnight. If it's a global application, you
get into having to spread things apart and allocate resources
appropriately versus, like, the distribution of the requests coming in
(07:35):
and things like that. So it's a very complicated problem, and.
Speaker 1 (07:38):
Most of the time, the people that have been sort
of on the chopping block for figuring this out have
not been the engineering teams who understand what the application
is doing. So how do you get that back? Like,
how do you align the incentives appropriately, so that the
amount of money you're spending on your cloud provider,
or on capital allocation for on-prem data center resources,
how do you align that back with the software engineering teams?
Speaker 5 (07:59):
I think a little bit of it is reporting, just
giving them visibility. Some of our customers call it shame-back
reporting. I'm like, yeah, I get it, but maybe, like,
don't call it that. But I think, on the other hand,
it's also just giving them the tooling and the insights,
so that they know, they have the confidence in the systems
that are kind of handling this for them. I really
don't think you're going to get far if you go
(08:20):
to a developer and you're like, hey, you need to
change your configuration settings, this is costing us a lot.
They're almost never going to be incentivized to do anything
about it, because their goal is to deploy new applications, move
the business forward. And so I would say
it's harder to incentivize them to do the work than
it is to give them the tooling that allows the
(08:41):
work to be done for them, while they can get
input into the tooling. So, like, if you say, hey,
I'm going to do this for you, you have no insight,
no ability to override anything, no ability to constrain it,
they're gonna be like, no, you can't do that. If, for example,
you can take in their concerns and codify that into
the system that's handling the fixes for them, then
they're a lot more incentivized to kind of move
(09:02):
this forward, because it's less work for them and they
get the better outcome.
Speaker 1 (09:06):
So I can see, like, one of the core problems
here is that maybe the team that's fundamentally responsible for
managing the application, like building it and running it, the
development team, DevOps team, how do they get the
knowledge for what should be reasonable as far as spend goes?
There's a disconnect. Like, do you have any, I mean, it.
Speaker 5 (09:26):
Definitely depends on the organization of like how much they're
spending because scale's different, right, Like a very very large
bank is going to be spending a lot more than
a ten person startup with not always because sometimes, especially
with AI tools, ten person startups can be spending a
lot of money. But the metric that we usually look
at is waste percentage, So of everything that you're doing,
(09:48):
how much of it are you actually using? And it's
usually a better metric to go to folks and say,
do you know you're like wasting seventy percent of the
resources that you are paying for? So we don't go
in and say, like, you should be spending less here
and there, because sometimes you want to be spending more.
It's just are you efficiently using the dollars that you're spending.
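That waste-percentage framing reduces to a single ratio; a back-of-the-envelope version, with invented numbers chosen to match her seventy percent example:

```python
# Waste percentage as described: of the resources you pay for, how much
# goes unused? All figures here are made up for illustration.
requested_cpu_m = 10_000   # summed CPU requests across a cluster, millicores
used_cpu_m = 3_000         # measured usage, e.g. a P95 over a week

waste = 1 - used_cpu_m / requested_cpu_m
print(f"waste: {waste:.0%}")   # -> waste: 70%
```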
Speaker 1 (10:07):
Do you think that may optimize for the wrong thing, though?
That if you encourage teams to have high capacity utilization
for their pods, nodes, or, if they're using something else,
let's say virtual machines, just, like, oh, the CPU is
only at five percent, so we should use a much
smaller machine to achieve the same thing. Optimizing at that
(10:28):
level may be optimizing the wrong part of the application. Like,
maybe the issue isn't that the utilization is too high
or too low, but the thing that they've built is
just wrong altogether.
Speaker 5 (10:40):
Yeah, yeah, oftentimes it can also be just the app itself,
the way that it's architected. And even sometimes when you
right size, then you end up with smaller but a
higher number of pods, and all of those pods are
going to be making network calls. So, like, it's a
lot more complex than just looking at resources. You do
have to look at the whole system, and so you
have to look at the actual application itself, how it's
(11:02):
consuming resources. Maybe it needs some changes within the application.
What can you tweak and tune there? There's no, like,
one size fits all, here's what's going to solve all
your problems. But it's, what can you start with, and
what's the lowest hanging fruit? I mean, honestly, the lowest hanging
fruit for most people is just shut off systems you're
not using. And it's surprising how often that is the
majority of the waste in an organization.
Speaker 1 (11:25):
That's actually surprising to hear, because I feel like
Corey Quinn, if you're familiar with the Internet personality, often jokes
that the highest cost for, you know, utilization is your
backup bucket or your, you know, your database backup. You
know you're never using that, and yeah, there's, you know,
a huge cost associated with that. I mean, there is
something to be said for, like, understanding your utilization patterns to
(11:45):
be able to eliminate the waste in that way. Is
that really, like, are we down to, like, fifty
percent of the cost is just, you know, right sizing
and then you're done?
Speaker 5 (11:52):
Honestly, typically, like, sixty, seventy percent of the waste is
just that. It, like, sounds kind of a little embarrassing, because
you're like, it's just, like, set your CPU requests and
your memory requests.
Speaker 4 (12:02):
We're talking about two settings.
Speaker 5 (12:04):
And I feel like in the VM world that was
at least a little bit easier, to say, go set it.
But in the container world, you now have sometimes millions
of containers, and each individual container has different resource needs,
and so it becomes more of a scale and, like,
toil problem than anything else. So I do
think a lot of the waste, or at least what
(12:24):
we see is a lot of the waste, is just
in right sizing. But honestly, there's also, so we track,
you mentioned storage, we track kind of unused storage and
backups that you haven't touched in ninety days, and the
amount of POCs that we come across where we're like,
just turn this on and see what happens, people
(12:46):
are like, oh wow, like, I got a lot of
waste there that is just easy. Go remove it, delete it,
turn it off. It's surprising, more often than I would
have expected.
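For the backups-untouched-in-ninety-days idea, something like the following is one way to surface candidates, assuming AWS and boto3. The show doesn't describe the product's actual heuristics; this sketch is read-only and deletes nothing.

```python
# Flag EBS snapshots older than 90 days as cleanup candidates.
# Assumes AWS credentials are configured; prints only, never deletes.
from datetime import datetime, timedelta, timezone
import boto3

cutoff = datetime.now(timezone.utc) - timedelta(days=90)
ec2 = boto3.client("ec2")
for page in ec2.get_paginator("describe_snapshots").paginate(OwnerIds=["self"]):
    for snap in page["Snapshots"]:
        if snap["StartTime"] < cutoff:
            print(f'{snap["SnapshotId"]} from {snap["StartTime"]:%Y-%m-%d}: review')
```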
Speaker 3 (12:56):
Oh no, I'd say, I'll add another problem I have
seen. So it's also not just, like, a set
it and forget
Speaker 2 (13:02):
It kind of thing. I have seen.
Speaker 3 (13:06):
Issues where you have, like, a portion of the app
that is maybe not touched as often as the application grows.
Like, if they're only deploying it every six months, suddenly
what you've requested, like, if you do a redeployment, but
if you don't kind of babysit that, you can run
into problems too, because it changes.
Speaker 5 (13:21):
Yeah, if you don't have policies in place that are
continuously looking at this, like, okay, great, you were able
to either right size or remove or downscale once, but
then your application is going to change. And you mentioned,
like, apps that change every six months, some change daily,
so how are you going to stay on top of that?
And one of the other things you had mentioned earlier
was just horizontal scaling, of, you know, you need more
(13:43):
resources at nine a.m. Tools like the HPA, like KEDA,
are great for that, because they see requests are increasing: okay,
I'm going to give you more replicas, allow you to
scale horizontally. And that's awesome, because it'll always help you
keep up with traffic demand. But if you haven't
set your configurations correctly, you're just duplicating
waste across the board. And it's very good from a
(14:06):
reliability standpoint, but not very good from a waste management standpoint.
Speaker 4 (14:09):
So you kind of have to.
Speaker 5 (14:10):
Take the vertical approach alongside the horizontal.
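The "duplicating waste" point is easy to see with toy numbers: whatever gap exists between a pod's request and its real usage gets copied into every replica the autoscaler adds. All values below are invented.

```python
# Toy illustration: per-pod waste is multiplied by horizontal scaling.
request_m, usage_m = 1000, 300            # per-pod CPU, millicores (invented)
for replicas in (2, 10, 50):
    idle = (request_m - usage_m) * replicas
    print(f"{replicas:>2} replicas -> {idle} millicores reserved but idle")
```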
Speaker 1 (14:13):
I had a train of thought, and I guess I
got stuck on this a little bit there. There is
this aspect of, I know especially engineers in this area
tend to want to optimize everything, and, part of
that, there's a lot of products that come out. So
I can imagine that someone wrote, like, some sort of
AI-based product that, like, you deploy into your Kubernetes
cluster and it automatically tries to right size all of the
(14:36):
CPU usage and pod sizing, you know, whatever needs to
be allocated. Like, this seems like a genius idea, right?
Speaker 5 (14:43):
Are you describing StormForge, or did you look us up ahead
of time? So, we don't, everyone's like, oh,
I use AI.
Speaker 4 (14:52):
We use machine learning, thank you, it's math.
Speaker 5 (14:54):
It's, like, math that I can't do. It's not some fancy
AI. Like, we develop it in house. We
have PhDs that work on this stuff. It's, like, patent pending.
Depends on the audience, sometimes it's like, yeah, okay, AI, machine learning,
but, like, it's not the same. And I feel very
passionate about just making that differentiation, because, I mentioned, this
(15:15):
is a scale problem. When you're looking at the different
settings at that scale, then that just means it's a
really complicated math problem. And so we use the math
to help solve that problem. We do look at the
trends in the data, like usage patterns, HPA scaling patterns,
all of that, and then we pull the metrics straight
from Kubernetes.
Speaker 4 (15:33):
It goes into a Prometheus.
Speaker 5 (15:34):
Database, and then our algorithms look at those scaling behaviors
and come up with what those right configuration settings should be.
You could say, like, okay, cool, anyone can do
that and come up with the configuration settings. I think
what makes us unique is two things. One, the horizontal
scaling behavior we talked about before, where people usually use
the HPA, KEDA. Those are the most common tools when
(15:58):
it comes to Kubernetes scaling, and you have to work with
those tools. You can't work against them, because if you're
going to go in and vertically right size a container,
for example, and it's horizontally scaling, then the HPA is
gonna be like, oh, well, you're using more of your,
sorry, since you downsized, you're using more of your resources,
I need to go give you more replicas. And then
(16:20):
the VPA or any other vertical right sizing solution would
be like, well, okay, now you need more resources. So
you get into that churn cycle. And one of the
things we look at is your horizontal scaling behavior in
addition to your vertical right sizing requests, and then set
those together, so that you can continue using the tools
that you are using, that are open source, that, like,
(16:41):
I would never want to ask anyone to rip out,
and then also kind of reap the benefits of vertical
right sizing. And then the other piece is, we've been talking
about developers, and how to incentivize them, and input into
it is important for that confidence, to be able to
actually do this automatically.
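One way to picture "setting those together": the vertical request has to be chosen with the HPA's utilization target in mind, or the new, smaller request immediately trips the scale-out threshold. This is just the arithmetic of that interaction, not StormForge's actual algorithm, which isn't described on the show; the usage figure and the headroom factor are hypothetical.

```python
# If the HPA targets 70% utilization, a right-sized request must leave
# observed usage below that target, or the HPA will scale out at once.
p95_usage_m = 420            # observed per-pod P95 CPU, millicores (invented)
hpa_target = 0.70            # HPA target utilization (fraction of request)

request_m = p95_usage_m / (hpa_target * 0.9)   # keep usage safely under target
print(f"suggested CPU request: {request_m:.0f}m")   # ~667m, not a blind 2000m
```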
Speaker 1 (16:57):
You say that, and now I'm just wondering why this
isn't a solved problem. Like, why is it that, say,
the JVM or Kubernetes pods or even Docker containers, like,
why, even in serverless solutions, if I'm running on Vercel
or AWS Lambda functions, why am I specifying, like,
timeouts and CPU and memory usage? Like, why is it
just not automatically determined based off of, I don't know,
(17:19):
the last ten years and daily load, and how it
changes week by week, month by month, where holidays
are, based off geographic regions? Like, why not just take
all this stuff and just automatically figure out what the
scaling capacity needs to be? Like, I guarantee you,
when I was an engineer, I definitely could not have
figured out, like, how many requests I was going to
get a week from now, let alone a month from now,
(17:40):
or how that was going to change based off of
whatever marketing campaigns are being run, et cetera.
Speaker 5 (17:44):
Yeah. So, I mean, the VPA, that is open source,
comes with Kubernetes, allows you to do just basic, like, okay,
look at my past, I think, week of usage, and,
using P95, come up with what my request
should be. And for, like, basic deployments, that works, right?
Look at what I did in the last week, and
then use that and set my request for me. I
(18:05):
think some of the challenges are, the VPA, if it's
scaling on CPU or memory, doesn't work well with the
HPA, which I mentioned earlier. So, if you're
going to choose between something that's horizontally scaling you or
vertically scaling you, you're gonna go horizontal, because it's more
about reliability and how do you keep up with your
traffic patterns, versus, like, the VPA is more about how I reduce
the waste.
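The look-back-and-take-P95 idea she describes is simple to sketch. The real VPA recommender also uses decaying histograms and safety margins; this is only the core intuition, with invented samples.

```python
# VPA-style intuition: recommend roughly the 95th percentile of recent
# usage. Samples are invented per-minute CPU readings in millicores.
samples = [120, 150, 140, 900, 160, 155, 170, 165, 180, 850]

def p95(values):
    """Nearest-rank 95th percentile, no interpolation."""
    ordered = sorted(values)
    return ordered[max(0, round(0.95 * len(ordered)) - 1)]

print(f"recommended CPU request: {p95(samples)}m")
```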
Speaker 4 (18:25):
And then if you're an organization.
Speaker 5 (18:28):
That maybe has more cyclical patterns, things change over time,
you want to be looking at, like, a quarter's worth,
a year's worth of data. Where are you going to store
that data? How are you going to look into that data?
It all costs money. So then that's where kind of
vendors come into the mix. To be honest, if you
wanted to build something yourself, you could. But do you
want to be in the business of kind of building
(18:48):
systems that manage your own systems, or do you want
to be in the business of building applications that move
your business forward?
Speaker 4 (18:55):
So that's kind of where we see it.
Speaker 5 (18:57):
And sometimes I talk to companies that have built something
themselves and it works for them, and it's like, great,
if it works for you, it works.
Speaker 1 (19:04):
I'm gonna ask a really embarrassing question here, because I
am a self-proclaimed not-a-Kubernetes-expert. What are VPA
and HPA?
Speaker 5 (19:10):
HPA is the horizontal pod autoscaler. So the HPA looks at
a target utilization. It looks at your CPU, and then you
can set, like, a target utilization: say, if I'm using
more than seventy percent of CPU, I want more pods,
so give me more replicas, copies of my deployment,
and it'll scale for you horizontally, up as you
(19:30):
need more, and it'll
Speaker 4 (19:31):
Go back down as you need less.
Speaker 5 (19:33):
The VPA is the vertical pod autoscaler, and that
looks at resources in a vertical manner, of, like, okay,
you have X amount of CPU, so, like, you have
maybe one hundred millicores, you need two hundred millicores,
or you need fifty millicores. Does the same
for memory, both of them, of course, but that is
what allows you to scale vertically. These are both tools
that are open source, so anyone using Kubernetes can be
(19:57):
using them. Should one hundred percent be using the HPA.
There are almost never use cases where you shouldn't be using
the HPA, though there are some. But they're tools available
for folks, and almost everyone, I think, like, Datadog
did a survey, now three, four years ago, where over
half of their customers, or prospects, people that did the
(20:17):
survey, were using the HPA, and, like, one percent were using
the VPA. Since then, the number's grown, but it's still
majority HPA, because it's very easy to use. It makes
sense to be able to scale horizontally, and.
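For reference, the HPA behavior described here comes down to a single ratio, which is the scaling rule documented for the Kubernetes horizontal pod autoscaler; the example numbers below are invented.

```python
# Kubernetes HPA rule: desired = ceil(current * currentMetric / targetMetric).
from math import ceil

def desired_replicas(current: int, current_util: float, target_util: float) -> int:
    return ceil(current * current_util / target_util)

# 4 pods running at 90% CPU against a 70% target -> scale out to 6 replicas.
print(desired_replicas(4, 0.90, 0.70))
```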
Speaker 1 (20:30):
So one of the things that I think could be
interesting here is, you said basically the number one thing
that should be done is maybe time-based load management,
so reducing your capacity, scaling down, when you don't actually
need these resources. I think this resonates with a lot
of cases, especially not that long ago, and I'm sure
(20:52):
there are still some companies that are doing this.
Speaker 3 (20:54):
They have.
Speaker 1 (20:56):
Managed Actions runners for GitHub, or workspace runners for
GitLab, or whatever they're using for CI/CD, and
those aren't being used when no one is doing
software development. Obviously, in this more remote world, that's harder
to predict over time now, I think. But if that's
number one, and I think that's sort of the obvious one,
(21:17):
like, what's number two and number three of what people
could be doing, or you frequently see as problematic, or,
you know, high cost and could easily be cut out,
or maybe not too easily?
Speaker 5 (21:27):
Yeah. So we talked about AI earlier, and, just like,
you know, everyone's doing something AI, a lot of people
have a bunch of data jobs, like, they're using Spark,
and so it's fairly easy to deploy jobs on Spark.
And what I've seen is people will have a lot
of waste in, kind of similar to GitHub runners, like, it'll
(21:48):
go run, go do a bunch of stuff. How long
does it take to go run and go do that stuff?
How many resources does it take? Typically not an area
you're going to go optimize, because you're like, okay, well,
it lasts maybe a couple of minutes, if even, maybe it's
under sixty seconds. But when you start to do that
at scale, very similar to Spark jobs, that becomes a
challenge, because now you're running tens of thousands of these
(22:10):
data jobs that are very compute intensive when they're running,
and then they go away. And how often are people
pruning them, making sure these are jobs that are actually
still relevant to the application? What I've found is, when
it comes to AI-related things, people are less thinking
about how am I going to optimize this, more just, okay, spend,
(22:30):
spend, spend, I need to just run it. Like, GPUs
come up often too, and people will ask us, like, oh,
are you optimizing GPUs? And my next question will be,
would you actually optimize your GPUs? Like, do you have
any initiatives? And they're like, no, I just know it'll
be a problem in a few years. And I'm like, okay, yes,
but right now.
Speaker 4 (22:47):
But when people get R and.
Speaker 5 (22:49):
D budgets, it's just give me the fastest, most powerful
system that I can put together. They're not really thinking about, Okay,
is this actually cost effective?
Speaker 1 (22:57):
I think it's a really interesting path to go down,
because I know, historically, at least in my career before
I moved slightly outside of hands-on software engineering, the
joke amongst a lot of my leadership was, there's no
reason to optimize costs in any way, because the amount
that we're spending in the cloud is just completely dwarfed
by whatever else we're paying for, usually engineering
(23:20):
salaries, by, like, orders of magnitude. But as we get
closer to having companies with smaller size and more hardware requirements,
especially given the hardware requirements for running ML workloads or
data jobs, or specifically running models, open source or otherwise,
(23:40):
there is a huge increase in cost there per engineer, right?
And I think the mentality has actually gotten worse, right?
Like, it used to be much more conservative, like, which
tools we use, which cloud providers we'll use, you know, what we'll write.
And I feel like now it's gotten way more liberal,
like, no, go as fast as possible, pay as little
attention to this. Are you still seeing companies be like, okay, no,
(24:03):
we actually need to think about this methodically, like, we
know it's going to cost a lot to run our
models, or is it really just a VC race?
Speaker 4 (24:11):
It's definitely a race for sure.
Speaker 5 (24:12):
Like, every time I open up LinkedIn, I get these
posts about, like, these companies that had two employees or
ten employees and achieved this much. And I'm like, yeah,
I wonder what they're spending on the public cloud providers.
But we do get the question. So, it's rare that
people are like, I actually want to do something about it.
But, we talked, you know, we kicked this off talking
(24:33):
about FinOps and DevOps and bringing that together. The FinOps
teams are the ones that are asking the question. When
I was at FinOps X a month or two ago,
I got the question about GPU optimization often, of, like,
how do I manage the spend here? And the thing
I didn't want to say is, like, honestly, your AI/ML
(24:53):
team is just going to spend whatever they want, because
is anyone giving them that guardrail of, don't spend? But
it is great that the FinOps team is thinking about
it, because they know this is going to be a challenge.
And it starts with visibility, because they are building
the case of, hey, we should maybe put some guardrails.
Speaker 4 (25:10):
I'm not saying go.
Speaker 5 (25:12):
Tell the team they can't spend any money, but like
some guardrails, it's okay. Getting the visibility is the first
step to that.
Speaker 1 (25:19):
Is it just async background jobs run by quote-unquote
data teams that are, you know, trying to make the
business run, or, I think it's definitely.
Speaker 5 (25:27):
A mix. I'll even just, like, talk internally about CloudBolt.
So we talked a lot before about the machine learning
that we do, but we also do have, like, we're
working on a chatbot where you can ask questions about
your infrastructure. And I get this question often from
people, it's like, I don't want to put them into
the reports and have them figure it out, I just
want them to act. Now everyone has learned how to
(25:48):
interact with a chatbot, so we get our main practitioner
being like, I just want my end users to ask
a question of, what was the AWS bill for the
last month, and how did that map to what we
had put in our budget? And so we are actively
kind of developing based on a lot of
Speaker 4 (26:04):
The libraries that are out there and exists today.
Speaker 5 (26:06):
And so it is a mix of, okay, I need
the infrastructure to be able to deploy, and what I
do my training on is different than what I do
my inference on. So there is, like, I'd say, because
we have some history in building the tooling, we are
a little bit more cost conscious than other organizations would be.
But we use that to then say, okay, if I'm
(26:27):
an end user, what is the information I would need
to be.
Speaker 1 (26:29):
Able to look at. It sounds like it's not necessarily
FinOps, which needs to happen, now it's, maybe, MLOps
that is the primary focus of CloudBolt, to be a
much smarter version of the VPA and HPA. Or is
that, like, you know, a hyper focus in this area,
or are there, like, extended integrations?
Speaker 5 (26:46):
Yeah, so CloudBolt, there's a portfolio of products, and
the Kubernetes focus comes from the StormForge acquisition. That's, like,
kind of my background, that's probably why I talk about it
a little bit more, and that is Kubernetes right sizing,
so the ability to actually look at your traffic patterns and
use machine learning to come up with what your right
configuration settings are. And we're currently building that into the
(27:09):
CloudBolt platform, which is a whole slew of
things. And the CloudBolt platform pulls in billing
data from AWS, Azure, GCP, and it does all that
with the FOCUS spec. So the FinOps Foundation came up
with the FOCUS spec to basically be like, hey, everyone
needs to talk the same language.
Speaker 4 (27:26):
Like it is.
Speaker 5 (27:28):
It's hard enough to give people the right insights. Now
they have to know, what is storage
in AWS compared to Azure? And when you're talking to
FinOps folks, not all of them know all the technical terms.
So how can we abstract that? We rebuilt our data
platform to be FOCUS native, so we can automatically pull
that information and give you reporting and visibility across all
(27:49):
your public clouds, or anything private, like VMware, OpenStack, Nutanix,
all that, with that one language, one abstraction layer. And
then you can interact with the chatbot that we're still
working on. So I'd say it's in beta, it's not,
like, publicly available yet, but we are using it internally.
Speaker 4 (28:05):
It is pretty cool, and you can interact with your data there.
Speaker 5 (28:09):
You can do your cost allocation to say, okay, these
different departments, or, often, like, we have a lot of
MSP customers, and so it's like the cloud Ponzi scheme:
someone's buying cloud from someone, then selling it to someone
else, and they're selling it, and at every layer you need
to understand, what are the discounts that get applied,
what are the margins I want to add to this,
how do I make sure that third-tier customer is
(28:30):
only seeing what they should be paying for, not what
I'm actually paying for? Because it could get a little hairy.
Speaker 3 (28:35):
Let me ask a question: like, technically, how does this
actually work? Like you said, we're looking at, like, traffic patterns.
Like, how is it actually intercepting and reading those traffic patterns?
Speaker 5 (28:44):
Yeah, there's an agent that sits in the cluster. It
pulls the metrics from, like, cAdvisor, pulls, like, kube-state-metrics
type metrics. It's a couple pods that sit in the cluster,
not a DaemonSet or anything, and then it pulls that
information out and then ports it into our database.
And that's one source. So we also have
an agent that pulls metrics from, like, VMware, for example,
(29:06):
looks at that utilization data, and then, with the FOCUS
spec, we can pull overlay information from, like, your billing reports,
and then put that on top of, like, all
the utilization metrics.
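The agent's internals aren't detailed beyond that, but the kind of pull involved is easy to picture: a query for per-container CPU against a Prometheus endpoint that is already scraping cAdvisor. The URL, lookback window, and exact query here are assumptions for illustration, not CloudBolt's implementation.

```python
# Sketch: fetch a per-container P95 CPU series from a Prometheus server
# that scrapes cAdvisor metrics. Endpoint and window are hypothetical.
import requests

PROM = "http://prometheus.example:9090"
query = ("quantile_over_time(0.95, "
         "rate(container_cpu_usage_seconds_total[5m])[30d:1h])")
resp = requests.get(f"{PROM}/api/v1/query", params={"query": query}, timeout=30)
resp.raise_for_status()
for series in resp.json()["data"]["result"]:
    labels = series["metric"]
    print(labels.get("namespace"), labels.get("container"), series["value"][1])
```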
Speaker 2 (29:20):
Okay.
Speaker 3 (29:20):
I always get nervous when we talk about, like, adding agents.
So the agents are not sitting, like, within, like, a
current deployment or something like that? Because then it's like, okay,
now I have, like, more resource requests than I
need to account for, on top of, to solve this problem.
Speaker 4 (29:33):
Yeah, yeah, totally.
Speaker 5 (29:34):
That's, people are very sensitive to agents, the word,
I mean. So not.
Speaker 3 (29:41):
Agent agents, but agents living inside your clusters.
Speaker 2 (29:44):
Yea.
Speaker 5 (29:45):
And what the StormForge software actually does is, we
use our own software to right size the agent, so
that we can make sure that the agent is just
using the resources required for that cluster. And it's
two lightweight pods, at least in the Kubernetes space. When
people have agent-based tools, a lot of times it's
DaemonSets, and so they take up a lot of space
Speaker 4 (30:03):
On every node.
Speaker 5 (30:04):
And we've taken a deliberate approach to have a very
lightweight agent that is just two pods that scrape the
metrics and send them up.
Speaker 1 (30:12):
Is it a security concern, or, like, a capacity utilization
issue of the resources in the cluster?
Speaker 2 (30:17):
I've actually seen it as a capacity issue.
Speaker 3 (30:20):
So, I've seen people, like, deploy agents for various
things and suddenly have capacity issues, because, you know,
everybody's fighting for the same resources. But if these are,
like you're saying, small, separate deployments, I feel like that's
less of an issue.
Speaker 1 (30:34):
It sounds like you have some, like, previous trauma there.
Speaker 3 (30:38):
Trauma, it seems like.
Speaker 1 (30:42):
Well, is there something that we can shame specifically?
Speaker 3 (30:48):
And I actually probatuct.
Speaker 1 (30:51):
Well, let's just start like this: is there a
frequent concern with pulling open source, you know,
definitions into your cluster without really evaluating how
many resources they're going to consume, compared to something
that you're building yourself? And is that the source of most
of the problems? Or, you know, because these things aren't
necessarily built for, like, I always complain about
(31:13):
the security of open source, not libraries, but full services
that you deploy within your cloud environment or a cluster,
because they're not designed for scale and reliability. They're designed
to usually funnel you into their paid product or licensing,
so, you know, it's, like, always a concern from
that standpoint. So I can imagine an equivalent issue where
(31:35):
they're not really designed to have low impact from a
cost standpoint either.
Speaker 3 (31:40):
Yeah, I mean, I'll chime in, I guess, a little bit,
but I'm also curious what thoughts you have. But
at least just what I've seen is, it's less of
a security issue but more of a resource issue.
Speaker 5 (31:51):
I mean, on our end, so a previous version
of our same product did actually use a lot of
resources inside the cluster, and customers would be like, I'm
using your tool to reduce my waste, but I also
am wasting resources to use your tool. And so when
we kind of went through a re-architecture, that was
a very important piece for us, also because, I mean,
(32:12):
it speaks to our credibility. If we're supposed to be reducing resources,
then our own bits should also be using limited resources.
So that is why we take a very lightweight agent approach.
And it's actually not an open source agent. We've gone
back and forth of, like, should we release the code?
It's not like it's anything secret, it's just, it's always
been a private agent. But, you know, people can see
(32:34):
the Helm chart, they can see what resources we'll deploy
with, like, default requests, so obviously everything's capped. But
then people will use the software to then right size
it, to make sure it is literally just using the
minimal required resources, and it's sized to the cluster too.
So if you use a small cluster, you're not spending the
same amount of resources as if you had, really, like,
(32:55):
a large cluster. I have some PTSD from agents, because
I spent a lot of time at Puppet and there were a
lot of agent wars at the time, to the point
where I, like, hated the word, but.
Speaker 1 (33:06):
You know, of well, you know, I guess I could
say maybe I'm glad that you have some you know,
PTSD from working at Puppet, because I definitely have some
from using Puppet. But that was a long time ago,
and it's been quite some time since I've had virtual
machines even to run stuff on. Now I feel like most,
well I say it most. I've read some statistic like
(33:28):
still fifty percent of the world uses some PHP backed
builders for websites, So I mean, there's still a huge
amount of you know, people that aren't even using any
sort of infrastructure as code solutions. And I'm over here saying,
you know, isn't everyone pretty much using some icy and
deploying their their stuff directly and spinning up containers. Amy's
(33:48):
already shaking her head. Either mean she has a really
great story to share or a really great story she
doesn't want to share.
Speaker 2 (33:56):
I just I.
Speaker 3 (33:59):
Don't know. A lot of people will try to
do the right things, and then other people are just
gonna try to do the quickest seemingly working solution,
which is just click it and go. But I see it.
Speaker 2 (34:12):
I see it so often, it irks me. I mean.
Speaker 1 (34:16):
There is a huge aspect here where it's about aligning incentives.
So, I think most people, uh, most people in general,
have a logical desire to do something. They see some
initial conditions, and then they follow their path as
best as they can to come up with the action
they want to take. And part of that input is
whatever incentives they have to go forward. And sometimes those incentives,
(34:38):
you know, aren't aligned from one team member to another one,
or from one team to another one. Like, you give
certain incentives to your developers, your engineers, and then you
give different incentives to non-engineers, or release folks, or
engineering folks, product management, or, uh, we talked about having
financial operations folks, you know, have responsibility for actually reducing
(35:00):
the costs, but separate from actually making sure that
the products that you're building have a low footprint, right?
And so you immediately have a difference of incentive there. So,
you know, I think there is a lot that goes
into that sort of poor decision making process, where
you get suboptimal outcomes when your organization as a
whole doesn't have this included. And I don't remember the
(35:21):
last organization I was in, or the last company I
was talking to, that was like, yeah, you know, in
our objectives and key results, our OKRs, we actually have, like,
reduce the total cost associated with our infrastructure. Although I
feel like everyone's gonna be like, yeah, of course,
of course you need that, of course you have
to counter your new development. But I don't think a
lot of companies are doing that, are they?
Speaker 3 (35:41):
No.
Speaker 5 (35:41):
It's funny, when I ask people, I'm like, oh, like,
how important is, like, reducing costs? Like, it's a top priority.
I'm like, oh, okay, can you rank, like, your top
priorities? And it doesn't even make the top five, and
you just said reducing costs is important. Like, well, yes,
but we have the security thing we need to do, and
this new application we're trying to get out, and it's
always competing. That's why I think, like, any solution is
(36:02):
going to need to take minimal time and just work
inside a system, so you can be like, okay, this
is low hanging fruit, I can go tackle it. Even
if it's six on my list, it's still in the
top ten. But it's, like, it is really interesting: every
time I talk to an IT leader and they're like, yeah,
this is so important for us, to cut costs, and then
I'm like, okay, rank your priorities.
Speaker 4 (36:22):
Where does it actually sit.
Speaker 3 (36:23):
If you are a mature enough engineer, you kind of treat this
as, like, just your bread and butter, it's like part
of your job. So, like, yes, it helps to have
the incentive there. In my opinion, I feel like if
you're going into that role, like, this is not something
management should tell you, like, oh, we need to
reduce costs. Like, this is just, like, a given. Like,
as part of the reliability metrics that people measure,
(36:45):
like, the cost metric could be something that you just
go to your management team with on a quarterly basis and
be like, here's the cost.
Speaker 2 (36:52):
We reduced, or here's what we did. I like
that. You shouldn't have to be told to do it.
Speaker 1 (36:56):
If you're an experienced engineer, or even a senior engineer,
you may have your own internal metrics for, say, when
to pull in a public package or, you know, open
source solution. Like, oh yeah, I go to GitHub and
I look and I see there's fifteen thousand stars, that
means, yeah, sure, I'll use it, no problem. You know,
there are ratings there. And I feel like,
you know, as you ascend the sort of competency ladder
(37:17):
and you get to staff, staff-plus, principal engineer,
you need your own sort of internal metrics.
And I feel like you're increasing the scope of your
delivery at those higher levels, and there has to be
a trade-off, like, how do you make sure
you're doing the right thing? And I totally agree with
you, Amy, that part of it is, like, well, how
much am I spending to make sure that this technology
that we released to have a business impact is costing
(37:38):
us the right amount to actually have a positive ROI
for us, and not be negative?
Speaker 3 (37:43):
I would say, like, just, like, if you enjoy your
job and you're looking at your runway, like, this should
be something that is consistently top of mind. And, I
mean, through no fault of, like, you know, newer engineers,
this is just not something they think about. It's
something I definitely didn't think about my first two years.
I was just, like, oh my gosh, I need
to get this feature out.
Speaker 1 (38:01):
I think there is definitely a cliff function here, and
maybe either of you can correct me, but I feel
like there's a switch, where early on
in people's careers, they assume that the money that a company
is spending is their own money. Like, oh, this price
tag, like, you know, we get customers all the
time, like, oh, this is, like, going to be ten
thousand a month, that's too expensive. I'm like, how, like,
how much are you paying your engineering team to maintain
that solution that's not even working? And how much are
(38:24):
you paying your cloud provider to run that solution for you?
I assure you, it's, like, at least ten times how
much you'll be paying us. And then there's a switch later,
I feel like, where, you know, they're on one
of these two sides of extremes, right? Like, the money
doesn't matter, or the money matters too much. And I think,
just like everything, I think the number one answer staff-plus
engineers love to give is, you know, it depends, right?
(38:46):
You know, how much would we spend on our stack? Well,
you know, well, what are we running? You know, how
important is it? What's the reliability necessary? I do want
to ask you a question, Yasmin, though. Let's
say I am in one of those roles, and while
we all collectively agree that you should have the responsibility
and accountability for either running some sort of solution that
(39:07):
handles auto scaling, pulling in the appropriate SaaS products, or open
source solutions to help you, what if I'm not
given that flexibility? Like, what can, you know, on-the-ground
engineers do to help make a case to, say,
their leadership or their teams to have a more mature
perspective on how to approach the problem?
Speaker 5 (39:26):
The first question I usually ask people is, like, what
do you think your cluster utilization is? And they'll guess,
and usually they always guess higher than it actually is.
Then they'll go look into whatever monitoring system they have,
and they're like, oh wow, it's, like, twenty percent. And
I'm like, okay, what if you just doubled that? Right?
Like, I'm not saying go to eighty percent, I'm just
saying go from twenty to forty percent, and setting that
goal to be something that's achievable. Typically, I've found that
(39:49):
is something people can do on their own, without tools,
and just get an understanding of, how bad of a
problem do I have, and how much waste is there?
Because when you go to people and you're like, hey,
we could remove waste, then, it's the language, you're
removing something, there's a risk in that. But when you say,
what if you could improve your cluster utilization,
just double it from twenty to forty percent, you
(40:10):
still have sixty percent headroom, like, there's still a lot of
waste there, but you've doubled your impact. It's a pretty
good metric that you can feel good about.
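Her "guess, then measure, then double it" exercise comes down to one ratio you can pull from any monitoring system; the numbers below are invented to match her twenty percent example.

```python
# Cluster utilization: measured usage over what the nodes make allocatable.
node_allocatable_m = [4000, 4000, 8000]   # per-node allocatable CPU, millicores
node_usage_m = [900, 700, 1600]           # measured usage per node

utilization = sum(node_usage_m) / sum(node_allocatable_m)
print(f"cluster CPU utilization: {utilization:.0%}")   # -> 20%
# Doubling this to 40% means the same work on roughly half the nodes.
```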
Speaker 1 (40:19):
I do see this fear of, like, I don't want
to touch something I don't understand, because I don't want
to break it. Like, maybe it's twenty percent right now.
But I think the other thing that we hear frequently
on the show is, like, how, as an experienced engineer,
do you get to some sort of depth? I get
asked a lot, when I'm giving talks at conferences, like, how
did I become a security expert? I'm like, I have
no idea, I guess I just spent a lot of
(40:40):
time doing things. And I think this is one of
those areas, like, if you see the capacity, like,
utilization is really low, like, maybe ask the question, like,
well, why is it so low? Like, what should it be?
Like, you don't have to change anything, but maybe you
should be able to answer, like, well, if I wanted
to change it, what steps should I take? How should
I know what it should be?
Speaker 2 (40:56):
Like?
Speaker 1 (40:56):
What information should I pull out? I think it's really good.
Speaker 5 (41:00):
Way back when I was an engineer, I learned
everything by breaking things. Made sure I broke them in dev.
But, like, you don't learn until you try. And so,
as long as you have a safe space to try,
and you have a cluster in dev that doesn't matter
and it has really low utilization, you can practice different
ways to improve that, on a cluster that doesn't matter,
until you kind of hone your skills and go deploy
(41:21):
it to other environments.
Speaker 1 (41:22):
You can definitely say no comment here if you're not
comfortable answering this question. But, like, surely people were concerned
about the cost of the cloud before moving there, or
the cost of their, you know, container management solution, or, you
know, fleet of virtual machines that they were running
vSphere on, VMware. Like, weren't there, like, solutions? Like, weren't
people already talking about and doing this before?
Speaker 5 (41:43):
Yes, but there was an expectation that moving to cloud
would be cheaper, and then there's a reality that like
you have to put in work to actually make sure
the cloud is cheaper, and most people didn't put in
that work, so now they're paying for it.
Speaker 1 (41:55):
Do you think that's, like, a huge driver for the,
let's move off of cloud because it's too expensive? And
you're, like, the voice of reason here. It's like, well,
it doesn't have to be too expensive, it can be
the right amount of expensive, or even cheaper, if you
just think about what workloads you're running, what their
capacity utilization is, and how to scale effectively.
Speaker 5 (42:14):
Yeah. I think, like, everyone wants this ideal world where
they don't think about day two problems, but, doesn't matter
where you're deployed, you have to think about day two problems.
And the reality is, for some people, probably moving back
to the data center is the right thing, and that's okay.
Like, there is this, oh, you have to be on
the cloud to be cool, which, like, I think people
are starting to get, that you should be on the
cloud for the right reasons, and if you are there,
(42:38):
you need to be thinking about your day two challenges,
like making sure your configurations are set correctly, like, you're
reducing waste, you're doing the right sizing, you have policies
to go clean up unused backups, just, you know, basic things.
Speaker 1 (42:51):
I just want to totally disagree with you. I mean,
I think it's a very mature approach to say, you know,
there are reasons that you may want to go to
an on-prem data center, but, like, my justifications
for doing that are just drying up, one after another one.
Like, I think about just the depreciating assets for
on-prem. Like, it used to be the case, for sure,
that you could buy a
(43:12):
data center, or even run a data center and throw
your own server blades in, and those would be usable
for a decade or even twenty years. And now I
feel like, especially if you're doing something with cutting edge technology
and you want to do some data, you know, training, evaluation,
or build some models, or run the models and do inference, the
technology to maintain a
(43:33):
competitive advantage actually requires frequent hardware upgrades. I don't know.
I actually would love for someone to tell me, like, no,
actually, here is this area where it makes sense to
actually be on-prem. But the more simple, the more straightforward,
the more common what you're doing is, like everyone else,
the less of a reason there is. Like, it
really should be cheaper being in the cloud for me, I.
Speaker 5 (43:53):
Think for most startups that are also like deploying to
cloud native applications makes sense, you still got to think
about day two otherwise you're just going to spend like crazy.
But like you still have retail companies that have been
around for a while and have like oms systems that
are not built to run on a public cloud. So
it's you're forcing your shoehorning something that's just going to
(44:15):
cost you in the long run. And because I mentioned
our history of like being an infrastructure automation platform for
the last fifteen years, we see a flavor of every
type of infrastructure you can imagine, like things that are
still running in a data center somewhere on tapes, like
it still exists, and not everything makes sense to move
(44:36):
because if you're just going to lift and shift it,
then you will be paying more.
Speaker 1 (44:39):
It's interesting, because it was never, for me, that moving
to the cloud was actually a cost reduction. I mean,
I believe it's a reduced cost if you build the
right thing directly. But, like, my personal motivations were always,
like, high reliability, getting cutting edge features and support for
the infrastructure platform, whatever you're utilizing, and being able to, you know,
have quick development cycles, you know, be more
(45:01):
agile. And I feel like in the agile world, with
just-in-time technology, there is a cost
associated with that, so, like, I do see it
as paying for the advantage. I also like the mentality
of, like, well, it actually doesn't have to be more
expensive if you, you know, utilize what the cloud provider
is offering. It's interesting to hear that, you know, you
still engage with companies that are running on-prem,
(45:24):
and it just will, like, forever always be cheaper.
Speaker 5 (45:27):
That fifty percent metric honestly doesn't surprise me. It probably
would have before we got acquired, because I'd been living in
the Kubernetes world for so long. But the more and
more I talk to different customers and prospects, there's
still a mix, there's a healthy mix, of on-prem, and
even, honestly, within Kubernetes, people running Kubernetes on-prem. I.
Speaker 1 (45:47):
Used to trust them. I mean, so I'll say where
this is from. Like I think it was like every
year WordPress would come out with a number of like
the percent of the Internet that was running on WordPress,
and like, I never liked PHP, so I wasn't particularly
inclined to trust the numbers coming out. But what after
every you know, local recent scandal. I even trust those
numbers less. But whatever that number is, it's still higher
(46:08):
than I would be comfortable with, you know, as a
human society, that we're you know, overutilizing this. So there
are tons of uh, you know, non tech zero one
or two person companies that before the web flows and
levables of the world bubble eyos, there there was like
you needed to get your website up and running and
GeoCities and whatever Yahoo had, you know, just Google sites
(46:30):
you know, was not cutting it. So you did do
this and with a number of themes and plugins like
I do get it. But that means like they can't
utilize any of this, and the costs for those is
of course really high, but they have no value in
transitioning to you know whatever. Think of like your local hairdresser,
like they're not going online. I mean at this point though,
I'm sure they're using some SaaS product to manage all
(46:51):
their scheduling and LMS to actually do the hair dressing itself.
Speaker 4 (46:56):
That'd be great.
Speaker 5 (46:57):
Ask a chatbot, like, how should I cut my hair?
I guess you could. Never thought of it.
Speaker 1 (47:01):
Well, I just assume you're not too far out from
asking the chatbot, like, handing it the scissors and
telling it to do what it thinks is best.
Speaker 3 (47:09):
I actually saw a video the other day of
one of Musk's robots doing haircuts.
Speaker 1 (47:14):
Yeah, we're not going to go into that topic otherwise
I think I'm gonna actually have to cut it out
of the podcast episode.
Speaker 3 (47:23):
So I'm just stating the facts. Yeah.
Speaker 1 (47:27):
Well, you know, I think for some people the facts
are even up for debate.
Speaker 2 (47:30):
So, yes, it was a video.
Speaker 3 (47:32):
I'm like, did an AI just make this video, or is
it just, like, a realistic video?
Speaker 5 (47:36):
I just, I honestly can never tell. Every
video I watch, I'm like, is it real?
Speaker 4 (47:41):
I don't know.
Speaker 1 (47:42):
There was something, I'll say, that was really interesting,
totally unrelated. Uh, there's a British comedy show called QI,
Quite Interesting, and every season they would say facts, and,
by, like, the thirteenth season, what they had done
is they actually calculated the lifetime of the facts. So,
what they said earlier on became less correct the later
(48:02):
the seasons would go. So, like, for the previous season,
like, n minus one, eighty-six percent of
the facts were still correct, but for the first season,
by the thirteenth one, there was something like sixty
percent of the facts that were, like, just no longer accurate,
like, science had, like, whatever, we had decided differently now.
And I think the canonical one is, like, what, the
food pyramid? Like, what is the food pyramid supposed to be?
(48:22):
Every year there's a different change to it, and it's
never accurate. I think, before we
go too much down another tangent, maybe it's a
good time to switch over to picks. Mostly nods here. So, Amy,
what did you bring for us today?
Speaker 3 (48:37):
I debated on this because I don't want a lot of people to go out and buy it and then they're no longer able to fulfill the orders. But it's really good, so I feel like I should share. I'm a big health person, hence why I'm walking on the treadmill, and my pick is going to be a specific protein powder. So when I
(48:58):
was new on my protein powder journey, I didn't quite care. I was like, what does it taste like?
Speaker 2 (49:03):
And what's the price?
Speaker 3 (49:06):
So this stuff is a little pricey, but it's like super clean ingredients, and the brand is called Equip.
Speaker 2 (49:12):
The other good thing.
Speaker 3 (49:13):
They have a protein, and I feel really embarrassed, like maybe other people are not into this, but in case someone is, they also have a protein bar now. If you travel for work frequently, like if I have to go to a conference or something, I need something that is not going to make me feel like trash when I'm there, because sometimes the food, I don't know, it's not always the best. So anyways,
(49:37):
they have a protein bar that just came out. I had to order it like a month in advance. That's how special this protein bar is to me.
Speaker 2 (49:44):
And I just got it, and it's still like this.
Speaker 1 (49:50):
You know, you did Peloton last episode, so, you know, we're clearly on a health kick here.
Speaker 2 (49:55):
I guess I just want to think of something else
next week.
Speaker 1 (49:57):
You don't have to. You can definitely bring healthy lifestyle picks every single week. I don't live a healthy lifestyle. I think, you know, especially for those of us who sit at our computers, like literally sit at our computers all day long, I think there's something good to be said for it.
Speaker 3 (50:11):
Actually, here's something practical about it. It's not just healthy, but a lot of people are trying to transition. I hear a lot of my coworkers say, I want to get healthier, and they switch to a protein powder. A lot of protein powder is whey-based, and that can upset people's stomachs. So this one is actually made from beef protein, which, initially I was like, this sounds disgusting, but it actually tastes really good.
Speaker 1 (50:31):
That's a pretty good sell there. Okay, Yasmin, what'd you bring for us?
Speaker 5 (50:36):
So initially I think I had more of a fun
fact that I learned this week than a pick. But
since we're on a health pick topic, I've been having
bone broth recently. A friend got me into it, and
I've been drinking like beef bone broth, chicken bone broth.
There's one on Amazon that I buy by like nineteen
(50:58):
ninety snacks. It's like an orange container. I'm sure they're all good, but it's actually really good instead of a tea, or just as a lunch snack. If I'm like, oh, I don't have time, I'll just heat it up. And I think maybe it was a thing that everybody else knew about that I didn't know about. But yeah, I've been really getting into that lately.
Speaker 2 (51:17):
They're also good for travel.
Speaker 1 (51:18):
I've never heard of it, and I don't think I'll ever have it, so.
Speaker 3 (51:24):
You should try it.
Speaker 4 (51:25):
You should just try it.
Speaker 3 (51:25):
Yeah, really? Okay, here's an idea to use it too. Like, if you don't want to try to drink it, if you ever have some sort of pasta, you can cook your pasta in it.
Speaker 1 (51:33):
So absolutely, go to the store, or actually, a lot of butchers will throw away bones. You can go in and just ask them for the bones. And you don't need to buy beef bouillon or chicken bouillon at the store, or vegetable stock. Honestly, just boil the bones and some vegetables, I usually use carrots or celery and a bay leaf, in a pot for like an hour or two,
(51:54):
and then use the liquid for whatever else you're making.
And it's just so much better.
Speaker 2 (51:58):
I like your idea better.
Speaker 5 (52:00):
Just try drinking it. Just try it, even if it's.
Speaker 1 (52:04):
I don't know. I find, for me, that the best broths are usually when the meat is a little bit fatty. So if you use like chicken thighs or wings instead of breasts, and, uh, pork or beef bones that don't have that much meat but have a lot of flavor. I still think it's too fatty for me. I don't like the sensation of having a fatty drink like
(52:27):
that, but the idea is warming up on me. But it's a lot of extra effort to go out and do this. But I suppose if, you know, you live close, or you buy whole animals to cut up and consume, then you probably have a lot of extra bones. And if you're not utilizing them to also make drinks, I guess here's your opportunity. Okay, so I actually wasn't
(52:49):
sure what I wanted my pick today to be, so I went and grabbed this article called Lions and Dolphins Cannot Make Babies. And it sounds pretty obvious on the surface, I mean, obviously they're in different families in the taxonomy of animals. But the article is about, basically,
(53:11):
how we got to this point in technology required a symbiotic relationship between whatever hardware we're utilizing and the software that we're building. And I think this is true in a lot of things. In evolution it's known as the Red Queen's race. So you think of, like, bacteria and viruses competing for resources, or any sort of
(53:31):
parasitic organism and its host evolving. And it goes into a little bit about how we probably will not have any more innovation in AI unless we really take a huge, giant leap in some of the hardware technology available to us. We can't just build better software and fix some of the problems. I mean, right now, I think
(53:53):
one of the limitations we have is that everything built in the last six years was based off of the paper written by Google on the transformer architecture. And like, sure, there are some shortcomings, like it can't figure out patterns or do math, but you can extract some characters, you can ask it, like, where is the math, and then regex that out and pass it to any sort of solver engine and get the answer. So, you know, you
(54:14):
start to solve some of those problems. But fundamentally, to get to the next level, we'll need something different, and it's not just writing new software. I think the idea here is fundamentally that we're stuck until we actually understand that we may need a different kind of hardware to power this. I don't know if quantum computing is sufficient for running this, but really just something that's slightly different
(54:35):
from our current computational engines.
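For anyone curious what that hand-off looks like in practice, here is a minimal sketch of the idea in Python: spot arithmetic in a query with a regex, hand it to a symbolic solver (SymPy here) instead of letting the model guess, and fall back to the model for everything else. The detection heuristic is a deliberate toy, and answer_with_llm is a hypothetical stand-in rather than any particular API.

```python
import re
import sympy

def answer_with_llm(query: str) -> str:
    # Hypothetical stand-in for whatever chat-model call you'd actually make.
    raise NotImplementedError("plug your model call in here")

def looks_like_math(query: str) -> bool:
    # Toy heuristic: two digits separated by an arithmetic operator.
    return bool(re.search(r"\d\s*[-+*/^]\s*\d", query))

def route(query: str) -> str:
    if looks_like_math(query):
        # Pull out the arithmetic expression and let a symbolic engine
        # evaluate it, rather than asking the language model to do math.
        expr = re.search(r"\d[\d\s()+\-*/^.]*", query).group()
        return str(sympy.sympify(expr.replace("^", "**")))
    return answer_with_llm(query)

print(route("what is 12 * (3 + 4)?"))  # -> 84
```

The design point is the one made in the episode: the model only has to locate the math, and a dedicated engine does the actual computation.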
Speaker 4 (54:36):
I'd love to read that.
Speaker 1 (54:37):
It's not very long, so I think I may be overselling this, but the link will be in the show notes for the podcast episode. Thank you so much to our kind guest for showing up today and telling us all about FinOps. So thank you for coming, it's been fantastic. And thank you to all of our listeners for another great episode of Adventures in DevOps, and I hope you're all back for the next one.