Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:07):
And we're live, so welcome back everyone to another episode
of Adventures in DevOps. This week it's Nathan Goulding, current
SVP of Engineering at Vultr, with a long background in
product and engineering architecture.
Speaker 2 (00:21):
Thank you, Warren. It's great to be here.
Speaker 1 (00:24):
Yeah, I have to say I'm pretty excited, because
one of the things I saw about Vultr, and I
hope you can give us a little bit more of
the rundown here, is that it's all about high-performance
compute in the cloud. I mean, when I think about
what your company is doing, it's infrastructure on the
(00:44):
back end, bare metal, et cetera. And that, for me,
is a whole area of technology that I've just really
never gotten into. So maybe you can tell me a
little bit more, at a high level, about what you've
been doing and how you got into it.
Speaker 2 (00:53):
Absolutely. So maybe starting with how I got into it:
I actually started in the infrastructure space back in 2001,
when I ran a very small game server hosting business
called East Coast Gaming Network. Very small; I was just
out of high school. And one of the key features,
when you talk about high performance, is that there are
(01:15):
many dimensions to it. Through some lenses you think about
high clock speeds, you think about large amounts of memory,
you think about the latest and greatest GPUs. Another dimension
is latency, network latency, and in the gaming industry
specifically, low latency is incredibly important for
(01:35):
the gamers. At Vultr we host the entire Call of
Duty series, and have for a long time now. The
latency the end player experiences is incredibly important. "No lag,
no frag" is the thing that gets said in the
industry, because lag is the ultimate
(01:56):
killer of game performance. So that's how I got my
start, delivering game servers, where network latency is incredibly
important. At Vultr we have a strong background, multiple decades
of operating cloud infrastructure with high-performance end users at
(02:18):
the end of it. So it started with network latency,
hosting game servers globally for, like I said, the entire
Call of Duty series, and that's really evolved into offering
a wide range of high-performance options. The thing that we
really lean on is what we call price to performance,
or price and performance. One of the things that
(02:38):
we really lean on, being an independent cloud provider, is
delivering better price and better performance when you compare us
with some of the other cloud providers that are out
there. Being able to do that means that at every
level of the stack, whether that's compute, storage, networking, or
latency, we're looking at the
(03:01):
things we can do to deliver a better-performing product than
some of the other providers out there. Because we understand
that we are in a competitive landscape, and as an
independent cloud provider we have to go above and beyond
in many different areas to demonstrate that value prop to
our customers. That's what we're focused on.
Speaker 1 (03:20):
I mean, that's actually really interesting, because I've never been
in software game development, and I never wanted to be.
But there are lots of questions there. Like you mentioned
Call of Duty, that is like the premiere first-person shooter
out there, where absolutely latency is a huge aspect of
having it work effectively.
Speaker 2 (03:41):
Yeah, absolutely, it really is. And that's something where your
loudest and most unforgiving customers are typically the ones who
are in the middle of a game and expect one
hundred percent uptime with no latency at all. That really
is the proving ground of being
(04:03):
able to deliver cloud infrastructure in a really robust and
reliable way. These are not academic or theoretical problems that
we're solving; these are very practical, pragmatic problems for
customers who are very demanding.
Speaker 1 (04:18):
I always thought the triple-A studios just ran a terrible
data center in their own warehouse and made lots of
mistakes. So I guess it's good to know that there
are actually providers out there that are dedicated to solving
some of the ridiculous problems of server management, and having
dedicated servers for individual groups of gamers at scale.
Speaker 2 (04:38):
Yeah, for sure, and that's really one aspect of what
we do as an infrastructure provider. We are a full-fledged
cloud platform, and when you think of Vultr, a modern
hyperscaler is what we like to refer to ourselves as.
We do have bare metal, which is the
(04:58):
lowest level of cloud infrastructure that you can get. There's
no hypervisor; you're just getting access to the raw metal.
There are folks who value that level of performance and
single tenancy, and sometimes there are compliance requirements that
come along with that. That's an area where we can
deliver it in a cloud delivery model.
(05:20):
And with the advent of a lot of these much
higher-level services, and this is not a new evolution, this
is over the last few years, people kind of forget
about the server. Especially when serverless became popular as a
thing. Gosh, was that eight or nine years ago that
serverless became a thing? As
(05:43):
a cloud infrastructure provider, you chuckle a little bit, because
it still runs on a server. There's still a server
there that runs that code; it's just that there are
so many layers of abstraction above it that you kind
of forget about it, or you don't have to think
about it. And that's one of the unique challenges we
face. I lead the engineering team, and we're hiring
engineers to build the cloud. So one
(06:04):
of the challenges we face is that there are born-in-the-cloud
developers who've never been in a data center before. They've
never accessed the lower levels of infrastructure. So how do
you find somebody? It's a chicken-and-egg problem: people assume that
there is a cloud that will deploy their application, but
the problem set we're trying to tackle is how do
you actually build the cloud? That's something that is unique,
and something that
(06:25):
you don't find in a lot of areas.
Speaker 1 (06:27):
I mean, it used to be the challenge of every
company to requisition capital expenditures for a warehouse location,
build up data centers, buy server blades, and hire ops
folks to run them. Over time, we've seen companies
specializing in every part of the stack, and all of
those experts have migrated to the cloud providers: hyperscalers and
(06:51):
local, country-based cloud options for managing that. So if
you're interested in that, those jobs do exist; they're just
at companies like yours.
Speaker 2 (07:02):
Exactly, that's exactly right. It's about being able to find
those individuals. But at the same time, what we're delivering
is a service that allows other developers and platform engineering
teams to consume infrastructure in a really, really seamless way.
That is the bridge: the bridge between the underlying physical
infrastructure and providing tools that
(07:27):
DevOps and platform engineering teams can consume in very native
ways. And it's a really interesting challenge to do that,
to, again, abstract away the complexity and the difficulty and
deliver on the promise of the cloud, which is infinite
amounts of infrastructure, very cheaply and very quickly,
immediately. You need to be
(07:48):
able to scale infinitely and you need to do it
right now. So it's no small challenge.
Speaker 1 (07:54):
I mean, I like that you brought up serverless as
an interesting comparison here. Do you find that your customers
also have the challenge of dealing with burst-based loads, which
usually lend themselves to serverless solutions? Or is it that
a lot of them understand what their expectations are for
volume of requests over time?
Speaker 2 (08:13):
Yeah, that's a really, really interesting question. I've seen
customers fall into two primary categories with respect to how
they think about cloud infrastructure. On the one end, you
have folks who understand, and maybe have some background in,
delivering infrastructure at some level. Maybe they were
(08:35):
a platform engineer, or an SRE, or part of a
DevOps team, and they understand the pieces that compose it.
So when they think about scaling, they think about many
of the challenges that go along with it: you can
stand something up very quickly once, but when you actually
try to scale it, you
(08:57):
run into things like, again, latency, maybe in a different
context here. How long is it going to take for
that function to execute? Where is that function going to
be running? Is it going to be in market, or
in a central location? Where are the end users? The
questions that get asked are typically different.
And then there's a class of user who really doesn't
(09:19):
think about that, doesn't want to think about that, and
is just unapologetic about not thinking about that. They say:
that is not my problem. My problem is delivering the
application. My problem is servicing my end user. Everything with
respect to how it gets deployed, where it gets deployed,
and how much of it gets deployed is your problem,
either the cloud infrastructure provider's or that of some SaaS
or PaaS service that runs on top of it.
(09:41):
And they're very, very unapologetic about it: my job is
not to think about the scaling concerns; that's someone else's.
Speaker 1 (09:48):
Do you get a lot of customers fixated on
regional services? Is there a fair set that wants
localization where their potential users are, like the gaming
ones, where you're connecting to other players in a local
area to reduce cross-player ping and increase FPS? And are
there global
(10:09):
customers as well, who don't want to think about where
their users are but still have to connect, and you're
solving for problems like that in that space as well?
Speaker 2 (10:17):
Yeah, I would say it typically falls into two categories.
There are those that are location-sensitive, that say it needs
to be here. That could be for many different reasons.
It could be because they're only serving customers in a
(10:38):
specific market and that's going to be the best performance
for them, or there are regulatory concerns, so for data
governance it has to be in the EU, or it
has to be in India, or some other location where
there are specific regulatory requirements that it be in market.
On the flip side, you have a global software or
application that needs to serve end users in a specific
(10:58):
area, where you're optimizing for the lowest possible latency
in that market. In that case, the entity might be
wherever they happen to be in the world, but they
need to make sure they're deploying into as many markets
as they can. And it's not just gaming. We get
a ton of other customers in the security space, in
the network telemetry space, where they'll take one, two,
(11:22):
maybe a handful of nodes in every single data center
that we have. We have thirty-two regions, thirty-two individual
data centers around the world, and they'll take one node
or a handful of nodes in every single one of
those locations, because they're doing, let's say, eyeball network
monitoring. They'll send out probes on the network, and they'll
do it with us, and
(11:42):
they'll do it with a bunch of other cloud providers.
They might have hundreds or thousands of individual systems running
in their global platform. As, let's say, a network latency,
performance, uptime, or health monitoring service, they might deploy
thousands of probes around the globe, across our entire footprint
(12:03):
but also across many other cloud providers' footprints, specifically
for the end purpose of monitoring the network. So in
that situation, they're optimizing for something that's very specific
to their application, something unique to them.
And similarly, if they're running some IoT edge network,
(12:25):
say for an automotive company that has connected cars, that's
again a common use case where they might have something
deployed into specific markets to reach the cars in that
specific market with the lowest possible latency. And it's not
self-driving. The self-driving stuff is never going to be off
the device;
(12:45):
that's going to stay on the vehicle. Anything else would
be catastrophic. But for other pieces of the connected car,
where you do have processing off the device, you're going
to want to have that in the region the cars
are operating in.
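The probe pattern described here, a few nodes per region repeatedly timing connections to targets and summarizing the results, can be sketched roughly as follows. This is an illustrative sketch, not Vultr's or any monitoring vendor's actual tooling; a local listener stands in for a remote regional endpoint.

```python
import socket
import statistics
import threading
import time


def tcp_connect_latency_ms(host: str, port: int, timeout: float = 2.0) -> float:
    """Time a single TCP connect round-trip, in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0


def probe(host: str, port: int, samples: int = 5) -> dict:
    """Collect several samples and summarize them, as a monitoring probe might."""
    latencies = sorted(tcp_connect_latency_ms(host, port) for _ in range(samples))
    return {
        "min_ms": latencies[0],
        "median_ms": statistics.median(latencies),
        "max_ms": latencies[-1],
    }


if __name__ == "__main__":
    # Stand-in for a remote region endpoint: a local listener on an ephemeral port.
    server = socket.socket()
    server.bind(("127.0.0.1", 0))
    server.listen(16)
    host, port = server.getsockname()

    def accept_loop():
        # Accept and immediately close connections so probes complete.
        while True:
            try:
                conn, _ = server.accept()
                conn.close()
            except OSError:
                break

    threading.Thread(target=accept_loop, daemon=True).start()
    print(probe(host, port))
    server.close()
```

A real deployment would run this against many targets from each region on a schedule and ship the summaries to a central store; the point here is only the measure-and-summarize loop.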
Speaker 1 (12:57):
We're jumping into the controversial topics already, but before I
get into that one, I'm really interested to know how
you figure out where to place your edge locations. Do
you have different types of data centers, or are all
of them pretty much equivalent as far as the capabilities
they offer? I can imagine the size, and maybe the
individual resources, may be different depending
(13:19):
on actual need and where the customers are, but are
they fundamentally treated the same? If we look at AWS
or GCP, they don't offer every service in every region,
and they also don't offer a data center in every
country. So you may be in a place where you
have users who want to utilize Vultr to serve
(13:39):
their customers, and there may not be any localized resources
there. How do you figure out where to put the
edge nodes, and what do those actually look like?
Speaker 2 (13:47):
Yeah, that's a really fantastic question, because edge means
different things to different people. When edge computing first
became a thing a few years ago, or several years
ago, folks thought there was going to be a data
center at the bottom of every cell tower. That was
kind of a thing for
(14:07):
a minute, kind of pushing the boundaries of edge. Because
if you look at the evolution of the data center
footprint over time, the major metros were the hubs for
data centers. The reason for that was density: you had
all the major financial markets, all the major technology
companies, in those regions, and people wanted data centers
that
(14:28):
were there too. So New York, the Bay Area, Ashburn,
Dallas, Chicago: major metros where, if you look at the
US, you see the density of data centers. Then operators
started to branch out into regional markets, places that were
still major metropolitan areas but not
(14:49):
Tier 1 cities in the US. And then it kind
of skipped over to rapid expansion: now we're going to
put one at the far edge, at the bottom of
every cell tower. If you look at the cost of
capital to deploy at the bottom of every cell tower,
as an example, it just doesn't make sense. Say I'm
(15:10):
streaming a YouTube video in the car, and I'm passing
one cell tower; I'm only connected to that tower for
two to three minutes as I'm driving down the highway.
The amount of data that I'm processing over that network
is just never going to recoup the cost of building
the data center and installing servers there. And having
over-the-network delivery of things like self-driving cars is
(15:31):
just a safety disaster. So you're looking at how much
data actually needs to get processed at the far edge,
and it's just not that much. But then enter GPUs,
and it's like, okay, wait a minute. Now there's a
bunch of edge data centers which aren't at the base
of cell
(15:53):
towers, but are edge data centers insofar as they might
be in the middle of a wind farm in South
Dakota, and it's like, well, what are they doing there?
They might be mining crypto, as an example, where there's
access to very cheap power. But you're going to have
concerns in terms of redundancy. Optimizing for cost is
different from optimizing
(16:13):
for redundancy and resiliency, which is what you're going to
get if you're operating a data center that's delivering
services to the financial services industry. It's just a
different tier of service. So you have a bunch of
data centers that pop up in a different kind of
far edge, out there as well. We don't have any
of those. We are focused on major markets. We do
have thirty-two locations,
(16:34):
and they're in all the major markets around the world.
The things that we look at are the common things,
like population density, and obviously customer demand. If somebody
needs us to show up in a specific location for
a specific reason, we'll do that. We're really focused on
being able to deliver services to places where there's
(16:55):
a need. Our data centers are not, again, we didn't
grow up having data centers in the middle of nowhere
that we're then trying to retrofit with backup generators and
better network performance because they suddenly have a different
value-add in terms of
(17:15):
the industry they're going after. These are Tier III-plus data
centers, which is one of the highest ratings; there are
four tiers, and Tier III-plus is one of the best.
So yes, these are high-availability facilities. I've lost count of
the number of nines they offer.
(17:35):
But these are the major, name-brand data center providers that
we work with, because our class of customers, enterprise
customers, expect a certain thing in terms of the uptime
of these services
(17:55):
and the network they're going to get there.
Speaker 1 (17:57):
So are you seeing that your customers, and potentially this
outer market segment of customers that you're not necessarily
geared to right now, are leaning toward more of a
hybrid model over time? Do they bring their own local
data center, or an already-built server rack, and connect that
up
(18:17):
to your cloud? Or are you offering something to build
that connection for those customers who may not have data
center expertise? I'm thinking, in AWS there's a deployable
product, Outposts, for instance, for running their tech on
premises. Is there some corollary that you've got, or that
you see customers actually interested in?
Speaker 2 (18:31):
Yeah, that's another really, really great question. AWS Outposts,
I think, has seen somewhat limited uptake, because if you
have an on-prem data center... I think that when cloud
went through its first wave of explosive growth, people
assumed that everything was going to go to cloud and
(18:54):
that on-prem was going to go to zero. The reality
is that when you're flipping capex to opex, so instead
of buying servers you're paying a monthly bill, you have
a depreciation schedule for your capex, and from a financial
perspective you recognize, if you're going
(19:15):
to be in the cloud, and at this point everybody
understands they're going to need infrastructure in some fashion,
that if you're going to be in the business of
consuming infrastructure in some form, either delivered on prem
through your platform team combined with your on-site team, or
consumed through the cloud, you're going to need it over
a long period of time.
(19:37):
And those cost curves cross at about the three-year mark
for most people in terms of TCO. So if somebody's
saying, hey, we're going to need this infrastructure, we're
planning to be in business for the next two, three,
or four decades, then they're going to say it's going
to be way more cost-effective for us to own this
gear,
(19:59):
understanding that the lines cross at about three years or
so, but the actual useful life of those systems is
probably somewhere between six and eight years. So a CFO
is going to look at that and say, hey, we
should be doing that, putting servers in data
(20:20):
centers, because it's going to be more cost-effective for
that asset. But there are other challenges that come along
with that, because now you have to have a team.
You either have the team already or you need to
hire one, and you need to do a lot of
things that people don't generally do as part of their
business. I mean, when was the last time you were
inside a data center?
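The three-year crossover Nathan describes can be illustrated with a toy cost model: cloud cost accrues linearly, while on-prem pays a large amount up front plus a smaller monthly run cost. All figures below are invented for illustration; a real TCO comparison would also include power, staffing, networking, and hardware refresh cycles.

```python
def cumulative_cloud_cost(monthly_opex: float, months: int) -> float:
    """Cloud: pay-as-you-go, so cost accrues linearly with time."""
    return monthly_opex * months


def cumulative_onprem_cost(capex: float, monthly_run_cost: float, months: int) -> float:
    """On-prem: large upfront purchase plus a smaller monthly run cost."""
    return capex + monthly_run_cost * months


def crossover_month(monthly_opex: float, capex: float, monthly_run_cost: float):
    """First month (within ten years) where owning beats renting; None if never."""
    for month in range(1, 12 * 10 + 1):
        if cumulative_onprem_cost(capex, monthly_run_cost, month) < cumulative_cloud_cost(monthly_opex, month):
            return month
    return None


# Invented example: $10k/month in the cloud vs. $250k of gear
# that costs $3k/month to run.
print(crossover_month(10_000, 250_000, 3_000))  # → 36, i.e. three years
```

If the monthly run cost approaches the cloud bill, owning never wins, which is one reason the "everything goes on prem" conclusion only holds for workloads with predictable, long-lived demand.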
Speaker 1 (20:38):
Ever? I was lucky early on in my career, and
I managed to get into one, because the company I
worked for offered global manufacturing services, and they would
put a data center within each of the manufacturing plants.
There are a lot of stories that go on there,
ones like, you probably don't expect to have to deal
(20:59):
with a flood, or a fire, or both at the
same time, very close to where your data center is,
and that being a critical problem. So just the reliability
of that thing is another challenge. Hiring is obviously one
of the issues, but the list goes on and on.
For instance, correct projections, right? How many of us,
(21:20):
six years ago, knew what was going to be so
different around the world, let alone whether any sort of
LLM or AI, yeah, I brought it up, and we're
only at the twenty-minute mark, would be good, accurate, or
useful? I do feel like it's impacting a lot of
companies, and that's something that I can guarantee you very
few financial experts put into any of their market
(21:45):
predictions for their companies. So yeah, if you have a
perfect understanding of how much money you're going to spend,
what your accounts receivable is going to be going forward,
your revenue, and where you can capitalize on that, then
go for it, make those purchases. But I guarantee you
no one is that good. And if you are that
good, just quit your job and go put some money
in the stock market right now.
(22:08):
And as I say that, I'm thinking the correct move
is probably shorting the market by some amount; that's
probably how a bunch of people are going to become
rich in the next couple of years.
Speaker 2 (22:17):
Yeah, I don't know, I've never had too much luck
with that. It just feels like gambling to me. I
feel like the moment I decide I'm going to make
a bet, like, oh, I'm going to short sell, that
will be the moment that it's the bottom of the
market. I'm going to stick with delivering infrastructure.
Speaker 1 (22:32):
I think that's incredibly wise and the right thing to
do. There are so many analysts out there, and statistically,
this is all they do. My brother was telling me,
when he was working for a mutual fund, that literally
there's one person at that company whose entire job is
to focus on five specific companies. That's it. That's all
they know. So there's
(22:54):
no way you're going to have more information, or be
more of an expert, and be able to make better
decisions than them. You're pretty much betting against everyone
else who also has no idea. So maybe you're luckier
than that; if luck is on your side, definitely go
for it. But for amateur investors, definitely some sort of
market index fund. If you're feeling risky, and you want
to say,
(23:14):
you know what, I want to either have nothing or
everything, then you take the gamble, like you said.
Speaker 2 (23:20):
Exactly, exactly. Yeah, I'll never make my own bets there,
because it truly feels like a bet. The way I
approach it is, I only bet money that I'm willing
to lose, which is very little. I'd prefer to do
things that I can control and have some
(23:41):
knowledge of. But yeah, getting back to your original question
about hybrid models and how customers are viewing that: that's
an area where, again putting customers into two categories,
there are folks who are cloud-only and there are
(24:03):
those that are hybrid, and we see both. I think
that having a strategy that accounts for both of those
is really important. It would be a mistake to think
that everything is ultimately going to end up in the
cloud, because that has not been borne out, and I
don't think it will be. But cloud is an important
strategy and an important thing for a lot of
(24:26):
customers, even those who are traditionally on prem. I've
worked at places that have a hybrid strategy because that's
what they need. They need cloud for the burstability; they
need it for instant access, because they can't make a
prediction of their capacity. They need a hybrid strategy,
but they simply either can't or don't want to
(24:47):
be fully in the cloud. So having a technical strategy
that accounts for that is, I think, really important.
Speaker 1 (24:54):
Yeah, I'm totally with you. My advice has always been:
make the mistake of going to the cloud, and then
ten years later, when you're like, oh, it would have
been better if we hadn't, you can start figuring out
exactly how you want to move some of your stuff
back on prem. Unless you have a lot of old
hardware sitting around and you're just like, oh, I want
to throw a bunch of extra money at maintenance. Which
is the thing that I thought about when you mentioned
having data
(25:16):
centers at the bottom of every cell tower. It's not
even just having to spend the money to go and
build them there, or the complexity of building an
application that handles lossy packets and dropped connections as
you switch between towers. I mean, think about how reliable
GPS and cellular networks are; it's very difficult to get
your position in some places in the world just
(25:37):
due to jamming and whatnot. And now you have to
deal with unreliable technology that's going to break down.
Just having one of those at every single cell tower,
that's a lot of extra maintenance you're taking on, more
so than, how do you even send a person there,
in a truck with the right equipment, to go and
investigate and then fix the thing? A lot of those
towers are in places that are very difficult to reach.
(25:58):
There's pretty much just one cable going to the tower,
and no one ever goes to that tower, ever again.
Speaker 2 (26:02):
Again exactly that that's very true. And uh and again
like you also, you know these are running on you know,
like if there's you know, if you lose utility power,
you know, you're operating on a generator and you know,
I I do have a UPS power bank you know
for my home internet to ensure that it's up all
the time, but it lasts about eight minutes. And you know,
(26:23):
and and for the cell tower, you know, with the
very low power, you know, the the things that are
running there typically you know, their arm chips for low
power consumption, the switching like all of that is you know,
it's all low power devices, even the you know, the signal,
I mean they have to broadcast the signal. But you've
got maybe you know, somewhere between eight and twenty four
(26:43):
hours of redundancy of the current cellular infrastructure that's there.
Then you start tacking on you know, kilo watts of
of server compute and and you know, you have much
different challenges if if you lose utility power or if
something if something goes wrong. So it's it's a yeah, no,
that that was you know when that happened, it was
you know, the assumption was and I think the other
aspect of that, going again, going back to the high performance,
(27:05):
low latency: what is the use case that requires that low a level of
latency? Because we operate in thirty-two data centers around the
world. It's thirty-two, and there are tens of thousands, hundreds of
thousands of cell towers. What do you actually need that for? We
already operate our platform in those thirty-two, and we are able to
reach, I
(27:27):
think it's ninety-plus percent of the world's population under forty
milliseconds. So what's the use case that necessitates dropping from
forty milliseconds down to, you know, two, or one? There are some.
Again, financial services, where you're not dealing in milliseconds,
you're dealing in a unit much smaller than that. But that's in a very
specific location. The global
(27:50):
application use case for insanely low latency has yet to be borne
out. And I think for a minute there it was, well, self-driving cars,
of course. But pushing that over the network, over the EM spectrum,
with jamming and with everything else that's going on, is just a
safety disaster. So that was the use case that everybody trotted out
when
(28:13):
it was a hot topic, but it has not really been borne out in reality.
And the use case where you need high amounts of compute at that
minuscule latency, to necessitate a deployment like that, we've yet
to see it.
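The forty-millisecond figure can be sanity-checked from propagation delay alone: light in fiber covers roughly 200 km per millisecond, so distance puts a hard floor under round-trip latency. A minimal sketch; the distances are illustrative assumptions, not Vultr measurements:

```python
# Rough round-trip latency floor from fiber propagation delay alone.
# Light in fiber travels at roughly 2/3 of c, about 200 km per ms.
FIBER_SPEED_KM_PER_MS = 200.0

def min_rtt_ms(distance_km: float) -> float:
    """Lower bound on round-trip time over fiber, ignoring routing,
    queuing, and serialization delays (real RTTs are higher)."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS

# A user ~2,000 km from the nearest of 32 data centers:
print(min_rtt_ms(2000))  # 20.0 ms: propagation floor alone
# Dropping to ~1 ms RTT requires compute within ~100 km:
print(min_rtt_ms(100))   # 1.0 ms
```

This is why sub-two-millisecond latency implies pushing compute out to something like cell-tower density, not a few dozen regional data centers.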
Speaker 1 (28:28):
Yeah, I mean, I'm with you there. We'll see. For real-time decision
making, having that on-vehicle for autonomous vehicles would make way
more sense. We already see this needing to be the case for any sort
of uncrewed mission outside the atmosphere or on another planet. So
we're already thinking about this as
(28:51):
a species: how to build technology like this and what is actually
necessary. But I have seen an uptick in companies that are trying to
provide remote services. Rather than having a fully autonomous
vehicle, the driver is just somewhere else, sitting at their desk,
driving your vehicle to the actual location. And so with that, I can
see a little bit more
(29:13):
of a use there. I could see it being backed up by redundant systems
within the vehicle, so the person who's in the car still doesn't have
to drive it. And as long as there's some differentiation in the
markets around the world, I do see it being financially viable to
outsource the driving to someone somewhere else.
Speaker 2 (29:33):
Yeah, I think it's a super interesting one to talk about, because
whether it's somebody driving your car for you, or missions in other
parts of the world where you have an operator sitting somewhere,
those are valid use cases.
(29:56):
But the reality is that the electromagnetic spectrum is open to
anyone. And I don't mean open in the sense that you don't have to
encrypt it or worry about security, but anybody has access to both
broadcast and receive those signals. It's open. Obviously it's
regulated; you'd be in a lot of trouble if you broadcast over part of
the spectrum
(30:17):
that you were not authorized to broadcast over, but it would be
extraordinarily disruptive if somebody knowingly did. So it's an
attack vector, because it is an open spectrum. Anybody can buy
commodity parts and assemble something that either sends or receives
over the electromagnetic spectrum in the frequencies that these would
operate in. And so there has to
(30:37):
be a fallback. Take somebody driving your car remotely: that sounds
great, but if that signal gets jammed or dropped, there needs to be
something on the device, either in the form of somebody sitting
behind the wheel, or the car needs to stop itself and coordinate with
the other cars so they stop themselves without causing an accident.
(30:58):
With a medium that is wired or fixed in place, there's a relative
amount of assurance around what's happening, so the redundancy is so
much better. But the electromagnetic spectrum is already
extraordinarily crowded and is
(31:18):
only getting more congested, and it is open to interference and
jamming in ways that a wired network doesn't suffer from. So it
presents unique challenges: to do this in a way that is safe and
accounts for the reality of the medium by which this information
(31:39):
gets transferred.
Speaker 1 (31:40):
So if you're making an autonomous car company and you need to send
signals between each of the vehicles and some primary data center,
Vultr is probably the answer, is what I'm hearing.
Speaker 2 (31:52):
Yes, I guess, getting back to Vultr: yes, that would be the way to do
it, if you need to locate a data center. This is actually what we do
with connected cars, which I will also say: the connected-car use
case is totally valid. Maybe not necessarily for driving the car, but
on my device itself right here, I can talk to my car.
(32:13):
I can unlock it, I can lock it, I can see if it's moving and what its
location is. I can set the climate control. So for non-critical
communication between the vehicle and the end user or the cloud, it's
a totally valid use case, for sure. It's only when you have human
lives being transported in the vehicle, where it
(32:34):
could be potentially deadly, that you say, okay, maybe let's not do
that over the network. But certainly for things like how you start
the car, or lock it remotely, those are all areas where connected car
is a super valid use case, and we see that all the time. And if
you're doing a connected car in Asia, or in Australia,
(32:57):
or in India, or in the EU, those are all areas where having a data
center that is in market is incredibly important. That's an area
where you only get goodness from being able to deploy close to where
the end users
Speaker 1 (33:09):
are. Makes a lot of sense. I do have to ask. You know, I'm running a
company, and I don't think high-performance computing is something we
need. But are there telltale signs that I should be thinking
differently about the infrastructure my product, my company, is
utilizing? Are there certain things I could ask
(33:30):
myself, or that the audience listening to this right now could ask,
that would help identify that just using a regular cloud provider, or
doing something on-prem, may have a better alternative?
Speaker 2 (33:44):
Yeah, you know, I guess I should also say: if you just want easy
access to large amounts of compute around the world, Vultr is a great
fit. We have all the SDKs, like Cluster API and Terraform, and it's
API-first: everything that you can do in
(34:05):
the portal, you can do via the API. That's been battle-hardened with
developers. A huge part of our origin story and the DNA of this
company is being able to serve developers and DevOps individuals and
teams. That's really a core part of what Vultr is all about. And so,
yes, there's the performance piece, but it's also ease of use. If you
(34:25):
work with some of the hyperscalers, it's incredibly complicated just
to get something done. I mean, I guess they've made the onboarding
process easy, so you can click the button to get going, but they're
incredibly complicated. You look at some of the hyperscalers, at the
IAM or resource-hierarchy models of how permissions get structured,
and it almost seems impossible to set up a new service. It's like,
wait, why is this throwing me
(34:47):
an error? I'm the root user and I can't even deploy this new thing
that I want to try out. Or when I add my second user and put these
accounts together, how do the policies apply? Or all the networking
services, where there are ten different kinds of gateways that you
can stitch together in very different ways. There's incredible
complexity when you are at one of the hyperscalers, and we are
focused on distilling
(35:10):
that down into what it is you actually need. So there is the
performance piece of it, which says: how can I get better performance
at a better price? Not apples and oranges, where it's less expensive
but also smaller; this is better for less. And then there's the
ease-of-use component. How do
(35:31):
we keep it simple? How do we distill these things down? I just want
some compute. I just want a Kubernetes cluster. I just want some
object storage. How can I deliver that to the end user in a way that
is easy to consume, so they can actually enjoy the process of getting
it without having to think, okay, I'm an individual, or I'm a
developer, or I have a small team. I'm not
(35:52):
a giant enterprise that has multiple business units and multiple
organizational units with different cost centers and different
billing accounts. I shouldn't need to think about that every single
time I go to deploy my next instance. So if there are challenges in
consuming things at the hyperscalers, if it's like, why is this so
complicated and hard? And
(36:12):
then you look at the bill. You get the bill at the end of the month,
and it's like, wow, I didn't realize I needed a PhD in finance and
accounting to be able to understand my cloud bill. That's another
area. I was recently looking at a bill from a potential prospect that
was coming from a hyperscaler, and they were getting charged
(36:35):
eight thousand dollars for configuration. There are literally objects
in the database that they're getting charged eight thousand dollars
for: usage-based, some number of hours per configuration item inside
of the database. And I was like, that's really interesting. I mean,
that's a fantastic business model. I guess with your hyperscaler,
you've just kind of accepted that that's an okay, acceptable thing.
But typically,
(36:57):
the administrative aspects of what you're delivering as an
infrastructure provider or as a SaaS service are just baked in. It's
the cost of doing business. But if you looked closely at your bill,
it's like, wow. On a small scale, it's like, okay, that's fine, they
charged me thirteen dollars for my configuration objects, and I have
a lot of them. That's fine. But imagine spending eight thousand
dollars a month on configuration objects.
(37:19):
That was just how my cloud got configured. There are many areas like
this. We don't charge for requests, like S3 requests, and that's a
huge cost on AWS. You're not just getting charged for the storage,
you're getting charged for requesting it. That's another area where
we keep it simple: you're going to get charged per terabyte
(37:40):
per month for the storage that you use, and we're not going to
nickel-and-dime you in all these different areas. So yes, again,
there's the performance piece of it, but there are many other areas
where people are kind of tired of the ways in which the hyperscalers
do business and want an alternative.
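The difference between flat capacity pricing and metered per-request pricing is easy to sketch. All rates below are made-up illustrative numbers, not actual Vultr or AWS pricing:

```python
# Illustrative comparison of two object-storage billing models.
# Every rate here is a hypothetical example, not real provider pricing.

def flat_bill(stored_tb: float, rate_per_tb: float) -> float:
    """Flat model: pay only for the capacity you store."""
    return stored_tb * rate_per_tb

def metered_bill(stored_tb: float, rate_per_tb: float,
                 requests: int, rate_per_million_req: float) -> float:
    """Metered model: capacity plus a separate per-request charge."""
    return stored_tb * rate_per_tb + (requests / 1e6) * rate_per_million_req

# 10 TB stored, 500 million GET requests in a month:
flat = flat_bill(10, 5.0)                           # 50.0
metered = metered_bill(10, 5.0, 500_000_000, 0.40)  # 250.0
print(flat, metered)
```

At high request volumes, the request line item dominates the storage line item, which is exactly the kind of surprise that makes the bill hard to predict.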
Speaker 1 (38:00):
No, I'm totally with you. I think one thing, at least from an AWS
standpoint, is they try to provide as granular an understanding as
possible, so that you can optimize as much as you want based off what
they're providing. But I feel like it's sort of a leaky abstraction.
You can sort of figure out how they built their service based off how
they're
(38:20):
charging, because it's so transparent, and it doesn't necessarily
help, because it has a lot of complexity there. When we built our own
service, we offer auth as a service, and realistically all of our
competitors charge by, like, monthly active users. And it's like,
well, what if your users aren't monthly active? What if they're used
once? And so trying to come
(38:40):
up with a single metric that handles every single use case and is
easy to charge and understand has been really important. Whereas I
feel like you'll see this where there's some primary metric, and then
there are five other metrics: if you do this, it's another charge; if
you do that, it's a third charge. That doesn't help anyone.
Speaker 2 (38:58):
No, it's very true. And I think that when you have the opportunity to
sit around a room and just come up with different ways to charge
people for using the platform and extract value from it, you can come
up with very many creative ways to tack on different ways to charge
customers. And that's something that,
(39:20):
I mean, we've been doing this at Vultr for over a decade, and it's
kind of remarkable that we have been able to keep it as simple as we
have. I think that's a testament to being able to focus on providing
fundamental cloud infrastructure to tech-enabled, mid-to-large
enterprises and SaaS companies. That gives us the ability to really
(39:43):
focus on making sure that it's simple, easy to consume, and easy to
understand, without having to introduce all these ancillary ways that
people get charged.
It also just makes our lives easier when somebody comes in and says,
hey, can you please digest my bill from a hyperscaler, and we are
able to look at it and say, great, all of these just get a big red X
mark. You're
(40:05):
just not going to get charged for that. And people look at that and
think, oh wow, I didn't even realize that was an option. I just
thought this was the way it was. Being able to provide an alternative
is really powerful.
Speaker 1 (40:17):
Yeah. I mean, there's actually a great book out there called Platform
Revolution, which talks about how if all of your customers need to
hire a third-party consultant to do something, then you have an
opportunity to recapture that value within your own platform. And so
I think the hyperscalers fail at that. There are hugely successful
businesses out there that are just trying
(40:40):
to explain your cloud bill, and that should tell you: no, we should
probably do something a little bit smarter with how we're actually
providing this. It just makes tracking so much easier for everyone.
Speaker 2 (40:47):
Yeah. Well, it's much easier to add than it is to remove, I've found.
What you cut, and how you cut, or how you create things that make it
simpler, is often difficult, because that complexity typically gets
added for either well-intentioned reasons or because somebody said it
(41:08):
was a requirement, and so it got added. And once it's there, it's
very, very hard to unwind.
Speaker 1 (41:14):
Hopefully no one's adding features because someone said it's a
requirement without actually backing it up with a good justification.
I just can't imagine that's happening in the real world. There is
this good quote that I'm totally going to misattribute: perfection is
achieved not when there is nothing more to add, but when there is
nothing left to remove.
(41:34):
So I think that definitely applies to having a good billing strategy.
Speaker 2 (41:39):
I love that. Yeah, that's very, very true. I love that.
Speaker 1 (41:42):
I do want to ask. I feel like we were dancing around this a little
bit. Since you are building data centers, how does research and
analysis go into the requirements and the understanding of where to
place a data center, geolocation-wise, like any potential
catastrophic issues that could impact the data center? And the part
that's really interesting for me
(42:04):
is what sort of attention goes to security, like physical security,
of the data center.
Speaker 2 (42:09):
Many things go into that. And I should also say, we work with a lot
of fantastic data center partners who do a lot of that work for us.
We typically don't go out and procure land and do construction, but
we do work with our data center partners to go through that
assessment,
(42:31):
and there are many things that go into it, especially if we want to
unwrap the presence of AI and GPUs, especially around the advent of
the AI boom that's been happening. So access to power, access to
reliable power, access to multiple sources of large
(42:54):
amounts of power is one dimension.
The other aspect is: okay, now that you're delivering a lot of that
power into a very small space, you have to cool it, because there's a
ton of heat. So things have evolved over the last few years from air
cooling
(43:17):
to liquid cooling to other things like rear-door heat exchangers,
where you have a hybrid: it's liquid-cooled to the chip, you know,
DLC, but there are other components that still generate heat, so you
need fans that exhaust to a rear-door heat exchanger. So there are a
bunch of different things that go into
(43:37):
where specifically you site the data center. And then once you have
it, you're talking about, in some cases, situations where what you're
deploying into a single data center is, and this is
(43:57):
true across both power footprint and networking,
one of the things that is just fascinating about what has happened.
You have right now a moment where, in a single location, you could be
deploying more power for a GPU footprint than you might have across
an entire global footprint of CPU,
(44:22):
which is really incredible. And you have that at the network level
too. The GPUs effectively get connected via networking on the back
end, so you have front-side networking, which is like my public IP,
and back-end networking for the GPU fabric. If you look at some of
the larger clusters that we've deployed, the
(44:42):
amount of aggregate throughput that you have in a single cluster is
more throughput than the entire Internet. In a single cluster. And
there are multiple clusters getting deployed. And if you look at both
power and networking, you're deploying them on a time scale where,
like, it's twenty twenty-five. It
(45:02):
took how many decades of Internet growth to get to where we are. And
now, what just got deployed last year is going to be deployed again,
and more, over the course of the next one or two years. It's really
this incredible volume. It took us this long to develop the entire
Internet with all of the CPU cloud, and now we're going to be
deploying what is, you know, ten or a hundred or a thousand or
(45:25):
ten thousand x the power and networking on a much shorter time scale.
Just as an observer, it is incredible to see what's being deployed.
As somebody who also deploys it, it has presented challenges that
I've never seen before in my entire career. And so that's been really
fascinating to witness, both
(45:48):
as an observer and a participant.
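The aggregate-throughput claim follows from simple multiplication of back-end fabric links. A back-of-the-envelope sketch, with hypothetical per-node figures (eight fabric NICs per node at 400 Gb/s, typical of current GPU servers but not a quoted Vultr spec):

```python
# Aggregate back-end (GPU fabric) bandwidth for a cluster.
# Per-node figures are hypothetical assumptions for illustration.

def fabric_tbps(nodes: int, nics_per_node: int = 8,
                gbps_per_nic: int = 400) -> float:
    """Total east-west fabric bandwidth in terabits per second."""
    return nodes * nics_per_node * gbps_per_nic / 1000

# A hypothetical 2,000-node GPU cluster:
print(fabric_tbps(2000))  # 6400.0 Tb/s of aggregate fabric bandwidth
```

Thousands of terabits per second inside one building is why a single large GPU cluster can plausibly exceed the throughput of the public Internet's backbone.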
Speaker 1 (45:50):
I mean, that's interesting. I think we've known for a long time that
waste heat is still like fifty, sixty percent of the cost of data
centers, possibly higher or lower, but it's a huge amount to have to
deal with, and it's not the compute or human resources or physical
resources. But the other thing that really comes to mind is, at this
point, I wonder whether you're already
(46:12):
identifying concrete competition with the other hyperscalers for the
power, the actual energy to power your data centers, or whether it's
not yet seen as a limited resource for the moment, or whether this is
an actual question that's come up as a concern you have to start
dealing with.
Speaker 2 (46:31):
No, it's always a concern. Yeah, I would say that, again, access to
power, large quantities of power at relatively small densities, is
for sure an issue, and we all face it. And I would say there's a
planning cycle to that as well. Everybody understands that it's not
an unlimited resource, but I think that honestly predates GPUs.
(46:52):
There's always the fallback to utility power, and maybe there's a
special agreement that you have with the utility. But data center
providers have to have guarantees, and so that's always taken the
form of contractual obligations for power delivery. And this is no
different, although because of the scale, where the power generation
(47:13):
comes from and the time scale in which to deploy it have introduced
different things. But data centers have never really been just, oh
yeah, plug it into the outlet and you're good. There's always been a
little bit more to it. So yeah, that has always been a special
consideration, and I guess more so now it's the time piece: ensuring
when that power is going to show up, to
(47:36):
coordinate things on the back end.
Speaker 1 (47:38):
No, it makes complete sense. I'm sort of wondering whether you're
taking into account where the other hyperscalers out there are
putting their data centers, because you could end up with a conflict
over, you know, a scarce resource.
Speaker 2 (47:54):
Yeah, no, certainly, we definitely look at that. Typically the way
that shows up is that there's going to be a certain amount of power,
you know, ten, twenty, fifty megawatts, that gets delivered through a
contractual obligation over some number of years or decades,
(48:16):
and that's kind of reserved power capacity. You reserve that
capacity. And let's say there's twenty megawatts that becomes
available in Cleveland, or in Cincinnati, or somewhere in South
Dakota. Whoever takes it takes it.
(48:37):
In some cases people are in the market for additional capacity. In
some cases they say, well, we just took down thirty megawatts in
Pittsburgh, so we're fine, and we don't need that twenty that just
showed up in the Midwest. But there were ten other people champing at
the bit for the power that was in Pittsburgh.
(48:57):
So if there were ten people in Pittsburgh and one of them got
satisfied, now there are nine people that might be interested in that
twenty megawatts, just as a simple example.
and that also fluctuates with the demand of the end customers.
And so depending on you know, who's who's you know,
who's buying it on the back end, you know, the
(49:17):
you know that will you know, influence the band. And
so in some cases there's opportunities that are you know,
that that are being chased, and so folks might be
asking about the same you know, for the same end
customer coming out at different angles. And so it's a
it's a really it's a super dynamic time, super dynamic market.
And so we're always you know, assessing what's available based
(49:38):
on our own needs and and our own you know,
and our end customer needs as well.
Speaker 1 (49:41):
I mean, it sounds like it's not a struggle yet. Obviously there are
ins and outs and complexities, and contracts, and future prediction
of demand needs. But it sounds like at the moment there's still
positive optimism about what's available, and sure, you may not be
able to build a data center in one locale, but picking another one is
still
(50:02):
an option. You know, I'm wondering, when this, I'll say, craze of
power usage spikes even more, whether that will actually drive
innovation and the scientists to complete the years-old fusion
research that, at the moment, only China looks sufficiently invested
in.
Speaker 2 (50:21):
Yeah, you know, every so often articles come across my news feed that
talk about breakthroughs in fusion, and I've seen several of them
over even the last twelve months or so. But I think there's maybe one
other alternative that has
(50:43):
a more practical implication: SMRs, which I think is small modular
reactors. It's nuclear power, but on a much smaller scale. And I
think even Equinix announced that they were investing some number of
millions of
(51:03):
dollars into SMRs as an alternative. And
you know, the US has not been a leader in renewable nuclear power,
whereas the EU has made a lot of advancements over the last couple of
decades, if you look at the cycle of nuclear waste and how you
(51:25):
process it and ensure there's a healthier approach. Our strategy for
nuclear waste is to bury it in the ground, which, that's a choice,
but there are other choices that have emerged. So I think innovation
in the nuclear space is really fascinating, because you're able to
have
(51:47):
access to power without a lot of the carbon footprint of fossil fuels
and non-renewable energy sources. It's really compelling, but you do
have to think about the complete picture for it to be a sustainable
method.
Speaker 1 (52:02):
I do want to get an energy expert on the show.
Speaker 2 (52:04):
Now.
Speaker 1 (52:05):
I will say that burying it in the ground is still better than putting
it in some other places. And I think the idea is switching to thorium
over the uranium yellowcake, from two fifty eight. I think there are
innovations. I don't know if I have your optimism about what Europe
is doing. There is progress,
(52:25):
just not as much as I would really like.
Speaker 2 (52:28):
You know, it's really interesting, and I am not an expert on it, but
if I'm remembering correctly, Germany was, like, on the brink of this
kind of renewable cycle, and there's a name for it that I'm
forgetting. But then they announced that they were going to shut it
down or something, and I just remember thinking,
(52:49):
but you were on the brink of getting to the point where we all wanted
it to get to. I think from a technical perspective, they had made
more advancement, even though I think there was some policy that
prevented it from coming to fruition. And I don't know that those
translated over here. But maybe you have more than I do.
Speaker 1 (53:11):
Well, I'll share what I know. So, facts: Germany definitely did shut
down all their nuclear reactors. The interesting thing is they had
been promised to be decommissioned for over a decade, and other
countries, faced with the same or similar challenges and crises we're
facing, said, you know what, let's stop our decommissioning process.
Part of the challenge in
(53:31):
Germany was a huge lobby from the coal miners' associations, because
there are a lot of jobs associated with that, and that still has a
huge impact in many countries switching over to realistic renewables.
Actually, in the last two decades, one of the biggest impacts on
switching to fission reactors,
(53:52):
what we're calling nuclear, has been the other quote-unquote green
renewable energy sources. The lobby against fission and nuclear by
solar and wind is, I would say, stretching into the ridiculous. It's
just so sad. This is probably not the podcast to go into that, so I'm
gonna stop there, and I'll say, if there's one last thing
(54:17):
about Vultr you wanted to share, we can close it out with that.
Speaker 2 (54:22):
Well, yeah, look, I would say, maybe closing it out by getting back
to cloud infrastructure and DevOps: one of the things that we say is
that Vultr is the platform for platform engineering teams. So if
DevOps and platform engineering is your jam, or you find that's an
area where you're doing a lot of it, I think Vultr has a really
compelling platform that
(54:45):
is something you'll enjoy working with. It has, again, compelling
price, compelling performance, and ease of use attached to that, with
a global footprint. So that checks a lot of the big boxes for DevOps
individuals and platform engineering teams. And if that resonates,
definitely check us out and let us know what you think.
Speaker 1 (55:04):
I love the sales pitch. I will ask, because I'm sure someone will be
wondering: what's the interface that you're providing? Is it some
custom APIs? Are you providing some sort of Kubernetes-adjacent
replacement for the control planes, et cetera? What does that look
like?
Speaker 2 (55:22):
So yeah, it is, you know, it's it's a PI first.
So you know, if you use terraform or or cluster API,
you know you'll feel right at home. We have providers
for cross plane, we have providers for cross plane cluster,
API terraform and so yeah. So like full API deployment,
there is you know, future rich portal as well, and so,
(55:42):
but that you know typically is going to be the
entry point is going to be one of those like
you know, we have a go Go Vulture Goaling library,
so one of those entry points for API is probably
going to be that. We do also have VK which
is the Vulture Kubernetes engine, so this is vanilla Kubernetes
and is something that you know, it is kind of directly,
(56:02):
you know, instead of running Kubernetes yourself, which is totally fine.
You can run Kubernetes on VMs or bare metal yourselves
if that's your jam. But if that's not your jam,
you can consume Kubernetes via VKE from us and we'll run it
for you.
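[For readers who want a concrete picture of the Terraform entry point described above, here is a minimal sketch using the official `vultr/vultr` Terraform provider. The plan, region, and OS values are illustrative placeholders, and `var.vultr_api_key` is a hypothetical variable name; the actual slugs available to an account should be looked up in the Vultr provider documentation.]

```hcl
terraform {
  required_providers {
    vultr = {
      source  = "vultr/vultr"
      version = "~> 2.0"
    }
  }
}

# The API key would typically be supplied via an environment
# variable (e.g. TF_VAR_vultr_api_key); never hard-code it.
variable "vultr_api_key" {
  type      = string
  sensitive = true
}

provider "vultr" {
  api_key = var.vultr_api_key
}

# A small cloud compute instance. The plan, region, and os_id
# values below are illustrative and should be replaced with
# real values from the provider docs or the Vultr API.
resource "vultr_instance" "demo" {
  plan   = "vc2-1c-1gb"
  region = "ewr"
  os_id  = 2136
  label  = "devops-demo"
}
```

[The provider documentation also covers a managed Kubernetes cluster resource, so the Terraform and VKE paths mentioned in the conversation can be combined in a single configuration.]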
Speaker 1 (56:17):
No, that's good. Not my jam. And I don't
know how many people would, you know, pick it personally.
I think I'm gonna keep my opinions on Crossplane to
myself for this episode.
Speaker 2 (56:27):
Uh, okay. So, I will say, I was
going for a walk the other
day, and there was a fire hydrant, and attached to the
fire hydrant was a series of, you know, pipes and
tubes connecting down to a little water faucet on the end,
and it just kind of reminded me of Kubernetes a
little bit, where Kubernetes is the fire hydrant and then
deploying your application is just like a little water faucet.
(56:49):
It can be really overkill sometimes. You're like,
wait a minute, why am I needing to do all
this just to get this little bit of application going?
But yeah, if deploying
something that's a little bit easier is attractive, then
we can do that for you.
Speaker 1 (57:10):
We actually had a Kubernetes expert consultant come on
a few episodes ago, and I highly recommend that episode
for anyone that's interested in the different ways of deploying
to cloud providers with your technology stack. Okay, so with that,
let's move over to picks. So my pick for today,
I don't know, I'm gonna be lame, but
(57:32):
I like this. There's a show on YouTube and Nebula
called Jet Lag: The Game, and they have a couple
of different game versions. They play tag within a
small area with multiple countries involved, or capture the
flag in Japan, or in a larger area. Basically, trains, planes,
(57:53):
and buses are involved in the gameplay, and they even have
a card game version where you can play with your
friends in an actual, live, physical location. So there are
lots of episodes; they do
about three or four seasons a year, and it's absolutely fantastic.
I think it's like, if you hated the reality show
(58:14):
Race Across the World or The Amazing Race, this is so
much better. This is actual high-quality, game-relevant
stuff. There are no real tricks involved.
Speaker 2 (58:27):
That's awesome. I love that. Well, for
my pick of today, I would have to say there's
a TV show called Money Heist. I'm not sure
if you're familiar with this, but it's on Netflix,
and it's an incredible psychological show.
In the first season, it's
(58:49):
every single episode leaves you wanting to watch more. It's
in Spanish, so don't get the dubbed version; watch the
Spanish with subtitles. It is phenomenal, and I don't
want to give too much away, but it's called Money Heist,
and if you can get through the first episode, you'll
see it's an incredibly smart, well-done
show about a heist of the Central Bank of Spain,
(59:13):
and so it is fascinating. I absolutely love it, and
if you haven't seen it, check it out. Money Heist.
Speaker 1 (59:20):
I feel like I watched a single episode of something
like that, and I don't know if it was this.
Speaker 2 (59:25):
Know, it might have been. There's many things that happened
in that in that season, and it is really it's
just one of the smartest TV shows that I've watched
in a while, and it's it's entertaining, but it's also
really really well done. So yeah, it's and not a
lot of people have have have have heard of it.
Speaker 1 (59:45):
It.
Speaker 2 (59:45):
Somebody recommended it, and I watched the whole thing and was
amazed. It was one of those shows, and I think
we all have them, right, where you just can't stop.
You get to the end of the episode and you're like,
you know, I really have to stop.
I've got early morning meetings and I
have to stop watching, but I just really want to know what happens next.
It's really, really smart how they do that,
(01:00:06):
and and yeah, I really highly recommend it.
Speaker 1 (01:00:09):
Sleep-sacrifice worthy. It sounds like a great pick.
Thank you, Nathan. So I just want to say thanks
again for coming on to the Senior Vice President of Engineering at Vultr,
here with us. It was a great episode, and
thank you to all the listeners and viewers for sticking around.
We'll catch everyone next week.