March 4, 2025 • 61 mins


What happens when you get Eyvonne, William, and our special guest Nick Eberts in the same conversation? You get a GKE party! In this episode, we dive deep into the world of multi-cluster Kubernetes management with Nick Eberts, Product Manager for GKE Fleets & Teams at Google. Nick shares his expertise on platform engineering, the evolution from traditional infrastructure to cloud-native platforms, and the challenges of managing multiple Kubernetes clusters at scale. We explore the parallels between enterprise architecture and modern platform teams, discuss the future of multi-cluster orchestration, and unpack Google's innovative work with Spanner database integration for GKE. Nick also shares his passion for contributing to open source through SIG Multi-Cluster and provides valuable guidance for those interested in getting involved with the Kubernetes community.


Where to Find Nick Eberts

  • LinkedIn: https://www.linkedin.com/in/nicholaseberts
  • Twitter: https://twitter.com/nicholaseberts
  • Bluesky: @nickeberts.dev


Show Links

  • SIG Multi-Cluster: https://github.com/kubernetes/community/tree/master/sig-multicluster
  • Google Kubernetes Engine (GKE): https://cloud.google.com/kubernetes-engine
  • Spanner Database: https://cloud.google.com/spanner
  • Kubernetes: https://kubernetes.io/
  • KubeCon: https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/
  • Argo CD: https://argoproj.github.io/cd
  • Flux: https://fluxcd.io/
  • CNCF: https://www.cncf.io/


Follow, Like, and Subscribe!

  • Podcast: https://www.thecloudgambit.com/
  • YouTube: https://www.youtube.com/@TheCloudGambit
  • LinkedIn: https://www.linkedin.com/company/thecloudgambit
  • Twitter: https://twitter.com/TheCloudGambit
  • TikTok: https://www.tiktok.com/@thecloudgambit

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Nick (00:00):
What I'm really passionate about is making sure the things that we're building for multi-cluster and GKE land upstream in the open source, helping the team deliver some tools in SIG Multi-Cluster, which is part of upstream Kubernetes, right. And the purview of SIG Multi-Cluster is to start to help build the primitives for multi-cluster toolchains across

(00:22):
upstream Kubernetes, which we then use in the managed Kubernetes world.

William (00:37):
Welcome back, fellow nerds. I'm your host, William, coming to you from the depths of Mount Doom, or let's just say Mount YAML. I'm going Lord of the Rings here; instead of the rings of power, we forge manifests of infrastructure. And with me is my companion and co-host on this quest, Eyvonne

(00:59):
Sharp, who, like Gandalf, guides others through the treacherous paths and perils of operationalizing multi-cluster Kubernetes and such. How are you doing?

Eyvonne (01:10):
You should have asked me which character I wanted to be, because to me, the real hero of the story is Samwise Gamgee. So if you're ever going to pick a character for me, I want to be Sam.

William (01:26):
Anyway, you know, I said Harry Potter and I pivoted to Lord of the Rings at a moment's notice, because you said you hadn't read the Harry Potter books.

Eyvonne (01:31):
You see how bad that pivot was there? I'm super culturally behind. Like, my husband and I just started watching Peaky Blinders. You know, I watched Lost five or six years after it went off the air. So I get there eventually. I just have to have tons of evidence that it's worth my time, and there is no evidence.

Nick (01:52):
There is no evidence in Lost. No, yeah, okay, that's true. And, a little bit appropriately, as in the time that you spent watching it would be...

Eyvonne (02:00):
Lost. Yes. And that other voice that you hear is my friend, peer, and colleague, Nick Eberts. Nick, welcome.

Nick (02:10):
Thanks for having me on.
Good to meet you, William.

Eyvonne (02:13):
If you don't know Nick, he is a product manager for GKE Fleets and Teams at Google Cloud. And Nick, why don't you tell us a little bit about who you are and what you do?

Nick (02:24):
Yeah, I'm a human being, allegedly.

William (02:30):
So you're not AI? No, you could be AI and we would never know right now.

Nick (02:37):
No, I have, like, the appropriate amount of phalanges and such. But yeah. So I've been in this computer business for too long. Started with the Navy back in 2002. And I've had this weird progression: I had degrees in geophysics and somehow I'm still on computers. Whatever. I spent some time, you know, moving up through the

(03:00):
back-end channels of what they call now engineering, platform engineering. Yeah, I was a sysadmin doing Linux stuff way back in the days, and then worked my way through a bunch of jobs. Now here I am at Google, working as a product manager. Actually, the place that Eyvonne and I met was we used to be on the same team, and I'm sure everyone who's listening knows

(03:23):
what team Eyvonne is on. She's on a special team, but...

Eyvonne (03:27):
But yeah, yeah, that is absolutely right. The team that has had, you know, three different names in several different iterations since then, but is still doing, you know, largely the same thing: helping customers adopt the Google Cloud.

William (03:39):
That's what we do. So it's really nice seeing sales and product people get along so nicely. I love it, yeah.

Eyvonne (03:46):
We try. You know, there's, you know, some of what we would call healthy conflict and banter and camaraderie, and I think, you know, we need a bit of tension between sales and products to get to the right place, you know.

(04:06):
So we've all been there, and I suspect we've all been on both sides of that conversation.

Nick (04:13):
Yeah, well, I mean, but just to take a moment to acknowledge that role, right? It's not just sales, it's being sort of tip of the spear, highest escalation point of sales engineering. And so I used to do that job, so I totally respect it.
But, like, as a product manager, there is nothing more valuable to me than someone who does that job and talks to, like, hundreds of customers and uses the stuff.

(04:33):
It's like an invaluable source of information and feedback.

Eyvonne (04:38):
It warms my heart to hear you say that, Nick. True.

William (04:42):
Yeah, so you mentioned the platform engineering words. You know, I think I was reading something, I think it was Gartner, that was saying, uh, it was some high number, like 80 or 90 percent of software engineering orgs are going to have a team dedicated to platform engineering by, like, 2026 or 2027,

(05:06):
which is pretty wild. Um, so what is this platform engineering? Isn't it just DevOps? Like, what's the difference?

Nick (05:16):
So, like, how far down the turtle are we going to go? Because, like, what is DevOps? No, but so DevOps, I think we all know: it's a philosophy, right? It's a mode of operations. It's not a bunch of tools. Like, you can't buy a DevOps, even though probably my company would sell you one if you wanted one.

(05:36):
So, uh, I think, like, for a minute, like, we've all been professionals in this industry for a while, right? So, like, the arc of platform engineering, I think, starts with, I don't know, we were racking and stacking hardware and trying to serve the needs of the business in data centers that we own. And then the cloud came out, and we said, oh hey, we really shouldn't do

(06:00):
that, because then my job's not relevant. So we fought against it for a while. But then a bunch of people went out there, shadow IT, to Amazon and Azure and Google, and when they did it, they kind of, like, made these teams of folks who understood the API surface of those clouds, right? So they could write a bunch of automation, and

(06:23):
they could do things, like, atomically. So the developer and this engineer, whatever you want to call them, DevOps engineer, work together to, like, quickly iterate and release software without having to deal with putting a ticket into the infrastructure team. Turns out that that's cool for a little while. But you know, these enterprise organizations are huge.

(06:44):
They have piles of different applications and teams. I mean, some of these companies we work with are literally, like, 15 companies. You know what I mean? I think, uh, there's a local company here in Georgia that actually has five CIOs, right? Because they're that big. And so, um, then comes, like, the question of efficiency and governance and control. And so you take these people going fast,

(07:07):
sort of inefficiently, out on the edge, and you're like, okay, how do we allow them to keep on going fast but get some consistency and some economies of scale? And I think that's what platform engineering is born out of. I could tell you one thing: like, when we write these product docs in Google, one of the things that I like to start with is, like, what isn't this thing? And this is the controversial bit: platform engineering is not

(07:32):
a UI, it's not just an IDP. And, well, IDP... define what? What does IDP mean?

William (07:44):
Yeah, it's definitely not a UI or even, like, a service. I would say it's a conglomeration of different services that you're publishing for consumption by developers from some sort of centralized self-service platform.

Nick (08:03):
The acronym IDP, though, it's important here. I'd love to clear this up: it's not developer platform, it's internal developer platform.
Yeah, and a UI is great. Because what is a platform? It's a product, right? So what do you do when you build products? You interview your users, find out what they want, take those

(08:29):
things, stack-rank them based on priority, and then build them. So if a lot of your users want a UI, and that would make their life better, then build it.

William (08:35):
Yeah. I want to clarify something. You sort of said, like, the DevOps was happening kind of on the edge, kind of the, hey, we're going fast, we have these automations, we're doing all these things, but it's kind of disjointed. But then, like, are you kind of saying the evolution of that, done the right way in the context of a big organization, is what

(08:58):
platform engineering is? It's almost like a progression, taking those practices and guiding principles and making it consumable to, like, a large organization.

Nick (09:09):
Yeah, that's great. You should market that. Maybe you could sell something. No, no, that's 100 percent. A large section of the book is just basically about, like, why it's important, and it's much more related to org charts and organizing human beings into pockets to

(09:31):
deliver the right amount of value to each other than it is about, like, again, tech. It's never really about tech directly, I think.

Eyvonne (09:35):
Well, and I think the place where we find ourselves, like, for those of us who grew up when we were racking and stacking physical servers, and when you were, you know, installing... oh yeah, hi, like, that's all of us, right? We are the elder millennial, Gen X folks in tech. But, you know, there were

(09:59):
physical things in the physical world that helped us structure our organizations, right? Like, we need a team that racks this server, puts in the rack screws, that cables it to physical switches.
And I think, like, the place where we find ourselves now is that the systems, they aren't physical, they don't exist in

(10:24):
the physical world. And so we need a new way to map our organizational structures to the work. And it's not always as clear, because there aren't physical things. I mean, you go down far enough, there are physical things, but most of us that are operating in cloud aren't operating on physical things.

(10:44):
So we need a new way to think about how we're going to structure our orgs and how we're going to structure our systems. And all of that is somewhat ephemeral, because we can't put our hands on it. And so, to me, platform engineering is about, okay, how do we develop those systems and structures and interaction surfaces for our people and the technology in a

(11:06):
way that makes sense, when there's nothing for me to physically look at to understand what it is? I feel like that's sort of the problem that we're trying to solve these days in the broader industry.

Nick (11:21):
Yeah, for sure. And I think it's, like, this mechanism that we have to help promote DevOps principles in a bunch of smaller app teams. Right? You give them self-service, right?

(11:46):
You want to get out of their way, but you also don't want your business to end up in the paper because you got hacked, because, blah, blah, blah, some secret and whatever governance guardrail didn't get put up.

Eyvonne (11:52):
So it's that, or HIPAA compliance, or PCI, or any of those. And we have the added complication that a lot of those regulations and auditing processes were built and understood in an old framework that doesn't necessarily map to how we deploy technology today. So that's the other challenge.

(12:13):
A colleague of mine shared this great, I believe it's a Churchill quote. He's like, first we make our buildings, and then they make us. And I feel like that's where we are. Like, we made this system of managing infrastructure, and now it's shaped the next generation, even though it doesn't exactly completely fit. So yeah. Fun, fun.

William (12:36):
I almost feel like... I'm curious to get your thoughts here, Nick, but it seems like this shifts the outskirts of the responsibility from all these teams back to this centralized platform team. Essentially, like, the responsibility of managing infrastructure complexities lies with the platform team, so that

(12:58):
the devs can just focus on building their applications and such. And, with that being said, it almost seems like a drop-in replacement for, like, I don't want to say enterprise architecture, but kind of like a new wave of enterprise-architecture-y type thing. Because usually enterprise architecture was, like, decoupled from the boots on the ground.

(13:21):
You know, they're in the ivory tower, and they're just writing these things to shape how the business is going to run technology. And maybe these things are adhered to, maybe not. Maybe stuff was never updated. Maybe they're on the ball, maybe they're not. But the platform team seems like they take that responsibility of the architecture and sort of marry

(13:43):
that with the execution and the consumption as well.

Nick (13:49):
Yeah, I mean, from my field days, field engineering, whatever, I mean, that was what enterprise architecture was turning into. And so, like, platform engineering is certainly older than the word itself, right? We all acknowledge that we've probably been engineering platforms for our entire computer life. But, like, when I would go into these companies, usually the job

(14:11):
title of the person who's trying to figure out how to build the thing that they're going to share with everyone was an enterprise architect, and maybe still is. But I see the main flaw in enterprise architecture as too much talking, too much writing, not enough doing. So I hope that platform engineering gets those enterprise architects to start getting their hands dirty

(14:34):
again and building, with the principles that you expect of the teams that are using your platform. You should be able to build, and I mean move fast, but obviously not quite as fast, because it's much more risky. But you should be able to iterate and improve the platform, right? And you need actual human beings to do that.

(14:55):
And I still don't quite understand the value of the person that's just writing a doc, hasn't touched the code in years. I think that there's value there. So I'm sorry for all of you that do that for a living, but I can't. I just don't relate to it.

William (15:10):
Yeah, I mean, especially if you have a direct impact on the hands that are in the technology, you really need to have done, or actively work with, those technologies, or else mileage is going to vary greatly.

Eyvonne (15:27):
Well, in order to be able to do that effectively and not have your own grubby little hands in it, you have to be a phenomenal listener and be willing to hear people who have experiences that you don't have. And that's incredibly difficult for all of us. And actually,

(15:50):
it's just much easier for you to sit down and figure the thing out than to try and suss it out from different sources without that hands-on experience. So, yeah, I mean...

Nick (16:04):
So one of my new overlords here, um, in, you know, the container runtimes org at Google, is someone I've known for a while, um, Gabe Monroy. A lot of people in our space know him. And, I'm not just kissing his ass, I don't do that... like, I didn't get that. Ooh, Siri, hush your mouth.

(16:26):
One of the greatest things about Gabe is that, like, if he finds out that you're falling behind or something, or need help, like, he's in the pull request. He's in the repo looking at the code, like, trying to help you out.
And we're talking about someone who is, you know, two steps

(16:47):
down from Thomas Kurian, right? He's got, honestly, no business doing it. Except that he's got every business doing it, in my opinion. I mean, that's really cool. I appreciate that someone at that level still cares and understands what we're dealing with, you know.

Eyvonne (17:01):
I love that. That's great. Let's talk about some of the pieces that make up this platform that we're talking about. I mean, you are on the Cloud Runtimes team; a little alliteration there is getting me this morning. Let's talk a little bit about that work that you do there. I know one of the questions that you had in the notes is

(17:22):
really, when we're talking about Kubernetes, how many clusters is too many? Can you ever have enough, or too many? Do you ever have too many Kubernetes clusters?

Nick (17:30):
Yeah, yeah. So, I mean, at the root of it all is just managing infrastructure. I think I've been in the community for almost a decade now. I just have to say that I need to go back five years and see if I can find that job that was out there in the world that required ten years of experience when Kubernetes was only five

(17:51):
years old. I now can actually apply for that job, so let's go. But yeah, so Kubernetes is like an operating system, or an API for infrastructure. It's super handy, but it's not exactly the abstraction you want to give a developer who's writing business logic. They don't want to understand and try to comprehend and reason

(18:14):
about all those things. It turns out also that, for platform teams, to some degree, maybe their life can be easier if they have to understand less about all of the idiosyncrasies of Kubernetes and the VMs underneath it and the storage. Like, you want to know enough to get your job done, and you want to make sure you're not getting overcharged.

(18:35):
But maybe you want a managed service, right? Maybe you don't want to have to update the control plane on your own, or the data plane. And so, you know, that's what GKE is. It's a managed service. Um, but then it turns out that, like... I have yet to see a platform team build a platform of one cluster.

(18:56):
Right? Ultimately, either you're going to have environments, you know what I mean? You don't want to, uh, ship your dev code in prod, although I think it's pretty cool if you can pull that off. Um, or you have, like, regional constraints or high-availability needs, where you want to have multiple sets of infrastructure in

(19:18):
different regions, for either bringing your application close to your end users or for fault tolerance. But you ultimately end up needing more than one cluster. And Kubernetes itself... its world ends with the cluster that it lives in. Like, that's it. It doesn't understand other clusters. We've tried KubeFed, rest in peace. Like, we've tried a lot, and it's failed every time.

(19:40):
Um, and so my job is to try again. And so the things that I'm working on are trying to stitch together multiple clusters and treat them as one. But the approach that we're taking is not to create a meta resource that represents both, as much as to create an abstraction that

(20:02):
allows you to treat both clusters as the same thing.
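The approach Nick describes, an abstraction over the clusters rather than a meta resource representing them, can be pictured with a toy Python sketch. All class and method names here are hypothetical, not a real client library or the GKE API:

```python
# Toy model: a thin facade exposes the same call surface as one
# cluster and fans each operation out to every member cluster,
# so callers can "treat both clusters as the same thing".

class Cluster:
    def __init__(self, name, region):
        self.name = name
        self.region = region
        self.deployments = {}

    def apply(self, workload, replicas):
        # Stand-in for applying a manifest to this one cluster.
        self.deployments[workload] = replicas


class ClusterSet:
    """One logical target backed by N real clusters."""

    def __init__(self, clusters):
        self.clusters = list(clusters)

    def apply(self, workload, replicas):
        # Same signature as a single Cluster; no meta resource,
        # just fan-out to each member.
        for c in self.clusters:
            c.apply(workload, replicas)


us = Cluster("gke-us", "us-central1")
eu = Cluster("gke-eu", "europe-west1")
fleet = ClusterSet([us, eu])
fleet.apply("checkout", replicas=3)

print(us.deployments)  # {'checkout': 3}
print(eu.deployments)  # {'checkout': 3}
```

The design point is that the abstraction keeps the single-cluster call surface, which is roughly what distinguishes it from the meta-resource approach Nick contrasts it with.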

Eyvonne (20:05):
Okay, right. So the answer is always another layer of abstraction. It's always the answer, yeah.

Nick (20:15):
But, like, okay, I know y'all are in networking, or at least I know for sure Eyvonne is. But one of the hardest, one of the most challenging things with multiple clusters is, how do I get, like, my pod in cluster one to be aware of a pod in cluster two in another region? Right? If you make the networking discoverable

(20:35):
across both, then you've all of a sudden now increased the availability and the accessibility of that pod.

William (20:45):
You're bringing me back some PTSD here.
So, like, my first introduction to GKE... I just got, like, a blast of information I totally forgot about. But, so, like, we had a professional services engagement with Google. I mean, it must have been, hold on, five, six, maybe it was like seven or eight years ago.

Eyvonne (21:05):
We are in a very different place now, I'm just going to say that. But go ahead, tell your story.

William (21:09):
Sheesh. It's just crazy it's been that long. But anyway, like, I'd worked with, like, EKS and AKS, from AWS and Azure respectively, and I can honestly say GKE at that time was quite a different experience, and in a lot of positive ways, really. So we started with Cloud Identity. You know, we had it tied to, like, our unique, like, a DNS

(21:30):
namespace for, like, integrating with all the Active Directory stuff. And then we went on and did, like, the folder and project hierarchy, the identity and access management stuff, the good old networking, the Kubernetes, and then all the logging, monitoring, Stackdriver stuff. But one thing that Google provided at the time, with

(21:55):
guidance on the Kubernetes stuff, with GKE, was really helpful in how we actually went back and optimized other Kubernetes environments in other clouds at the time. And that was not overcomplicating what we deployed versus what our needs were at the time. And what I mean by that is, like, we had a multi-tiered structure

(22:16):
that balanced, like, the risk and efficiency and such. At the very top, it was as simple as we had production and non-production clusters, allowing for, like, these very distinct configurations for how we separated risk and security. And then you went one layer down; like, layer two was, um, like, the business domains, like, lines-of-business type stuff,

(22:39):
and each domain got, like... I think the way that we had it planned out was, like, each business domain or line of business got one production and one non-production cluster. And then the third level was, like, the individual namespaces within each cluster for different products or, dare I say, what we consider microservices. And this was a great feather in the cap of developer experience,

(23:02):
really, as teams could work independently with their designated namespaces, you know, minimizing blast radius, making security happy. But this was a big win for us, because the way that we were doing it elsewhere is, like, developers sort of had the control, and they were like, okay, each team gets a

(23:22):
Kubernetes cluster. And it's like, hey, we have, like, hundreds of teams here; we've got, you know, almost a thousand applications. Like, how many Kubernetes clusters can one organization have? Come on. And doing it this way actually helped us, and it gets expensive, because it's all compute, you know, on the back end, at the end of the day.

(23:45):
So, yeah, I'd say it was a pretty positive experience. I'm sure a lot has changed. And I guess my question is: so, you're part of the product team that works directly on GKE, but GKE is big, right? So what specifically do you work on with GKE?

Nick (24:06):
Yes. GKE, I think we have something like 25 PMs, more or less. So I am in core GKE, and I product manage Fleets and Teams. So Fleets is our you-have-more-than-one-cluster solution, and then Teams is sort of this abstraction as a service that we

(24:30):
provide. So you were talking about, oh, my developers in this previous life that I had all had their own namespace. Teams is like namespace aggregation across multiple clusters as a service, right? So one of the tough problems that customers tend to have to think through, or platform teams have to think

(24:51):
through, is, like, how are we going to do tenancy, right? Are we going to do single-tenant, single-cluster; single-tenant, multi-cluster; multi-tenant, multi-cluster? Um, and the answer, at least what I've seen out in the wild, is that there's never one answer, right? Like, if your organization is big enough, you have, like, a

(25:11):
spectrum of tenancy. You've got some business-critical app that's got dedicated clusters to it, and then you've got this, like, junk-drawer cluster that's got all of these other tools and smaller services running in it, and then there's some, like, spectrum of other clusters in between.

(25:32):
And so what I think is, even if you don't use Teams, I think one good thing for a platform engineering team to think through is, how do we not bury ourselves with tech debt by being too over-opinionated on the tenancy model up front, and how do we build this to be adjustable over time based on the needs of the business, how we want to bin-pack and un-bin-pack

(25:54):
things over time? And so that's what Teams is out to do. It's just, like, this logical container of logical namespaces that then can get bound to clusters on demand. So today you want it bound to cluster A, it's there. You want to add cluster B? Tomorrow, you add it. You want to add more applications to clusters A and B? You then bind those teams to the clusters, and you

(26:17):
sort of have that abstraction of flexibility.
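The binding model Nick just walked through, a team as a logical container of namespaces that gets bound to clusters on demand, can be pictured with a toy sketch. The names are hypothetical and this is not the actual GKE Teams API, just the shape of the idea:

```python
# Toy model of the tenancy idea: binding a team to a cluster
# materializes the team's namespaces there, and unbinding
# reverses it, so the tenancy layout stays adjustable over time
# instead of being fixed up front.

class Team:
    def __init__(self, name, namespaces):
        self.name = name
        self.namespaces = set(namespaces)


class Cluster:
    def __init__(self, name):
        self.name = name
        self.namespaces = set()


def bind(team, cluster):
    # Materialize the team's namespaces on this cluster.
    cluster.namespaces |= team.namespaces


def unbind(team, cluster):
    # Remove them again when the bin-packing changes.
    cluster.namespaces -= team.namespaces


payments = Team("payments", ["payments-api", "payments-workers"])
a, b = Cluster("cluster-a"), Cluster("cluster-b")

bind(payments, a)    # today: bound to cluster A
bind(payments, b)    # tomorrow: add cluster B
unbind(payments, a)  # later: repack as needs change

print(sorted(b.namespaces))  # ['payments-api', 'payments-workers']
print(a.namespaces)          # set()
```

The point of the indirection is that the team's definition never changes; only its cluster bindings do, which is what keeps the tenancy model from becoming tech debt.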

William (26:23):
Got it. What are the cases?
So, like, we had the whole, if I remember, like, before we even, I think, deployed a single cluster, the whole big enterprise thing of, hey, we want to account for every possible scenario before we hit go and have an MVP or do anything. We just want to account for every enterprise architecture

(26:44):
again. Got it. Yes, pretty much, exactly.
We had this idea of, okay, well, what happens when we need new clusters? Like, I remember we had a certain team that got stuck on this. We were stuck on this for, like, weeks. It was like, oh, but what happens when we need to create a new cluster? Then this is going to happen, and that's going to have to happen,

(27:05):
what this, what that? And I think, if I remember correctly, at the time it was, like, network exhaustion, pretty much. Like, okay, we ran out of IPs, so we need a new cluster, I think. And then it was, um, the node limit, and they were so concerned about the node limit, even though the node limit, even at the time, was, like, ridiculously high. Yeah, um.

(27:28):
And then, I mean, I guess the last obvious one would be, like, isolation. Like, instances need isolation based on security, you know, based on, you know, however your business needs to run. Like, maybe some clusters are in, you know, HIPAA sorts of environments, and others are just production with no protected

(27:51):
data, nothing really important as far as, like, customer impact. But I think those were the three things, if I remember: network stuff, node limit stuff, and then isolation. But are those, like, reasonable criteria still these days for needing a new cluster? Or what does that look like in 2025?

Nick (28:12):
I think it depends on where that cluster is being made. I can say, proudly, if it's GKE, I do not think number of nodes is going to be an issue. Like, we just released support for 65,000 nodes last year at KubeCon NA in Utah. So I don't think nodes is the thing.

(28:32):
Just to be real, like, we built support for 65,000 because we have demand. We have customers, like, three of them, that are using that many, um, but it's a very specific use case. But I think our job is to be able to build a product for our end users, the platform engineers, that removes

(28:55):
the need for them to have to consider those low-level limits. Like, I would prefer them to just focus on the compliance aspect of it, or, like, the business reason for them to have more than one cluster, than an actual physical limitation of one. And over time, that's becoming more reasonable.
Like, one of the things that we've done on GKE is, I don't

(29:17):
know how familiar you are with Kubernetes, I think you are, so I'm just going to say: like, we took etcd, right, and we're replacing it with Spanner, right? So we've removed the bottleneck of, like, a database running literally in VMs in a control plane, and instead use our own managed service that runs Google Search and Ads, all this stuff. So that greatly increases

(29:41):
the requests per second the API server can handle. The requests per second on the API server were the bottleneck, so we're making a new bottleneck now. I'm not exactly sure where it's going to be yet. But again, I think the idea is that we're trying to remove those limits from things you have to consider. Now, there's always other limits, right?

(30:03):
IPs. IPAM is just, especially if you're a legacy company, it's just tough. We are rolling forward with IPv6, like, with the actual IPs allocated to the pods within the cluster, but then they have to, like, NAT everything so it understands the world outside. It's not easy. But that's the goal: to continue to make the

(30:30):
size irrelevant. My personal dream would be, like, outside of business and governance rules, you have a cluster per region, right? Because they could be so big that it doesn't matter. And then, I say that now, but also there's, like, the blast radius of change. So we need to do more work to make that real.

William (30:53):
Um, but yeah, the IPv6 thing is huge. Uh, we'll get into that in a second, though. Yeah, so continue.

Nick (31:03):
I think, uh... full disclosure, like, ADHD, my brain's everywhere. So I'm probably the worst interview in the world. It's hard to try to keep me on track.
But one of the things that Ithink will help you forget what
I'm doing what I would do if youwere you platform engineer,
regardless of which cloud I'musing, even if I'm using on-prem
is I'm trying to build clustersfungibly.

(31:25):
I'm trying to make them just.
It's the old pets and cattleanalogy.
I want them to be cattle, Iwant them to be replaceable.
I want to add a cluster withthe right label, have it come up
to the right state, know whatits job is and do work, and I
want to be able to remove acluster when I need to.
And if you do that, sizedoesn't matter anymore, because

(31:48):
if you need more, you add more.
Right, just like you're scalingout clusters clusters like we
used to scale out VMs, or likeKubernetes does under the covers
.
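The label-driven, fungible-cluster pattern Nick describes can be sketched roughly as follows. This is a hypothetical illustration, not GKE Fleets code; all names (Cluster, WORKLOAD_PROFILES, desired_workloads) are invented for the sketch.

```python
# Sketch: a cluster's "job" is derived purely from its labels, never its
# name, so any cluster with the right label is interchangeable (cattle).
from dataclasses import dataclass, field

@dataclass
class Cluster:
    name: str
    labels: dict = field(default_factory=dict)

# Label selector -> workloads a matching cluster should pick up.
WORKLOAD_PROFILES = {
    ("env", "prod"): ["payments-api", "ingress-gateway"],
    ("env", "staging"): ["payments-api-canary"],
}

def desired_workloads(cluster: Cluster) -> list[str]:
    """Resolve a cluster's job from its labels alone."""
    workloads = []
    for (key, value), apps in WORKLOAD_PROFILES.items():
        if cluster.labels.get(key) == value:
            workloads.extend(apps)
    return workloads

# Adding capacity is just adding another labeled, interchangeable cluster.
fleet = [
    Cluster("gke-us-east1-a1", {"env": "prod"}),
    Cluster("gke-us-east1-a2", {"env": "prod"}),   # new cattle, same job
    Cluster("gke-europe-w1", {"env": "staging"}),
]

for c in fleet:
    print(c.name, "->", desired_workloads(c))
```

Because no workload placement ever references a cluster by name, removing or replacing any one cluster requires no config changes beyond the list itself.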

William (31:57):
Awesome. Yeah, the IPv6 thing, and you're absolutely right about the pets versus cattle thing. We're still... I talk to so many folks in the network automation community. They're still having challenges with this in large companies, where they have names for individual servers. They're using automation in very limited ways, and they're still...

(32:21):
It's the whole, hey, we still have pets in 2025. And I think that going up to the cloud does make it easier to do these things, though; you're not relying on as much iron, as it were. But even with, like you were saying, IPv6 and the NATing scenario, what most companies are doing, at least on the consumer side, is they're

(32:44):
embarking on dual stacking. They're doing as much as they can with it. But even when you dual stack, that introduces complexity; no matter what, it's complicated. You really want, okay, we're just going to have greenfield with IPv6 only, but when everything has an IP address and has to talk with the whole system, one does not

(33:07):
simply do that.

Eyvonne (33:10):
Well, there always will be. We are still at a state in the industry where there is one key critical component somewhere in the data flow, or somewhere in the path, that doesn't support it, or there's some constraint

(33:31):
that still is keeping us tethered to v4. And we've been talking about this for so long. I mean, I guess someday we will get there, but yeah.

Nick (33:42):
And then you have the applications that need to talk over TCP, and they need to know the IP address outside of the Kubernetes service, and then your whole network diagram just gets really fun. And that's everything, your house of cards.

Eyvonne (33:58):
Your house of v6 cards has just fallen.

William (34:02):
Speaking of your ADHD that you mentioned earlier, which I'm about to have, going completely off topic. But you mentioned Spanner earlier. Spanner is one of Google's many gazillion data products. Isn't that the one that can automatically do transactions globally? It has the multi-region automatic replication.

(34:24):
Is it that one? It's SQL, right?

Eyvonne (34:28):
The way I like to describe Spanner is, first of all, Spanner is the data system that runs all of the underlying metadata for YouTube. So if you think about all that's required to make that work: it is a globally consistent database with

(34:48):
incredible uptime. It's five nines. It's been a while since I've looked at the data sheets, but it is CAP-theorem-defying because of what it does with the atomic clock. CAP theorem is still real, right?

Nick (35:06):
I know Nick is shaking his head at me. It does not... it does not defy CAP theorem. I'm sorry, like I can't.

Eyvonne (35:13):
I love... I say "defying" because it doesn't prove it wrong, but what it does is implement some really clever atomic clock magic, I don't know, that makes it appear to.

(35:36):
CAP theorem is still real. Don't hear me say that CAP theorem is not real.

Nick (35:40):
I mean, it's amazing, so I don't want to take away from it. It is amazing, and it's the forethought. To install and create a system that syncs atomic clocks across all of our data centers is what makes it possible. So all we're saying with Spanner is we're guaranteeing atomicity, right, at distance. That said, network partitions are still real, right, and so

(36:05):
the risk of an actual outage is almost imperceptible. So we kind of say it defeats CAP theorem. But the idea here is: it's a relational database system, it has ACID semantics, and you commit a change. We're just saying we're going to make sure that that change is committed and timestamped across all of the replicas,

(36:29):
across all of the regions that are in that Spanner shape. And if there is a primary (this is why I say it's not really defeating it; there is one primary at any given point in time), we react and adjust, and you almost don't feel it, because it's all on our network, it's super fast, and we guarantee the consistency, or

(36:51):
at least guarantee the ordering.
So in your app code you then have to deal with conflicts, right. That's the part that people don't really talk about that often, but you still have to be able to... we can say, hey, there's a conflict, and then you decide whether you want the latest or whichever version of that transaction commit. But it's amazing.
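The timestamp-and-wait idea Nick gestures at (Spanner's TrueTime-style "commit wait") can be simulated in a few lines. This is an illustrative toy, not Spanner's implementation; EPSILON is a made-up clock-uncertainty bound, and the function names are invented.

```python
# Sketch: if clock error is bounded by EPSILON, a writer picks the latest
# possible "now" as the commit timestamp and waits out the uncertainty
# before acknowledging, so timestamp order matches real-time order.
import time

EPSILON = 0.005  # assumed clock uncertainty bound, seconds (hypothetical)

def tt_now():
    """Return (earliest, latest) bounds on the true current time."""
    t = time.monotonic()
    return t - EPSILON, t + EPSILON

def commit(write_fn):
    """Apply a write, then commit-wait until its timestamp is in the past."""
    _, latest = tt_now()
    ts = latest                   # commit timestamp: latest possible "now"
    write_fn()                    # replicate/apply the change
    while tt_now()[0] <= ts:      # wait until earliest possible now > ts
        time.sleep(EPSILON / 4)
    return ts                     # safe to acknowledge: ts is in the past

t1 = commit(lambda: None)
t2 = commit(lambda: None)
assert t1 < t2  # acknowledged commits are totally ordered by timestamp
```

The design point is the trade: every commit pays a small, bounded wait so that once a commit is acknowledged, no later commit anywhere can receive an earlier timestamp.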

William (37:11):
Yeah, it is amazing. I mean, when you think about data... well, every cloud and thing is different. So you've got availability zones that you have to worry about, and usually those are, of course, going to be synchronous replication across data sources. But the moment you go multi-region, in your whole infrastructure and application design

(37:32):
you're thinking about, you know, asynchronous replication, how that's going to work, active hosting across multiple regions or active-standby. There are so many different things that go into the architecture, and data, and data gravity is usually the challenge there. Because, yeah, most applications are at least trying to go as stateless as possible at this

(37:54):
point, even though a lot of them do have a lot of state baked in inadvertently. But yeah, to take the complexity piece out of the database portion of that whole process seems like a really awesome win.

Nick (38:08):
Yeah, I mean, just as a developer or a platform engineer, the fact that I could just provision this database system and not have to program how and when it fails over... I just pick a shape, right, and I understand the trade-offs for that shape with cost and availability, and just go. As opposed to back in the day... oh man, I have some more stories from my days at

(38:31):
Microsoft of setting this up. I mean, these folks would set up SQL databases in two different regions and then use, like, third-party technology to sync them, or log shipping, or any of that stuff.

Eyvonne (38:44):
It was brutal. Yeah, been there, done that. And, you know, you have to need that kind of scale and consistency to be able to use it. It's a premium service, but it's also incredibly valuable, and it takes a huge amount of

(39:06):
engineering overhead and effort off your engineering and design teams. You just don't have to think about it, and that's not something that we've been able to do for a very long time in this industry.

Nick (39:21):
Y'all, so we have very brilliant engineers at Google, as there are everywhere, right? Some of them are young, fresh out of college, and they've only worked at Google for a couple of years, so their only real-life experience with having to deal with consistency is Spanner, right? And sometimes I'm on calls with them, like, do you

(39:43):
know how good you have it? Do you know?

Eyvonne (39:47):
You just have a "bless your heart" moment, don't you?

Nick (39:51):
Yes, not to be the "bless your little heart" moment. My mother-in-law's super Southern, so I understand what that means. Yeah, it's like, oh, isn't that...

Eyvonne (40:05):
Yeah, man. But then you're starting that "I walked to school uphill in the snow, both directions" kind of a conversation, so you're becoming that guy. Yeah, but in this analogy, because of the database consistency, like 80 to 90 percent of the other folks that are in their position, working for other companies, are still walking uphill in the snow. That's right, that's right.

William (40:28):
Yeah. So, this whole thing... I have a lot of questions. I know what the answer to most of these was back when I was doing this, but things have changed. Typically, is it still the common thing,

(40:48):
for a large organization that's trying to modernize, or even a medium-sized organization with lots of applications: are they still trying to profile? Okay, we have this set of applications, it's a good fit that we want to modernize with GKE. Then we have some sort of process we go through where the application is evaluated, it's broken apart, the dependency mapping, all these

(41:11):
things. Okay, it's a good fit. Now we're going to migrate it. Is that kind of how the process works?

Nick (41:20):
Yes, I think there's still a good bit of customers doing that. I do think that there's sort of been, honestly, a little pause on "cloud is the best thing" recently. So a lot of people who are still in data centers are empowered again to be like, no, we shouldn't change anything. So I don't know if there's as much of an effort across all

(41:45):
these enterprise customers to migrate all the things. But yeah, I mean, there's still a triage. If you're a big company and you're just now starting (which I think most have already started), you would pick the low-hanging stateless applications or something to experiment with. Then you

(42:07):
have to go down the stack of complexity and work out how and when you want to migrate these more difficult applications. And then I just want to put this on wax: I'm still here to say that if that application runs on .NET that's not supported on Linux, leave it on a VM. For the love of God, don't put it in a container.

Eyvonne (42:26):
That's what I was going to say. I think several years ago there was a movement to just modernize all the things. Everything should always run in a container; that's the way of the future; this is the way everything's going to be. I think, the more organizations have gotten in, first of all, some of them have learned how much they don't understand about their applications and how they run. And so if the application is natively meant to run in a

(42:52):
container, if it's code that you're writing, that you have access to, that's going to be long-term strategically valuable for your business, it makes a lot of sense to replatform that onto Kubernetes. There are some applications, though, that do what you need them to do. There's not a whole lot of strategic value to modernize and update that particular application, and in those cases,

(43:15):
I think what Nick is saying is: leave that thing on a VM, because you're always doing an opportunity-cost calculation of what makes sense and where you need to put your effort. And so, you know, if you think back to the very early days, there was a set of folks who just believed that serverless and containers were going to be the only thing people

(43:36):
used in the future. The world's just more complicated than that. And so the thing I would recommend for enterprise folks, more than anything, is: know your application stack, understand what makes sense for your specific application stack, because as much as we have opinions, you're going to know your stuff better than anybody

(43:56):
else. Anybody else, including, you know, the Deloittes and the EYs and all of the consultants. You need to understand that, and then look for expertise. Knowing your applications and your requirements, to understand where they best should live: that would be my guidance.

Nick (44:18):
But, Eyvonne, I don't want to know these things, I just want a magic button.

William (44:23):
You must be an executive, then. You're a newbie now.

Nick (44:31):
That's great. You put it better than I could. I think I've been disconnected a little bit from it, frankly, as now I'm so hyper-focused on GKE and stuff. I had answers more eloquently stacked in my brain when I was dealing with those things every single day for five, six years. I will say, just taking it back to platform

(44:52):
engineering for a second, I think another reason for the existence of these homegrown platforms is that they're a direct response to the fact that no company can build a platform that absorbs a significant amount of the workloads out there. The best that we've ever had was Heroku, and there was a very, very specific style of applications

(45:14):
you had to run to use Heroku. That was the best, right, it was. I still get happy when I think about "heroku push" or "cf push" or whatever. That experience was second to none. Yeah. But you can't do that for a legacy application. You certainly can't do that for, you know, whatever .NET

(45:35):
3.2 Web Forms thing you have set up. So the problem with the PaaS market (the companies out there trying to build PaaSes, and some of them are my buddies, like building Wasm as a service, right) is that it's only going to handle greenfield use cases most of the time. And I don't have a statistic anymore,

(45:58):
I would love to know what it is, but I would go out on a limb and say probably 70% of the computers, or the applications, running out there are not greenfield. They're old, they're making money, and probably 30% of them were written in COBOL or something, I don't know. What Kubernetes did, though, is give you a

(46:20):
higher lowest abstraction, I guess, right. So it can't do everything, but it can do a lot more than Heroku could, and so you can take this common infrastructure and then build different interfaces on top of it. People are basically building their own Heroku-style PaaS on top of Kubernetes, so the new stuff can have the same security model, governance model, and service

(46:42):
discoverability as the old stuff. And that's the thing that Kubernetes brings that, I think, makes it so attractive. It's like you can kind of put a lot here.

William (46:53):
Can't put everything here, don't put Windows there. What? You could put a lot. Love the Windows comments. Like, yeah, I don't even want to... it's a different... wait, wait, you have a war story about Windows containers?

Nick (47:10):
Those are always fun.
I have a.

William (47:12):
I've had a few fights. We went through a period, it was probably about four months, of just... there are just some things you don't do. If we're technologists and we have experience, there are certain things. It's kind of like stretching Layer 2 yourself over homegrown services, over like a DCI, when you haven't set

(47:36):
up infrastructure properly across both ends, and haven't considered the distance between the data centers and all these things, and, you know, the shared-fate thing becomes pretty daggone terrible. It's like, hey, we know that this is just not a good idea. At this point you have shared fate, and it's really bad for everybody.

Eyvonne (47:54):
How's that?

William (47:56):
As in, the internet's down, and like, okay, we actually put one sensor on this side of the DCI for the firewall and the other sensor on the other side of the DCI, in another data center, because we were trying for high availability like a hyperscaler. Sorry, how did we not think...

Eyvonne (48:13):
This is, this is all trauma.

Nick (48:16):
This is that trauma. Real quick, for the user out there like me: what is DCI? I don't know. Probably data center interconnect? Yeah. Okay, this is your working jargon. Cool, sorry.

William (48:27):
All right, that makes sense now. Yeah, basically connecting two data centers and saying, hey, for high availability, we're going to put one thing in one data center and one thing in another and cross our fingers, and then we're going to stretch a Google database across it, until the internet goes down for the fifth time.

Nick (48:42):
Right yeah.

Eyvonne (48:44):
Yeah, we're going to manage state across it and use Layer 2 and all that fun stuff. Yes, yeah. But that was William's analogy; you kind of went down the DCI rabbit hole there. But talking about Windows is what started that.

Nick (49:02):
So, Windows. Here's the thing. I did work at Microsoft for five years, and for a minute there I was tasked with trying to figure out how to help customers take their Windows workloads and put them on, not even Kubernetes, it was Docker, Docker Swarm. But the dirty truth about most of the Windows containerization, and I'm talking about older Windows applications, not newer

(49:26):
.NET Core and above that can do either Windows or Linux, so that's a whole other conversation. But the older stuff, the stuff that needs IIS, right: the root process of the container image is IIS. So now, where I had an IIS farm before, let's say it was like

(49:48):
five servers, and I was running 10 apps on it, each server had one instance of IIS running, with those 10 apps running across it, sort of like containerization and bin packing. Now I have those same five servers in a Kubernetes cluster running Windows Server, and I have 10 instances of IIS. So I'm doubling the processing requirement, because the root of

(50:14):
the container, the startup process, for most of these apps was IIS. So my question was always, like: how is this... what are you getting out of this? It's not more efficient, no. Or do you just want to be cool? Because you're already not going to be.
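The bin-packing arithmetic in Nick's example works out as follows (server and app counts are his; the per-instance overhead figure is invented purely for illustration):

```python
# Nick's example: 5 servers, 10 apps.
# Before: one shared IIS instance per server hosting all apps.
# After containerizing: each app's container runs its own IIS as the root
# process, one replica per app.
servers, apps = 5, 10

iis_before = servers   # one shared IIS per server -> 5 IIS processes
iis_after = apps       # one IIS per app container -> 10 IIS processes

overhead_mb = 200      # hypothetical fixed memory cost per IIS instance
print("IIS instances:", iis_before, "->", iis_after)
print("extra overhead:", (iis_after - iis_before) * overhead_mb, "MB")
# The IIS process count doubles before any app does useful work, and it
# grows further with every additional replica per app.
```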

Eyvonne (50:29):
But didn't we all get into this line of work to be
cool?

William (50:34):
I kind of gave one notch. So, one thing... I mean, honestly, I can say that my only positive experience ever on a Windows machine was when they released Windows Subsystem for Linux 2. Yeah, and that made the experience of running containers locally... the performance was actually good. If I'm doing it on my Mac with ARM, the performance of running

(50:57):
Docker, because of the additional abstraction in the kernel, is not good. It's not fast. Doing it on Windows Subsystem for Linux 2 is actually very fast. I don't know how the mechanics of all that work, and if it's really... yeah, I don't know, but the experience is really good, although I don't use Windows anymore, so I can't...

(51:17):
Yeah, I think it's like Hyper-V provisioned into a Linux distro, and then they built a really nice... it's great, to be honest. And a lot of customers, a lot of our end users out there, don't have the option to not use Windows.

Nick (51:33):
Right, it's like a corporate mandate, because of Active Directory and all these things. So good for them that they have WSL. I'm just... I'm a little bit privileged. Actually, I'm a lot of bit privileged, because just look at me. But, like, I use gLinux now. I actually have a laptop with Google Linux

(51:54):
installed on it, and that's what I develop on, and it's amazing. I don't have to do any weird shenanigans; I just basically can run Linux processes, which containers are.

William (52:06):
I'm so jealous. If I could use a Linux machine instead of a Windows one, I would run straight Debian for everything. That's my... I've been using Debian since, like, the late 90s. It's my OS of choice. I love it. I went to Ubuntu for, like, a hot second. I actually started off on Slackware originally, landed on Debian at some point.

Eyvonne (52:27):
Slackware was my first distribution too, yeah.

William (52:29):
There you go.

Eyvonne (52:30):
Back in the day.

William (52:32):
After using Slackware for a little bit, I'm like, do I really like this Linux stuff? And then I hit Debian, and Debian's just amazing. If I could run Debian on a machine for work, I would be a happy camper. Unfortunately I can't, and never have been able to, so it's back to the tricks. But at least now you can just spin up an ephemeral machine

(52:53):
in the cloud and run something, connect to it pretty easily, and do what you need to do. But it's nice to have that local dev environment and that local experience with Linux.

Nick (53:06):
Yeah. Hey, so my ADHD brain is kicking in. I feel like, honestly, I could talk to y'all for another three hours, but I'm sure we don't have that much time. There is one plug I guess I'd like to give, yeah. So, we talked about a lot of great stuff I'm working on,

(53:26):
like GKE, right. But what I'm really passionate about is making sure the things that we're building for multi-cluster and GKE land upstream in the open source. So, like, a lot of the work that I did last year... and when I say me, I mean I'm a product manager, I add value; little to no work

(53:47):
that I did, helping the team deliver some tools in SIG Multicluster, which is part of upstream Kubernetes, right. And the purview of SIG Multicluster is to start to help build the primitives for multi-cluster tool chains across upstream Kubernetes, which we then use in the managed

(54:09):
Kubernetes world.
So the one that we shipped right at Utah, at KubeCon, or in that Kubernetes lifecycle version, was cluster inventory, right. So it turns out all these tools that a lot of customers are using to manage multiple clusters (Argo CD, Flux, Fleets)

(54:30):
are basically just lists of clusters with metadata that allow you to kind of manage all of them, right? So if you want to use Argo CD and Fleets or Kyverno or whatever tools you want to use, you now have to manage three or four or five different cluster lists, right? So every time you add or remove a cluster, you have to write

(54:52):
glue code to make sure that that update is synced across all those lists. So what we decided to do up-term... up, up, upstream (now you have me thinking about terminals), upstream, yes, is to, you know, make one list to rule them all. But at least this list is sort of neutral, right?

(55:12):
You can make changes to this list; my stuff with Fleets and Teams will respect those changes. So if you make a change on a member of the list, it'll actually reflect into whatever tool chain you're using with Fleets and Teams. I'm working with the Argo CD upstream team to build plugins so that it works there too. We've got multi-cluster queueing

(55:33):
that's also going to key off this list. So the point here is that we have this centralized list that's hopefully going to make the world easier for platform engineers, so they don't have to manage, like, 5, 6, 7, 8, 9, 10 lists or whatever. So the plug is: if you're interested, we would love you to join. Every SIG has a weekly or a biweekly meeting, right?

(55:57):
We routinely get engineers from big companies like Google and Microsoft and VMware, and we're all working together. But we would love end users to come and join these meetings to validate what the heck we're building. So please, participate. Maybe there's show notes or something I can give you all a link to look at. And it's open,

(56:17):
right, so anybody can join. That's the beauty of Kubernetes and these SIGs: anybody can join.
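The "one neutral list, many consumers" pattern Nick describes might look conceptually like this. It is a hypothetical Python sketch of the idea, not SIG Multicluster's actual cluster inventory API; all class and variable names are invented.

```python
# Sketch: a single cluster inventory that every tool subscribes to,
# instead of each tool (GitOps, policy engine, fleet manager) keeping its
# own cluster list that glue code must keep in sync.

class ClusterInventory:
    """Single source of truth for cluster membership."""

    def __init__(self):
        self._clusters = {}      # cluster name -> metadata
        self._subscribers = []   # change callbacks registered by tools

    def subscribe(self, on_change):
        self._subscribers.append(on_change)
        on_change(dict(self._clusters))      # initial sync

    def add(self, name, **metadata):
        self._clusters[name] = metadata
        self._notify()

    def remove(self, name):
        self._clusters.pop(name, None)
        self._notify()

    def _notify(self):
        for cb in self._subscribers:
            cb(dict(self._clusters))         # every tool sees the same view

# Two consuming "tools" that previously each maintained their own list:
gitops_view, policy_view = {}, {}

inv = ClusterInventory()
inv.subscribe(lambda c: (gitops_view.clear(), gitops_view.update(c)))
inv.subscribe(lambda c: (policy_view.clear(), policy_view.update(c)))

inv.add("gke-us-east1", env="prod")
inv.add("gke-europe-west1", env="staging")
inv.remove("gke-europe-west1")

# Both tools converge on identical membership with no glue code.
assert gitops_view == policy_view == {"gke-us-east1": {"env": "prod"}}
```

Adding or removing a cluster touches exactly one place, which is the whole point of the shared inventory.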

William (56:22):
You mean SIG? You mean special interest group, with the CNCF, right? Yeah, awesome. Yeah, we will definitely link that. You said the multi-cluster Kubernetes SIG... it was SIG Multicluster, yeah. Okay, yeah, I'll find that group and I'll link everything in the show notes. That's a great idea.

(56:42):
Yeah, the more people hear about it and the more involvement, the better.

Eyvonne (56:48):
It would be just amazing with your input out there. And I think... so, one of the things that I learned, I'll say this, and we do need to wrap up, but one of the things that I learned, after I was an enterprise network engineer and architect, is that once I crossed the chasm from the customer to the vendor side, I learned how important smart

(57:11):
customer feedback is. And I realized I had a lot more value to my vendors than I understood at the time, because I was hands on keyboard, using their stuff all day, you know, every day. I knew where the points of friction were, and being able to show up and have meaningful conversations with them about

(57:33):
where I had problems, about what would improve the product, was incredibly valuable. And so don't sit there and say, I'm at maybe a Fortune 500 enterprise, a medium-sized enterprise, and we do a few cool things, but we're not super huge. Actually, you're in the sweet spot.

(57:53):
So show up, make a contribution, talk about your experience, because that is incredibly valuable.

William (58:04):
Awesome. Well, for all the kind folks out there, where can they find you, Nick? Aside from your dancing videos on TikTok, where else online are you these days?

Nick (58:13):
Dude, you've seen those? Okay. Yeah, so I'm still on X, Twitter. I hate to call it X; I am on the Twitters, I still call it Twitter. Yeah, I honestly would love for Twitter to be no more. So I'm also on Bluesky, LinkedIn, SIG Multicluster,

(58:38):
like, if you actually want to talk to me, like in real life as a human, join those things. But seriously, if you have questions, or you're customers out there and users of GKE,

(58:59):
I am happy to meet with anybody to take in that feedback. So just find me on any of those social platforms and I shall respond. Also, I am way opinionated and loud on socials. It comes with a warning label, I guess.

Eyvonne (59:20):
Enter with caution, you know, proceed with caution. You've done great on this talk, Nick. It's been wonderful.

William (59:27):
Yeah, really enjoyed it. Yeah, we went over an hour. We're over an hour right now. It's crazy, it's good stuff. You know it's a good conversation when you don't realize how long it went.

Nick (59:38):
Thank you very much. You'll probably cut this part out, but you should close us out with something about a Lord of the Rings YAML. Just have the bookend.

William (59:53):
I should. I need to come up with something clever. I actually think, though, that's a good idea.

Nick (59:57):
You're always thinking about the good guys. What about the bad guys? What is Sauron in this analogy? Is he the underlying API that YAML is trying to organize? Is he a DDoS?

William (01:00:08):
What is he? It could be both, yeah. And he had many agents in many places. That's the thing. You throw in some VPs and some of the bad parts of the bureaucracy of the organization too.

Nick (01:00:22):
Imagine he's just like a SQL database that has nodes in 10 different regions around the world. That's what he is. There's never actually a commit, because they can never agree.

William (01:00:37):
And then the world was in darkness.

Eyvonne (01:00:41):
It's that script that syncs all these free Microsoft SQL databases that exist. Somebody wrote a homegrown script to sync all those, so they wouldn't have to pay for SQL and could use the free edition, and that happens in the background.

William (01:00:58):
So that's the one sync that rules them all.
One sync, there we go.