Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:06):
Welcome to the MongoDB Podcast Live. I'm your host today, Jesse Hall. We have an amazing live stream lined up for you. We're going to talk about scaling MongoDB without breaking the bank. So we have some new elasticity features that enable cost optimization without sacrificing performance.
So if you're a developer, we're going to get into some more
(00:27):
technical details, so stick around for that. And maybe if you're not as technology savvy, or maybe you're not as technical, maybe you're an ITDM, an IT decision maker, stick around too, because we're going to help you understand why you should choose MongoDB as your data platform. So before we get into the content, I just want to remind you that we live stream every week, twice a week actually.
(00:50):
Every Tuesday we normally have guests on, and we go over some amazing topics with our guests. And on Thursdays we do live coding. It's always centered around AI, so we show you how you can build things with AI and MongoDB. So be sure to tune in every week, Tuesdays and Thursdays. Without further ado, let's go ahead and bring on our guests.
(01:10):
Today we have Rob from Engineering and we have Alec from Product. So welcome. Maybe let's just go through some quick introductions. Alec, would you like to introduce yourself first?
Absolutely. Thank you, Jesse. I'm Alec, Product Manager for Atlas Clusters at MongoDB, focused on scalability, availability and geographical
(01:33):
expansion, and I'm based in New York City.
Nice. And Rob?
Yeah, definitely. I'm Rob. I'm a software engineer on the Atlas Clusters team. My focus area overlaps a lot with Alec's, focusing a lot on performance and scalability of Atlas Clusters, and I'm also based in New York City.
Nice. So we're going to talk about how we scale Atlas,
(01:56):
and we're going to talk about elasticity. We didn't talk about this beforehand, but would one of you like to explain, at a high level, what is platform elasticity?
Yeah, absolutely. Thanks, Jesse, for the question. So something that we've observed a lot over the last few years is that cost optimization is sort
(02:17):
of top of mind for companies of all sizes. And the way they think about it, generally speaking, is about balancing availability, performance and budget constraints. And that's mission critical for them. We spoke with tens of Atlas customers across the board, from small start-ups all the way to Fortune 500 companies, to understand their elasticity
(02:40):
needs. If you manage large scale
infrastructure, maybe you do, maybe someone you work with
does. One of your top areas of focus
is probably maximizing that availability and performance
without increasing cost. And that trade off is where
elasticity comes in. And elasticity is sort of the
(03:01):
property of the database, or a data layer, that allows you to reach for maximum availability and performance without breaking the bank. And generally speaking, it's a really hard engineering problem, and generally seen as a tough trade-off. And as developers, I'm sure you're aware your number one goal is managing complexity,
(03:23):
right? And so this is where Atlas really shines, because we believe we can provide you with the greatest elasticity, so that we can take care of that hard engineering problem. You can reach for maximum performance and availability without breaking the bank. So you can have your cake and eat it too, effectively.
(03:44):
Nice. So let's bring this back a little bit. Help me picture this in my mind. In the real world, what does elasticity mean?
Yeah. So for example, let's say you have an ecommerce workload. Let's say you're a small shop selling, let's say, some handmade items, and you see a
(04:09):
lot of traffic throughout business hours, because let's say it's a local shop that sells only in your time zone, and you see a lot of traffic from 8:00 AM through 7:00 PM. But outside of those hours, it's very quiet, right? You probably don't want to provision for maximum capacity for when it's the busiest. You want to see both your
(04:29):
application tier and your data tier change the provisioned capacity as the actual needs change, right? And that's daily, weekly, monthly, annually.
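Alec's daily traffic pattern can be put into rough numbers. Here is a toy Python sketch, with entirely made-up capacity and price figures, comparing always-provisioning for the peak against elastically tracking demand:

```python
# Illustrative sketch (not MongoDB code): the cost of provisioning a data
# tier for peak load 24/7 versus elastically matching capacity to demand.
# The shop sees heavy traffic from 8:00 to 19:00 and little outside it.
# All numbers (capacity units, per-hour rate) are invented for illustration.

PEAK_UNITS = 10            # capacity units needed during business hours
QUIET_UNITS = 2            # capacity units needed overnight
COST_PER_UNIT_HOUR = 0.5   # hypothetical dollars per unit-hour

def demand(hour: int) -> int:
    """Capacity units actually needed at a given hour of the day."""
    return PEAK_UNITS if 8 <= hour < 19 else QUIET_UNITS

def fixed_daily_cost() -> float:
    """Provision for the peak all day long."""
    return 24 * PEAK_UNITS * COST_PER_UNIT_HOUR

def elastic_daily_cost() -> float:
    """Provision only what each hour actually needs."""
    return sum(demand(h) for h in range(24)) * COST_PER_UNIT_HOUR

if __name__ == "__main__":
    fixed, elastic = fixed_daily_cost(), elastic_daily_cost()
    print(f"fixed: ${fixed:.2f}/day, elastic: ${elastic:.2f}/day, "
          f"savings: {100 * (1 - elastic / fixed):.0f}%")
```

With these invented numbers, the elastic schedule costs roughly 43% less per day; the point is the shape of the saving, not the specific figures.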
Nice, nice. And so why should that really even matter?
Because fundamentally today,
(04:50):
your ability to do well as a business is determined by your cost structure, and also by being able to compete globally, right? And so as the workloads change, as you introduce new features, as you sell more items, for example, for your business, you want to be able to continuously adapt to those
(05:11):
changes in demand effectively, right? You as a developer want to be able to build things for your users such that you can serve your users better. You don't want to worry about all the non-functional things that are happening behind the scenes. You want to do whatever it takes to make your beer taste better, rather than focus on, you know, something
(05:34):
that we as a database provider can do a great job at.
Yeah, personally, I hate having to deal with all that stuff. That's why I love that Atlas just does most of it for me. And I feel like Atlas has always been elastic, right? But what have we done to, like, increase its elasticity?
(05:58):
Yeah, absolutely. So first of all, if you are developing a new application, perhaps you've already started in the cloud. If you are developing an existing application, perhaps you're considering the move to the cloud. Well, you might be aware that one of the key tenets of the cloud is that it packages elasticity in a simple interface and makes it easy for you to consume,
(06:22):
right? So that's one of the key value propositions of the cloud. Now, Atlas has always been elastic, right, as you mentioned, Jesse. But now what we see is we are able to actually vertically scale the database, that is, dedicated Atlas clusters, up to 50% faster. And what that means is we can complete a scaling operation
(06:42):
within single-digit minutes. And that's a huge deal, because if you have a huge workload during certain hours, that ability to change that provisioned capacity quickly is mission critical for you to retain that availability and performance and serve your users productively.
(07:02):
Now, the great thing about what we've done here behind the scenes is that this requires no configuration changes on your end. So yesterday, Atlas took a while to scale up. Today, it takes less than 10 minutes. And that just works out-of-the-box. It just happens behind the scenes, and you'll get the benefits of a continuously
(07:25):
improving database as time goes on.
Nice, nice. So let's talk about enterprises versus start-ups. Where does this really apply? I can understand, like,
(07:45):
start-ups kind of scale slowly, but with enterprises there might be a lot of fluctuations. How does this apply to different use cases?
Yeah.
So the great news is that this is a fundamental property of our architecture and the database and the data platform, i.e., Atlas. And therefore this applies across all sorts of workloads. So all the way from small M10 clusters that you deploy for development purposes, all the
(08:09):
way through production clusters with tens of shards on an M200 NVMe cluster, right. So those improvements, that elasticity, are sort of across the board. And in a high-volatility environment like we currently have, or perhaps for your specific use case, that data tier elasticity is priceless,
(08:32):
right? So we believe we can help you grow and scale on Atlas and make your workload elastic day-to-day as well.
Yeah, nice.
So we talked a little bit about that e-commerce example. Let's maybe dive a little bit deeper into that example. How can these new improvements help that platform specifically, if we're
(08:52):
talking about, you know, e-commerce as an example?
Yeah, absolutely. E-commerce is a really interesting vertical, because folks that are very familiar with the space, I think, will probably be nodding very actively right now. But it's a very challenging
(09:14):
environment, because the sector is ultra-competitive, right? And what that means is, if you operate an e-commerce or retail outpost, you need maximum levels of elasticity, because you need maximum performance and availability when that demand increases. But you're also in a very
(09:36):
competitive environment. Therefore, your cost structure has to be maximally optimized, right? So you cannot afford to remain over-provisioned for peak workload. The ability to right-size your workload is effectively your competitive advantage in your area. And so for example, with the recent changes we've made to Atlas, a large customer in the
(09:59):
food delivery industry that frequently launches promotions and notifies customers of those promotions has observed that this can cause a huge spike of traffic to their clusters. Now they trust Atlas, so they can resize that cluster ahead of time in under 10 minutes, which helps them realize cost savings
(10:20):
because they can stay on the lower tier when need be and only scale up when desired. And that ensures that they can effectively manage and develop their delivery experience for their customers. So they don't have to actively manage the database. They can effectively focus on
(10:41):
what makes their beer taste better, which in this case is serving their customers across the board.
Yeah. So for the IT decision makers out there, what can they learn from these customers and how they are provisioning these assets?
(11:01):
For sure. I mean, first of all, Atlas brings to bear the fruits of the cloud. So it allows you as the decision maker to effectively trust Atlas to make that elasticity management problem go away, and focus on other operations that are critical to
(11:25):
your IT infrastructure and compute infrastructure, such as the application tier, your business logic, etcetera.
Yeah, in this example specifically, I always think of Black Friday. So, like, you don't want to miss out on any sales. You want to make sure that you've got enough provisioned, but then, you know,
(11:46):
weeks later, you don't want to stay at that same tier; you want to come back down. So yeah, these features, I can see how they definitely come into play. Was there anything else in this specific example that we wanted to call out?
I think we covered all those
parts. Perhaps I would also add,
(12:09):
to my earlier point, that because this works across the board, i.e., across both small clusters as well as, you know, clusters with tens of shards and thousands of nodes: if it's a mission-critical workload at scale, Atlas can very
(12:31):
effectively handle that scale and be able to give you those provisioned resources, all the way from when you start as a business, all the way through achieving critical mass and becoming, sort of, a Fortune 500 corporation.
Yeah, yeah, that's kind of been the thing that MongoDB has always been really great at, is scale. That's really what it was built for, scale, right?
Absolutely.
(12:54):
Yeah. Well, let's dive a little bit into some of the specifics of these new features and how they work. And the first thing that we'll kind of dive into is the faster primary upscaling. Rob, do you want to, like, touch on that point? And I think we're going to talk,
(13:16):
maybe, a little about some details, and then in a bit we're going to see some things in action as well. So the viewers out there, stick around. We're going to actually show some stuff. It's not going to be just telling; we're going to do some show and tell. So, yeah, Rob, if you want to talk about some of these key things that we've implemented.
Yeah, definitely. And I think I'm going to kind of tie this back to some of Alec's points earlier.
(13:38):
So the basic idea of this faster primary upscaling feature is that you're going to kind of have two main types of workloads against your clusters: read workloads and write workloads. In cases where you have write workloads, it's kind of tougher to scale those, because for read workloads you can scale out to secondary nodes.
(13:59):
For write workloads, it's going to be dependent on the primary. And the idea of faster primary upscaling is that if you have these write-heavy workloads that need more write throughput, we can move the primary node for your cluster onto more powerful hardware earlier on in the scaling process.
(14:22):
So in addition to bringing down the time for scaling clusters, we've also brought down the time from you starting to scale a cluster to you seeing more throughput for writes by an even greater degree. Because before, we'd be scaling the nodes, and it's pretty likely that the primary would change when we were almost done with scaling. With the faster primary
(14:43):
upscaling feature, we've kind of switched that ordering a bit, where it's pretty likely that the first node that's scaled is going to be the new primary. So if you're in a write-constrained workload, or you're seeing things like high CPU usage on your primary, operation times increasing for writes, any of that stuff that Atlas is also going to be alerting you to, you
(15:04):
can start scaling, and you're going to see relief in, like, 2 or 3 minutes instead of the amount of time it takes for the full cluster scaling.
That's nice.
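The ordering change Rob describes can be modeled in a few lines. This is a toy illustration, not Atlas internals, and the three-minutes-per-node figure is invented:

```python
# A toy model of why scaling the would-be primary first shortens
# time-to-relief for write-heavy workloads. Assume each node takes about
# 3 minutes to move to bigger hardware and nodes are scaled one at a time.
# The per-node duration is a made-up number for illustration only.

MINUTES_PER_NODE = 3

def time_to_write_relief(node_count: int, primary_position: int) -> int:
    """Minutes until the primary runs on upgraded hardware, given the
    1-based position at which the primary's node is scaled."""
    assert 1 <= primary_position <= node_count
    return primary_position * MINUTES_PER_NODE

# Old ordering: the primary tended to move near the end of the operation.
old = time_to_write_relief(node_count=3, primary_position=3)
# Faster primary upscaling: the first node scaled becomes the new primary.
new = time_to_write_relief(node_count=3, primary_position=1)

if __name__ == "__main__":
    print(f"old: ~{old} min to write relief, new: ~{new} min")
```

In this sketch, write relief arrives after the first node finishes instead of the last, which mirrors the "2 or 3 minutes instead of the full cluster scaling" point above.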
I've heard of this thing called front-loading the primary. For a layman, what does that mean, front-loading the primary?
Yeah, definitely. So a key part of the MongoDB
(15:27):
ecosystem is its robustness and ability to handle bad situations, or situations where something's going wrong. And we kind of leverage that when scaling, where, you know, we can take down one of those nodes of the cluster and the ecosystem is responsive to that. To your consumers, your clients, there's nothing obvious to them, with the infrastructure and
(15:48):
tooling we have in place to handle a server going down and things like that. And part of that existing ecosystem around MongoDB lets us ask for this upscaled node. We can kind of say, hey, you know, this node is probably better for being primary. And then the MongoDB database can handle all the stuff behind the scenes that goes into making
(16:11):
that node primary. And there is a lot that goes into it behind the scenes, both on the database side and the client side. But the big advantage of Atlas, or MongoDB, is that as a consumer of it, you don't really have to worry about that. It's on us to handle all that complexity. And you can have a simple interface to your cluster and the scaling behavior, where you
(16:32):
can scale on demand and get relief, or a decrease in cost, quickly, whether you're under-provisioned or over-provisioned.
Nice.
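The client-side transparency Rob mentions is what MongoDB drivers' retryable writes provide: a transient "not primary" error during an election can be retried once automatically. Below is a plain-Python simulation of that pattern, not actual driver code:

```python
# A toy model of the client-side behavior described above: during a primary
# change, drivers with retryable writes can transparently retry a write
# once, so the application never sees the election. Plain Python only.

class NotPrimaryError(Exception):
    """Stands in for the transient 'not primary' error seen during failover."""

def retryable_write(do_write, retries: int = 1):
    """Attempt a write; on a transient 'not primary' error, retry it,
    as a driver would against the newly elected primary."""
    for attempt in range(retries + 1):
        try:
            return do_write()
        except NotPrimaryError:
            if attempt == retries:
                raise  # out of retries; surface the error

# Simulate a failover: the first attempt lands mid-election, the retry wins.
attempts = []
def flaky_write():
    attempts.append(1)
    if len(attempts) == 1:
        raise NotPrimaryError("stepping down for a faster node")
    return "ok"

if __name__ == "__main__":
    print(retryable_write(flaky_write))  # caller never sees the election
```

The real driver machinery (server selection, session tracking, write concern) is much richer; this only shows why one retry is enough to hide a clean handover.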
So again, that's the reason, like, I'm not a fan of self-hosting. Just use Atlas, because it does everything for you. It just makes your life so
(16:53):
much easier. So we talked about the primary upscaling. What about replica sets?
Yeah, definitely. So there's kind of a mix of advantages here. If you have a workload that
(17:14):
has secondary reads, you can have, in Atlas, read-only or analytics nodes. And one improvement we've made here is allowing those nodes to scale in parallel. So if you have a primary, or write-heavy, workload, this isn't going to benefit you as much. But if you're leveraging secondary reads, this can be a huge benefit.
(17:35):
Whereas before, we would be taking one node down at a time in the scaling procedure, and it was very safe and it worked pretty well. But the downside was that if you had these secondary read workloads, where you had to read a lot of data, and you were leveraging your read-only or analytics nodes, there'd be more time spent in the scaling
(18:00):
procedure than was strictly necessary. And the idea here is pretty simple. If you have such a configuration, we allow you to, and now by default, scale those nodes in parallel, so you can have much faster upscaling of those secondary nodes and more throughput for reads on your cluster than was possible
(18:26):
before.
Nice.
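The gain from parallel scaling can be sketched with a toy model; the per-node duration here is a made-up figure:

```python
# A toy sketch (invented numbers, not Atlas internals) of why scaling
# read-only/analytics nodes in parallel shortens the overall operation:
# sequential scaling pays the per-node cost once per node, while parallel
# scaling pays it roughly once for the whole batch.

MINUTES_PER_NODE = 3

def sequential_minutes(nodes: int) -> int:
    # One node at a time: total time grows with the node count.
    return nodes * MINUTES_PER_NODE

def parallel_minutes(nodes: int) -> int:
    # All such nodes at once: bounded by the slowest single node.
    return MINUTES_PER_NODE if nodes else 0

if __name__ == "__main__":
    for n in (2, 4):
        print(f"{n} read-only/analytics nodes: "
              f"sequential ~{sequential_minutes(n)} min, "
              f"parallel ~{parallel_minutes(n)} min")
```

As the transcript notes, the electable tier is still handled carefully one node at a time; the parallelism applies to the secondary read tiers.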
And so then, the other thing is auto scaling. Do you want to walk through the demo now, or,
(18:47):
well, I guess let's talk about the auto scaling features, and then we can look at the demo in just a minute. I think we had a couple more points here.
Yeah, yeah, definitely.
I think auto scaling is another big win for Atlas. In the demo, we're going to be showing off manual scaling, but it is in a situation where auto scaling would have kicked in, were it enabled. The basic idea of auto scaling
(19:08):
is that for a lot of workloads, you can enable auto scaling in Atlas. And in cases where you're seeing a workload increase or a workload steadily decrease, Atlas can kind of manage the elasticity of your cluster for you. So if you're experiencing an increase in traffic, or a decrease in your cluster performance from a workload that's increasing, Atlas can
(19:32):
automatically scale up your cluster. And if that workload has subsided, and we now, you know, know it's safe to downscale your cluster, we can downscale your cluster, so you leverage cost savings from not needing all of the resources that you needed, you know, an hour or two ago.
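The up-and-down behavior Rob describes can be caricatured as a threshold rule. The thresholds and the "all samples in the window" conditions below are purely illustrative, not Atlas's actual auto-scaling policy:

```python
# A toy decision rule in the spirit of compute auto-scaling: scale up when
# utilization stays high, scale down only after it has been comfortably low
# for a sustained window, so brief dips don't cause thrashing. Thresholds
# and window semantics here are invented for illustration.

def autoscale_decision(cpu_samples, high=0.75, low=0.50):
    """Return 'scale_up', 'scale_down', or 'hold' for a window of
    CPU-utilization samples in the range 0.0-1.0."""
    if not cpu_samples:
        return "hold"
    if all(s > high for s in cpu_samples):
        return "scale_up"      # sustained pressure: add capacity
    if all(s < low for s in cpu_samples):
        return "scale_down"    # sustained slack: save cost
    return "hold"              # mixed signal: don't thrash

if __name__ == "__main__":
    print(autoscale_decision([0.81, 0.86, 0.92]))  # scale_up
    print(autoscale_decision([0.20, 0.31, 0.18]))  # scale_down
    print(autoscale_decision([0.40, 0.85, 0.30]))  # hold
```

The asymmetry (eager up, cautious down) is the design point: under-provisioning hurts users immediately, while a delayed downscale only costs a little money.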
Yeah. Is there a situation where I
(19:54):
would not want to rely on auto scaling?
It just seems like, again, I love that Atlas just does so
much for me and it seems like a no brainer to just enable auto
scaling if I know that there could be some fluctuations.
Is there a reason why we wouldn't do that?
Maybe I can speak to that. So it's a great question.
(20:15):
We believe for 95% of situations, the Atlas auto-scaler for cluster tier scaling will fulfill your needs. And that's all the way from moderately spiky workloads all the way to very, sort of, slow seasonal increases, right, as your workload ramps up because your business is doing better.
(20:37):
There are certain edge cases in which we believe some sort of custom configuration might be worthwhile, but we see that very rarely in practice. And we believe that long term, these edge cases in elasticity are still something we want Atlas to be able to handle.
Yeah, that's what
(20:58):
I assumed. So thank you for clarifying that. So talking specifically about DevOps teams and IT decision makers: they're thinking about cost, operational overhead. How does parallel scaling factor into that?
Yeah. So I think it kind of combines two
(21:19):
big things. One is the fact that a lot of your workloads don't need data that's absolutely the most up to date. And this is a huge opportunity for cost savings, where you can
you know, for this workload, it's OK if my data is a little
(21:39):
bit out of date. There are ways you can
configure, you know, the limits as to how out of date it can be.
But you know, if you're OK with this data being 500 milliseconds
or a second old, it's much easier to scale up the ability
to read that data than it is to have data that is absolutely the
most up to date. And by leveraging this feature,
(22:01):
you get to use the resources that are provisioned, that exist
for durability and availability,that also can provide
performance. So in a MongoDB cluster you
have, you're guaranteed to have the secondary nodes.
(22:23):
And with the secondary read preference, we'll start to have
operations that are read only going to those nodes.
So we're leveraging their compute resources and IO
resources to serve your businessneeds.
In addition to the critical functionality that they serve as
providing availability and durability for your cluster as a
(22:44):
whole. Atlas also has some features
that extend this for more sophisticated use cases or use
cases where you really have a higher volume read workload.
And we allow that through read-only nodes and analytics nodes, and those nodes are nodes we expect you to use for read-only
(23:07):
workloads. They're similar in behavior to one another, but one very nice feature of analytics nodes is that you can also scale those independently. So if you have, like, a lower-volume workload, or a workload that is not time-critical in nature, that can work on a little
(23:28):
bit of out-of-date data, analytics nodes can be a huge cost savings. If you have, you know, some read workload where you're OK with that happening at a pretty low rate, or alternatively, if you need that to be at a higher rate, either situation Atlas can handle, and you configure the hardware and resources for that analytics tier separately from
(23:50):
the resources for the electable tier that serve, sort of, the base cluster functionality.
Yeah.
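The features Rob describes map onto standard MongoDB connection-string options: `readPreference=secondaryPreferred`, a `maxStalenessSeconds` bound (drivers reject values under 90 seconds), and, for Atlas analytics nodes, the `readPreferenceTags=nodeType:ANALYTICS` tag. Here is a small sketch that only assembles such a URI; the cluster host is a placeholder:

```python
# Build a MongoDB connection string that routes reads to secondaries, with
# a bound on how stale the data may be, optionally pinned to Atlas
# analytics nodes via their nodeType tag. This only builds a string; no
# connection is made, and the host name below is a placeholder.

def build_uri(host: str, analytics: bool = False,
              max_staleness: int = 90) -> str:
    """Assemble a connection string for bounded-staleness secondary reads.

    Note: drivers reject maxStalenessSeconds values below 90.
    """
    uri = (f"mongodb+srv://{host}/?readPreference=secondaryPreferred"
           f"&maxStalenessSeconds={max_staleness}")
    if analytics:
        # Atlas tags its analytics nodes, so reads can be pinned to them.
        uri += "&readPreferenceTags=nodeType:ANALYTICS"
    return uri

if __name__ == "__main__":
    print(build_uri("cluster0.example.mongodb.net"))
    print(build_uri("cluster0.example.mongodb.net", analytics=True))
```

Note that the transcript's "500 milliseconds or a second old" is about what applications can tolerate; the enforceable driver-side floor for `maxStalenessSeconds` is 90 seconds.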
So kind of, when it comes back to it, it's always about your use case, your access patterns, your query patterns. That's how you model your data. That's how you scale your data. It all has to do with how you're
(24:13):
accessing your data. That's one of the critical things when I'm talking with developers, is helping them understand, when modeling your schema, when designing how you're going to scale your database, it's all about your access patterns. Whereas, you know, coming from the relational background, there's just one way to do things in the relational world. It's very rigid. In MongoDB,
(24:34):
there's many ways, and it's easier to scale and do all these things because of the flexibility that we have.
So it's amazing. Let's go into the demo part now. Let's get a walkthrough of some behind-the-scenes of what we built and how it works, and see the user interface and all of that. Rob, let me know when it's safe
(24:56):
to share your screen. And you're muted as well.
All right, it should be good once I've got the tab up here.
Gotcha. All right, so we'll go ahead and share your screen. Awesome. So walk us through this.
What are we looking at?
So, for the purpose of being able to jump back and forth,
(25:16):
we've got a recording of this. So right now, what I've got up is an example of a cluster that has a write workload, where the cluster's a little bit constrained. So we can see the execution time, and this is in the Atlas Metrics UI, we can see the execution time for writes is a little bit high, and we're around 10,000 operations per second.
(25:41):
And I'm going to go through the process that you would do to
scale up this cluster, fast forwarding through some of this
a little bit. So the way that you can do this
in the Atlas UI is we have a lovely cluster editor that
(26:04):
allows users to change the tier of their cluster.
Let me zoom out a little bit here.
So I'm going through and I'm saying, hey, you know, I want the base tier for this cluster, the one that manages the write
(26:26):
workload and the non-analytics read workload, to get scaled up. And I'm going to fast-forward through the scaling process here.
The main thing that I want to call out is the time that has
elapsed. So it's going to be a little
over 2 minutes for us to see this change.
(26:49):
And the thing to call out is the kind of order of events that happens here. So this is an example of the faster primary upscaling feature, where this node over here was our original primary, and we saw pretty high, or relatively high,
(27:11):
operation execution times and high CPU usage. That's not in the graphs here. And we had around 10,000 operations per second. After the first node here got upscaled, the faster primary upscaling feature resulted in it becoming the new primary. We saw about a 50 to 75% reduction in operation execution times, and the throughput
(27:34):
increased enough that it went off the graph here. So even though this cluster is still in the process of upscaling, and we're going to get to more powerful hardware on the other electable nodes in this cluster pretty early on in the scaling process, we're at a point where your ability to handle write workloads or primary read workloads is
(27:57):
massively increased pretty early on.
So if you were in a resource-constrained scenario, where the scaling was triggered due to, you know, an increasing volume of operations from your users, you're having a quicker time to having the resources that you
(28:19):
need be in alignment on your Atlas cluster, allowing you to better respond to sudden increases in demand. Or, just in cases where you're, you know, scaling your cluster, having the upsides of that upscaling happen earlier on in the process.
Nice.
(28:41):
So before we continue, what was it like before we implemented these changes? You're talking about, this happened in about two minutes. What was the situation beforehand?
Yeah, definitely. The overall picture is going to be pretty similar. I don't have the full timeline here in the video, but you can imagine that instead of
(29:06):
this node here getting upscaled and becoming primary, we'd have this node get upscaled, this node get upscaled, and then the original primary would get upscaled. And at the time the original primary gets upscaled, one of these two nodes would likely win, or would win, the election for primary, and then
(29:26):
you'd have more resources. So we've improved the time to relief, or the time to having more powerful hardware in use by the primary, by a significant amount for our users.
Nice. Nice.
So let's talk about the engineering effort that was involved here.
(29:48):
How could your team tackle this and make this happen without risking any, like, uptime issues?
Yeah, definitely. And I think for our team, like the Atlas team, a lot of the great functionality of MongoDB made it a lot easier for us, in that, you know, there's a lot of
(30:11):
robustness and availability and durability built into the core database product. For us, the biggest concern, of course, was making sure that this feature had a positive impact on our users. And, you know, a lot of the internal tools and reporting allowed us to make sure that,
(30:32):
you know, the feature was having a positive impact on our users. I think for the engineering work that went into this, you know, one of the biggest points here was just the testing, and making sure that the behavior of Atlas
(30:53):
clusters here was matching our expectations, and ensuring that, you know, when we're making these changes in some adverse scenarios where the cluster's under load, things are still working as expected.
Yeah, that's awesome.
So, just because of the distributed nature of the
(31:14):
environment here, there's bound to be some bottlenecks. Can you talk about some instrumentation that was used to identify any potential bottlenecks?
Yeah, definitely. I think one of the coolest things about Atlas is how platform-agnostic it is.
(31:36):
So, you know, you can have a mix of cloud providers, regions, all that stuff, and that's great for you as an end user, but it does add some significant complexity for the engineering team, in that we've got to support a lot of different environments and a lot of ways those environments can behave.
(31:57):
So part of the optimization work that went into our improvements was increasing the visibility that we have into what's going on, from a user saying, hey, you know, we want our cluster to be more powerful, through every step in that process,
(32:17):
seeing where we could optimize stuff to eke out a little bit, or reduce the time it takes for one step in the process by a little bit. Any of that stuff can add up over time. There are industry-standard, or maybe not industry-standard, but common ways of, like, tracing events through distributed systems that
(32:40):
we leveraged. And that was very useful for, you know, being able to see where we had optimization opportunities, and we could have a few quick wins that were smaller in impact than any of these bigger user-facing features, like faster primary upscaling or parallel node-type scaling, but definitely add up and reduce
(33:01):
the time it takes to scale by a little bit as well.
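The per-step visibility Rob describes can be sketched minimally: time each step of a multi-step operation, then ask which step dominates the total. The step names and durations below are invented for illustration:

```python
# A minimal sketch of tracing a multi-step operation to find its
# bottleneck: given a map of step name -> duration, return the slowest
# step and its share of the total time. The step names and numbers are
# hypothetical, not real Atlas scaling stages.

def find_bottleneck(step_durations: dict) -> tuple:
    """Return (slowest step, its fraction of the total duration)."""
    total = sum(step_durations.values())
    slowest = max(step_durations, key=step_durations.get)
    return slowest, step_durations[slowest] / total

if __name__ == "__main__":
    trace = {  # seconds per step of a hypothetical scaling operation
        "provision_hardware": 45.0,
        "sync_data": 190.0,
        "update_dns": 5.0,
        "elect_primary": 10.0,
    }
    step, share = find_bottleneck(trace)
    print(f"bottleneck: {step} ({share:.0%} of total)")
```

Real distributed tracing adds IDs that follow a request across services, but the analysis at the end, "which span eats the time", is essentially this.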
Yeah. And so, like, this sounds dangerous to me. What kind of safety checks are in place to, like, make sure that something doesn't blow up, the cluster's destabilized, etcetera?
(33:22):
Yeah, definitely. So I think, again, there's a lot of safety checks in MongoDB itself. So there's a lot of ability in the core database product to handle situations like, oh, you know, this node has suddenly gone offline. In Atlas, you know, we don't randomly pull nodes offline. We're much more conscientious
(33:43):
and cautious. So I think the main point of safety is that, you know, we operate on Atlas clusters in a very methodical manner, where even with the most aggressive scaling option for parallel scaling, we're still being very careful with the electable nodes of a cluster, and making
(34:07):
sure that we're leaving the cluster in a good state at each point in the scaling operation. So there's a lot of health checks on the Atlas side, where we can see the view, or the state, of an Atlas cluster and make sure that us operating on a node to scale it up isn't going to destabilize the
(34:29):
cluster. And this is of utmost concern for your electable tier, because it's very important that the primary
(34:50):
election process can go as smoothly as possible if there is a primary change. So making sure that there isn't replication lag when we're scaling, and being methodical about what nodes we scale, is an important part of that process, as well as ensuring that the conditions are good for
(35:12):
a new primary getting elected when a node gets scaled. And I think a lot of this existed before any of the changes we've made. You know, it's been baked into Atlas as a product. And we have some additional safety with some of the faster primary upscaling, where, you know, the server
(35:34):
has the opportunity to decline to have this new node become primary immediately. So if it has to catch up on the oplog or something, it can do that and then initiate a priority takeover to become primary. So there's some additional safety there as well.
Nice. Back to the point, I'm just glad Atlas does it all for me, 'cause I wouldn't know
(35:57):
where to start. So, amazing improvements as well. Let's talk now about some key takeaways, or, like, next steps. Let's talk directly to the engineers. What does this mean for an engineer?
(36:17):
What is, like, the real-world impact, I guess?
Yeah, absolutely. I'm happy to take that.
So yeah, if you're an engineer or a DevOps person or some other
type of operator, well, faster and more elastic database
scaling means fewer manual interventions for you and your
(36:39):
team. I mean, and that just unlocks so
much more productive time for other things and means we can
potentially depending on their workloads, take that elasticity
and scaling completely sort of in house into Atlas and you can
just sort of set it and forget it.
That's that's our goal for most situations.
(36:59):
That also means your analytics tiers can scale more robustly,
right? And something that Rob mentioned
is that the analytics tier is a great opportunity for cost
savings because you can independently configure the
cluster tier there. And very often your analytics
tier or analytics workloads is not nearly as time sensitive as
your operational workload. So at the high level, the
(37:23):
TLDRTLDR is that you can sort ofdelegate this complexity of
scaling to Atlas and essentiallyfocus on making your product
better for your end users. Nice.
And so what about for the directors, the ITDMs, the CIOs,
like what's the ROI? Lots of acronyms there.
(37:46):
What's the return on investment? Absolutely.
Yeah. That's a great question.
So it's a combination. First of all, it's better cost
optimization and performance. And in this environment, that's
always top of mind. Obviously, increased
availability. I think as the years have rolled
on, the expectation that customers can essentially
(38:10):
always access your product has only increased.
So it goes without saying that this is, you know, mission
critical for you as an ITDM. And then lastly, faster time to
market, right? Because if you no
longer have to manage elasticity, and we can do it
instead, you unlock all these resources to build the next
(38:31):
generation of your products, to integrate perhaps some AI
features into your product, or maybe focus on just improving
user experience. Nice, nice.
And for those who want to learn more and want to
dig into these features, there are some docs linked in
the video description, some that are talking about
(38:54):
configuring scaling mode, configuring auto scaling, and
setting read preferences. So be sure to check out those
links and those docs to learn more about this.
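As a rough illustration of what that auto scaling configuration covers, here is a sketch of a compute auto-scaling block modeled on the shape of the Atlas Admin API's region configuration. Treat the exact field names as assumptions and defer to the linked docs for your API version.

```python
import json

# Sketch of a compute auto-scaling block, modeled on the Atlas Admin
# API's regionConfig shape; field names are illustrative, so verify
# them against the docs before use.
auto_scaling = {
    "compute": {
        "enabled": True,           # scale up under sustained load
        "scaleDownEnabled": True,  # scale back down to control cost
        "minInstanceSize": "M30",  # floor for scale-down
        "maxInstanceSize": "M60",  # ceiling for scale-up
    },
    "diskGB": {"enabled": True},   # grow storage automatically
}

print(json.dumps(auto_scaling, indent=2))
```

Bounding the range with a floor and ceiling is what keeps "set it and forget it" from turning into an unbounded bill.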
What are some key takeaways from
each of you that you want to convey to our audience?
(39:15):
Yeah. Maybe I'll take it first.
So I would say from my point of view that we have invested a ton
of effort in the last 18 months into sort of platform elasticity
and cluster elasticity in particular.
And that applies to both manual scaling and auto scaling, to
bring that level of availability and performance to your
workload. And we're sort of only just
(39:37):
getting started. We have a very long roadmap to
continue iterating and bringing more opportunities for you to hand
over that complexity to Atlas. Nice.
What has been your experience in making these
updates and these changes, Rob? Yeah.
You know, I think it's really exciting and interesting being
(39:57):
able to work on these distributed systems and figuring
out how we can find ways to makeyour experience of using Atlas
faster while still still maintaining our kind of mission
critical promises of availability and durability.
I think I also learned a lot about some of the pretty cool
MongoDB features, especially around like secondary reads and
(40:21):
how you can target. You know, workloads that might
be important but not reliant on super up to date data, which I
think is a really cool experience or really cool
feature for our analytics tier. Especially with a parallel
(40:42):
workload scaling, we can have really reactive changes to those
nodes for your read workloads. It is.
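The targeting Rob mentions is done with read preferences. A minimal sketch, assuming an Atlas cluster whose analytics nodes carry the `nodeType:ANALYTICS` replica set tag (the hostname below is a placeholder, and `analytics_uri` is a hypothetical helper):

```python
def analytics_uri(base_uri: str) -> str:
    """Append read-preference options that route reads to analytics
    nodes instead of the nodes serving the operational workload."""
    opts = "readPreference=secondary&readPreferenceTags=nodeType:ANALYTICS"
    sep = "&" if "?" in base_uri else "?"
    return base_uri + sep + opts

# Placeholder host; substitute your own cluster's SRV address.
print(analytics_uri("mongodb+srv://cluster0.example.mongodb.net/test"))
```

A driver connecting with a URI like this sends its queries to the tagged secondaries, so a slow analytics scan never competes with the primary's operational traffic.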
It is amazing, every time I talk
to developers, how many features there are
in MongoDB, and I literally learn something new almost every day.
(41:02):
And also how many developers don't know about all
of the features that we have. And so especially in Atlas,
there are so many things under the hood that you kind of
take for granted. And so it's amazing that
we have such great engineering teams working on
these things and continuing to improve them.
Of course, we're going to continue to improve over time as
(41:24):
well, but these are amazing things, and I
really appreciate it. We have a quick question here.
It's not really related, but we have a question here on LinkedIn,
and I want to answer it. Does MongoDB Atlas have
a Kubernetes operator to deploy to a K8s
platform?
The answer is definitely yes. Just go check out our docs, go
(41:46):
to the MongoDB docs. There is great documentation
on how to actually set that up. Anything to add on that from
either of you? No, yeah, the answer is yes.
Go do it. You can do it, for sure.
We're happy users of that internally.
Oh, for sure, yes, yes. Amazing.
All right, so we've covered all the topics
(42:10):
here. Any last words before we
go to our audience on this topic? Go use it.
Exactly, absolutely. I mean, as I mentioned, we
continue to build improved elasticity for the
platform. And yeah, I totally encourage
(42:31):
you to check out what we've built, and it was a very fun
conversation. Yes, awesome, amazing.
Thank you both for being on. Whenever you finish the next
great feature, we'll have you on again.
So really appreciate it, and audience, remember to tune in every
week, Tuesdays and Thursdays, we live stream various topics.
(42:52):
So keep an eye on YouTube or LinkedIn, wherever you're at.
So we'll see you next time. Thank you.
Thank you, Jesse.