
February 28, 2025 • 66 mins

In this episode of MongoDB TV, join Shane McAllister along with MongoDB experts Sabina Friden and Frank Sun as they explore the powerful observability suite within MongoDB Atlas. Discover how these tools can help you optimize database performance, reduce costs, and ensure reliability for your applications. From customizable alerts and query insights to performance advisors and seamless integrations with enterprise tools like Datadog and Prometheus, this episode covers it all. Whether you're a developer, database administrator, or just getting started with MongoDB, learn how to leverage these observability tools to gain deep insights into your database operations and improve your application's efficiency. Tune in for a live demo showcasing how MongoDB's observability suite can transform your database management experience. Perfect for anyone looking to enhance their MongoDB skills and take their database performance to the next level.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:06):
Hello there and welcome back again to MongoDB TV.
I'm Shane McAllister. I'm one of the leads on our
Developer Relations team here at MongoDB.
And today we're going to be diving deep into the world of database
performance, reliability and cost efficiency.
And joining me are two colleagues from MongoDB: Sabina
Friden, a Product Marketing Manager here, and Frank Sun, a

(00:29):
staff Product Manager. And so together, on today's show,
we'll explore MongoDB's game-changing observability suite:
how it optimizes performance, reduces costs and integrates
seamlessly with enterprise tools.
Plus, stay tuned with us for a live demo.
We'll showcase how all these tools transform and allow you

(00:51):
for full observability and performance analysis of your
database. So without further ado, let's
get started. Sabina and Frank, you're very
welcome both to MongoDB TV. How are you?
Hi. Thank you for having us today.
No, it's great. Thank you. I always love to have two
guests. It's a certain sense of comfort

(01:12):
that one person isn't doing all the heavy lifting.
So I appreciate that. It's great.
Before we get started into the details of what we're doing
today, I always like to get people to introduce themselves
properly, talk about their day-to-day role at MongoDB, but
also, if you can, your path to your day-to-day role at MongoDB

(01:32):
as well too. So Sabina, if I could ask you
first to introduce yourself properly, what you do at MongoDB,
and also how you got here.
Yeah, absolutely. Like I said, I'm excited to be here today.
So I am a product marketing manager here at MongoDB and I
own our observability suite. Everything that has to do with

(01:55):
kind of how we monitor the database, how we alert customers
on the database and all of those different features, which is
really important to kind of performance and uptime and all
of those factors. And what's interesting is I kind
of came here because I had the knowledge before. I worked at a
company called LogicMonitor, and they are pretty prominent in the

(02:16):
observability space. And so I kind of began my career
there. And with that expertise, I kind
of transferred over here to MongoDB to kind of help
build out our messaging and, you know, our platform for our
observability suite within MongoDB.
So that's kind of my journey of how I got here, and I've been
working with Frank ever since I joined about two years ago.

(02:38):
And I think we have a great partnership and it's
been a pleasure.
Excellent. So you've been here two years, Sabina.
And Frank, how about you? Your path to MongoDB?
Yeah, yeah. Excited to be here.
Looking forward to the event today.
I've been here for about 3 years now.
Similar to Sabina, prior to working at MongoDB, I was also

(02:59):
in the observability space. I was working at a smaller
observability startup really focused on kind of like
infrastructure and application monitoring.
I was a software engineer there for a few years working on, you
know, observability solutions for various cloud services, and
then transferred over to join their product org.

(03:20):
And so when I first started my kind of
product manager journey, I was working on integrations between
our APM offering and very popular ITSM tools
in the market like ServiceNow and Cherwell.
And that really got me familiar with things like, you know, ITSM
processes and, you know, the change flow of large

(03:42):
enterprises, and of course got me more and
more familiar with product management as well.
I did that for a few years and came over to MongoDB.
At MongoDB I'm responsible for our observability tools, which
I'm super excited to showcase today.
Excellent. Well, listen, a wealth of
experience there between both of you.
It's great to have you on the show.

(04:04):
Typically these shows have our cloud partners and our
customers etcetera too. So I really love when we get
MongoDB folks to come on and explain in depth kind of our
tools and what we have, because this topic in
particular is very much of interest to me.
I think lots of our developer community are using MongoDB,

(04:24):
they're leveraging MongoDB, it's helping them build their
applications, it's helping them to do things quicker and
swifter. But often, and I've just come
back from a week at AWS re:Invent where you meet a lot of
customers and developers and users in one go,
often they've kind of maybe been in the mode of:
I use MongoDB. It's working well for me.

(04:46):
It's up and running, it's grand, everything's going fine.
But we always know, and the topic of today's show is to deep dive
into the performance side of things, but let's set the
context: how does the performance of your
database impact your application performance, impact your
application reliability and also maybe impact your costs?

(05:08):
So I don't know who to throw that one to first.
I don't know who wants to take that, but at a really high
level, you know, as developers, I used to say developers are
lazy, might just do things as quick as we can.
But I've been corrected numerous times: say we're looking for
convenience, right? So as developers, we've set up with
Atlas, we've connected Atlas, we've got our application
running, it's working, brilliant, job done.

(05:30):
But that's not the end, right? So, high level context setting:
how does it impact their application performance,
reliability and costs?
Yeah, I can actually take that one, because kind of what you're
saying, what we've seen is oftentimes, like you said,
developers know how to deploy Atlas and

(05:50):
then it's working. But we've seen that there's
oftentimes kind of a skills gap with that performance
optimization piece, you know, and how can they really, you
know, use MongoDB's tools to impact, you
know, database optimization and the performance and how we run.
So that's kind of why we're here today, because we have seen that

(06:11):
skills gap in the market and we really want to empower our
customers to, you know, understand how they can leverage
MongoDB to help with that. But, you know, the way
we kind of like to look at it is that we offer
these tools from conception. So we are really monitoring your
data, monitoring your metrics from conception, and we can use

(06:35):
and leverage these tools to really give you MongoDB-specific
insights into your data and how to optimize on that, how to help
with resource utilization, you know, making
sure everything is running smoothly, getting ahead of the
issues before they impact any sort of performance or lead to
downtime, whatever it may be. So we're really just trying to

(06:58):
empower our users on how they can, you know, be proactive in
that method.
Excellent. Anything to add to that, Frank?
Yeah, you know, I think Sabina hit the nail on the head.
One thing I just want to add is I think one of the beautiful
things about MongoDB and the non-relational model is just,

(07:19):
you know, how widely adoptable it is, from, you know, small
startups, and just, you know, it's quick to get started.
And that's one of the, you know, obvious selling points of the
non-relational model, but it's also widely adopted by some
billion dollar industries and billion dollar enterprises.
And, you know, on both kind of ends of that scale, you know,

(07:41):
it works well, it works perfectly.
And I think one of the nice things about Atlas is that,
you know, we do make that kind of convenience
very obvious for our users.
And we have a number of tools built in that not only kind of
show you how your database is performing, but we give you
very targeted guidance and support on how to, you know,

(08:04):
optimize your database performance.
And obviously that ultimately impacts your application
performance. It, you know, impacts the
reliability and resiliency of your application and ultimately
your bill and your cost as well.
OK. So we're going to, obviously, as I said in the introduction,
we're going to have a demo later.
Let's keep it to the high level at the moment.

(08:25):
So we have the MongoDB observability suite.
How long has that been around, and what have been some
of the recent additions to that as well too, that we'll see a
little bit later? Frank, do you want to take that
one? Sure.
Yeah. So the MongoDB observability suite has been around since its

(08:47):
conception. It's been kind of a day one
feature of MongoDB Atlas. Observability is such a core
part of, you know, just maintaining the availability of
your database. And so it's something that
we've included since day one.
That being said, we've continually added more and more
features to our observability suite.

(09:08):
What may have kind of started as just, you know, baseline
monitoring with just pure metrics and alerts,
we've kind of advanced from there.
And we've, you know, provided more detailed analysis, looking
at things like the query profiler, which you can use to really kind
of take the symptoms that you're seeing within your
metrics data to kind of pinpoint a specific root cause.

(09:29):
And we've also added more advisory tools that provide
things like index advice and things like schema advice, ways
to kind of optimize your query performance.
OK. And is the help that's, you
know, offered through the observability suite, that's
obviously, as you said, it's been there since day one with Atlas,

(09:51):
right. It's been around for a long
time, but is it informed by the many thousands of customers
that we have? So whilst this is
your data and this is your instance of Atlas etcetera as
well too, we've learned through the
huge amount of customers and developers that are using
MongoDB, I suppose, the common things that we should be

(10:13):
looking at in somebody's data in terms of performance,
scalability. And you mentioned it there, I
suppose, advising how things could be better and more
streamlined, right? Yeah, absolutely.
We're constantly keeping a pulse on what, you know, what
customer needs are and kind of what the gaps are there and kind
of like what their requests are in terms of understanding their

(10:36):
data. For example, we introduced a tool called Query Insights last
year, and that was heavily based on kind of our larger enterprise
needs. So we're always, you know,
evolving with kind of customer demands and customer needs and
really fine-tuning the tools which we offer so that they can
understand their data better, so that they can, you know, work

(10:59):
with their data better. So it's all about meeting them,
you know, where they are and what kind of requests our
customers may have. And we meet with them, you know,
regularly to kind of hear feedback.
And once we launched, like I said, Query
Insights, there's been very positive feedback, because it
really was in tune with what customers were kind of asking

(11:20):
about. Yeah.
And one of the things that we like to emphasize as well is
just, you know, actionability. You know, every signal, every
alert that we provide, you know, we try to make them
actionable so that, you know, even a database administrator
novice can kind of, you know, see what's going on in the

(11:40):
database, and they have some idea of what to do,
and, you know, where to kind of connect the dots to
ultimately be able to, you know, resolve the database
problem. Right.
What are the typical kind of metrics or
insights, you mentioned there, that the novice database
user... like, where are they starting off?
What are they most concerned about?

(12:01):
Performance, speed, reliability, costs maybe as well too. Kind
of, you know, what are the extra levels that, you know,
observability can bring? Once we get past the general
metrics that most database users are concerned with, what
insights do we give them? Yeah.
Well, I guess starting from the metrics themselves, there are a
Well, I guess starting from the metrics themselves, there are a

(12:22):
number of metrics that we kind of focus on when
we're observing a typical, you know, cluster deployment.
We kind of break it down into a few categories.
Obviously there's the hardware: there's the CPU,
the memory and the disk space, etcetera.
And, you know, that's all, you know...
The standard stuff that you'd expect, yes, yeah.

(12:44):
Right. Yeah.
The other one that we also put a focus on is more of our kind of
like query or database level metrics.
So of course we have metrics like, yeah, obviously the number
of collections, the number of indexes, number of, you know,
databases, etcetera. Yeah, those are all kind of

(13:08):
indicative of, you know, how much workload you're running on
your database cluster. But then also we have metrics
like our query targeting or query efficiency metrics.
And that's really where, you know, we start to get into some
of our advisory tools and how those kind of fit in.
So query targeting is a metric that we look at often

(13:28):
when we see a database problem.
That's a measure of how effective your queries are.
It's essentially the number of documents scanned over the
number of documents returned per query.
We have that at a few levels. We have that at, you know, the
host level; that's kind of aggregated across
all the queries running on that host.

(13:50):
And that's a good kind of starting measure of
seeing how effective your queries are.
So if your queries are scanning a large number of
documents and only returning a few documents, that's pretty
indicative of an inefficient query.
And that might be a signal that, you know, maybe there's a
missing index or maybe there's some sort of schema

(14:13):
optimization that could be made.
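To make that arithmetic concrete, the query targeting ratio Frank describes can be sketched in a few lines. This is purely illustrative, not Atlas code, and the 1,000:1 threshold in the comment is an assumption for the example rather than a quoted Atlas default:

```python
def query_targeting(docs_examined: int, docs_returned: int) -> float:
    """Query targeting ratio: documents scanned per document returned.

    A ratio near 1 means queries read little more than they return;
    a large ratio suggests collection scans or a missing index.
    """
    # Avoid division by zero when a query returns nothing.
    return docs_examined / max(docs_returned, 1)

# A query that scans 50,000 documents to return 25 is highly inefficient.
ratio = query_targeting(50_000, 25)
print(ratio)  # 2000.0
```

A ratio like that, sustained at the host level, is exactly the kind of signal that points you toward the query profiler and the index advice discussed next.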
So typically, you know, one of the flows that we see is,
if customers do have a high query targeting value,
one of the tools that we like to suggest is our
query profiler, so they can look at some of
their slowest queries. And we typically see some

(14:35):
correlation between, you know, inefficient queries and slow
queries. And they can go into the query
profiler and they can see specific slow queries and be
able to, you know, get more details about, you know, what
the specific query is, the execution time, whether or not
it's an index scan or a collection scan.
And then, you know, from there we provide additional advisory

(14:58):
tools. You know, if it is a collection scan, we'll
take a look at, you know, how many documents it's scanning,
how many documents it's returning, the average execution
time, the number of bytes read.
And we'll actually suggest an index recommendation to help
optimize those queries. OK, that's ideal.

(15:19):
So obviously, look, we're talking, you know,
particularly about consumption models, right?
You're talking about, you know, compute.
There's a cost for compute, there's a cost for bandwidth,
and there's the cost for obviously storage as well too.
And so we're looking at that compute time. As you said, if
you do a large scan in a query and you return back very few

(15:40):
results, you can advise there as well too.
What are the other types of scenarios that you see, I
suppose, commonly amongst people before they get deep down into
database observability? What are the, you know,
common mistakes, as it were, right?
Yeah. Well, I think even before we get
down into, you know, some of the database and query level

(16:02):
metrics — you know, we talked about the hardware
metrics earlier — auto-scaling is
something that, you know, is built into MongoDB Atlas.
And that's, you know, a pretty fundamental
observability offering there, being able to scale up and scale

(16:22):
down based on, you know, CPU and disk space consumption.
You know, those are pretty standard.
But that is something that, you know, we do see some of our
customers maybe skip over. That is definitely
something that, you know, we recommend our customers enable,
just, you know, enabling auto-scaling, especially as workloads

(16:45):
grow, you know, over time. You know, in this day and age, you
know, things can change very quickly, you know, like things
go viral and, you know, suddenly an application triples
its requirements.
It's always a good complaint to have, going viral, right?

(17:07):
But you need the backup behind it in order to accommodate that.
I mean, thankfully, you know, most providers such as ourselves
are well geared for that. But I'm old enough to remember
the early days of something going viral on the Internet
leading to things just not working and sites getting errors
and 404s and everything else as well too.

(17:28):
The early days of Twitter was notorious for that, with the fail
whale and things like that too. But that was because they were
relying on essentially physical servers that they owned; they
hadn't, you know, moved to the cloud at that time as well too.
So in regards to what you've described, Frank, I'm
assuming that we have customizable alerts for all of

(17:50):
this, Sabina, that when you see something happen, you're going
to get notified swiftly? And how do they work, and how
customizable are those? Yeah.
And it's actually funny that you brought that up right
then, because I was going to say, to add on to that, the
general question is how can customers be alerted, you know,
about these types of situations, when, you know, CPU is running high or

(18:10):
whatever the problem may be.
We actually have over 200 event types for alerts that
are extremely customizable. So we have
out-of-the-box alerts that are just kind of like the general
recommendations of what customers should be getting
alerted on. But beyond that, customers

(18:31):
can set thresholds for what they want to be specifically alerted
on. So for any of those issues that
Frank just listed before, we can set very specific thresholds
and alert customers within Atlas. But also we have this
great piece where we can integrate our alerts with

(18:51):
customers' current alerting systems.
So whether they want to be alerted on
Slack, whether they want to be alerted on PagerDuty, we have
the ability to push alerts into customers' tools that they're
already using. So that's a really, I think,
helpful piece for customers, because, you know, they

(19:13):
don't have to... they're already using MongoDB, they might
already be using Slack or PagerDuty.
So it's a seamless transition into getting informed and notified
about what that issue is and getting ahead of it as well.
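As a rough sketch of what one of those customizable alerts looks like as a configuration: the field names below follow the general shape of the Atlas Administration API's alert configuration resource, but treat the exact schema, metric name, and notification fields as assumptions to verify against the current API docs before use:

```python
# Sketch: build an alert configuration that notifies PagerDuty and Slack
# when average CPU usage stays above a threshold. Field names approximate
# the Atlas Administration API's alertConfigs resource (verify before use).
def cpu_alert_config(threshold_pct: float) -> dict:
    return {
        "eventTypeName": "OUTSIDE_METRIC_THRESHOLD",
        "enabled": True,
        "metricThreshold": {
            "metricName": "NORMALIZED_SYSTEM_CPU_USER",
            "operator": "GREATER_THAN",
            "threshold": threshold_pct,
            "units": "RAW",
            "mode": "AVERAGE",
        },
        # Push the alert into tools the team already uses.
        "notifications": [
            {"typeName": "PAGER_DUTY", "delayMin": 0},
            {"typeName": "SLACK", "channelName": "#db-alerts", "delayMin": 0},
        ],
    }

config = cpu_alert_config(80.0)
# This payload would then be POSTed to the project's alertConfigs endpoint.
```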
And in a similar fashion, what other integrations do we have
with regard to central observability then, Sabina?

(19:33):
Yeah. So that's a great question, and
that's something that's kind of one of our, you know, good value
differentiators: we are able to offer some really
prominent integrations into tools like Datadog, into tools
like Prometheus. And we can push metrics
into, like I said, tools that our customers are

(19:55):
already using, so that there's a single pane of glass in which
they see it. The way Frank and I kind of talk
about it, it's a plug-and-play experience, you know. So Atlas
offers these metrics and we are monitoring within Atlas, and you can
troubleshoot within Atlas, but if you want to troubleshoot that
within your own application monitoring system, you're able

(20:16):
to do so with that integration piece that we offer.
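To give a feel for the Prometheus side of that plug-and-play experience: once the integration is enabled, Prometheus scrapes the cluster's metrics in the standard text exposition format. A minimal parsing sketch follows; the metric and label names in the sample are invented for illustration, not the integration's actual series names:

```python
import re

# Parse a single line of Prometheus text exposition format:
#   metric_name{label1="a",label2="b"} value
LINE = re.compile(r'^(\w+)\{(.*)\}\s+([0-9.eE+-]+)$')

def parse_metric(line: str) -> tuple[str, dict, float]:
    name, labels_raw, value = LINE.match(line).groups()
    labels = dict(re.findall(r'(\w+)="([^"]*)"', labels_raw))
    return name, labels, float(value)

# Hypothetical scraped sample for one replica-set member's CPU metric.
sample = 'hardware_system_cpu_user{cl_name="Cluster0",rs_nm="shard-0"} 37.5'
name, labels, value = parse_metric(sample)
print(name, labels["rs_nm"], value)  # hardware_system_cpu_user shard-0 37.5
```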
So that's really helpful. And we've seen a lot of value
through kind of pushing those metrics to those central
enterprise observability tools. So the goal of these alerts and
these thresholds and the integrations that we have too
is the fact that you have plenty of fair warning as to any

(20:38):
potential issues with the database itself as well too.
We got a question in from... sorry, Sabina, go ahead, add to that.
Just understanding the data in a platform that you
already know, and that you're already comfortable and
familiar with, I would say is another benefit of that as well.
Yeah. No, that makes perfect sense.
Got a question in from Eric about just expanding on the

(21:00):
database profiler, if we can. Thanks in advance, Eric, and
thank you for the question. Eric, I suppose we're probably
going to see a little bit more in your demo shortly, Frank, too.
But is there anything else high level we can add about the
database profiler for Eric? Yeah, at a high level, you know,
our database profiler — our query profiler in Atlas, as it's

(21:21):
called — basically just looks at the slow query logs.
So to get a little bit into the details of how it
works: there is a database profiler that basically profiles,
or looks at, slow queries over a certain what we call slow
operation threshold. And so any operation that takes

(21:43):
longer than a certain millisecond threshold will get
profiled into, you know, a slow query log, essentially.
And then the Atlas query profiler basically reads from
that slow query log and displays it as a scatter
plot. So in Atlas, as I'll show later

(22:04):
in my demo, we have a tool called the query profiler.
It's really just looking at all the slow queries that are
running on your cluster, and it's presenting those as a scatter
plot. And then you can look at
individual data points, and you can click on those and you
can see additional details about each slow query.
And like I mentioned earlier, it'll show information like what

(22:26):
namespace it's running on — the database, the collection name.
It'll show some client metadata information, like, you know, what
application is, you know, calling the query; if
there's a username associated, you know, we'll also
include the username. It'll of course include things
like the query details itself, what the actual command is, what

(22:47):
the operation is. It'll also include some
execution stats, like number of documents scanned, number of
documents returned, whether it's a collection scan or an index
scan — you know, essentially meaning
whether or not it's using an index — and things like
that. Yeah.
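Since MongoDB 4.4, those slow query log entries are structured JSON, which makes it straightforward to pull out exactly the fields Frank lists. A minimal sketch — the sample entry below is hand-made and heavily trimmed, so treat its exact shape as illustrative:

```python
import json

# A trimmed, hand-made example of a "Slow query" log entry (structured
# logging, MongoDB 4.4+). Real entries carry many more fields.
entry = json.loads('''
{"t": {"$date": "2025-02-28T10:15:00.000Z"}, "s": "I", "c": "COMMAND",
 "msg": "Slow query",
 "attr": {"type": "command", "ns": "shop.orders",
          "planSummary": "COLLSCAN",
          "docsExamined": 48210, "nreturned": 12,
          "durationMillis": 312}}
''')

attr = entry["attr"]
is_collscan = attr["planSummary"] == "COLLSCAN"   # no index was used
ratio = attr["docsExamined"] / max(attr["nreturned"], 1)
print(attr["ns"], attr["durationMillis"], is_collscan, round(ratio))
# shop.orders 312 True 4018
```

The profiler UI surfaces the same fields — namespace, duration, plan summary, documents scanned versus returned — without you having to read the logs by hand.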
OK. I think while you were chatting there, Paul, who's obviously very

(23:07):
quick at typing, added another question with regard to
this. So his question is: does the
profiler show the impact of shards on performance stats —
improved locality for a query within a single
shard, or increased latency for a query that spans shards?
Would that be something we'd touch on?

(23:28):
So the query profiler does work across sharded clusters.
We do provide stats depending on kind of what
level you look at. You know, when you look at it from a cluster
level, we are kind of aggregating the stats across
the entire sharded cluster. If you do want to kind of zoom

(23:49):
in and look at, you know, a specific shard or even a
specific host, we have additional filter options.
So you can look at, you know, the shard and the host level and
just see operations against that kind of scope.
OK, excellent. Thank you for your question,
Paul. I hope that helped. So we talked about performance
and we're talking about getting those deep insights.

(24:10):
Talk to me a little bit about cost then, because obviously the
end goal for some sides of the business is to make sure
increased performance also leads to reduced cost.
So how do we leverage observability to understand how
we, you know, maximize resource usage whilst also
reducing costs if possible? Who wants to grab that one?

(24:33):
Sabina, Frank? Whichever.
Yeah — you could go ahead, Frank.
OK. Yeah.
So, you know, there are multiple ways that we kind of
optimize cost, but the one I want to focus on is the
Performance Advisor. So we kind of touched on it
earlier, but one of the main
benefits of Performance Advisor is the index recommendations.

(24:57):
Our index recommendations in Performance Advisor are also based
on our slow queries. So just like our query profiler,
we're looking at any operations that are run against the cluster
that take longer than a certain, you know, millisecond threshold.
That millisecond threshold, by the way, is also kind of dynamic,
based on the workload running on your cluster.

(25:18):
So let's say, you know, your cluster has many slow operations,
all over, let's say, 150 milliseconds.
Our slow millisecond threshold will
dynamically adjust to look for operations over, you know, let's
say, 100 milliseconds or so. But if you have pretty well

(25:38):
optimized queries, or let's say you have, you know, queries that
just don't take a lot of time — maybe they're very
efficient queries — and your typical workload is
seeing maybe, let's say, 50 millisecond queries, you know,
we'll dynamically adjust again and we'll look for
slow queries that are, like, over 30 milliseconds or 20

(26:00):
milliseconds. So the slow queries
that we're looking at on your cluster are dynamic, based on the
workload on that cluster. But to get back to our index
advice: we are looking at the slow queries on your cluster and
we're looking at, you know, things like the number of
documents scanned, documents returned, obviously execution
time, average bytes per document.
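Atlas's actual algorithm for that dynamic slow-query threshold isn't public, so purely as an illustrative sketch, a workload-relative cutoff could be derived from a percentile of recent operation durations:

```python
import statistics

def dynamic_slow_threshold(durations_ms: list[float],
                           floor_ms: float = 10.0) -> float:
    """Illustrative workload-relative slow-query cutoff (NOT Atlas's
    real algorithm): flag operations slower than roughly the 95th
    percentile of recent durations, with a sensible floor."""
    p95 = statistics.quantiles(durations_ms, n=20)[-1]
    return max(p95, floor_ms)

# A fast workload: mostly 40-60 ms operations.
fast = [40, 45, 50, 55, 60] * 20
# A slower workload: mostly 120-180 ms operations.
slow = [120, 140, 160, 180] * 25

# The cutoff adapts: the fast workload gets a much lower threshold.
print(dynamic_slow_threshold(fast) < dynamic_slow_threshold(slow))  # True
```

The point of a workload-relative threshold, as Frank describes, is that "slow" means slow for *your* cluster, not some fixed global number.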

(26:26):
And we're running an algorithm in the back end that basically
suggests an index. And then we'll do some
processing in the back end to see the expected performance
benefits of that index. Our index advice today is
limited, so we don't provide index advice on aggregation

(26:50):
pipelines. It's just on, you know,
queries. But that is something that we're
looking at for the future. OK, OK.
I guess just to put it simply, I think when
queries are optimized, the database engine can retrieve the

(27:10):
data more quickly with less effort, and that
can lead to lower, you know, resource consumption ultimately.
Yeah. Cool.
Quick question from Sahil: can we optimize aggregation
pipelines for better performance?
Do we take care of that as well too, or is that elsewhere?

(27:32):
Yeah. So, not in the Performance
Advisor today — the index
recommendations do not suggest indexes for aggregation
pipelines. We do have some tools in
our Data Explorer, as well as in Compass, that do give
kind of better feedback, better statistics on aggregation

(27:56):
pipelines. So I don't have a ton of
information about that, but I would definitely recommend
checking out Compass. You can use it to kind of build
aggregation pipelines and see how each stage of your
aggregation pipeline is performing.
You know, you can kind of break down the
pipeline as well and get stats about each stage.
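For reference, an aggregation pipeline is just an ordered list of stages, and both Compass's pipeline builder and the shell's explain output can report per-stage behavior. A hypothetical pipeline (collection and field names invented) sketched as Python dicts, which mirror the shell syntax:

```python
# A hypothetical 3-stage aggregation pipeline (names invented).
# In mongosh you could inspect its execution with:
#   db.orders.explain("executionStats").aggregate(pipeline)
# or step through it stage by stage in Compass's pipeline builder.
pipeline = [
    # 1. Filter first, so later stages (and indexes) see fewer documents.
    {"$match": {"status": "shipped"}},
    # 2. Aggregate per customer.
    {"$group": {"_id": "$customer_id", "total": {"$sum": "$amount"}}},
    # 3. Rank customers by total spend.
    {"$sort": {"total": -1}},
]

# Each stage is a single-key dict naming the stage operator.
stages = [next(iter(stage)) for stage in pipeline]
print(stages)  # ['$match', '$group', '$sort']
```

Putting `$match` early is the classic optimization here: the fewer documents that reach `$group` and `$sort`, the less work each later stage does.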

(28:19):
So that would be my recommendation.
Excellent. Sahil, thank you for the
question. And I love that you mentioned
Compass. I think it's something that we
don't shout enough about. It's an incredibly powerful tool,
essentially, to examine and look at all of your collections and
build aggregation pipelines and a whole host of other things in

(28:40):
there. So if you're using MongoDB
Atlas and you don't have Compass downloaded, go get it.
You'd be amazed at the insights that it can give.
So there's the plug for Compass. I want to get into kind of the
show part of this. But before I do, one of the
things — and again, we have a wide developer audience on
MongoDB TV — is to just give a context of scale.

(29:04):
Like, people are building on MongoDB to do their proofs of concept
and their MVPs, and we've got customers which are absolutely
enormous financial institutions, etcetera.
So is there any limitation to the scale at
which the database observability tools work?
Like if you have many, many, you know, hundreds of thousands of

(29:25):
documents and, you know, many, many thousands of reads
and writes per second, etcetera as well too —
are there any limits on how, you know, that can be observed
using the tools that we have? So we've actually been working
over the past year to optimize our advisory tools for our
largest customers. And as you mentioned, yeah, we

(29:48):
have billion dollar financial services enterprise customers as
well as, you know, large gaming companies, all with, you know,
a multinational presence. And so we see that actually part
of their kind of regular workflow is looking at some of
our core profiler tools and looking at our index

(30:08):
recommendations. And they actually bake it into
kind of their daily or their weekly database
administration workflow. And it's something that
they're using, you know, every day.
So it does work for, you know, of course, both the POC as well
as those enterprise level scales. OK.

(30:31):
So it covers that wide range, which is enormous, right?
There's quite the range of users who are utilizing our tools.
Excellent, excellent. So we've talked about a lot,
right? We're going to see some stuff in
action. We talked about, obviously,
Performance Advisor, the query insights, the metrics charts,
the customizable alerts and some of our integrations.

(30:54):
So I think people want to see it in action.
So Frank, if you want to share your screen — I saw a little bit
earlier what you're going to show on the live stream today — it'd
be great to, you know, have a run through of this, to put it into
context for those that may not be so familiar with the tools
that we have to hand in Atlas.

(31:16):
So. Perfect.
That's coming through nice and clear, and hopefully everyone
else can see that as well too. Perfect.
Yeah, so I have the Atlas cluster here and I have some
workload running on my Atlas cluster.
I'm going to just go over briefly, you know, some of the
tools that we talked about here today.

(31:38):
I'll spend a bit more time on some of the new things that we
added this past year. We talked about the query
profiler; you know, we've made some enhancements to our Performance
Advisor. But just to kind of level set and make sure we cover
everything, you know — so, our metrics page here. Here
I'm running a 2-shard cluster. So, yeah, I can see kind of an

(31:58):
overview of each of my two shards, and I can see
specific metrics about each of these shards.
So earlier we talked about things like documents returned,
documents scanned, and here I can see the number of documents
returned for each of my shards in my 2-shard cluster.
If I want to get additional detail, I can also click into

(32:21):
one of my shards as well. So I'll click into my shard 0.
Here I can see, you know, these are my shard-level details.
Here I can see additional metrics as well.
You know, we talked about some of our hardware related metrics
earlier, but we have a number. Yeah, we have a number of
different metrics available here.
You can also see additional information about, you know,

(32:45):
what each metric means. We also have things like common
server events like server restarts or re-elections.
And that is helpful because you can kind of contextualize how
you know different metric behaviors or you know how the
metric is behaving may be impacted by server events.

(33:07):
These are all, you know, highly interactive as well.
You can zoom in, you can zoom out, you can set custom time
windows. We also have metrics for search
nodes as well. So I am running a search
workload on this cluster.
And so I can see, you know, how my search nodes are performing.

(33:29):
I have, you know, 2 search nodes here.
But going back to my mongod, you know, data-bearing nodes
here, we took a look at, you know, some of the
metrics that we have available as well as some of the
interactions on this metrics chart.
I also want to show off, you know, some of our

(33:52):
alerts. So we do have a number of
project alerts as well that are available out-of-the-box once
you get started with Atlas. And of course you can add new
alerts as well. So you know, if I wanted to, you
know, add a new alert, I can, I have a number of alert
conditions that I can alert on. I can specify it for specific
hosts. And as Sabina mentioned earlier,

(34:14):
we are super plug and play with a number of, you know,
observability and notification tools in the market.
So, you know, if I wanted to create a Datadog monitor or if I
wanted to, you know, page someone in PagerDuty, or if
I wanted to notify like a Slack channel or notify, you know,
someone in Slack. These are all out-of-the-box

(34:36):
notification options that we have available.
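As a rough illustration of what an alert condition like these evaluates, here's a toy Python sketch. The "sustained for N consecutive samples" rule is our own simplification for demonstration, not Atlas's actual alerting implementation:

```python
def should_alert(samples, threshold, consecutive=3):
    """Fire when the metric exceeds the threshold for N consecutive
    samples, a simplified stand-in for a 'sustained' alert condition."""
    streak = 0
    for value in samples:
        streak = streak + 1 if value > threshold else 0
        if streak >= consecutive:
            return True
    return False

# CPU% samples polled once a minute; alert if above 90 for 3 minutes.
print(should_alert([50, 91, 95, 99], threshold=90))  # True
print(should_alert([50, 91, 40, 99], threshold=90))  # False
```

Once a condition like this fires, the notification target (PagerDuty, Slack, a Datadog monitor) is just a matter of which integration is configured.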
We also have. And so there's nowhere to hide
basically. You can't, you know, once you've
set these things up and added all your integrations, you can't
possibly say I didn't know about it, right?
And. You're surely going to know
about it. Yeah.

(34:56):
We also talked about observability integrations
earlier as well. So you know, we have a number of
integrations here, but you can see we also have Prometheus and
Datadog here. I actually have Datadog here
configured. We also have a number of
integrations that are not shown on this page that you can find
through our partner portal with popular APM tools like New

(35:18):
Relic, Dynatrace, Grafana Cloud,etcetera.
So pretty, you know, pretty well integrated into the overall
observability market. But getting back to, you know,
some of the tools that I want to highlight here.
So we talked about our metrics page there. The query
insights page is the one that we recently introduced this year.

(35:41):
So what I'm looking at here is our namespace insights.
This is essentially providing metrics at a collection by
collection level. So earlier we were looking at
metrics at, you know, the Shard level or at the host level.
Those are aggregated across all collections, across all queries

(36:02):
per host or per shard. And so there is a level of
fidelity that you kind of lose there when you're looking at
things from that kind of like aggregate view.
And so this namespace insights view provides a bit more of a
detailed view and provides a bit more kind of data fidelity.
So here I can see, you know, I have a few collections.

(36:24):
It looks like I have a credit
cards dot secondary deleted cards collection that is taking
on average around 25
milliseconds per second to run its queries on that collection.
And that's a lot higher latency than the
(36:46):
rest of my collections. And so that could be something
that, you know, maybe I want to look into.
We're able to look back seven days on this Namespace Insights
page today. So I just started running my
demo script earlier, but you can see we had a pretty significant
spike in latency. We can also look at the number
of operations run, you know, per collection as well.
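To make the per-collection roll-up concrete, here's a purely illustrative Python sketch. The `ns` and `durationMillis` keys mirror the field names in MongoDB's slow-query log output, but the helper and the sample numbers are ours:

```python
from collections import defaultdict

def avg_latency_by_namespace(log_entries):
    """Average duration (ms) per namespace from slow-query log entries."""
    totals = defaultdict(lambda: [0, 0])  # ns -> [total_ms, count]
    for entry in log_entries:
        totals[entry["ns"]][0] += entry["durationMillis"]
        totals[entry["ns"]][1] += 1
    return {ns: total / count for ns, (total, count) in totals.items()}

entries = [
    {"ns": "credit_cards.secondary_deleted_cards", "durationMillis": 30},
    {"ns": "credit_cards.secondary_deleted_cards", "durationMillis": 20},
    {"ns": "transactions.credit_card_info", "durationMillis": 5},
]
print(avg_latency_by_namespace(entries))
```

A view like Namespace Insights is essentially this kind of aggregation, continuously maintained and charted per collection rather than per host.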

(37:10):
We're looking at a number of different collections.
I'm just looking at kind of my top five here.
But if you do have, let's say like a specific collection that
you know is very critical to your workload, you can also pin
that collection here as well. And so you know, if we let's say
the deleted cards, sorry, my keyboard disconnected.

(37:34):
Proper live demo. There we go, and let
me see, was it primary or secondary?
It was secondary, secondary deleted cards.
So if I wanted to, let's say, pin my secondary deleted cards

(37:54):
collection, I can pin that here. And that way I'll always be able
to see my secondary deleted cards collection, even if it
does kind of drop out of my top five in terms of, you know,
total, total execution time. OK.
So if there's something you're particularly keen on observing,
you pin it basically. So it's always there, even if it
is performing adequately. That's right.

(38:17):
Yeah. Yeah, sorry, I was just
laughing. I have to turn it off and on
again. Comment.
Yeah. Thank you.
Right, Connect live, yeah. So, you know, one thing that
we've noticed, you know, customers, you know, really use
this kind of pinned namespace feature for is, you know, if
they are, let's say, onboarding a new workload, you know,

(38:38):
typically, you know, we may see that they, they create a new
collection and then they start, you know, you know, importing
documents and start running queries against that new
collection. If, if they're already running,
you know, an existing workload on, on their cluster, it may
take some time for that new workload and that new collection

(38:59):
to kind of show up on this page because, you know, it is kind of
sorted by total execution time. And so this is one way that, you
know, especially for new workloads, that may be something
that you want to kind of keep a closer eye on and make sure you
know, it kind of scales up accordingly to plan.
And also maybe keep an eye on other critical workloads that
you know, maybe are more business critical.

(39:19):
And you want to make sure that, you know, any new workloads
maybe don't impact existing workloads.
The pinned namespaces are a great way to
kind of keep your eye on those things that really matter the
most to you. Sure.
So from here, the next level of fidelity that we're dealing with
beyond the namespace and the collection is, you know, the

(39:42):
query level details. And we kind of touched on this
earlier. This is the scatter plot that's
showing OK, individual query details.
These different colors kind of represent the different
namespaces or different, you know, collections that these
operations are run on. And then the Y axis here
represents our execution time. But we can definitely change

(40:04):
this as well. You know, by default, we're
typically looking at execution time, but you can also take a
look at, you know, operation response length.
You can take a look at documents examined, documents
returned, the ratio as well. So there's a number of different
dimensions that you can kind of view your slow operations
by. And if I do want to see, let's

(40:25):
say, you know, one specific operation, I click on this dot,
it'll kind of highlight that dot and we'll be able to see
some just high-level stats about this operation.
I can see, you know, the namespace it's run on,
the timestamp obviously as the X axis.
I can see what shard it's running on since this is a 2-shard
cluster. And you know, I want to, you

(40:45):
know, I do care about what shard it's running on and what host
this operation is running on. I can see some, you know, high
level metrics here as well, such as the, you know, operation
execution time, whether it's a collection scan or not
(and actually, it looks like this operation is a collection scan),
whether it has a sort stage, as well as

(41:07):
the operation type. Again here, you know, I'm only
looking at a few collections here, but you know,
if you had multiple collections,you could also kind of sort and
select different collections that you're interested in.
But this operation that I highlighted here is actually
pretty interesting. It's, you know, it has a

(41:29):
collection scan, it's taking 360 milliseconds.
So it is, it is quite slow. I'm going to jump to this button
here. I'm going to view a bit more
details. And this is where, you know, I
think the query profiler really shines.
It's, you know, we're able to see all the previous executions
of this, of this operation. We're able to see, you know, the

(41:50):
one that I just clicked on here as well, but I can see the
actual log document, the actual query details.
And this is where there's just, there's a wealth of information
here. You know, we kind of touched on
it earlier, you know, we're ableto see the actual command here.
We're able to see, you know, some information about the
database here. And then further down below here,

(42:16):
You know, here's some of the client metadata that we talked
about. So we can see, you know what,
what driver we're using, We can see the application name here.
We can see that it's a collection scan.
And so this may indicate to us that there's some index that we
may want to create to kind of cover this, cover this query and
you know, make it more efficientand you know, ultimately improve

(42:39):
its execution time. Here we can see the number of
documents examined. So we're actually examining
25,000, sorry, 250,000 documents, and returning 0.
So that's not a great query at all actually.
And then, you know, there's, there's a number of, you know,

(43:00):
other, other metadata that we can look at here as well.
We can see the execution time
in milliseconds here. And so, yeah, this is definitely
a query that we want to further investigate, you know, that we
may want to optimize.
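The triage Frank is doing by eye can be expressed as a couple of rules. Here's a hedged Python sketch; the `planSummary`, `docsExamined`, and `nreturned` keys follow MongoDB's profiler output, but the thresholds and the helper itself are just illustrative:

```python
def flag_inefficient(entry, max_ratio=1000):
    """Return reasons a profiler entry looks inefficient: collection
    scans, and a high examined-to-returned document ratio."""
    reasons = []
    if entry.get("planSummary") == "COLLSCAN":
        reasons.append("collection scan")
    scanned = entry.get("docsExamined", 0)
    # Guard against division by zero when a query returns nothing.
    ratio = scanned / max(entry.get("nreturned", 0), 1)
    if ratio > max_ratio:
        reasons.append(f"examined/returned ratio {ratio:.0f}")
    return reasons

# The 250,000-examined, 0-returned query from the demo:
print(flag_inefficient(
    {"planSummary": "COLLSCAN", "docsExamined": 250_000, "nreturned": 0}))
```

A query that scans a quarter of a million documents to return none trips both rules, which is exactly why it surfaces as a candidate for an index.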
And so this is where Performance Advisor comes into play.
So if we scroll back up here and we look at our performance

(43:22):
advisor. It's great to see the timestamp
in history because you could have a query that was working
perfectly fine up until the point and all of a sudden it's
taking forever. So you're able to track back and
say, well, you know, when and how did this happen, right?
Right. Yeah, definitely.
Yeah. Being able to look back at the

(43:44):
query profiler here and look back, you know, up to five days.
So you're really able to see, you know, kind of the history of
your queries and how that, you know, some of the, the trends
that kind of happened over time. You'll notice here I'm
looking back five days, but I only really started my demo
script a while ago. But you can see here we have

(44:05):
these like bigger kind of bubbles.
One of the strategies that we've recently introduced this
past year. And again, this kind of goes
back to your question earlier, Shane, about how we kind of
scale for those large enterprises, those large
enterprises are running, you know, huge workloads, you know,
with, with, you know, hundreds of thousands of collections and

(44:27):
hosts. And you know, they're, they're
huge deployments and their, their workload is, is, you know,
really high scale. This is one of the ways that we
have improved the query profiler to handle that level of
scale. Previously we were showing, you
know, each operation as an individual dot and of course you
know, that started to run into limitations on just, you know,

(44:50):
the browser performance and stuff.
And so, basically, what these bubbles
represent is an aggregation of operations around a similar
timestamp on the same collectionof a similar operation type.
And so I can see here, you know, this big bubble here

(45:11):
represents, you know, like Monday at a certain hour time
period. And it's representing like four-point-
something, almost 5,000 operations of, you know, around the same
time stamp and the same operation type.
And if I wanted to zoom in, I can just click and drag in here
and it'll dynamically adjust, you know, the groupings here.
But here, you know, immediately I'm getting more

(45:32):
fidelity, I'm starting to see some of these outliers here.
And, you know, I'm able to see a bit more kind of the trend
and the pattern of my query workload.
Again, I can zoom in further and, you know, at a certain point,
this is where now we're just showing individual
query dots and we're able to look at individual queries.

(45:52):
But this is one way that we've kind of scaled our tool to work
both for, you know, the kind of lower levels of scale as well as
that high enterprise level of scale.
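The bubble grouping Frank describes can be approximated in a few lines. This is a toy Python sketch: the time-bucket size and the grouping key are assumptions for illustration, not the profiler's actual algorithm:

```python
from collections import Counter

def bucket_operations(ops, bucket_secs=3600):
    """Count operations per (time bucket, namespace, op type) 'bubble'."""
    bubbles = Counter()
    for op in ops:
        bubbles[(op["ts"] // bucket_secs, op["ns"], op["type"])] += 1
    return bubbles

ops = [
    {"ts": 10, "ns": "db.cards", "type": "query"},
    {"ts": 20, "ns": "db.cards", "type": "query"},
    {"ts": 4000, "ns": "db.cards", "type": "query"},  # next hour's bucket
]
print(bucket_operations(ops))
```

Zooming in is then just re-running the grouping with a smaller `bucket_secs`, until each bubble holds a single operation, which is why the chart stays responsive even for workloads with thousands of operations per hour.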
Yeah, it's incredibly powerful, and the granularity that you
have once you dive down into it is brilliant, I suppose, to
try and figure out, you know, what was that?
So yeah, that's brilliant, great explanation.

(46:18):
So what's next on the performance advisor stuff?
Is it? Yeah, Performance Advisor is the
last one I want to touch on here.
And again, you know, for people that are familiar with
Performance Advisor, this may look and feel pretty similar.
One thing I want to touch on as well is that this past year
we've also made some improvements to Performance

(46:39):
Advisor kind of more under the covers.
For anyone that was kind of previously familiar with
Performance Advisor, it really only worked on a host by host
basis. And so, you know, you would have
to go to a specific host and then you'd have to see, you
know, the index advice for that host.
And you kind of have to do some kind of manual correlation to
see, you know, what index

(47:01):
advice you had on the other hosts.
What we've done now is we've now aggregated it at the cluster
level, including for sharded clusters.
So here I can see, you know, my demo script is only running
operations on a single host here.
But, you know, we do have the ability to look at index advice
for, you know, either a specific shard or a specific host by

(47:23):
default, you know, if I don't have anything selected, it's
essentially aggregating all of the slow queries across the
entire cluster and giving you the index recommendations across
the entire cluster. So this is one index
recommendation that I can see here.
I can see it's on my transactions dot credit card
info collection, and it's giving me the actual field that I

(47:46):
would, you know, want to create this index on.
It also gives me some
additional information as well,
improve the queries that are, you know, that would benefit
from this index. I can see if there are any other
indexes on this collection. In this case, it's just a
default _id index, and it gives me some samples,

(48:10):
you know, sample queries that might be improved by this index
as well. So, you know, if I wanted to
take a look at a specific log line here again, I could see,
like, here's a specific slow query that would
benefit from this index. So from here, you know, if I
wanted to just create this index from this page, I easily could,

(48:31):
you know, if I wanted to, you know, go back to my CLI or, or
do it some other way, you know, I could easily do that as well.
But for my purposes, you know,
Performance Advisor is telling me to create this index.
I'm just going to create this index here and you know, it's
going to bring up this modal. It's going to have everything
already filled out for me. I can just review it and kind of

(48:51):
go through the process and it's going to start building that
index. Building the index will kind of
take a little while, but once that index is built, I should be
able to then go back and look at, you know, my
namespaces. I should be able to then go back

(49:12):
at my namespaces and I should be able to see a pretty
significant, you know, improvement in query performance
for that collection. And so, you know, if I did want
to actually take a look at my index, I think it was on my
credit cards collection, if I remember correctly, actually, I

(49:38):
don't remember what collection I created that on.
That's no worries. Go back and check my activity
feed there. But yeah, in any case, on
the collection I created the index on, the next
time I will start seeing some performance improvements.
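For intuition on where a recommendation like that comes from, here's a deliberately naive Python sketch following the common equality-sort-range (ESR) rule of thumb. Performance Advisor's real algorithm is more sophisticated, and the field names below are hypothetical:

```python
def suggest_index(equality_fields, sort_fields=()):
    """Naive ESR-style index spec: equality filter fields first,
    then the query's sort fields (direction preserved)."""
    spec = [(field, 1) for field in equality_fields]
    spec += [(field, direction) for field, direction in sort_fields
             if field not in equality_fields]
    return spec

# A slow query filtering on cardType and sorting newest-first:
print(suggest_index(["cardType"], [("createdAt", -1)]))
# [('cardType', 1), ('createdAt', -1)]
```

The resulting spec is the same shape you would pass to a driver's `create_index` call once you've reviewed it, whether from the Atlas UI, the CLI, or code.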
Brilliant. I love the thumbs up, thumbs
down on the Performance Advisor suggestions, right?

(50:00):
It's a nice way to go. We obviously are getting the
feedback from that as well too, as people use it, right?
Yeah. We've been continually improving
our index recommendation algorithm as well, based on the
customer feedback that we've gotten from from this page.
OK, OK. And so for what we have there on

(50:22):
the page, we've got Create Indexes, as we went through that. Improve
Schemas then, Frank, what will that do?
Yeah. So this is another, this is
another advisory tool that we provide.
So we look at your 20 most active
collections on your cluster, and there are a number of anti-
patterns that we're basically looking for.

(50:44):
So here I can see I have one active recommendation. If I have, you
know, a significant number of indexes on a collection, that can
actually degrade your write performance.
So there is kind of a balance in terms of, you know, how many
indexes you want. You don't want duplicative
indexes, you don't want indexes that aren't being used.
You know, all of those can kind of impact your write

(51:06):
performance. But you do want indexes that you
know do improve your queries andare actively being used because
that, you know, ultimately improves your read performance.
And so there is kind of a balance there.
What we call our Schema Advisor does
is look for anti-patterns such as these.

(51:26):
So here I can see I have some unnecessary indexes on my schema
advisor dot call to action collection.
And so if I wanted to click on that, you know, bring me to my
collection, I could take a look at my indexes here.
And yeah, this is a lot of indexes, and I can see,
you know, they're not really actively being used.

(51:48):
And so I really should kind of go through it.
I should delete some of these indexes.
Just going back here to my Schema Advisor again, there are
a number of other anti-patterns that we look for as well.
So if we see, you know, as an example, if we see a number of
$lookups, that's an anti-pattern that

(52:12):
we also kind of, you know, surface to users, that could be
kind of indicative of a more relational pattern to looking at
data. And that, you know, a large
number of $lookups can kind of degrade your query
performance because it's looking across multiple collections.
And typically, you know, we want data that's kind of queried

(52:32):
together to kind of be stored together.
And that, you know, that's just one way that we kind of improve
query performance. Unbounded arrays, unnecessarily
large documents, too many collections in one database,
and the use of $regex or $text, uh, when, you know,
$search could suffice. You know, there are a number of

(52:55):
these other anti-patterns that we surface, umm, through the
Performance Advisor as well. Excellent.
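A couple of the anti-patterns just mentioned are easy to check mechanically. This is a toy Python scan; the thresholds are invented for illustration, and Schema Advisor's real checks and limits differ:

```python
import json

def detect_antipatterns(doc, max_array_len=1000, max_doc_kb=100):
    """Flag two simple schema anti-patterns in one document:
    unbounded (very large) arrays and oversized documents."""
    findings = []
    # Approximate document size via its JSON encoding.
    if len(json.dumps(doc).encode()) > max_doc_kb * 1024:
        findings.append("unnecessarily large document")
    for field, value in doc.items():
        if isinstance(value, list) and len(value) > max_array_len:
            findings.append(f"unbounded array in '{field}'")
    return findings

doc = {"user": "a", "events": list(range(5000))}
print(detect_antipatterns(doc))
```

In practice an advisor runs checks like these over samples of the most active collections, which is why issues surface without you having to audit every document yourself.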
So it's great, great to see. I know and, and obviously
there's more involved than just the developer relations team,
but we do design reviews with our clients and we've got a
really, really expert team that does that.

(53:15):
But the amount of times that it would be like, did you not look
in performance advisor to understand a little bit about
the things you could have done before we sat down opposite the
table to talk to you about this. So this set of tools is
amazing with regard to database observability.
Is there anything else to show here, Frank, while we're in
here? So that is done right.

(53:39):
Yeah, we've gone through most of them.
Those are the two main ones that I wanted to highlight
there. We do have a number of exciting
new additions coming to Atlas aswell.
You know, we kind of touched on, you know, the namespace level or
the collection level granularity.
We touched on the per operation level granularity.

(54:00):
One thing that we've introduced in MongoDB 7.0 is this new
aggregation stage called $queryStats, which
provides telemetry or, you know, metrics at a query shape level.
So that's another kind of level of fidelity that we see really
works well with our developers. That is something that we're

(54:21):
going to actually be investing in in the new year is providing
query shape level fidelity within Atlas as well.
So that I think will have a very close correlation with, you
know, the actual workload running on the cluster.
And that's a great way that we, you know, we see resonates well
with engineers and with developers.
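"Query shape" here means the structure of a query with its literal values abstracted away, so queries differing only in constants are grouped together. Here's a simplified Python sketch of that normalization; MongoDB's actual shape computation is more involved:

```python
def query_shape(node):
    """Replace literal values with '?' so structurally identical
    queries collapse to one shape."""
    if isinstance(node, dict):
        return {key: query_shape(value) for key, value in node.items()}
    if isinstance(node, list):
        return [query_shape(value) for value in node]
    return "?"

a = query_shape({"status": "active", "age": {"$gt": 30}})
b = query_shape({"status": "deleted", "age": {"$gt": 65}})
print(a == b)  # True: same shape, different literals
```

Aggregating telemetry by shape is what lets per-query metrics line up with the application code that issued them, since one line of application code usually produces one shape.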

(54:42):
Excellent. So watch this space, more to
come. We've gone through metrics,
we've gone through the alerts, we've gone through the
integrations obviously, and the query insights, and ended up
with Performance Advisor as well too.
We've covered an awful lot in a relatively short space of time.
Sabrina, where can people go to learn more if they want to get

(55:03):
started with our database observability tools?
Yeah, absolutely. We have a number of resources
available to users. We have very comprehensive
documentation, but we also provide a learning bite.
And within that, that kind of goes through all of the
different tools, kind of the value of what we just talked
about. And then we kind of go through a

(55:24):
live scenario of, you know, like an e-commerce shop going through
an issue and kind of, you know, troubleshooting with all the
different tools. So the Learning Bite is an
awesome resource for users to look at as well.
And then we have. I know we've got a longer URL
right which nobody's going to beable to copy.
But if they go. To learn.mongodb.com they can

(55:47):
search for it. They can search for it, it's
in the title. And then also we have a series
that Frank and I actually recently just worked on.
It's a three-part series, blog series that we kind of go
through. We go through kind of the tools
and then the second post is kind of like a real-life use case of
how we conduct basically the demo that he just did in real

(56:09):
life. And then the third is kind of on
integrating with the different tools and kind of walks through
a scenario on how we can, you know, how like Slack converses
with Datadog and how those tools kind of intermix as well.
So there's a number of resources that users can kind of look into
to see. And again, if you go to the

(56:31):
developer center, Mongo DB's developer center, so
developer.mongodb.com will get you there and then just search
for observability. And search being what it is,
should surface up your articles straight away and that
as well too, so people can go there and try and check things
out as well too. There also was a video that was

(56:52):
done at .local. So .local is MongoDB's series
of events that used to be just in key cities.
We've now taken it to over 20 cities around the world.
We're just finishing. I think we've got, is it Zurich
or somewhere? Somewhere's finishing at the
moment. I think maybe, or maybe it's
already done, but you did a video as well too.
So for those that can type quick or screenshot quickly, you can

(57:16):
get straight to the video there, but just go to the MongoDB
YouTube channel and search for it as well.
You'll get to see that, right? Yeah, you can see Frank
give that live demo as well.
So it's a great talk and kind of a dive into like
patterns and performance and much more of MongoDB
Atlas as well in that talk, so.

(57:39):
Perfect, perfect. So look, we've covered most of
the topics. There's been a couple of
questions along the way. We've tried to answer them on
the fly. If anyone else has some
questions to ask before we finish, please drop them into
the chat either on YouTube or LinkedIn really quickly.
I'm looking through the questions.
There's no particular questions. There was one basically telling

(58:01):
us we should do Q&A, but he doesn't have a question for us.
But thank you for the hint. Definitely will do.
But you were so kind earlier to say that Mongo DB rocks.
We do appreciate that. That's going down really well.
Paul, who did have a question for us earlier had to drop.
So kudos to you, Frank, for the great demo there as

(58:22):
well too. So if anyone has any final
questions, we'd be on for another little bit to be able to
answer those. I suppose with regards to what
you've been building and what you see when you're talking to
our customers and where you're trying to interface with the
engineering teams and building things, typically, kind of, is
there anything that surprises you that developers using

(58:44):
MongoDB aren't observing, for example, that you kind of go,
why aren't you even doing this? Is there anything come up in the
past that's been left you scratching your heads, Sabina?
And I think for me, it's just kind of the general consensus
that they don't even know that these tools exist to help, you

(59:04):
know. And that's kind of what I was getting at at the
beginning: we've noticed that there's, you know, I see, I
kind of listen in on some customer calls and I'll notice
that there's just like a general, you know, consensus of
like, oh, I didn't even know that
we have, you know, a Performance Advisor to give us index
recommendations. And kind of what you were
touching on earlier was like that's often times we direct

(59:26):
developers and customers to look at these tools and sometimes,
you know, oftentimes they don't even know that they exist.
So I think that's kind of what I've noticed in terms of,
you know, the gap. OK.
And Frank? Yeah, I just wanted to
highlight, you know, what Sabina said.
Yeah, I, I think awareness of some of these advisory tools is

(59:47):
really one of the gaps that we've seen.
What we see with our kind of most successful customers is
that they do kind of build, you know, reviewing their index
advice, reviewing their schema advice and their query profiler
into their kind of, yeah, at least weekly workflow.
You know, some of our largest customers, as I mentioned
earlier, you know, their DBA teams are looking at this

(01:00:09):
regularly. And, you know, there are strategies
that we've kind of also introduced earlier this year
that we're going to continue to invest in next year to kind of
make these a bit more accessible as well.
You know, we've kind of seen that a lot of our customers that
are using the Atlas console, you know, will kind of
gravitate towards the overview page.
And so, you know, we've been thinking about ways that we can

(01:00:31):
kind of bring these, you know, index advice and schema advice
insights kind of earlier into, you know, the developer journey
and, and earlier into, you know,their kind of user experience
journey as a way to kind of highlight them more.
And we have seen some pretty good success with some of those
initial experiments. Good, yeah, yeah.
And even as per your example, Frank, having, you know,

(01:00:51):
multiple indexes, you know, the uninitiated think: add
lots of indexes and things are going to get better.
But as you say, there comes a point of diminishing returns
there, and I suppose that's
generally when we're talking to developers as well too.
You're kind of going, you know, look at what you're doing as I
think, Sabina, I think you mentioned it earlier, maybe it

(01:01:13):
was Frank, you know, the mantra at MongoDB is that
data that's accessed together should be stored together.
So whilst you know, some people have notions that, you know, non
relational databases are schemaless, you still need to think
about how you, you know, design your data, how you look at that,
what you're trying to store together, what you're trying to

(01:01:34):
keep, how you're going to query that.
What scale is that going to get to at a certain stage as well
too. I think these are all plenty of
really, really good questions before you start out.
But even if you haven't, I suppose, and this is probably
the main tenet here is if you haven't thought of any of this
stuff, then go in here and use the tools that we've made

(01:01:54):
available to see what's going on with your data.
And, you know, it's probably an eye opener, right?
Yeah, absolutely. And like Frank has mentioned,
just incorporating it into kind of a weekly admin task or you
know, your weekly cadence of checking in on, you know,
performance. And I think it can be highly
beneficial when users are kind of checking in on a weekly

(01:02:18):
basis. Yeah, and that's probably a key
takeaway from this is to not neglect it; incorporate it into, you
know, your weekly routine of looking at things.
Anything else to add as final parting comments?
I'll go to you, Frank, first on on all of this and what we've
covered and and you know, any other advice to developers

(01:02:38):
leveraging database observability?
Yeah, I think my, my final thingis, yeah, as we touched on
earlier, you know that the regular cadence I think is one
part. But also I think, you know,
treating observability as kind of a post production step I
think is also, you know, an anti-pattern that I've seen as well.
I think, you know, observability, database

(01:02:59):
observability should be something that is kind of
considered from initial inception as you're kind of
onboarding, as, you know, you're instantiating
and increasing your workload.
These tools are things that we should be looking at, you know,
kind of continuously, not just once we've reached kind of a
production state. You know, these index advice,

(01:03:19):
uh, sorry, the index advice, the schema advice, these are
things we should be looking at as, you know, we're developing,
as we're kind of onboarding our workload.
And I think at the point where, you know, we treat it as just a
post-production activity, I think, you know, it's much more
expensive and much more impactful to make, you know,
certain changes. Yeah, it's kind of.

(01:03:41):
Think from the start and, yeah, sorry, go ahead.
Yeah. No, it's kind of the carpentry
analogy. You know, measure twice, cut
once. You know, get in there 1st and
think about this from the beginning and don't just set
things in motion and say, OK, we'll go in and retrospectively,
maybe fix them or see what the slow bits are etcetera as well

(01:04:02):
too. I cut across you there.
Sabina, do you have something else to throw in there?
Yeah, no, just like it's available from the start.
And this is kind of, yeah. Also another thing we've kind of
seen is that oftentimes, you know, customers are kind of
looking at it as a post-production state.
But I think getting in there and starting from the beginning,

(01:04:22):
from conception of your deployment as well, and that
kind of goes back to incorporating it into the
routine of just, you know, understanding where your MongoDB
deployment is. Excellent, excellent.
Well, look, we've covered so much.
Sabina, Frank, thank you so much. I know it was super quick.
We went through a lot: metrics, alerts, query insights,

(01:04:44):
integrations, Performance Advisor, etcetera, as well too.
But as we did point out, we have a blog series up on our
developer centre, developer.mongodb.com, so go in
and have a look at that. And if you are using it and
you've any questions, we have a superb community on the MongoDB
forums. So if you have any issues, or

(01:05:05):
you want to ask for advice, go to community.mongodb.com, to our
forums there. Our engineers hang out there, our developer
relations team is out there, and our community as a whole is really
good at helping and assisting each other as well too.
So if we've whetted your appetite to understand a little
bit more about how your data is doing and how to make the most

(01:05:25):
out of your queries, how to reduce costs and increase
performance, at least we're sending you in the right
direction there. And we do hope that you get
started. But for me, Shane McAllister,
and to you, Frank and Sabina, and then to all our viewers who
joined and did ask the questions that we went through, thank you
so much. Please keep an eye out for
future episodes that we have coming down the track.

(01:05:48):
But for now, on everything to do with database observability:
thank you, Frank. Thank you, Sabina.
Thank you. It's been great to have you
both.
I really appreciate your time. This was super fun.
Thank you so much, Shane. Yeah.
Thank you everyone. Excellent.
Take care everyone. Good luck.
Bye, bye, bye.