Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:06):
OK, hello again, everybody. If you've been watching the
stream all afternoon, you may have seen me about half an hour
ago interviewing Daniel Kupell, and I'm back.
I'm back, and this time I'm with Max Markong.
Max, why don't you explain what you do, what your position is
and kind of what you do on a day-to-day basis?
Yeah, so I'm a director at MongoDB. I lead developer tools. That means the Atlas CLI that you
(00:30):
saw a lot of today, Compass, the MongoDB Shell, IDE
extensions, and a few other things, AI developer tools
that we're also making. Just a really small list.
Yeah, exactly. Excellent.
And you were in the keynote this morning.
I was. Congratulations. It was fun.
So yeah, tell us about, we're
here to talk about the Atlas CLI and the GA of
(00:53):
local deployments support. Yeah, exactly.
So as Melissa mentioned in the keynote this morning, this is
functionality that we announced in public preview last year here
in London. And over this past year,
we've been working very closely with a number of customers to
see how we could best support them when they develop with
MongoDB locally. And locally means not on the cloud, not on a
(01:18):
server that's somewhere, but most likely on the developer's
laptops. And then when it comes to
testing that code, running those deployments in CI, in your
continuous integration server, in your integration tests and so on.
And the primary reason for providing this local
experience for Atlas is that we really want to give
(01:42):
developers a very short feedback loop.
So think about it. You're writing code, you put
some data into MongoDB, you run some tests and you want the
result right away. You don't want to wait for a
deployment, you don't want to wait for a cluster to spin
up somewhere. You want to insert data, read
data and do that very efficiently.
And that is a fundamental capability when you
(02:06):
are building an application. It's even more important when
you're testing an application. So a suite of tests, like 1,000
tests, and you want to be able to run that before every
commit. You need the environment to be
fast. You need the environment to spin
up quickly.
So having it locally, in a Docker image, means
that it's going to be fast, and you can throw it away and create
(02:29):
a new one. And so fundamentally,
the tighter you can keep that feedback
loop, the faster developers can move. Yeah, it's
probably the biggest thing you can do, I think, to keep
developers happy. And I'd say from my own personal experience bringing up
the Atlas CLI, just running a cluster like this
on my own machine, it's absolutely
incredible. Yeah, yeah.
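[Editor's note: for reference, spinning up and tearing down a local deployment with the Atlas CLI looks roughly like the sketch below. The deployment name is a placeholder and flags may vary by CLI version.]

```shell
# Create a local Atlas deployment in a Docker container on this machine
atlas deployments setup myLocalDeployment --type local

# List the deployments the CLI knows about (local and cloud)
atlas deployments list

# Throw the local deployment away when you're done
atlas deployments delete myLocalDeployment --force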
(02:49):
And I love it personally. It's the way I like to develop
applications.
We've seen a lot of success with both offering the CLI as
well as with Docker. Customers love the idea because
they have Docker already for everything else.
They use Docker Compose for their application stack when they
build code, when they write tests, and now we can give them the
(03:14):
same experience for Atlas, and I think that's just amazing.
So let's get into the details.
One of the new features with
the Atlas CLI is this integration with standard Docker tools
like Compose, right? So if I'm already using Compose,
I can add configuration in there to bring
the deployment up. Yeah. So the Atlas CLI
(03:35):
is just sugar around the experience. But you can go
and get the Docker image that backs it and insert it in
your Docker Compose file. And when you have that, you can
spin it up with the rest of your services, but the CLI still
knows about it. So if you want to look at what
local environments are running, you do atlas deployments list, and those
(03:58):
will still be tracked. If you want to put data in those environments,
you can still do that with
the normal tools that you use. If you want to create search
indexes, then you can still do that with the Atlas CLI.
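[Editor's note: as a sketch of what that Compose integration can look like, assuming the public mongodb/mongodb-atlas-local image from Docker Hub, a service definition might be written like this. The credentials and port mapping here are illustrative, not required.]

```yaml
services:
  mongodb:
    image: mongodb/mongodb-atlas-local    # local Atlas deployment image
    ports:
      - "27017:27017"                     # expose the default MongoDB port
    environment:
      MONGODB_INITDB_ROOT_USERNAME: user  # optional initial credentials
      MONGODB_INITDB_ROOT_PASSWORD: pass
```

With this in place, `docker compose up -d` brings the deployment up alongside the rest of the stack, and clients connect to it like any local MongoDB instance.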
That's very cool. And once I've set
up all this environment locally, then when I'm moving to
production, do we have anything in there that kind of helps with
(04:19):
that? So lift and shift.
Not right now. We've been talking to customers.
We know that there are opportunities, for example, in
the area of the life cycle of indexes. Think about even what we showed
this morning at the keynote. You build an application, you
realize that there is an index missing, maybe your IDE tells you.
Now you go create an index locally to make sure that
(04:42):
MongoDB responds in the right way when you run
your queries. But then you eventually want to
move that index to production. You can do it manually,
obviously. You can write a shell script
that copies the configuration from local and applies it to
production. But maybe over time it makes
sense to explore this type of capability as part of
(05:03):
the local tooling. Yeah, that would be pretty
amazing. Given how fast we're moving on
developer tooling, it wouldn't surprise me if we see that
soon. Yeah.
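[Editor's note: as a rough sketch of the manual shell-script approach Max describes, one could export index definitions from the local deployment with mongosh and replay them against production. The URIs, database, and collection names below are placeholders.]

```shell
# Export index definitions from the local deployment
mongosh "mongodb://localhost:27017" --quiet \
  --eval 'JSON.stringify(db.getSiblingDB("app").orders.getIndexes())' > indexes.json

# Review indexes.json, then recreate the same indexes against production
mongosh "$PROD_URI" --quiet --eval '
  const specs = JSON.parse(fs.readFileSync("indexes.json", "utf8"));
  specs.filter(s => s.name !== "_id_")   // skip the default _id index
       .forEach(s => db.getSiblingDB("app").orders.createIndex(s.key, { name: s.name }));
'
```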
I mean, for the last year, we've been incorporating
feedback from customers into the product, and
we'll just keep doing that. So anybody who has thoughts
they want to make us aware of, they can get in touch
(05:25):
in the community forums, and we'll listen to all of this
feedback. We also talk to customers directly, obviously, and
all of that informs our roadmap.
Great. So maybe a controversial
question, I don't know, but
is there a future where people can run Atlas on their own
machines? Like, does this become kind of a
pathway to self-hosted Atlas, as opposed to cloud-hosted?
(05:49):
That's a good question. So let me break it up
in two parts. One is: can I run
Atlas services on my own computer, or on AWS, for production
workloads? The answer to that is yes, in the
future. You saw this morning, Sahir talked about Search and
(06:10):
Vector Search in Community.
So when that becomes available, in the same way you
can run MongoDB Community today for your hobby project or
for the early stages of your startup, you will be able to
have Search and Vector Search too.
Now, the local experience we're talking about today is a
completely different offering. We're not thinking about
(06:33):
production use cases. We're thinking about the best
ergonomics for developers to get something up and running
quickly. And then it's an experience that
we control. We know what's inside, we know
how you connect to it. We have a very predictable set
of services that are running, and that allows us to connect it
with the rest of our tooling. An example that I just gave
(06:56):
to a customer in a meeting, one that really resonated with them, is:
I control this experience. Therefore, if on the same
machine you have Compass, you have VS Code, those can
automatically discover those local environments, because I
know what they look like, I know how to look for them.
So there is no copy-pasting connection strings around.
(07:17):
I open Compass and all my local environments are there, and I
click and I'm already working with the data on my laptop.
That's pretty amazing. I haven't actually gone through
that experience yet, so I guess I have more digging to do.
So that's something we're still figuring out,
but it will happen. That's what we can do because
the environment is so predictable.
Yeah, yeah. So fundamentally, this
(07:39):
isn't designed for scale, it's very much designed for
developer experience. Exactly.
I have to say, my experience with it so far has been very good.
So can you give us any kind of sneak peek of
some of the features that we're thinking about adding in the
future? Yes.
So something that will come very soon, and that is again
really something we are doing because customers were so vocal
(08:02):
about it, is including in the image the MongoDB Shell,
mongoimport, the other tools like mongodump and mongorestore,
and in general the ability to automatically seed
your local cluster with data and create indexes.
So right now you can do that with the CLI.
It's pretty easy. But if we can do it as part of
(08:24):
bootstrapping the Docker image, that makes the entire
orchestration of your test suite, for example, extremely
easy. Yeah.
And the team is working on that almost as we speak.
So you can expect that it'll be there soon.
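[Editor's note: until that lands in the image itself, seeding a local deployment can be sketched with the standard database tools mentioned above. The URI, file paths, and namespace here are placeholders.]

```shell
# Load JSON documents into the local deployment
mongoimport --uri "mongodb://localhost:27017/app" \
  --collection orders --file seed/orders.json --jsonArray

# Or restore a full dump previously taken with mongodump
mongorestore --uri "mongodb://localhost:27017" seed/dump/

# Create an index as part of bootstrapping
mongosh "mongodb://localhost:27017/app" \
  --eval 'db.orders.createIndex({ customerId: 1 })'
```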
And then generally, we're looking at what you said earlier:
how can we make the experience of going
(08:46):
from local to Atlas smoother, and then also the experience of
going from Atlas to local. So if I have my actual workload
in Atlas, but then I want to build a new feature, can I get a
subset of Atlas data into my local environment, so the data
I'm working with locally resembles what I have in
production? All of this is something that
(09:07):
we are still figuring out, and there will be a lot of roadmap
items going forward around it. I love this.
You don't know this about me, but I used to work in
developer tooling at a sort of big, scalable startup.
Oh, really? So yeah, I've
specifically built some of this stuff and it's hard, especially
when you get into data. Like, data is the hardest thing
to manage within that lifecycle. It is, and there are important
(09:30):
aspects to keep in mind. That's why we don't do it
lightly. There is PII. There
is a lot of stuff that is not supposed to be shared when
you talk about data, and so we have to be very careful with how
we build this type of feature. Yeah, every time somebody talks
to me about taking a copy of their production database to do
development, I'm like, are you sure?
Yeah. So, how much of a conversation
(09:51):
do we need to have about this? So yeah, it's good that
you're thinking about that in the design
of this tool. So, any other feedback in
that way?
What would you say is the biggest integration that you can
get now, given that this is Docker native and
people can integrate with that too?
Yeah. So something we've heard a
(10:12):
lot from customers, well, mainly two things.
One is GitHub Actions as part of your pipeline.
For example, you can have GitHub Actions, and you can run
Docker containers there, so the integration with the
local offering is just straightforward.
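[Editor's note: a minimal sketch of that CI integration, again assuming the mongodb/mongodb-atlas-local image, might run the local deployment as a service container in a GitHub Actions workflow. The job steps and test command are placeholders.]

```yaml
name: tests
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    services:
      mongodb:
        image: mongodb/mongodb-atlas-local  # local Atlas deployment as a service container
        ports:
          - 27017:27017
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm test             # tests connect to mongodb://localhost:27017
```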
And then the other thing is Testcontainers.
So a lot of customers use them for testing, and they are based
(10:32):
on Docker images. Therefore, you can think of
creating a Testcontainers module for
Atlas Local. Yeah.
Now, someone in the community has created some PRs for
that, and we'll probably try to
foster that relationship so we can make it happen. I love that.
I love that. Well, I could have kept you for
another 20 minutes, but sadly, we've run out of time.
So I'm just going to say thank you very much.
(10:53):
I'm really excited to see all of this.
I'm excited about the current release,
but I'm also excited to see what's coming down the line.
Yeah. Thank you for having me.
Hopefully we'll talk about this again on the podcast soon.
Yeah, I hope so. Take care.