Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
How'd you like to listen to .NET Rocks with no ads? Easy! Become a patron for just five dollars a month. You get access to a private RSS feed where all the shows have no ads. Twenty dollars a month will get you that and a special .NET Rocks patron mug. Sign up now at patreon.dotnetrocks.com.
(00:34):
Welcome back to .NET Rocks. We're at Build.
Speaker 2 (00:37):
Yeah, Richard, I like being at Build. Well, Seattle, it's raining. Welcome to my neck of the woods.
Speaker 1 (00:42):
Right, I'm Carl Franklin. That's Richard Campbell, and it's gonna
be a good show. This is the first show that
we published from the list of shows that we're doing.
Speaker 2 (00:51):
Well, it's going to be busy.
Speaker 1 (00:52):
Yes. However, this is the second show we recorded. Okay, so the keynote talk that we do with Scott Hunter is next week. Yeah, yeah, all right, it was a couple hours ago, a couple hours ago, a lot of good stuff.
Ken Exner is here. We're gonna be talking to him
in just a minute. But first let's roll the music
for better know a framework.
Speaker 2 (01:20):
Man, what do you got?
Speaker 1 (01:21):
Well, I haven't seen this one before, but it's trending on GitHub. This is public-apis on GitHub. It's the public API repository, manually curated by community members like you and folks working at APILayer. It's an extensive list of public APIs for many domains you can use for your own products. Consider it a treasure trove of APIs,
(01:44):
well managed by the community over the years. That's cool. You know, so putting on my career-criminal hat: if you want to do a denial-of-service attack to test out your new tools, pick one of these APIs and let it rip.
Speaker 2 (01:57):
Jeezus, that's dark. It's a very dark way to think
about it.
Speaker 1 (02:02):
But yeah, I know. Of course, you know, you want to test out your new harness for calling APIs and see if they work. There's some really good stuff here. Locate and identify website visitors by IP address, that's a fun one. Marketstack: free, easy-to-use REST API interface delivering worldwide stock market data in JSON format, because, you know, CNN doesn't work well enough. Weatherstack: retrieve
(02:25):
instant, accurate weather information. Numverify: global phone number validation and lookup JSON API. And Fixer: a simple and lightweight API for current and historical foreign exchange rates. Some good stuff, and there's more to it, of course, but that's it. Awesome. That's what I got. Who's talking to us today, Richard?
Speaker 2 (02:44):
I grabbed your comment off of show ten sixty-seven, which is a while ago. That's twenty fourteen. We were talking with our guest a bit about different search strategies and technologies, and one of the comments on the show came from Sean McCartall, who was talking about how he pitched Elasticsearch to his organization. And admittedly this is twenty fourteen, so think about the challenges back then, right? But he said,
(03:06):
"Only after the second time I had to tell my boss 'I don't know why it broke, we don't have enough information' did I set out to solve the problem." Yeah, so this is ops-monitoring type thinking. He said the problem is an architecture that spans SharePoint, SAP, some Tomcat-hosted solutions, so a little jab in there, along with a bunch of good .NET projects in it too. And he worked with Python and PowerShell, threw
(03:28):
a Logstash receiver into an Elasticsearch cluster, which was the easiest part to set up. And then after another week doing some tweaks and configuration, he was able to do an analysis of workflows through all of those different products, in twenty fourteen. That's twenty fourteen. I mean, today the tools are all different, you'd be in the cloud doing that, but no, he was cracking
(03:50):
it the hard way back then. So I appreciate that, and it's just a realization: Elasticsearch has been in our lives a long time. Things are obviously different now. And so, Sean, thank you so much for your comment, and a copy of Music to Code By is on its way to you. And if you'd like a copy of Music to Code By, write a comment on the website at dotnetrocks.com or on the Facebooks. We publish every show there, and if you comment there and we read it on the show, we'll send you a copy of Music to Code By.
Speaker 1 (04:11):
Music to Code By, of course, the music that helps you focus. Still going strong, twenty-two tracks. You can get it at musictocodeby.net. You can get the collection in MP3, WAV, or FLAC format.
Before we interview our guest and introduce him, let's talk about nineteen fifty-two. Yep. So, significant events, because this
(04:32):
is show nineteen fifty-two. We've kind of been doing this. What happened in that year? Significant events include the death of King George the Sixth and the accession of his daughter Elizabeth to the throne of the United Kingdom. The US successfully tested the first hydrogen bomb. Dwight D. Eisenhower was elected President of the United States, defeating Adlai Stevenson.
(04:55):
The Korean War continued, with the US launching bombing attacks, and Greece and Turkey joined NATO.
the beginning of the Mau Mau rebellion in Kenya and
the Great Smog of London caused by industrial pollution and
high pressure weather. The Great Smog not to be confused
with the Great Stink. Do you remember that? I mean
(05:15):
I don't remember it, but I remember hearing about it.
Sewage backed up in the Thames. It was horrible.
Speaker 2 (05:21):
Related to the presidential election in nineteen fifty-two: one of the UNIVAC computers, which again were all bespoke machines at the time, actually did the computational estimations to say that Eisenhower was going to win. So it's one of the first times a computer predicted the win based on polls.
Speaker 1 (05:37):
Okay, so you know how many times did the computer
not predict the win though?
Speaker 2 (05:40):
Well before that?
Speaker 1 (05:42):
Zero. Oh yeah, okay, yeah, right. Okay, shall we get on with our show? Let's talk to Ken Exner. I'll introduce him here. Ken is Chief Product Officer, head of product and engineering, for Elastic. Maybe you've heard of it.
Speaker 3 (05:58):
Welcome Ken, Hey Carl, good to be here.
Speaker 2 (06:01):
Hey Richard, thanks, good to have you. Yeah, thank you.
Speaker 3 (06:03):
You mentioned Elasticsearch in twenty fourteen. We actually just celebrated our fifteenth anniversary of...
Speaker 2 (06:09):
Elasticsearch, so twenty ten.
Speaker 3 (06:11):
Yeah, Elasticsearch turned fifteen a couple of months ago.
Speaker 1 (06:14):
That's fantastic.
Speaker 3 (06:16):
And it's gone on to be downloaded four point six billion times.
Speaker 1 (06:22):
Wait billion with a B, billion with a B, which.
Speaker 3 (06:26):
Makes it I think one of the most popular open
source projects of all time.
Speaker 2 (06:29):
Yeah, no, without a doubt. And I mean, it comes up regularly. It's just one of the tools we count on for fast search, solving certain scopes of problems, right? It's the smart caching tool. Now, you guys have got a new relationship in Azure. You're becoming a service there. What are you guys up to?
Speaker 1 (06:47):
Well, we've...
Speaker 3 (06:48):
Our partnership with Azure has been fantastic. So over the last few years, Azure has been integrating us almost like a first-party service within Azure, so you can go into the navigation and select us as part of the experience within Azure. So if you wanted to launch your own Elasticsearch, you can do it very simply from within the Azure console. So yeah, it's been fantastic.
Speaker 2 (07:12):
Yeah, and you guys have been in the cloud for ages.
Speaker 3 (07:14):
We started going to the cloud about six or seven years ago, where we first started letting customers spin up a VM, have us automatically installed, have us patching and maintaining their cluster. We've recently introduced a new serverless offering that is kind of a
(07:35):
fully managed experience, that you can think of as SaaS-like. So it is a SaaS-like experience for using Elasticsearch. Which means that customers can use us as open source, they can get a self-managed license, they can have a hosted version in the cloud, and they can have this new serverless offering, which is kind of a SaaS-like experience.
Speaker 2 (07:55):
So am I just paying by transaction at that point?
Speaker 3 (07:57):
Yeah, you're just paying for data. So if you're using us as, like, a logging solution, you're just paying for data in and data retained.
Speaker 1 (08:03):
That's it.
Speaker 2 (08:04):
That's fair.
Speaker 3 (08:04):
And it scales to zero, and it's, uh, yeah, it's magical.
Speaker 2 (08:07):
If it doesn't cost you anything, you don't move any data around, exactly. Nice. Whereas if you're running a VM, you're paying for it. That's right, no way around that.
Speaker 3 (08:13):
But yeah, it's been a challenge being able to deliver Elasticsearch all these different ways. You know, it's hard to maintain software that many different ways, but it's what customers need.
Speaker 1 (08:23):
So how does it fit into Azure's new AI focus? That was pretty much the whole focus of the keynote.
Speaker 2 (08:34):
The drinking game I decided on was: we don't take a drink when they say Copilot, and certainly not when they say AI, you'd be drunk. You take a drink when they say Windows.
Speaker 1 (08:42):
Yeah.
Speaker 2 (08:43):
I think I only had to take three drinks then.
Speaker 3 (08:46):
Yeah. Well, I think it's really a question of how search is, you know, playing a role in the AI boom, right? And I think, when generative AI first happened, everyone was kind of wondering what's the future of search, and people were saying, oh, you know, Google is dead. And I think one of...
Speaker 1 (09:03):
Satya even said SaaS is dead.
Speaker 3 (09:06):
He said that last year.
Speaker 2 (09:07):
Yeah.
Speaker 4 (09:07):
Yeah, yeah. For him to say that, I think, seems unlikely, maybe a little premature. We'll get into agentic architectures and the agentic future soon, but yeah.
Speaker 3 (09:18):
Think SaaS is still here. But yeah, what we realized
is that search had a huge role to play in
AI because if you're building generative AI applications, you need
to ground it on, you know, your own data so
that you're not just basing on public information.
Speaker 2 (09:33):
And constantly changing.
Speaker 3 (09:34):
It's constantly changing. And if you want to make sure that the permissions reflect the user, you need to make sure that whatever you're grounding on understands the user's permissions to the data.
Speaker 2 (09:46):
You're not going to build a model for every permission level.
That doesn't seem practical.
Speaker 3 (09:49):
You're not going to fine-tune or build a model for every single possible combination of permissions.
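To make that point concrete, here's a toy sketch of the alternative Ken is describing: instead of a model per permission level, you filter retrieved documents by the user's group memberships before anything reaches the LLM's prompt. The document shape, the `allowed_groups` field, and the helper names are all invented for illustration; a real deployment would push this filter into the search engine's query.

```python
# Hypothetical sketch: enforce per-user permissions at retrieval time,
# so the LLM is only ever grounded on documents this user may see.
# Document shape and field names are made up for illustration.

docs = [
    {"id": 1, "text": "Q3 revenue forecast", "allowed_groups": {"finance"}},
    {"id": 2, "text": "Public product FAQ",  "allowed_groups": {"everyone"}},
    {"id": 3, "text": "Salary bands",        "allowed_groups": {"hr"}},
]

def retrieve_for_user(query, user_groups, docs):
    """Return only the documents the user is permitted to read.

    A real system would run the search query first and apply this
    as a filter clause inside the engine; here we just filter a list,
    and search scoring is omitted (every visible doc "matches").
    """
    return [d for d in docs
            if d["allowed_groups"] & (user_groups | {"everyone"})]

def build_context(results):
    # Concatenate the permitted documents into the grounding text
    # that would be placed in the LLM's context window.
    return "\n".join(d["text"] for d in results)

context = build_context(retrieve_for_user("forecast", {"finance"}, docs))
```

The key property is that a document outside the user's groups can never leak into the generated answer, because it never enters the context window at all.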
Speaker 2 (09:54):
Yeah, yeah, it seems like a good way to waste a lot of space.
Speaker 1 (09:58):
What else shouldn't you do? Let's talk about that.
Speaker 3 (10:01):
Well, maybe NVIDIA would like that feature, but...
Speaker 2 (10:04):
Don't stick a fork in a toaster. That's a good one, I think.
Speaker 1 (10:07):
Okay, then we have two.
Speaker 3 (10:08):
Yeah, so search has this newfound life in the age of generative AI. People are using us to help build applications and do retrieval, and that's been sort of the source of our partnership with Azure: helping them be a search engine, or retrieval engine, that helps their customers, our customers, build these generative applications on top of their private data.
Speaker 1 (10:29):
Right, Yeah, that's great. Well, I mean we saw the
we just saw the keynote a couple hours ago, and
we'll get into more of that next week with Scott.
But what did you take away from it?
Speaker 3 (10:41):
Well, one of the things in Scott's keynote, he mentioned sort of the move towards conversational search, and this is one that we think is big within Elastic. We've seen the evolution of search. You know, it started off as lexical search, or text-based search, and then six or seven years ago it moved to semantic search, where people started to expect their search box to understand natural language and
(11:06):
respond to questions. And then I think with generative AI it became sort of an evolution further towards RAG, where people wanted to build these applications. I think now it's moving towards conversational search, where you're expecting your search box to have a conversation with you and respond in natural language, and carry context, have memory,
(11:26):
be personalized. And I think one of the things that we've been working with Microsoft on is how do you build these conversational search experiences across the web. I'm excited about that.
about that.
Speaker 2 (11:38):
Yeah, and again, it's just part of the infrastructure. You just expect that to work and to be fast.
Speaker 3 (11:46):
Yeah, you know, that's one of the things that Elasticsearch is known for: blazing speed. You start typing keywords in a search box and it autocompletes. You expect that. So now combine that with, like, conversational search.
Speaker 2 (12:00):
You get used to the speed of it while typing in the box. Yeah, yeah, if I have to finish typing and hit a button, that's so annoying. That's a broken search tool.
Speaker 1 (12:07):
Yeah, so many of our listeners may not know about vector databases in general. Can you kind of give us an idea of how that changes the game versus a standard relational database?
Speaker 3 (12:20):
Yeah. So when we became a semantic search engine, we had to become a vector database as well. This was back around twenty seventeen: we started supporting the ability to store and query dense vectors.
Speaker 2 (12:32):
It was cool before it was cool, five years before.
Speaker 3 (12:35):
Before it was cool.
Speaker 1 (12:37):
I know something about that. It's, like, the first podcast ever.
Speaker 3 (12:41):
There's some debate about it, but I'm still gonna argue that we were one of, if not the, first vector database out there, because we had to support it for product search, like image search, and for semantic search.
Speaker 1 (12:50):
So I kind of know what vectors are because I took high school physics. But tell us the...
Speaker 3 (12:55):
The basic idea is, you're trying to plot embeddings, or vectors, in sort of a multidimensional space, and you're trying to look for similarities between vectors in this multidimensional space. So you're basically taking text, or you're taking images, and you're vectorizing it, which is turning it into these embeddings, these numbers that can be plotted in vector space,
(13:18):
and then that allows you to find similar terms. So when you're doing something like semantic search, you're using this vector search to look for similar terms. So if I say glass and cup, I understand that those are similar terms, right, because they're close in this vectorized space.
Speaker 1 (13:40):
Uh.
Speaker 3 (13:41):
Same thing with images, same thing with video. Anything that can be vectorized and turned into vector embeddings. So we started supporting the ability to store and query dense vectors in Elasticsearch, and then five years later generative AI happened and we were like, wow, we have a vector database. That's pretty cool.
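Ken's "glass and cup are close in vector space" idea can be shown in a few lines. The three-dimensional embeddings below are made up purely for illustration (real embedding models produce hundreds or thousands of dimensions), but the similarity math, cosine of the angle between two vectors, is the same:

```python
import math

# Made-up 3-d embeddings; real embedding models emit much longer vectors,
# but the similarity computation is identical.
embeddings = {
    "glass":  [0.9, 0.1, 0.0],
    "cup":    [0.8, 0.2, 0.1],
    "planet": [0.0, 0.1, 0.95],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def nearest(term):
    """Return the other term whose embedding is most similar to `term`'s."""
    others = (t for t in embeddings if t != term)
    return max(others, key=lambda t: cosine_similarity(embeddings[term], embeddings[t]))
```

Here `nearest("glass")` lands on `"cup"` because their vectors point in nearly the same direction, while `"planet"` points elsewhere; that is the whole trick behind semantic similarity search.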
Speaker 1 (13:58):
Yeah.
Speaker 2 (13:58):
Right, and language tokenization just lends itself so well to vectorization.
Speaker 1 (14:02):
That's right. Well, that's what makes it possible, right? Like ChatGPT: you type in a phrase or a question, and in the response, every single word is based on the probability of what the next word is, based on the vector values.
Speaker 3 (14:14):
That's right. Well, so there's a couple of ways that vectors are used. One is inside the models themselves, but a vector database specifically is used for helping ground an LLM. So, like, an LLM doesn't actually use a vector database, but it uses vectors in how it stores information in the model. But
(14:34):
for a vector database, it's trying to get the LLM to ground itself on third-party data. So that, you know, if it's not been trained on a product catalog, for example, you can expose it to a product catalog. Or if you need it to use some information that it has not been specifically trained on, you can ground it on that. And a vector database is one
(14:55):
technique for helping produce highly similar, highly effective search results that help ground that LLM. We actually found that the best techniques are not just using vector databases. You know, we're a vector database, and I'm gonna say the best techniques are actually combinations of technologies: hybrid search, using
(15:16):
combinations of lexical search and vector search. Or you're using sort of graph traversal together with vector search, where you're modeling information and seeing the relationships between entities. We support all these things. We support a combination of geospatial search together with vector search. So yeah, there are also other techniques, like reranking. So once
(15:39):
you...
Speaker 2 (15:41):
You're talking about running multiple searches in those different forms and then somehow munging them together, so you're not repeating yourself, to get a result.
Speaker 3 (15:49):
Right, or you're you're running post process like you might
pull a bunch of results and then run reranking after
the fact, right, and that get further refines the results.
And then what you're doing is you're passing the right
inform to an LM so that it can say, okay,
you're using it. You're passing into the context window and
saying this is the information you should build your response on.
And all this happens blazing really really fast, and then
(16:11):
the LLM now has some information in its context window that it can use to provide an accurate result.
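The hybrid pipeline Ken just walked through, lexical retrieval plus vector retrieval, blended, then reranked and truncated before going into the context window, can be sketched as a toy. Everything here is stand-in arithmetic invented for illustration: the "lexical" score stands in for BM25, the "vector" score for embedding similarity, and the blend for a reranking model.

```python
# Toy hybrid-search pipeline: lexical score + vector score, blended,
# then ranked and truncated before being handed to an LLM context window.
# All scoring here is stand-in arithmetic, not real BM25 or embeddings.

corpus = {
    "d1": "elasticsearch is a search engine",
    "d2": "relational databases store rows",
    "d3": "vector search finds similar documents",
}

def lexical_score(query, text):
    # Stand-in for BM25: fraction of query words present in the text.
    words = query.split()
    return sum(w in text.split() for w in words) / len(words)

def vector_score(query, text):
    # Stand-in for embedding similarity: character-bigram Jaccard overlap.
    grams = lambda s: {s[i:i + 2] for i in range(len(s) - 1)}
    q, t = grams(query), grams(text)
    return len(q & t) / len(q | t)

def hybrid_search(query, corpus, alpha=0.5, top_k=2):
    """Blend the two scores per document, then keep the top_k doc ids."""
    scored = {d: alpha * lexical_score(query, t) + (1 - alpha) * vector_score(query, t)
              for d, t in corpus.items()}
    return sorted(scored, key=scored.get, reverse=True)[:top_k]

def build_grounding(query, corpus):
    # The reranked top results become the grounding text for the LLM.
    return "\n".join(corpus[d] for d in hybrid_search(query, corpus))
```

A query like `"vector search"` ranks `d3` first on both signals, and only the winning snippets, not the whole corpus, would be pasted into the LLM's context window.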
Speaker 1 (16:18):
So here's a scenario. Let's say I'm building an agent, and I'm giving the agent access to my database. On one side, I've got a SQL Server with a bunch of relational data in it. On the other side, I have an Elasticsearch vector database, and I give it access to both of those things. Is it going to be able to find things in
(16:41):
Elasticsearch faster than it would in my relational data, if it was trained on both?
Speaker 3 (16:49):
It depends. So, assuming in this scenario that we have a connector to that database and we can run against it: we will be faster.
Speaker 1 (16:57):
Okay, so yeah, so why would I want to use the relational database at all? Ah, this is actually... that's why you're here, I get it. I see.
Speaker 2 (17:10):
It's the end of the database as we know it.
Speaker 3 (17:13):
One of the first use cases for Elasticsearch was actually database offloading, because the search function was creating contention in the database. So you offloaded it to Elasticsearch, and we were blazing fast. It helped speed up the database at the same time.
Speaker 1 (17:26):
You know, most people, what they do is they have one database for their product data or whatever, and then they have another one that's optimized for searching. That's right. So the one optimized for searching doesn't have all the joins and everything that's crazy about the data.
Speaker 2 (17:43):
Relational databases, certainly. And I don't want to be mean to relational databases, but they were from a time when disk space was at a premium, and so you don't duplicate data, right? Like, that's why we organize data that way. It was space efficient. That's not particularly important now. Admittedly, now we have an infrastructure around it, where we have so many querying tools and other bits and pieces that just work against it, so that's your instinct, and all
(18:03):
our ORMs tend to point to it. Okay, in some ways they're just in place from the momentum of what's been built on the de facto standard, and they're highly integrated, like, they have their advantages. But there are more ways to store data, and we do. The trick is, you know, deciding on a source of truth and trying to keep every copy in sync.
Speaker 1 (18:20):
So Elasticsearch on Azure is kind of, like, the place to be, I think, now that we've been talking about this, if you're going to be building, you know, copilots and agents in Azure AI Foundry, I guess.
Speaker 3 (18:36):
And we've actually integrated with AI Foundry. We're one of three launch partners. So if you're using AI Foundry, you can use us as a grounding engine, as a way to build generative AI applications using our vector database. We've also been integrated with Semantic Kernel from the very beginning. Right?
(18:58):
We first started supporting Semantic Kernel by offering vector search. Recently we actually extended that to do hybrid search, so, what I was talking about before, being able to combine different search techniques. We're the first offering to do that.
Speaker 1 (19:12):
Now, you know, we've been giving SQL Server a lot of crap here. But if somebody is listening and says, yeah, you know, that rings true, I think I'm going to try this: how easy is it to move our relational databases from, let's say, Azure SQL to Elasticsearch?
Speaker 3 (19:29):
How easy would it be to change the search to be using us as a search engine and plugging into...
Speaker 1 (19:37):
Your data? Or just moving all the data, moving all the search? Is that something that you'd want to necessarily do?
Speaker 3 (19:44):
Most people wouldn't use us as sort of an alternative database. If they're using a relational database, they might use us to offload search functions, so they would use it that way. But people do use us to store logs and other time-series data as a primary data store. Sure, you wouldn't store that...
Speaker 1 (20:07):
In a relational database, so not relational data.
Speaker 3 (20:09):
It's not relational data, but people do, people do. I would not use us as an alternative to a relational data store. But I would absolutely use us as a time-series database. I would absolutely use us as a vector database. I would absolutely use us as a database-offloading tool for SQL, in other words, speeding...
Speaker 1 (20:28):
Yeah, that's right. And so it has access to your SQL Server. It can figure out how to optimize those queries, things that you would do in a stored procedure, where you're getting multiple data sets from multiple tables and things like that. That's the kind of stuff you would do in Elasticsearch.
Speaker 3 (20:45):
Those are the kinds of workloads that are perfect for us. Like, we will help speed that up, take the workload off of SQL Server so that it can handle the rest of its work without contention.
Speaker 2 (20:55):
So I can also see, like, I want to search the incoming data, the constantly current data, and I don't want to keep reprocessing models to include it. So I'm using these search mechanisms: data is coming into my relational database, and then presumably I have an asynchronous call from that, pulling out of it and reorganizing it into the vector database.
Speaker 3 (21:13):
Yeah, so the connector would basically be indexing data as it's coming in, creating an index so that it can handle those queries, and do that without bothering the primary database.
Speaker 2 (21:26):
Right, yeah, yeah. So is that actually a hook I put into Elasticsearch, just saying, hey, watch this database when new rows come in? Exactly. Now, I mean, because it's a relational database, I tend to decompose data, right? I've broken this order down into rows and columns and stuff. I don't imagine that's how you want to store it in a vector database.
Speaker 3 (21:45):
In Elasticsearch, everything is sort of document based. Everything's turned into a document. But yeah, we'd be able to map that back to the rows and tables in the...
Speaker 2 (21:59):
Right. So, I mean, say I started off with the whole order. Should I just store the whole order in Elastic and then decompose it into rows and columns? Like, what's the right flow there?
Speaker 1 (22:10):
Right. Do you have any tooling to help with that?
Speaker 3 (22:17):
I think, you know, if you just use our connectors, it'll handle this for you and you won't have to think about it. We have four hundred different connectors that have been built up over the years, for all kinds of different data stores, for everything from Snowflake to SQL Server to, you know, others, and it handles all of that for you. And
(22:39):
then these days, everything is kind of getting replaced by MCPs, so we're starting to build MCPs on board into that as well.
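The "whole order as a document" flow Richard was asking about looks roughly like this: a connector denormalizes the rows scattered across normalized tables back into one nested document per order before indexing it. The table and field names below are invented for illustration; a real Elastic connector performs this mapping for you.

```python
# Hypothetical denormalization step: join normalized rows back into one
# nested document per order, the shape a document store would index.
# Table and field names are invented for illustration.

orders = [{"order_id": 7, "customer_id": 42}]
customers = {42: {"name": "Ada"}}
order_lines = [
    {"order_id": 7, "sku": "MUG", "qty": 2},
    {"order_id": 7, "sku": "TEE", "qty": 1},
]

def to_documents(orders, customers, order_lines):
    """Produce one self-contained document per order, ready for indexing."""
    documents = []
    for order in orders:
        documents.append({
            "order_id": order["order_id"],
            # Customer row is folded into the order document...
            "customer": customers[order["customer_id"]]["name"],
            # ...and the line-item rows become a nested array.
            "lines": [
                {"sku": line["sku"], "qty": line["qty"]}
                for line in order_lines
                if line["order_id"] == order["order_id"]
            ],
        })
    return documents
```

The resulting document duplicates data the relational schema deliberately avoided duplicating, which is exactly the disk-space-for-speed trade-off discussed above.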
Speaker 2 (22:46):
Yeah, the AI equation of this is flipping huge.
Speaker 1 (22:48):
And we'll talk about that a little bit more when
we come back, and also next week with Scott Hunter.
We're going to take a break right now, though we'll
be back after these very important messages. Don't go away.
You know, .NET 6 has officially reached the end of support, and now is the time to upgrade. .NET 8 is well supported on AWS. Learn
(23:10):
more at aws.amazon.com/dotnet.
And we're back. It's .NET Rocks. I'm Carl Franklin, that's Richard Campbell, and we're talking to Ken Exner from Elastic about Elasticsearch and some really important information that we just discovered about, you know, what it is, where
(23:33):
it fits in, and what it doesn't do, and how it fits in with Azure, especially about how you can make searches stinky fast. Right? Stinky fast.
Speaker 3 (23:46):
Is that a term? I'm not sure. That's good.
Speaker 1 (23:49):
No, that's really good.
Speaker 2 (23:50):
So fast you can smell the burning rubber.
Speaker 3 (23:53):
That makes it better. I like that, all right?
Speaker 1 (23:56):
So how do you get started with gen AI with Elastic and Azure, and the Elastic AI ecosystem, through AI Foundry?
Speaker 3 (24:06):
Well, I guess if you're using AI Foundry or using Semantic Kernel, we're there. Like, if you're choosing to use that, if you're using Semantic Kernel as a way to build generative AI applications into whatever you're doing, we are integrated there, so you can use us as a vector database within that.
Speaker 2 (24:27):
It's just a feature. Just a feature to...
Speaker 3 (24:28):
Turn on. So you can go and pick which vector database you want. There's a dropdown, there's a couple of options there. We're one of those, too.
Speaker 1 (24:34):
It's great.
Speaker 3 (24:35):
And then the same thing for AI Foundry. We're integrated into AI Foundry, into your workflow, so you don't have to think about it. You can basically, you know, start using AI Foundry and select us as a vector database.
Speaker 1 (24:48):
Uh.
Speaker 3 (24:48):
With our integration into Azure, it's automatically set up for you, so you don't have to think about any permissions or anything. It's all integrated with your Azure client.
Speaker 1 (24:57):
I hate to derail you, but this is the first time AI Foundry has been mentioned on .NET Rocks. I just learned about it today. Ah, so people probably won't know what this is.
Speaker 3 (25:07):
You can think of it as the new studio for all things AI.
Speaker 2 (25:12):
It's like the.
Speaker 1 (25:12):
Azure AI portal section for AI.
Speaker 3 (25:15):
Right, yeah. There were lots of studios, lots of different tools for building AI applications within the Microsoft ecosystem: AI Studio, OpenAI Studio, a bunch of them, Copilot Studio, that's right. So AI Foundry
(25:36):
is sort of the new, integrated version of that. It gives .NET developers one place to go if they're building these AI applications.
Speaker 1 (25:45):
All right, so that's where you go on Azure if you want to get started, I guess, you know, if you want to start building agents and all of that.
Speaker 2 (25:53):
So when you toggle the integration, how is it deploying Elastic at that point? Or are you using the serverless version? Like, what's the actual impact?
Speaker 3 (26:01):
If you've set it up already, you're just pointing to an existing cluster or an existing project.
Speaker 2 (26:09):
Or the serverless version, and that might be your VM, or...
Speaker 3 (26:13):
If you're wanting to create it from scratch, you can do that as well. Okay. So it's a seamless integration, which is pretty amazing for, you know, being a third-party product. We're always pushing for these tight integrations, and we've gotten that with Azure.
Speaker 2 (26:26):
Now, people like a checkbox where things just happen. I'm putting my administrator's hat on: it's like, the dev checked that box and a VM appeared? In my world, I'd be more emotional about that. Whereas if it's the serverless offering, it's just, now we're going to get a new bill. That's the CFO's problem. I don't mind that at all.
Speaker 1 (26:44):
That's right.
Speaker 2 (26:46):
So, like, do you get to pick the deploy mode, or is it automatic?
Speaker 3 (26:50):
The serverless one is not enabled yet, I mean.
Speaker 2 (26:54):
We're talking the day you announced it, so that's fair.
Speaker 3 (26:57):
But the hosted version of our cloud offering is there. You can...
Speaker 2 (27:01):
So you could be hosted by you.
Speaker 1 (27:04):
Yeah, we'll be.
Speaker 3 (27:05):
Going GA with our serverless offering in the coming weeks, and then at that point you'd be able to just select that project as well.
Speaker 2 (27:12):
And is that running in your infrastructure, or is it in Azure?
Speaker 1 (27:15):
It's on Azure? Okay?
Speaker 3 (27:16):
Yeah, yeah, it's all on Azure.
Speaker 1 (27:17):
So can you give us an idea of how much
it costs, I mean relative to other Azure resources?
Speaker 3 (27:25):
Maybe. Well, there are three different project types. There's the traditional search one, which you would use for typical search or vector search workloads. There's the observability one, which is sort of built around observability capabilities like log analytics and metrics. And then there's a security one, because
(27:45):
one of the things that happened along the way is threat hunters started to use us as a security analytics platform, so we created an entire product packaging of Elastic for security workloads, for threat hunters. Wow. If you're using the observability or the security project type, it's basically:
(28:07):
we just charge for the data going in and the data retained. If you're not using any data, you don't get charged anything, as simple as that. For the search one, we charge what we call a VCU, a virtual compute unit, and it scales to near zero. There's always a little bit of charge, because we have to keep some caches warm to handle any kind of incoming query. But it's like less than twenty bucks a
(28:28):
month at the start. So, you know, it scales down to near zero.
Speaker 1 (28:34):
That's really cool. Yeah, good to know. All right, so just a selfish question; maybe the other listeners are thinking this too. All right, I'm committed to using Elasticsearch. I've got a relational database that has all my goo in it. I've got a lot of stored procedures, and I've got some views and stuff. How do I take all that T-SQL in a big stored procedure that
(28:55):
returns multiple record sets from multiple places and turn that into Elasticsearch? Is there a language I...
Is there a good migration into it? Yeah? Well, I'm
hoping we can build something with LLMs that
could do that.
Speaker 1 (29:11):
Migrate.
Speaker 2 (29:11):
Yeah I do too, actually.
Speaker 1 (29:13):
Because I don't want to do it.
Speaker 3 (29:13):
So, there is. We actually have our
own sort of query language. It's called ES|QL,
and it's a piped query language. So we
use pipes, very Linux, very powerful. Yeah, so you use
pipes to pass the results of one query into another, and it's a way
(29:34):
of combining, and it's a very powerful query language.
We introduced joins recently, so it can do a
lot of the things that you can with the
traditional SQL language, and that's becoming our default language,
at least for analytical workloads. So if you're
using this for search, you would just use the search API,
but for analytical workloads you'd use this new query language.
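To make the piped idea concrete, here is a loose plain-Python sketch of how stages chain together, with an ES|QL-style query shown in a comment. The rows, field names, and stage helpers are all invented for illustration; this is not the actual ES|QL engine, just a sketch of the pipe-composition idea.

```python
from functools import reduce

# Hypothetical log rows; field names are made up for illustration.
rows = [
    {"host": "web-1", "status": 500},
    {"host": "web-1", "status": 200},
    {"host": "web-2", "status": 500},
    {"host": "web-1", "status": 500},
]

# Each stage takes a list of rows and returns a new list, mirroring how
# a piped query feeds one stage's output into the next, roughly like:
#   FROM logs | WHERE status == 500 | STATS errors = COUNT(*) BY host | SORT errors DESC
def where(pred):
    return lambda rs: [r for r in rs if pred(r)]

def stats_count_by(key):
    def stage(rs):
        counts = {}
        for r in rs:
            counts[r[key]] = counts.get(r[key], 0) + 1
        return [{key: k, "errors": v} for k, v in counts.items()]
    return stage

def sort_desc(field):
    return lambda rs: sorted(rs, key=lambda r: r[field], reverse=True)

pipeline = [
    where(lambda r: r["status"] == 500),
    stats_count_by("host"),
    sort_desc("errors"),
]

# Fold the row set through every stage in order.
result = reduce(lambda rs, stage: stage(rs), pipeline, rows)
print(result)  # web-1 has two errors, web-2 has one
```

The point is only the shape: each stage is self-contained, and the pipe decides the order, which is what makes the style feel very Unix-like.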
Speaker 1 (29:58):
So these conversions seem like a perfect job for an AI.
Speaker 3 (30:02):
Oh yeah, yeah. This is
actually something we do right now. People might use other
solutions like Splunk, which has its own query language. People
have used LLMs to do the conversion of one
query language into another. They're surprisingly accurate. So yeah, there's
no reason, you know, once we get to sort of
(30:22):
the functionality parity that you need for SQL, that
you couldn't.
Speaker 1 (30:27):
Just use an LLM. And what'd you say your
language is called again?
Speaker 3 (30:29):
ES|QL, the ES|QL query language.
Speaker 1 (30:32):
Yeah, all right. So it's a good time, kids, to
go out and learn ES|QL, at least a little
bit, to know what you're going to be doing next
year, or maybe in a few months, who knows, or
this afternoon. Or this afternoon, really ambitious.
Speaker 2 (30:47):
Generally, my experience when you're moving to non-relational databases
is you're calling APIs. It's basically just a coding exercise,
a different kind of data fetch and store, side effects
and things like that. It's a different way of thinking about it.
And part of that to me is you're now handling
the data differently. Like the query, the old query doesn't
make sense if the data is stored in a different way.
(31:07):
There are no joins, or not joins the way
you're thinking of them.
Speaker 3 (31:11):
No, it's not a join like it is in a relational database.
Creating a join on sort of a NoSQL dataset,
where it's a document-based data system,
was challenging. How do you do that?
Speaker 2 (31:22):
Yeah?
Speaker 3 (31:23):
Uh yeah, it's a different way of interacting with data.
It's a document based orientation.
Speaker 2 (31:28):
Yeah. So, you know, the experience you've had
working with document databases is the right sort of
approach to organizing things. Like, I would be hesitant to
just move a relational database to a vector database or
a document store of any kind, just because the
data is organized differently.
Speaker 3 (31:45):
It is organized differently. You're going to augment it with a vector database.
Speaker 2 (31:50):
But now I like this idea of, yeah, my order
is still decomposed in the relational database, we have all
the reporting and so forth on that, but I also
keep a copy in the document store that's the
entire order, and it's still got all those elements in it
that are searchable and findable and so forth. That's right. But
when you find any element of it, you find the
whole thing.
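That dual-store pattern can be sketched in a few lines. This is a hypothetical illustration using sqlite3 for the relational side and a plain dict as a stand-in document store; the schema and field names are made up.

```python
import json
import sqlite3

# The order stays decomposed in the relational database for reporting,
# while a denormalized copy of the whole order lives in a document store
# for search. Schema and field names are hypothetical.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT)")
db.execute("CREATE TABLE order_lines (order_id INTEGER, sku TEXT, qty INTEGER)")

document_store = {}  # stand-in for a document database

def save_order(order):
    # Relational write: decomposed into normalized rows.
    db.execute("INSERT INTO orders VALUES (?, ?)", (order["id"], order["customer"]))
    for line in order["lines"]:
        db.execute("INSERT INTO order_lines VALUES (?, ?, ?)",
                   (order["id"], line["sku"], line["qty"]))
    # Document write: the entire order as one searchable unit.
    document_store[order["id"]] = order

def find_orders_mentioning(term):
    # Finding any element of the document surfaces the whole order.
    return [doc for doc in document_store.values() if term in json.dumps(doc)]

save_order({"id": 1, "customer": "Acme",
            "lines": [{"sku": "WIDGET-9", "qty": 3}]})
hits = find_orders_mentioning("WIDGET-9")
print(hits[0]["customer"])  # the whole order comes back, not just the line
```

The design trade-off is exactly the one discussed: two writes per order, in exchange for reporting queries on one side and whole-document search on the other.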
Speaker 1 (32:05):
So would you just call Kimberly Tripp and Paul Randal
and say, hey, optimize my indexes?
Speaker 3 (32:08):
Pretty sure?
Speaker 1 (32:09):
No, no, they're kind of retired, actually. Yeah, there's that.
But I mean, optimize the indexes. That's
kind of where people go when they have search
performance problems on SQL Server. Yeah. Well...
Speaker 3 (32:23):
In Elasticsearch, everything is indexed. You
don't pick, you know, what you want to index.
Everything is indexed, so that's not the issue.
Speaker 1 (32:30):
And the indexes are created on the fly?
Speaker 3 (32:32):
They're created as you're ingesting, yeah. It's all
automatically created for you.
Speaker 1 (32:37):
Like any good document database, that's right. Yeah.
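A toy version of index-everything-at-ingest: every field of every document goes into an inverted index at write time, so there is no separate step where you pick your indexes. All names here are invented for illustration; real systems do far more (analysis, sharding, typed mappings).

```python
from collections import defaultdict

# (field, value) -> set of document ids; populated automatically on write.
inverted_index = defaultdict(set)
documents = {}

def ingest(doc_id, doc):
    documents[doc_id] = doc
    for field, value in doc.items():
        inverted_index[(field, value)].add(doc_id)  # index every field on the fly

def search(field, value):
    # Lookups never scan documents; they hit the index built at ingest time.
    return [documents[i] for i in sorted(inverted_index[(field, value)])]

ingest(1, {"host": "web-1", "level": "error"})
ingest(2, {"host": "web-2", "level": "info"})
print(search("level", "error"))  # [{'host': 'web-1', 'level': 'error'}]
```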
Speaker 2 (32:39):
So I find it interesting to have the sort
of three species there. Like, the security one makes sense, that's
a special thing. But the observability one, and this is
why I pulled that comment from the listener:
that guy was doing observability before it was a product.
Speaker 1 (32:53):
Yeah.
Speaker 3 (32:54):
So the interesting thing about search, one of the things
that we've learned over the years, is that search
is kind of everywhere. Like, people start using search to
build applications. It's not just about... initially, when Elasticsearch
came out, it was about adding search to a website
or adding product search. That's right. So
(33:15):
you know, you saw e-commerce sites starting to
integrate us. But people started building applications on top of us.
They started building matchmaking sites, or signal intelligence systems,
or fraud detection systems. It became sort of a platform
for building things. But one of the most popular was
people using us as a log analytics solution, right, using
us to store and query logs. So rather than, you know,
(33:37):
doing grep, they would use us as a solution,
ship logs to us, and then run fast queries
on it. From there, people said, hey, if I'm going
to use you for logs, can I use you for metrics?
Can I use you for tracing? Can I use you
for other things? Right. So we started providing a complete observability platform.
Speaker 2 (33:54):
And that's the whole thing. Like, part of
the challenge of doing this is all the different sources.
So the fact that you have a place to drop
all those sources routinely, so it's already in one spot,
and it's...
Speaker 3 (34:04):
All in one data store.
Speaker 2 (34:06):
Yeah, and a query language that can do a join
between those different sources when you need to.
Speaker 3 (34:10):
Well, this is actually one of the
things I love to show about ES|QL: being able
to do a query across metrics and logs and traces.
It kind of blows people's minds. Like, I'm going to
do a query that joins data: show me all the
hosts that have, you know, ninety percent CPU
and that have had a login
(34:33):
failure over the last hour. You can combine
different types of queries across different signal types,
and it's kind of magical.
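The cross-signal query described there can be approximated in plain Python to show the shape of the join. The metric and log records and their field names are invented for illustration; in ES|QL this would be expressed as one piped statement over the live data, not as Python sets.

```python
# Hypothetical snapshots of two signal types over the last hour.
metrics = [
    {"host": "db-1", "cpu_pct": 95},
    {"host": "db-2", "cpu_pct": 40},
    {"host": "web-1", "cpu_pct": 92},
]
logs = [
    {"host": "db-1", "event": "login_failure"},
    {"host": "web-1", "event": "disk_full"},
]

# Hosts at ninety percent CPU or more...
hot_hosts = {m["host"] for m in metrics if m["cpu_pct"] >= 90}
# ...joined against hosts that logged a login failure.
failed_logins = {l["host"] for l in logs if l["event"] == "login_failure"}
suspects = sorted(hot_hosts & failed_logins)
print(suspects)  # only db-1 satisfies both conditions
```

The win described in the conversation is that both signal types already live in one data store, so this join does not require exporting and correlating data by hand.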
Speaker 2 (34:41):
And it's powerful.
Speaker 1 (34:42):
So I know that our listeners probably know more about
Azure than they do about Elastic, and your audience probably
knows more about Elastic than about Azure. Is
that a fair statement?
Speaker 3 (34:55):
I think that's fair.
Speaker 1 (34:56):
Yeah, that's fair. Yeah. And so what can we tell
these people about Azure if this is a
new concept, you know? I have a
customer right now that's all on prem who was picking
my brain about moving to Azure, and my
response was, well, why aren't you there now?
Speaker 3 (35:15):
You know?
Speaker 1 (35:16):
It's like, there's an inherent fear. This is
my word, it's the word of the week. Yeah,
there's a fear by people; I think they don't
trust moving their data somewhere else where they can't keep
a watch on it. And my response is, all right,
so you have how many guys in your IT department? Two? Three? Okay?
(35:37):
Do they all sleep at the same time? Yeah? Okay.
So, you know, in Azure, you've got hundreds, probably, of
people all around the world that are watching over your
data while you sleep and while you're awake. Who do
you trust more: Joe in IT, who's asleep eight
hours of the day, or, you know, the Azure
(35:57):
team that is watching over your data twenty-four
seven? So that's my pitch. What would you say to your customers?
Speaker 3 (36:06):
About why they should move to Azure? Yeah, yeah.
If you're on prem, you know, every company
that started on prem is starting to think about the cloud.
They've had this conversation for ten, fifteen years. At this point,
you're kind of feeling...
Speaker 1 (36:21):
Like you're late to the game. Yeah, but they're still
out there, obviously. Yeah, just having the conversation. Yeah.
Speaker 3 (36:28):
So, yeah, I think most customers should be thinking
about it if they aren't already. You're not going to
get the efficiencies of doing it yourself. There
was this movement a few years ago where people started
going back to their own data centers because they thought
they could do it cheaper. And you can't.
(36:49):
You don't get the efficiencies at the scale that the
cloud providers like Azure get. You don't get the
ability to buy bandwidth in huge amounts. If you're
trying to buy bandwidth by yourself, or you're trying to buy
servers by yourself, you're going to pay a lot more.
So any savings that you think you can get
is just going to be wiped away.
Speaker 1 (37:08):
Especially if you're paying by the transaction. That's right.
Speaker 2 (37:11):
Yeah. We did this whole series on RunAs
Radio where a whole bunch of folks lifted
and shifted their VMs into the cloud, then were shocked
when the bill was big, not recognizing you run
those VMs twenty-four hours a day, so you're paying
twenty-four hours a day for them. Yeah. And then
moving up to the platform pieces was more efficient, but
then they haul them back, and they're just ignoring a
bunch of the numbers. You know, you pay for the
(37:32):
hardware once, so you forget about amortizing it across the period,
much less the replacement costs and, you know, all those
other pieces that go into it.
Speaker 1 (37:39):
And the difficulty of servicing them twenty four to seven.
Speaker 2 (37:42):
Yeah, and then what a twenty-four seven NOC actually costs.
Speaker 3 (37:44):
Yeah, it's expensive, and the minimum number
of people you need to employ to do it properly
is high. Yeah. And you just can't get the
scale like that. That's the biggest thing.
Speaker 2 (37:57):
Yeah, if you're big enough... and it is still a
debate, because now we get into
elastic utilization, right? When I had to build out websites,
we built for peak. You know, you had to have
enough gear to handle the peak load, and then it
ran at sixty percent utilization if you were lucky. You
don't deal with that with the cloud. Your peak
(38:18):
can move around and you scale dynamically. You only
pay for what you use.
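The build-for-peak arithmetic is easy to sketch; all the numbers below are invented for illustration, but the shape of the comparison is the one being made: provisioning for peak means paying for the idle capacity all month.

```python
# Hypothetical fleet sized for peak load.
peak_servers = 100
cost_per_server_month = 500   # invented flat monthly cost per server
avg_utilization = 0.60        # "sixty percent utilization if you were lucky"

# On-prem: you pay for the whole peak fleet whether it is busy or not.
on_prem_monthly = peak_servers * cost_per_server_month

# Elastic model: roughly, you pay for what you actually use.
elastic_monthly = int(peak_servers * avg_utilization * cost_per_server_month)

print(on_prem_monthly, elastic_monthly)  # 50000 vs 30000
```

This leaves out real-world factors both ways (reserved-instance discounts, egress fees, depreciation), so it is only the utilization argument in miniature.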
Speaker 3 (38:21):
If you're a retailer and you're having to build for
Black Friday as your peak, yeah, it's crazy.
Speaker 1 (38:27):
Wouldn't you rather just move a knob, or let it
do that? No moving.
Speaker 2 (38:31):
The knob moves by itself.
Speaker 1 (38:34):
I can see it. It's moving. Yeah, it's not.
Speaker 2 (38:39):
The same at all anymore. But yeah, it's been years
since I've said, you know, if you haven't already moved
to the cloud, your CFO is looking at you like, what
are we doing?
Speaker 1 (38:46):
Well?
Speaker 3 (38:47):
I think, yeah, I think that's a conversation most people
had ten to fifteen years ago, though. The conversation over
the last couple of years is, what's your AI strategy?
Speaker 1 (38:54):
Yeah?
Speaker 3 (38:54):
Right. Yeah, every boardroom is asking their exec team, like,
what's your AI strategy? What are we doing to
automate everything through AI? And that's a much harder
question if you're not already on cloud. So if
you skipped that wave, you're
already behind, because the next thing is going to be, like,
(39:16):
you know, what are we doing to automate things with AI?
What are we doing to automate sales? What are we
doing to automate marketing? You know, are the engineers in
our company using AI tools?
Speaker 2 (39:25):
Most of these tools are dependent on the cloud. You
need to have your data there already if you're
going to take advantage of them. That's right.
Speaker 1 (39:31):
All right, good, we've beaten that horse. Is there anything
that we haven't talked about that you would like to mention?
Speaker 3 (39:37):
Mostly just, uh, you know, we love the fact
that Azure has been such a great partner with us,
integrating us, you know, in such a tight way
into the experience for customers who want to use,
you know, the best vector database out there. We are
integrated already into Azure, so it's a phenomenal experience. I've
said that already, but like, you're the number one.
Speaker 1 (40:00):
Yeah, yeah, I love.
Speaker 3 (40:02):
I love that we have such a tight partnership and
it creates a great experience for our customers, and I
want to make sure that people go out and try it.
Speaker 2 (40:09):
It's awesome.
Speaker 1 (40:09):
Well, that sounds like the end of the show. So
thanks very much, Ken. Yeah, thank you. It was a pleasure
having you and talking to you. I learned a whole
lot, and I'm sure the listeners did too. And we'll talk
to you next time on dot net rocks. Dot Net
(40:42):
Rocks is brought to you by Franklins.NET and produced
by PWOP Studios, a full service audio, video, and post
production facility located physically in New London, Connecticut, and of
course in the cloud, online at pwop dot com. Visit
our website at D O T N E T R
O C K S dot com for RSS feeds, downloads,
(41:04):
mobile apps, comments, and access to the full archives going
back to show number one, recorded in September two thousand
and two. And make sure you check out our sponsors.
They keep us in business. Now go write some code.
See you next time.