Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:07):
Welcome back to Adventures in DevOps. I'm your host, Warren,
and because Will is away this week, I have
an opportunity to sneak in a sponsorship. Today's episode is
sponsored by Attribute. I actually met the team, and honestly,
what they're doing in the FinOps space is absolutely genius,
and I believe everyone can benefit from it. They call
it FinOps without tagging. It's the first runtime technology
(00:30):
that analyzes infrastructure instead of relying on billing reports, exports,
and tagging. It's for architecture, ops, and platform teams that
need visibility into product and customer attribution, or insight into cost
anomalies, without wasting hours guessing how to allocate spend
to shared services. So there's no spreadsheets, no extra logging.
Attribute solves it all with just one line of code.
(00:53):
They capture costs based on actual application usage, generated from
anywhere: Kubernetes, databases, storage, and over thirty-five multi-cloud
services. In their UI, they break it down by microservice
and even attribute cost at the database query level,
all tied back.
Speaker 2 (01:08):
To the business.
Speaker 1 (01:09):
I really find that pretty interesting. Recently they were recognized
in six Gartner Hype Cycles. I honestly have no idea
what that is, though. And they're working with impressive companies
like Akamai and Monday.com, so you'll want to check
them out. I'll drop a link in the description
for the episode, and that's Attribute, at ATTRB dot io.
And now back to the show. And today, I have
(01:30):
to say I'm actually pretty intrigued by the guest that
we brought on, because this is an area of technology
that I have zero experience with. We're going to be
talking all about vector databases, and I feel like we
brought in one of the experts from the industry, from
a company that has been doing vector databases for
quite some time now, I want to say since the beginning,
but I think she's going to correct me. So welcome
(01:51):
to the show, Staff Developer Relations, Jenna Patterson.
Speaker 3 (01:55):
Hello, thank you for having me on today.
Speaker 1 (01:57):
Yeah, I know, I'm really interested, because actually, as part
of preparing for this episode, I went around and asked
a lot of my colleagues at different companies if they
had any questions that I should ask someone who's, I mean,
you sort of just got into the work at Pinecone
there just under a year ago, if I'm right,
what I should ask them, and they're like, I don't
know what that is.
Speaker 4 (02:18):
Yeah. So high level, it allows you to compare
vector embeddings, numerical representations of a piece of data,
and the vector database allows you to
find similar matches. So if you think about searching on,
for instance, an e-commerce store, retail store online, and
(02:38):
you want to find all the shirts that are red,
they might not all have the word shirt in the description.
So we want to find everything that is related, or
closely related, the most closely related. And so it does
that based on a distance metric, to find everything that
has a meaning similar to shirt, or in this case,
(02:58):
a red shirt. It could be a blouse, it could
be a top, it could be a short sleeve shirt.
And then it comes back as scored results, so that,
like, anything that is close is going to have a
better score than the things that are further away from
that particular query.
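To make the scored-results idea concrete, here's a minimal sketch assuming cosine similarity as the distance metric. The item names and three-dimensional vectors are made up for illustration; real embeddings have on the order of 1024 dimensions.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity: near 1.0 means "pointing the same way", near 0 means unrelated.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend embeddings for a few catalog items (made up for illustration).
catalog = {
    "red shirt":   np.array([0.9, 0.8, 0.1]),
    "crimson top": np.array([0.8, 0.7, 0.2]),
    "blue jeans":  np.array([0.1, 0.2, 0.9]),
}
query = np.array([0.85, 0.75, 0.15])  # pretend embedding of the query "red shirt"

# Score every item and sort: closer in meaning = higher score.
results = sorted(
    ((cosine_similarity(query, vec), name) for name, vec in catalog.items()),
    reverse=True,
)
for score, name in results:
    print(f"{score:.3f}  {name}")
```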
Speaker 2 (03:14):
I see.
Speaker 1 (03:15):
So you take the original request, red shirt, and you're
converting it to just a set of numbers, and
then using that to look through the database and get
back equivalent results. So there's some upfront converting being done,
I assume, when you're sticking data into the database, in
order to store the numerical representations, not just the raw,
say, text properties.
Speaker 3 (03:37):
Yes, exactly.
Speaker 4 (03:38):
So at the beginning, you're going to ingest all of
your data, so you chunk it up, and then you
upsert it into a vector
Speaker 3 (03:44):
database ahead of time.
Speaker 4 (03:46):
As your users are querying, we take that query, we
embed that query and use that as a way to
compare against those existing embeddings that are in the database.
Speaker 3 (03:57):
There's also kind of
Speaker 4 (03:58):
another piece about that, where if that data changes or
you get new data, you can also re-upsert it.
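As a rough sketch of that ingest-then-query flow: chunk, embed, and upsert ahead of time, then embed each query the same way. The index name and the embed() helper here are hypothetical stand-ins; the upsert and query calls follow the Pinecone Python client's documented shape.

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("products")  # hypothetical index name

def embed(text: str) -> list[float]:
    # Stand-in for a real embedding model call (OpenAI, Amazon, Nvidia, etc.).
    return [0.0] * 1024

# 1. Ingest ahead of time: chunk your data and upsert each chunk's embedding.
chunks = [
    ("item-1", "Red cotton shirt with short sleeves"),
    ("item-2", "Crimson blouse with frilled cuffs"),
]
index.upsert(vectors=[
    {"id": chunk_id, "values": embed(text), "metadata": {"text": text}}
    for chunk_id, text in chunks
])

# 2. At query time: embed the user's query and compare against the stored vectors.
hits = index.query(vector=embed("red shirt"), top_k=3, include_metadata=True)

# 3. If the data changes, re-embed and upsert again under the same id.
index.upsert(vectors=[{"id": "item-1",
                       "values": embed("Red cotton shirt, new description"),
                       "metadata": {"text": "Red cotton shirt, new description"}}])
```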
Speaker 2 (04:05):
I see.
Speaker 1 (04:05):
So I mean this I assume is true for all
databases that claim that they're vector databases.
Speaker 2 (04:11):
How do you do the computation
Speaker 1 (04:13):
of figuring out what the numerical value should be for
red shirt?
Speaker 4 (04:17):
We do that through what is called an embedding model.
This embedding model is trained on specific data that
is for embedding, and we pass in the text value.
So that chunk of data, or a piece of text,
in this case it's text, it spits out some numbers,
and it happens to be in vector form. So if
you remember back to, like, I don't know, fifth or
seventh grade geometry, we worked with vectors, essentially a list
(04:41):
of numbers. But these are very, very long vectors, so
very high dimensional data, 1024 dimensions in this vector,
and these represent different pieces of meaning about that piece
of data. So it could be about the color. It
could be, in this case, outside of the query,
the data we might embed, like the product description,
(05:04):
we might embed the product title, or all of that together.
That could be part of our chunking strategy, to put
it all together and embed that whole thing as one piece.
Speaker 1 (05:12):
I just have, like, a lot of questions now. Like,
first and foremost, is it your fault that, when I
search for red shirt on websites,
Speaker 2 (05:19):
I find things that aren't red shirts?
Speaker 3 (05:21):
No, I hope it's not our fault.
Speaker 1 (05:25):
If I understand you correctly, the model, I
assume you're using something similar to an LLM to
convert from the original text into what the embedding
value should be that you're storing in the database. You're
doing that because Pinecone
Speaker 2 (05:41):
has this capability.
Speaker 1 (05:42):
But if you're using, let's say, one of your, I
don't want to say one of your competitors, but an open source
vector database, or, I think Postgres supports vectors now, that
responsibility is on the implementer, the team that
is actually implementing search in their application.
Speaker 4 (05:55):
Right. Yeah, So there's a couple of different approaches to
it at Pine, and we have two different approaches. We
support two different ways. So you might actually have your
own embeddings already. You might want to manage that part
of the process yourself, and so you might use something
like an open AI model to do embedding or an
(06:15):
Amazon model to do embedding. We also host our own models.
It's an Nvidia model for embedding. We have a number
of those based on your use case. And right now
we're talking about text like product descriptions, product titles. It
could be audio, it could be images, any sort of
data that you want to actually do like a meaningful
search over and find meaning as opposed to specific keywords.
Speaker 1 (06:38):
How is the comparison being done, like, on the database side?
Like, I get the part where you run through an
embedding model and you get back out an array of
1024 integers or floating point numbers that somehow
gets stored in a database.
Speaker 2 (06:52):
Is it being stored as, like, a row value?
Speaker 1 (06:55):
Is there, like, a special format that the database saves
data in? Is this, like, an incredibly complicated question to answer?
Speaker 4 (07:02):
This is, for me, an incredibly complicated question to answer,
but I can answer part of it. So the comparison
that's being done is that we are taking those two
vectors and seeing how far away they are from each other.
So if you think of a vector like this representing,
you know, the red shirt, and a vector going like
this representing, like, pants, but maybe you have something a
(07:23):
little bit closer. It's really hard to do this as
a podcast and just with my hands, but visuals are
much better.
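A tiny sketch of what the arm-waving is getting at, with made-up two-dimensional vectors: items with similar meaning point in roughly the same direction, so the angle between them, which is what cosine distance measures, is small.

```python
import numpy as np

# Made-up 2-D stand-ins for embeddings of "shirt", "blouse", and "pants".
shirt = np.array([1.0, 0.2])
blouse = np.array([0.9, 0.3])   # semantically close to "shirt"
pants = np.array([0.1, 1.0])    # semantically far from "shirt"

def angle_degrees(a: np.ndarray, b: np.ndarray) -> float:
    # Angle between two vectors, derived from the cosine of the angle.
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

print(angle_degrees(shirt, blouse))  # small angle: similar meaning
print(angle_degrees(shirt, pants))   # large angle: different meaning
```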
Speaker 2 (07:30):
For those of you who are just listening.
Speaker 1 (07:31):
Jenna is attempting to draw vectors with her arms, and
if that somehow makes sense, but I totally get. Like,
you have a triangle of vectors and you're calculating the
difference between those vectors from each other, and that's how
you can I'm your calculating distance, and I assume you're
optimizing for the smallest distances possible. One of the questions
I have is like, you have a lot of these
embedding vectors in your database, and aside from how they're
(07:55):
actually being stored, you still have to fetch some set
of that data optimally rather than fetching like all the
data I assume, rather than catching the entire database in
order to compare each vector one by one. Any thoughts
of like how you are able to pare down the
total amounts that you're only fetching relevant data in or
to do that comparison.
Speaker 4 (08:14):
This is definitely beyond my knowledge. I will say, like,
there are strategies for how it's being stored and how
close the data is.
Speaker 1 (08:23):
It's like the thing where I definitely like to try to
pose this to people who come up with new database formats,
because there's
Speaker 2 (08:29):
always someone in this space.
Speaker 1 (08:30):
And when I asked them about it, they were like, oh yeah,
we built a new database format. And I'm like, well, yeah,
but really, what you did was just use your underlying
database engine, and you just put a service on top
of it, and you're calling it a database; like, it's
really not. At the end of the day, it's just
a relational database. What I really understand here is that
it's fundamentally different how you're storing the data. It's not
arbitrary row-based information or, you know, binary blobs
(08:53):
that are being stored which can be fetched. Fundamentally, the
vector database is storing data not only in a different way,
but it has to be optimized in order to find locality
of these vectors and the ones that are close in distance.
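One toy version of that locality idea, under the assumption of random-hyperplane hashing (LSH-style): vectors landing on the same side of a set of random hyperplanes share a bucket, so a query only compares against its own bucket instead of scanning everything. Production vector databases use more sophisticated indexes (graph- or cluster-based), but the "only look nearby" intuition is the same.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_planes = 64, 8
planes = rng.normal(size=(n_planes, dim))  # 8 random hyperplanes

def bucket(vec: np.ndarray) -> int:
    # 8 sign bits (one per hyperplane) packed into one of 256 bucket ids.
    bits = (planes @ vec) > 0
    return int(np.packbits(bits)[0])

# Index 10,000 random vectors by bucket.
vectors = rng.normal(size=(10_000, dim))
buckets: dict[int, list[int]] = {}
for i, v in enumerate(vectors):
    buckets.setdefault(bucket(v), []).append(i)

# A query is compared only against the vectors in its own bucket.
query = rng.normal(size=dim)
candidates = buckets.get(bucket(query), [])
print(f"comparing against {len(candidates)} candidates instead of {len(vectors)}")
```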
Speaker 2 (09:05):
I'm not a database expert.
Speaker 1 (09:06):
So honestly, you know, what you've shared so far is
still fine, fine by me.
Speaker 2 (09:11):
But I'm sure someone will call me out on it.
Speaker 4 (09:13):
I do think it's interesting, in that, like, it's fun
to understand the underlying technology. And, like,
as someone who has been here for six months, I'm
still learning; it's very complicated. Even our customers are,
like, they're learning it along with us as well,
and, like, kind of what the strategies are, so that
they can implement it in the best possible way.
Speaker 2 (09:33):
Right, I think this is the right word. Uh.
Speaker 1 (09:35):
If you have an application where historically you would have
done something like free text searching in Elasticsearch, using
an embedding model to calculate the numerical values to handle
a semantic search is just the progression in the industry.
Speaker 2 (09:49):
You no longer need free text search.
Speaker 1 (09:51):
This is the, I don't want to say be-all
end-all of the future of e-commerce websites, but
it does seem like there's just such an improvement in
the strategy here from what was done before.
Speaker 4 (10:01):
It's not necessarily that you're not going to use,
like, a keyword search. You might actually pair them together,
and I'll talk about that in a second. And then,
like, e-commerce is just, it's a simple example.
It's the use case I typically start out with. But
there are other reasons why you would use this
type of search, semantic search. For instance, in your AI
applications, where maybe you're chatting back
(10:25):
and forth with a model, but the model
has its own limitations, as we know. Now,
semantic search is about the meaning behind your query, and
your intent behind what you're trying to find, based
on, you know, what data is in, what context
the data is in. But sometimes you have, you know, keywords,
(10:47):
or you have domain-specific language, or acronyms, or stock
tickers is another common one that we use as
an example. But within your company, you have product names,
you have your own company-specific language, you have
technical terms that might not be in the public domain
(11:07):
and might not be trained into those models. We might
pair a semantic search with what is called a lexical
search, or a keyword search, in order to make those
results even more accurate, even more correct.
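A toy sketch of that pairing (the scoring and weighting here are made up for illustration): blend a semantic score from the dense vectors with a lexical score for exact terms, so domain-specific tokens like tickers or product names still match even if the embedding model never learned them.

```python
def lexical_score(query: str, doc: str) -> float:
    # Fraction of query words that appear verbatim in the document.
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def hybrid_score(semantic: float, lexical: float, alpha: float = 0.7) -> float:
    # alpha weights meaning vs. exact keyword matches; tune it per use case.
    return alpha * semantic + (1 - alpha) * lexical

# A doc containing the literal ticker "TSLA" gets lexical credit even if the
# embedding model has no idea what "TSLA" means.
print(hybrid_score(semantic=0.55,
                   lexical=lexical_score("TSLA earnings", "TSLA Q3 earnings call")))
```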
Speaker 1 (11:17):
I think you really stumbled onto a good thing here
that's worth talking about, because I think where you're going,
for those of you who are unfamiliar or may
have heard of RAG before, is jumping onto
retrieval augmented generation, where, fundamentally, if you
think about it, the models that you're using that are
proprietary, by third party companies, don't understand how you talk
(11:38):
about your business. And so how do you do the
mapping from one of these models to what's internal? And
what you're saying is, if you've uploaded all the data,
or you have hooks into your knowledge bases, and you
throw that through an embedding model into Pinecone or
another vector database, then, using MCP or
(12:01):
some other magic that no one knows about, somehow the
model gets access to this data and understands how to
use the embedding values from their own model to map
to what's internal, because there is a semantic likeness between
those things, even though the actual words, even
in, say, a dictionary, are fundamentally different.
Speaker 3 (12:24):
Yeah, exactly.
Speaker 4 (12:25):
I think one way I like to look at it
is, like, you and I, we have a spoken language.
Speaker 3 (12:30):
We speak English.
Speaker 4 (12:31):
It's natural to us. And so, like, if we can
interact with our data and gain insights and find information
using the language we know best, that's going to
be even better for our output. LLMs have limitations,
and part of that is they don't know all
about our data. And so if we bring in the knowledge,
bring in the factual data, the authoritative data, into that process,
(12:55):
then our output can be even better.
Speaker 1 (12:57):
You may be the first person I've talked to that
suggested that English was like a good strategy for communication.
Speaker 3 (13:03):
Here, I don't know if I'm suggesting it's a good strategy.
Speaker 2 (13:06):
I think it's what can be used.
Speaker 1 (13:09):
I mean, there's just so much that's not shared as
far as the context goes. Yeah,
that is just fundamentally lost. And I can see how
troubling it is to actually communicate in
that way, because we each have our own internal view
of the world. And I know, as a technologist, a
long time technologist, a non-trivial amount of my conversations
(13:31):
have pivoted from talking about whatever the topic is to,
like, a meta level, like, what are we actually talking
about here? You know, let's define some of these words.
There was a recent conversation I was having about the
effectiveness of feature flags, and what came up started with, like, well,
how do you use them? Evolved into, is it good
to use them? And even if I say that, it's
(13:52):
like, what does good mean? And one of the terms
that came up was, oh yeah, you know, if you
have everything that's fully tested. And I'm like, what does
fully tested mean?
Speaker 2 (14:00):
Like how do you actually define that?
Speaker 1 (14:01):
In that regard, I feel like it's very much an,
I know it when I see it, but defining it is,
like, hugely problematic.
Speaker 3 (14:07):
Super hard. Yeah.
Speaker 4 (14:08):
I remember back in one of my college courses a
long, long time ago, where we talked exactly about this,
like how you define a specification that
everyone understands and could potentially be executed, right? So it's
not just that you and I can talk about it,
but that our computer understands it. So I think you
(14:29):
bring up a really good point. I think you're right
in that, like, it depends on who we are, and
what our experiences are, and where we come from, and
what language we speak, and there's definitely going to be
some of that that happens. And I think there are
ways around that, and I will say, right now, I
don't know all of those ways around that.
Speaker 1 (14:46):
Something that I think frequently comes up on the show
is the mention that each new release of a model,
public or proprietary, is like having another child, where they're,
like, so fundamentally different, not like an upgrade somehow.
And so, I think you brought this up a
little bit, is the idea where, if you're upgrading your,
(15:08):
I'd say upgrading or side-grading, your model for one reason
or another, that you're revalidating that the embeddings you got previously
match what's coming out of the new model, to ensure
that you aren't just going to start getting nonsensical outputs.
I mean, in some way you're upgrading the model because
you think the new embeddings will be better, but that
obviously has an opportunity for unexpected results. Does Pinecone have
(15:29):
a strategy for dealing with that?
Speaker 2 (15:31):
I mean, I
Speaker 1 (15:31):
assume, if you have an embedding model, given what the
company is doing, you're building the model yourself.
Speaker 4 (15:36):
The Nvidia model, the one that I mentioned, that's the
hosted model that we have.
Speaker 3 (15:40):
We host it internally.
Speaker 4 (15:41):
We also have our research team, and they have created
models for us. The one that I
am most familiar with is Pinecone Sparse, which is used for
lexical search, keyword search, with sparse vectors, as opposed
to dense vectors, which are used for semantic search.
Speaker 2 (15:58):
What's the difference between In a.
Speaker 4 (16:00):
Dense vector, you have a vector of numbers. They represent
different parts of meaning about that particular piece of data
that was embedded. But with a sparsevector, you have more
zeros than you have actual numbers, and it is either
a zero or a one that essentially represents the frequency
of our particular word.
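Roughly what that distinction looks like in data. The formats here are illustrative (Pinecone represents sparse vectors as parallel index/value lists), and the vocabulary ids are made up:

```python
# Dense: every dimension holds a value; meaning is spread across all of them.
dense = [0.12, -0.03, 0.88, 0.41]  # real ones have ~1024 dimensions

# Sparse: almost every dimension is zero, so only the non-zero slots are stored.
# Each index is a term id in a large vocabulary; the value marks that term's
# presence (here simply 1.0) in this particular document.
vocabulary = {"red": 101, "shirt": 2048, "blouse": 3371}  # hypothetical term ids
doc = "red shirt"
present = [vocabulary[word] for word in doc.split() if word in vocabulary]
sparse = {"indices": sorted(present), "values": [1.0] * len(present)}
print(sparse)  # {'indices': [101, 2048], 'values': [1.0, 1.0]}
```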
Speaker 1 (16:19):
And I think, I think realistically, no one wants,
no one wants to listen to this, but, you know,
you think back to when you went further, and you're, uh,
normalizing arrays, where you create
some sort of orthonormal basis for actually transforming the data,
so that you end up with these arrays where
you have just an amplitude at one position in the vector,
(16:41):
which makes it much easier to identify things in close proximity.
So you can imagine that you're not just letting
the embedding model come up with arbitrary numbers to represent
your semantic text, but you're then applying some clever mathematics
on top of it to organize the vectors in a
way so that, when you actually go and do a search,
(17:02):
you're not having to pull out every single piece of
data from the database. So being specific about how you
can utilize the mathematics to optimize your vector database goes
into some of the creation of these. So, a lot
of engineers got into, I'd say, software engineering, or became
engineers in the first place, because they told themselves, and
I'm going to say it's a lie, that they wanted to
work on hard problems. And it does actually sound like,
(17:24):
compared to building other databases, I
do feel like understanding the mathematics behind a vector database
is non-trivial.
Speaker 3 (17:32):
Yeah, I agree.
Speaker 4 (17:33):
I am that person who, like I want to work
on hard problems. I like knowing how things work, and
so again I'm still learning a lot of this obviously.
I think for me that is one of the fascinating
pieces about it is that it's math.
Speaker 3 (17:47):
It's not easy math.
Speaker 4 (17:48):
I mean, like we go to school for it, and
people study it for a really long time. Maybe the
more fascinating piece is that we can actually use math
to do that.
Speaker 1 (17:56):
We're not discarding lessons learned from the past; we're actually figuring
out how to use the physics, and specifically in this case
the theoretical application, in a real-world scenario.
You know, one of the things that I originally had
thought of when we were talking about vector databases is that, like,
surely this is only applicable to LLMs. But I feel
like you said that the e-commerce example that you
(18:16):
brought up, that really is about doing semantic search, is,
like, a simple example. And I don't know if
Speaker 2 (18:22):
I agree with you.
Speaker 1 (18:23):
Actually, I feel like getting the search right in e-commerce
is, like, literally the most complicated example of search.
Speaker 4 (18:29):
When I say simple, I mean it's simple for people
to understand as an example, because they shop online, right?
Everyone shops, most people shop online.
Speaker 3 (18:39):
I don't want to say everyone.
Speaker 2 (18:39):
I'm with you there.
Speaker 1 (18:40):
I think there is something where it does seem simple
on the surface, but, like, as soon as you get
to thinking about how the search actually works, it starts
to get very complicated. I think one of the examples
that comes up a lot for me, when I've reviewed
other companies' database architecture, like which kind of database they're
going with, or, you know, what sort of indexes they have:
it always starts as a generic problem, like, oh, I
(19:02):
have some data and I want to search it. And
I'm like, well, what kind of search are you doing?
It's like, well, we have some attributes on each of
the rows of our data, or the items, and we
want to filter by the attributes. And I'm like, well,
which attributes do you want to filter by? Is it
always, like, each item has three attributes, and you always
filter on the first attribute, and the second and third
are never used? Or is it something
complex or something simple? And, like, oh, we don't know, it's
(19:25):
always, well, the user could give us attribute one, two,
or three, and we need to figure out all of
the items that match one, two, or three. And I'm like, well,
that pretty much just eliminated every single NoSQL database
out there, because, you know, good luck. I mean, I
will say, in some scenarios you can be very clever,
where you can just index all three attributes, make three
(19:46):
queries to your database, and then merge the results.
Speaker 4 (19:48):
I'm just gonna say, but also, your products might all
have different numbers of attributes.
Speaker 2 (19:52):
Oh yeah, oh yeah, for sure, right.
Speaker 1 (19:55):
And actually, I think the one that's more complicated is
that, like, some of the attributes have, like, an array
of values, right? So it's like, oh yeah, well, this
product, we'll use the e-commerce example, this product is
a shirt. It can be in small, medium, and large,
like, the sizes that are available. And so when someone
searches sizes, you're not going to have an attribute column
that's like, you know, small exists, you know, true or
(20:16):
false, you know, you know, shirt can be in medium,
true or false.
Speaker 2 (20:19):
Like, this is nonsensical. So I think once you
Speaker 1 (20:22):
see that, you're like, well, okay, search in e-commerce,
that's, that's
Speaker 2 (20:25):
got to be, like, the most complicated thing ever.
Speaker 1 (20:27):
And to say, like, maybe collectively we may have
stumbled upon the
Speaker 2 (20:33):
limit for improving search
Speaker 1 (20:34):
in that way. By using an embedding model, we're taking
the core, the essence, of what the search query is,
trying to figure out what it is, and then mapping
it to an optimized way of storing the data, in
a database that isn't row-based, where you're doing
column-by-column matching or some sort of inefficient
index lookup, like in a NoSQL way, where
you're just, like, somehow you know exactly which item the
(20:57):
person is talking about.
Speaker 2 (20:58):
But I think that'd be pretty great, a reality you get
Speaker 1 (21:00):
To where I just, you know, search for red shirt
and I get exactly the green trousers that I actually wanted.
Speaker 4 (21:07):
I don't know if you're familiar with, like, the
dupe trend, the duplicates of, like, high end fashion, where
these, like, lower end, cheaper alternatives come up with dupes.
So we see it a lot in fashion. It's
in other areas too. But what if you could
search for, like, this high end fashion brand name
(21:30):
paired with, like, white t-shirt with rough frilly edges
on the sleeves, and then you come up with, like,
the actual duplicate of that
Speaker 3 (21:41):
On this particular site.
Speaker 4 (21:43):
I agree that, like, these products that you're talking
about, and the categories, and the different ways, it's
not even just categories, but just the different features
of them, are very complicated. But what if we could
actually search for those based off of, you know, a
few words that describe what it is, but don't necessarily
use those same words you brought up?
Speaker 1 (22:03):
I think one of the major problems with giant online
search engines that are dedicated to the consumer buying experience, and
I'll say I think there's one that comes to mind
in the Western
Speaker 2 (22:15):
world, which is Amazon. They have a
Speaker 1 (22:17):
huge problem with, basically, I don't know if they call
them duplicates as much as counterfeits, where, if
you're not doing the manufacturing yourself, you've sent
the pattern to another company, and in their off time,
when they have extra resources, they just print out more
of your item, without the logo on it, or with
the logo, because they don't care. And then when you
(22:39):
go and search on any website for the thing
you're looking for, even with the brand name, you're getting
the competitor version that's just not as good, and people
can't tell the difference most of the time, and this
actually destroys a lot of small businesses that are doing it.
I'm wondering if there really is a smart strategy here,
where you can use this for fraud detection. And I
(23:00):
know, when I'm searching on Amazon, I always want, like,
to filter by results where the brand of the product
matches the store where the product is coming from. Like, that's
almost always what I want. You make your thing, and
I don't care what your brand is, but it should
match, and I always get suspicious when it's, when
it's something else. Yeah, yeah. I can see who, who
(23:23):
uses Amazon more than most?
Speaker 2 (23:25):
Yeah for sure.
Speaker 1 (23:26):
But I mean, there is an interesting thing here. Like,
do you think, not just for search generically for
end users, are there primary applications for a vector database
outside of search that you see as, like, this is
just now a new strategy, where, if you're not using
Pinecone or, you know, one of the competitors to
do this, you're really missing out on some of the
core value that could be provided?
Speaker 4 (23:48):
I think you touch on kind of a hard problem.
You mentioned fraud. I know, like, I don't...
Speaker 3 (23:52):
This is an area I don't know very much about.
Speaker 4 (23:55):
But I know that it is a use case that
we have seen people use vector databases for, to
find, uh, identify patterns of fraudulent use, that
type of thing.
Speaker 3 (24:07):
So I can see that, like, that could be a thing.
Speaker 1 (24:10):
Do you know what sort of customers Pinecone is primarily
looking for, like, or, you know, the ideal customer
profile that usually makes a good match? Like, do you
not pay too much attention to, like, the specific use case, and its
company segment or vertical, or do you see something about their
technology stack that is a good match for you?
Speaker 4 (24:30):
I'm working with developers to help them learn and understand
how to use Pinecone, how to use a vector database,
how to incorporate retrieval into their systems. But I would
say that, like, one of the things we look for
are people, or companies, with, like, large quantities
Speaker 3 (24:50):
of high dimensional data.
Speaker 4 (24:51):
So this is going to be your emails, your contracts,
your PDF documents, potentially video or audio, and
they need to get insight from it, whether that is, you know,
a search result, as far as, like, the e-commerce
example we've been using, or it is insights about a
(25:14):
particular, you know, business unit, and, like, how they are
operating or how they function. Right before this,
I was reading one of our case studies about a company.
It's a medical company that's doing research on medications,
and they are searching over molecules. Like, that's a lot, right?
(25:34):
And so, like, they are essentially embedding those molecules as
vectors and then doing searches over those, in order to
gain insights and do research on their work, in
order to develop medicine.
Speaker 3 (25:46):
So I thought, I thought
Speaker 4 (25:47):
that was really interesting, just in that it's not just
the e-commerce examples.
Speaker 2 (25:51):
I wanted to ask because you're on the other side,
Speaker 1 (25:53):
just, like, whether or not the ICP of the
potential customers matches the types
of questions or challenges of the engineers who come to
community workspaces to, like, specifically ask questions, like whether
or not they're already going in the right direction, and the
sorts of product areas they're focused on, is
a good match for that. Or do you see people just
trying to use vector databases in places that, like, have
(26:15):
no reason to be used there whatsoever?
Speaker 3 (26:18):
There's probably a mix of both.
Speaker 4 (26:20):
I see people who are they've heard of a vector database,
they've heard of pine Cone, and they like, they're.
Speaker 3 (26:27):
Like me, They're a developer.
Speaker 4 (26:28):
They like knowing, like what's the new shiny thing, and
so they want to learn about it. They want to
learn how it might solve their problem. More ideal customers
are people who have done are a little bit further
in their journey, and so they understand, like what is
the purpose of it, what are some of the problems
that they can solve? They understand that, like their their
problem fits in in some way, and we have a
(26:49):
team that helps them figure out how it fits in
and and how how to actually implement it at production scale.
Speaker 1 (26:56):
Do you see them coming at it from, first, the technical problem,
then realizing they need a vector database to store their
semantic embeddings, and then going to Pinecone? Or do
you see it as sort of a nuanced play
on top of generic databases, or even open source databases
that do offer a vector database? And it's like, well,
(27:17):
you know, if you're doing something in the space, you
may be fine, but if you need something more robust
or at scale, like, you would want to switch, in
a way.
Speaker 4 (27:27):
The developers that I've been interacting with,
I think they are first and foremost interested in a piece
of technology. They know that they have some data; they
want to make their retrieval augmented generation pipeline more accurate,
so they understand that retrieval is a piece of that.
And that's all I really know about
it so far. There's definitely other teams that I work
(27:50):
with that are definitely more in the weeds with
people, and, like, where they are in their journey as
far as, like, using a vector database, or coming
to Pinecone specifically.
Speaker 1 (27:59):
Just go and use Pinecone, don't, don't use... there's
no reason to use another generic vector database. Especially, I mean,
honestly, from the research I've done, trying
to build your own model to get the embeddings working
right is just such a huge lift in the first place.
And given the challenges of ensuring, like, similarity, like, you
(28:19):
can't just switch your model from one version... maybe, maybe
you're just going to tell me I'm totally wrong here.
Don't upgrade your model without also replacing all of your embeddings,
because the new results won't make any sense.
Speaker 4 (28:29):
If you want to swap out your embeddings, like, if
you've done evaluations, you've done testing,
you've figured out that, like, it's no longer accurate enough,
you're going to have to re-embed that data using a
different model. And so there are approaches to doing some
benchmarking and testing, to make sure that, like, your accuracy
is in the acceptable range your use case
can actually support, before you swap out the model
(28:52):
and re-embed that data.
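A rough sketch of that migration, with hypothetical index names and stand-in functions: every stored vector has to be regenerated from the source text with the new model, because embeddings produced by different models live in different spaces and are not comparable.

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")
new_index = pc.Index("products-model-v2")  # hypothetical name

def embed_v2(text: str) -> list[float]:
    # Stand-in for the *new* embedding model.
    return [0.0] * 1024

def all_source_documents():
    # Stand-in: re-embed from your source-of-truth text, not from the old vectors.
    yield ("item-1", "Red cotton shirt with short sleeves")

for doc_id, text in all_source_documents():
    new_index.upsert(vectors=[{"id": doc_id,
                               "values": embed_v2(text),
                               "metadata": {"text": text}}])

# Only after evaluating and benchmarking the new index on a sample set
# would you cut query traffic over from the old index to this one.
```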
Speaker 1 (28:53):
That makes sense to me, but that means that there's
a huge extra cost here to doing a model upgrade,
not just on, like, future searches and whatever. Like, even
if the model is faster, whether you're building it yourself, there's
some improvement, or using an open source one, there's some
driver there, but that's going to come with a cost.
And I can see that to be a huge reason
to just take that all off the
(29:14):
table, and, if you know you need some sort of
semantic search or some other strategy that uses embeddings, to
go with a database that has that baked in automatically,
without having to think about how to do upgrades between models.
Speaker 4 (29:27):
That's a big reason why we want to do evaluations
and benchmarking ahead of time, on potentially a smaller set
of data, before, like, committing: because you're right, it is...
Speaker 3 (29:39):
It takes time, it costs money to do this.
Speaker 4 (29:42):
Not everybody is going to have the time and money
and expertise to fine-tune or train their own model.
I've mentioned retrieval augmented generation a few times here.
I've been spending essentially the last quarter not only
digging in myself and trying to learn about the
different parts of it, but also different, like, approaches to
(30:02):
doing it, and sharing some of that. I've shared some
of that publicly, I'm doing stuff internally related to that,
and also, in the future here, I've got some stuff
going on. But we are seeing people doing this and
not fully understanding what they're doing. Shocker, yeah, right, exactly.
There are probably multiple reasons for that. Like, we could
(30:26):
go down the vibe coding rabbit hole of, like, how
that is contributing to some of the good
and bad parts of this. Like, obviously it's enabling
people to do more, and to, like, get further
into their problem, but it also potentially brings more challenges
than they even know how to handle.
Speaker 2 (30:47):
I really do have to dive into that.
Speaker 1 (30:49):
So, from a vibe coding standpoint, are vector databases
being recommended by coding assistants as a solution for a
problem? Like, is it just, like, oh yeah, you know,
generating some code, and it pulls in a way to
write to an open source database that requires embeddings? Or
is that just not happening yet?
Speaker 3 (31:05):
I do know that it comes up. It does.
Speaker 4 (31:07):
It does propose Pinecone and other competitors and stuff
like that. One of the things that, like, we have
a challenge with, and I expect other people do
as well,
Speaker 3 (31:17):
is, like, because these
Speaker 4 (31:19):
models are trained on old data, like, it's using our
old data, our old public documentation, and so it
isn't always the most accurate. And so we do
stuff on our end to try and encourage those
models, or those tools, not necessarily the model itself,
but the tools, to, you know, generate the right code,
(31:41):
the most up to date code.
Speaker 1 (31:43):
So we're still a little ways away from it always being
the right answer popping up in LLMs. How about
the, how about the LLM companies? So, companies that claim
they have some sort of AI, and are really
just, you know, an LLM that's solving a particular use case:
are they the cornerstone customer case, where, like, a
case study would be using RAG more often than
(32:06):
not? Or is it just, there's a spectrum, and
it really depends on the vertical, or market segment, or
product area?
Speaker 4 (32:15):
I mean, there's probably a lot of opinions there. I
think that, like, depends on who you talk to.
I think, like, for me, I see,
maybe the model companies are not necessarily advocating for retrieval
augmented generation. I mean, maybe they are, but, like, we
keep seeing these models get bigger and better and faster, right?
(32:37):
But there's still those limitations that I mentioned from the beginning, right?
Like, they're only trained up until a certain period of time.
Speaker 3 (32:44):
There's only...
Speaker 4 (32:46):
It's not trained on your company data, your
private data, and so on, among a few other limitations.
But those are kind of the key ones that people
really recognize. And so that's where this retrieval part of
RAG is coming in. It's the way to
give your model more knowledge, to give it more accurate
and authoritative knowledge.
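A bare-bones sketch of that retrieval step, where embed(), index, and llm() are hypothetical stand-ins for an embedding model, a vector index (see the earlier ingest sketch), and a chat model: fetch the most relevant private documents and put them in the prompt, so the model answers from authoritative data rather than only its training set.

```python
def answer_with_rag(question: str) -> str:
    # Retrieve: embed the question and pull the closest private documents.
    hits = index.query(vector=embed(question), top_k=3, include_metadata=True)
    context = "\n".join(match["metadata"]["text"] for match in hits["matches"])
    # Augment + generate: ground the model's answer in the retrieved context.
    prompt = (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
    return llm(prompt)  # llm() stands in for any chat-completion call
```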
Speaker 1 (33:06):
So, the primary use case, outside, like, semantic searching
or some sort of comparison search, is if you're utilizing it
to extend your data set, either from your own data
or something else that you want to pull in, and
that means you're using RAG.
Speaker 2 (33:20):
Then, if you're using RAG, that means you're using a
vector database.
Speaker 3 (33:23):
Not always. Oh, it's interesting. It could be retrieving from
some other database, if we think about it.
Speaker 4 (33:29):
I think you recently did, like, an MCP episode, right?
So that's one approach to kind of interfacing
with other tools and services. Or even, like, ChatGPT:
Speaker 3 (33:41):
it can go out to the internet.
Speaker 4 (33:42):
That is a tool that it's using to go get
other data and augment your result, your output, with more
accurate data.
Speaker 1 (33:49):
One thing I noticed, especially with companies that are still
incredibly technical but aren't specifically in any of the, quote
unquote, AI spaces, is that they definitely get the difference
between MCP
Speaker 2 (34:01):
and RAG wrong,
Speaker 1 (34:03):
often. Like, the bijection of how many of one of
these things they need versus another one. Like, one
of my colleagues is working at a very interesting
company, where they have a need to do RAG on
behalf of their customers. So they're pulling in knowledge
bases from their customers, and they're trying to figure out
how that interplays with MCP servers. And one thing that
(34:25):
has been problematic is, they don't want to own the data.
But at the same time, a lot of the MCP
providers don't have this concept of, like, multi-tenancy. Like,
they understand you as a customer can only access your data,
but they don't have a good concept of how to
group or sequester parts of
Speaker 2 (34:43):
the data into smaller, into smaller areas.
Speaker 4 (34:45):
From a vector database perspective, the way we've implemented
multi-tenancy is through namespaces. So if you think
of a company that is offering agents to their customers,
each agent, or each user that they have as
an agent, would potentially be within its own namespace.
Speaker 3 (35:03):
So that is a way to, like, actually segregate the data.
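A minimal sketch of that namespace pattern, following the Pinecone client's documented shape (the index and tenant names are made up): writes and queries are both scoped to a namespace, so one tenant's vectors are invisible to another tenant's queries.

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("agents")  # hypothetical index name

# Write tenant A's data into tenant A's namespace only.
index.upsert(
    vectors=[{"id": "doc-1", "values": [0.1] * 1024}],
    namespace="tenant-a",
)

# Queries are likewise scoped; "tenant-b" data is invisible here.
hits = index.query(vector=[0.1] * 1024, top_k=5, namespace="tenant-a")
```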
Speaker 4 (35:07):
I think you touched on something interesting, though, that, like,
not all the companies out there are AI-first companies,
or they are not, like, well versed in AI technologies
and solutions. And I think we're at the point where
that's going to become a huge thing, because
there are so many more of those companies out
there than there are AI companies that have been
(35:28):
doing this for a long time, right?
Speaker 1 (35:30):
I think we're going to have a fight on this episode.
So, you know, I'm going to quote some research, that
I think came out of the NANDA AI report from MIT,
that said only five percent of companies are getting value
out of implemented, like, I'm going to
Speaker 2 (35:46):
say, quote unquote, AI, because I don't think we have AI.
Speaker 1 (35:48):
That's a different episode where I got into that, but
we can say, we can say AI for the rest
of this one. They aren't getting the value out, and it's
just a huge cost sink, time sink, resource sink. If you
stand by that going forward, it's gonna be interesting.
I can either read that as you believe that companies
will transition to having AI, or all the companies who
(36:12):
don't do it will be out of business, and therefore
the only companies left will be ones who do AI.
Speaker 3 (36:18):
I don't mean the latter.
Speaker 4 (36:19):
I think there is going to be a transition, but
I think there's an opportunity there. My background is not
in AI. Like, me coming to Pinecone, this
is a new space for me, right? So I
bring the lens of the traditional developer who's been tasked
with a problem. And in this case, the problem these
days might be, you know, a problem that requires a
(36:41):
vector database. And so as someone who has been a
full stack developer, now I have to go out and
figure out, like how do I solve this problem with
this tool that I've been told to use.
Speaker 3 (36:50):
For sure, I
Speaker 4 (36:51):
think we're going to see more and more of that.
I definitely don't mean that companies are going to go away.
Like, obviously, like, people are still running on mainframes, right?
And so, like, it might not fit in that space;
technology sticks around for a really long time. I think
there's going to be some change, and it's not always
going to be easy.
Speaker 1 (37:08):
And I wonder what the turnaround is for this, because
I see companies still claiming that they're introducing Agile and
the like, or worse, that they've done it.
Speaker 3 (37:20):
I thought Agile was gone.
Speaker 1 (37:21):
Well, I mean, okay, so the manifesto was, like, in two
thousand, or, like, two thousand and one or something.
I'm terrible with years. I know it happened before I
really got into software engineering. I think, realistically, either companies
say they do it and they don't, or they acknowledge
that they don't do it, and that, for me, is,
like, mind boggling, because, from my standpoint, everyone should be
(37:44):
doing it all the time. I can appreciate the belief
that the same thing will happen with AI. But
if, you know, we're twenty-four years out, twenty-five
years out from the Agile Manifesto, then I think we're
forever away.
Speaker 4 (38:01):
I don't necessarily disagree with you. I think, like, there's
a lot of legacy code and applications. I mentioned mainframes, right? Yeah,
there are a lot of different companies who are just
at different stages of maturity in their organization. Some
are big, some are small, and not all of them
are going to get there at the same time. You're
probably familiar with it, but there's this curve of, like,
(38:21):
where people are at on the acceptance of a
particular product.
Speaker 2 (38:25):
Or they're crossing the chasm.
Speaker 3 (38:27):
Yes, yes, exactly, that's what it is.
Speaker 4 (38:29):
One thing that I think we have seen is people
are running more production workloads now with Pinecone, whereas
in the past, it has, like, it has been, I
don't want to say it has been less production, but
we are seeing more people kind of latch onto that
and doing stuff in production. And so, as we
(38:49):
see more and more of that, like, that's people,
like, learning how to do this. So, like I'm saying,
like, we're still very early in
Speaker 1 (38:56):
this. I'm totally on the same page there, and I
think, absolutely right, those workloads may or may not have
anything to do with LLMs. Like, we've solved, we've identified
a new functional way of storing data and searching it,
and there are primary applications where that's valuable. If you
talk about RAG, we're talking about knowledge bases, we're talking
about semantic search, where, you know, we're talking about searching
(39:17):
through tons of attributes, and fuzzy searching, free text searching,
that never really was that great to begin with. I mean,
whole companies have made their entire existence figuring out
how to actually do this correctly. And now we have
a fundamental technology, or understanding, of how to do this
at a principled level, with, you know, raw mathematics.
(39:37):
I do think that there is some coming around there,
and, you know, it will be interesting to see whether
or not the companies that gravitate towards needing vector databases
are more and more in the LLM space, or less
and less, as companies figure out the value that they
can extract from it. So I guess it'll be interesting
to see where Pinecone's customers end up a
(39:58):
few years from now. I do want to quickly, maybe,
ask you an opinion on something. I think one thing
that has happened in the last five years is companies
having to deal with LLMs being used during the interview process.
And I don't know if I should just stop the
question there and just let you say something, or if
I should specifically ask. Like, I'll just ask, yeah: has
(40:19):
this impacted Pinecone's interviewing practices? And, if yes or no,
like, are they seeing the use of LLMs intentionally by candidates?
Like, is it encouraged to be done, or have you
worked around trying to figure out how to deal with
the fact that these LLMs will be used during the
interviewing cycle?
Speaker 4 (40:39):
In my experience, and at my last job, I
did do a lot of interviewing to hire,
and it's definitely a thing; people use them. I did
not use them. It was not part of my interview process.
But I do think as long as like you're setting
the expectation of how and why you're using a tool,
(41:01):
then it
Speaker 3 (41:02):
makes more sense.
Speaker 4 (41:03):
I don't think that anyone should be like pretending they
know something when they don't.
Speaker 1 (41:08):
You just destroyed the whole industry there, like just that
one statement.
Speaker 4 (41:12):
Yeah, yeah. I think, just from
a practical perspective, like, we use tools in our day
jobs, and so, like, if I can't use Cursor or
Claude Code to write code and show how I would
do my actual job, then, you know, it's really
hard for an interviewer to understand how you're going
Speaker 3 (41:32):
to do your job.
Speaker 1 (41:33):
Interesting personal opinion, because, and I think this is a
pattern for a lot of our guests that come on
this show, previously you were working at Amazon. I
don't know if it was AWS specifically. I don't
know what it is with our podcast. Like, people
leave AWS after being there for, like, four years
or so, I didn't actually look to see how long
you were there, and then they're at a new job
(41:56):
for a couple of years, and then come on the
podcast and have some very interesting things to say.
Speaker 3 (42:01):
So no spilling of the details. Just under five years.
Speaker 1 (42:07):
There was a question, especially around the interviewing: did you
see things already starting to be rolled out at AWS?
Speaker 4 (42:12):
During my almost five years, I was interviewing people, and,
like, it's very rigorous; like, you go through training
to learn how to interview, and you've probably seen some
of the questions that we ask. While I was there,
there was no conversation, like, I didn't have any conversations
with my leadership about when or how LLMs are allowed
to be used during the process. I don't want
(42:34):
to say it was early enough that it wasn't happening.
I'm sure it was happening, but it was not a
part of my experience there.
Speaker 2 (42:40):
During the interviewing? I don't want to know.
Speaker 3 (42:42):
I think it's the... Yeah, I know, right.
Speaker 2 (42:44):
I'll say this: if
Speaker 1 (42:45):
you get through the interview round with us, you were
using an LLM, and then you continue to use an
LLM in your day job for however long you're here,
until the day you leave, and no one finds out,
was there any harm?
Speaker 4 (42:57):
For me, using an LLM as part
of my job has been embedded since ChatGPT
came out. At Amazon, not just AWS, but
at Amazon, like, we were tasked with figuring out how
to use this. Yeah. And same thing at Pinecone; like,
we're tasked with figuring out how to use these
(43:17):
tools, because everyone else is using them. And if they're
trying to use our products, or they're trying to do
stuff with Pinecone, then we need to know their experiences.
We need to know what the friction is, what's working,
what's not working, all that kind of stuff. When I say
I think it is okay to use
it during the interview process, as long as the expectations
are the same on both sides,
Speaker 3 (43:38):
It's because I use them every day.
Speaker 4 (43:40):
I fully recognize like I'm in a special situation. I
work for a company that deals with this stuff right
and builds this stuff.
Speaker 1 (43:47):
So this is actually why I like asking this question
specifically of people that are working at a company that's
in and around the space, where they're building tools or
support for things like it. It would be a weird twist
of fate where you're like, well, we don't know how
to deal with these interviews, where
the candidate comes in using an LLM, and, like,
the product that they're using is using some sort of
RAG that has a vector database that's running on Pinecone.
(44:10):
Like, you know, it's the sort of, like, weird circle
there, where it's probably something you should have thought about, but
it's good to, you know, realize: okay, no, actually,
not only our employees, but also, you know,
the developers that are utilizing the product to embed in
their own applications, they're utilizing these tools as an important
part of the flow, like, expectedly so. Understanding how they're
(44:31):
utilizing them is important for us to even design a
better application.
Speaker 3 (44:35):
Yeah, exactly.
Speaker 1 (44:36):
Maybe I'll give you the moment, in case there's anything
that we left out here that you're just like, I
really need to share about this thing. Be it a
major Pinecone release, the best feature that has
ever come out is about to drop, you know,
one week from now. I don't know what that is,
but if you know it, feel free to plug that.
Speaker 4 (44:53):
I don't have the next feature that's coming out, and
if I did, I probably couldn't say. So, it's a
cool technology. So, if you are a person who likes
to understand, like, how things work, we have a
lot of really good content on the Pinecone blog
about just kind of search in general, and, like, algorithms
(45:14):
in general, if that's the thing you're into.
Speaker 1 (45:17):
And whether LLMs are here to stay or not, I
do think that there is a unique innovation that's happened
here, and outside of that, it's definitely worth learning. So,
if this weekend you were going to spend some time
rewriting whatever your favorite JavaScript runtime is in a
new language like Rust or Zig, just because you can,
or an operator for Kubernetes, it sounds like, you know,
(45:38):
a better use of your time would potentially be to
invest in learning vector databases, and why not use
Pinecone for that?
Speaker 3 (45:44):
I agree with that. I think upskilling pays off in the
long run.
Speaker 1 (45:47):
With that, we can switch over to doing picks. So
my pick for this week is, how do I put this?
I found that I often plug things into my computers
and unplug them over and over again, and I frequently
get concerned about the reliability of
Speaker 2 (46:01):
the USB-C port.
Speaker 1 (46:03):
Everything I have is USB-C, especially, like, my YubiKey.
So I'm a big, you know, physical passkey user,
because I think too much about security, probably more
than I should, and so I spent a ton of
money, more than I probably should, on buying magnetic USB-C connectors.
So, you know, if you're watching the video, it's like,
this piece here is just, like, magnetic, which just, like,
(46:23):
goes together, and you can
Speaker 2 (46:25):
just walk around with this.
Speaker 1 (46:26):
Here's, like, a YubiKey, my micro one, and honestly,
it's just made it better. I plug it in, like, all
my, all my laptops, my cell phone; all my
connectors that are sitting out have a magnetic thing on them,
and it's just been, it's just been great for the
last few months.
Speaker 2 (46:40):
Like I can't believe I waited so long to do this.
Speaker 4 (46:42):
Is this magnetic piece, like, a protector, so it's a
cover when you're not using it? Or... I don't, I don't understand.
don't understand.
Speaker 1 (46:47):
So the piece is just a USB-C to USB-C connector,
so it's just USB-C, and this is also USB-C, and this
part is two pieces, so this gets connected. So, realistically,
I walk around with just this side, which is,
as you can't really see, just, like, a magnetic piece,
and they just snap together, and they're interchangeable.
Speaker 2 (47:05):
So like the same one I use for my cell phone,
like I could, I could.
Speaker 1 (47:08):
I don't know, no one's gonna be able to see this,
but if I hold up my cell phone with the
piece and I take my ubiki, like it will snap
onto here. But it's the same thing that also, like
if I want to charge my phone, it's the same connector.
And it's just been great, honestly because like I don't
have to worry about where on the actual connectors anymore
because I never get pulled in or pushed out. I
had this problem, like every single time I went in
(47:29):
the airplane. I have those one of those really annoying
mini phono three point five millimeter like splitters in the
plane just from my headphones, and I like always break
them on the plane, like I will bash into them whatever,
because you know the air airplanes famous for giving you
lots of room to move around in I.
Speaker 2 (47:48):
have ruined them, and with this, it would just come off.
just come off.
Speaker 4 (47:51):
Honestly, it reminds me of the old Mac connectors. It's
not USB-C, it's not Lightning, but it was like, if
you toggled it just a little bit, it would come
off, as opposed to, like, breaking off in the connector.
Speaker 3 (48:06):
Right, do you remember that?
Speaker 2 (48:08):
So, I don't own a Mac. Okay,
Speaker 1 (48:10):
I was somewhat jealous of people with the AC adapter
connector. I think it was MagSafe or something
like that, yeah, to connect. Yeah. And I don't know
why Apple got rid of it and then, like, brought
it back like that.
Speaker 2 (48:23):
It seemed like they were onto something.
Speaker 1 (48:24):
I do get that it falls off, but for me,
like, I'm not using it for, like, walking around with.
So, like, my YubiKey: I'm traveling somewhere,
it's really annoying to stick it into my laptop and
pull it back out again. This is, like, a really easy swap.
I don't know what it was that just, like,
clicked for me, like, I could actually do this. I
guess I never wanted to do it with, like, USB-A,
(48:45):
because it just seemed like such a legacy technology all
the time. But now, now that everything's USB-C, like, this
has been, yeah, absolutely fantastic.
Speaker 3 (48:54):
I'm gonna have to check it out.
Speaker 1 (48:56):
Yeah, be prepared to, uh... I think this is, like,
by Han send Da or something like that. I don't know,
some random brand on Amazon. I'm sure it's ripped off
from, like, another company that makes really good ones.
Speaker 2 (49:08):
These were actually not cheap.
Speaker 3 (49:09):
though. So it's the same one, just a different logo?
Speaker 2 (49:12):
Oh yeah, that's the thing. There's not even,
I don't know if there's a
Speaker 1 (49:16):
logo on it, actually, so, like, that's the most suspicious part.
Speaker 2 (49:20):
So that's gonna be my pick.
Speaker 1 (49:22):
You can't just buy one, though; you have to, like,
go all in, because, like, all your power adapters and
everything have to be connected. Otherwise,
Speaker 2 (49:28):
you're like, well, I have to pull out the plug
in order
Speaker 1 (49:30):
to switch it. So be prepared, if that's going to
be your future. All right. Okay, Jenna, what did you
bring for us?
Speaker 4 (49:38):
My pick is, I'll give you a little backstory first.
I have never really understood the allure of mechanical keyboards.
Back before microphones actually had, like, noise canceling on them...
Speaker 3 (49:54):
My mic's right here.
Speaker 4 (49:55):
So, back before microphones had noise canceling on them, they
were very clickety and clackety, and I just
Speaker 3 (50:02):
didn't understand them.
Speaker 4 (50:04):
And anyways, a couple of years ago, I bought one
and fell in love, and I will show it to you.
Speaker 3 (50:10):
It's right here.
Speaker 4 (50:11):
This is not my... This one specifically is not my pick.
But a couple of weeks ago, I bought another one,
because, well, first of all, I occasionally have wrist pain,
so I wanted a different layout, and I also wanted
a project, and so I bought one that you actually
have to put together. And so I'm only part
(50:32):
way through this. Like I said, I needed
a project. Like, I spend enough time in front of
my screen; I could actually build something with my
physical, like, physical hands. But it's kind of nice
to just have something, something else, that is not writing
code or being on a computer, but will still support
my computer use. So I got a different layout,
(50:54):
I got, you know, the keycaps, I got different
switches, which I think are a little bit different than
the ones I have here. So I'm excited to kind
of try it out and figure out if I like
this new one as much as I like this one.
Speaker 1 (51:06):
I thought for sure you're going to say that you
bought it and you still don't understand why people like
mechanical keyboards.
Speaker 4 (51:11):
Well, no, I mean, I'm definitely not like a fanatic
about it, like people are obsessed.
Speaker 1 (51:19):
I'd be careful if you say that. We're going
to lose some viewers if you, if you...
Speaker 3 (51:23):
No judgment, No judgment.
Speaker 4 (51:25):
When I was at Amazon, I wrote a blog post
about mechanical keyboards. I just kind of wanted to
understand, like, who uses them, like, who likes them,
and which one do you have? And I got a lot
of opinions on there. And
one of my friends, coworkers, commented, and he's like,
as soon as you start, like, building your own, then,
(51:46):
like, you've gone too far.
Speaker 3 (51:49):
So I've gone too far.
Speaker 1 (51:51):
I mean, I get, I get the ergonomics, for if
you have some sort of physical pain, like, you know,
that there's something wrong with what you're doing. Or if
you, I mean, it took me a lot of years
to realize, oh wait, my whole job revolves
around my keyboard; I probably should have one that fits
me as best as it can, and that includes, for
me, the keyboard layout.
Speaker 2 (52:09):
I totally get the physical
Speaker 1 (52:11):
result, like, actually having to push the keys, and whether
or not that's problematic. I have a thing for sound, though,
and so I did some research on, like, getting the
quietest keyboard possible, and a bunch of people said, oh,
they make really quiet mechanical keyboards, and I'm like, okay, sure.
So I probably spent, like, a whole bunch of hours
going around to different shops, pushing the keys on different
(52:32):
mechanical keyboards, in multiple countries, and, like, going online and,
like, listening to audio clips of the mechanical keyboards, and
after that, all I can conclude is, those people have no
idea what they're talking about.
Speaker 2 (52:44):
Mechanical keyboards are not quiet in any way, even, like,
the quieter
Speaker 1 (52:49):
switches. Like, they're not, they're not quiet. And actually,
my pick in a previous episode was my keyboard,
which is designed to be a silent keyboard, so people
can hate all they want on that. But I still
will not get mechanical keyboards, unless you have a physical ailment,
because then, you know, I get it, I actually do.
Speaker 4 (53:10):
Yeah, I also, both times, went down
a rabbit hole trying to figure out, like, which one
should I get.
Speaker 3 (53:16):
I didn't go quite that far. I didn't.
Speaker 4 (53:18):
I didn't actually go to physical stores; there's
not many here that sell what I was looking
for. But you're right, they're still not quiet, even
though, like, I got, I think, one
of the quieter sets of switches, the smoother
ones that don't make as much of the clickity clackity.
There's still some sound there.
Speaker 1 (53:39):
So I have to, I have to ask you: what's
the keyboard brand and model that, yeah, you purchased?
Speaker 4 (53:45):
So, the one, the one that I showed you
is a Keychron Q1 Pro wireless. I
think it's, like, the seventy-five percent. I'm
holding it up for the people.
Speaker 2 (53:56):
They look small.
Speaker 4 (53:57):
Yeah, it doesn't have, like, the number pad on the side,
so it's the seventy-five percent layout. I think the
new one that I got is an Alice layout, so
it's a little bit more rounded, so that where your
hands are, I'm rotating my hands, your hands are
in a little bit more natural a layout, as
opposed to straight up and down, like you would on
(54:17):
kind of a regular keyboard.
Speaker 3 (54:18):
And it's a little bit bigger too.
Speaker 4 (54:20):
One thing that I've noticed with this one here
is, it's narrower. It's seventy-five percent,
so it's narrower, and so, like, my hands are closer together.
And the one other piece that I like about this
brand specifically, or these, is that it's pretty hefty.
Like, this is several pounds, and so it's solid.
(54:41):
I don't know, I just, I like the feel of it.
Speaker 2 (54:42):
It feels like it's real, as opposed to fake.
Speaker 3 (54:45):
Yeah, yeah, it's not going to go anywhere if I
get like carried away.
Speaker 1 (54:50):
I think we have probably, like, one guest per year
on the show who calls out their mechanical keyboard
Speaker 4 (54:55):
as their, as their pick?
Speaker 2 (54:56):
So you're, you're in good company. Okay, good.
Speaker 1 (54:58):
Yeah. So, I think we'll call it the end
of the episode there. Thank you so much, Jenna, for
coming on and being our target practice for today's episode.
Speaker 3 (55:10):
Thank you so much for having me. It's been, it's
been a good conversation.
Speaker 1 (55:14):
It has been, I've enjoyed it. And I just want
to personally thank Attribute one last time for sponsoring today's episode.
Speaker 2 (55:19):
And I hope to see all
Speaker 1 (55:20):
of our viewers and listeners next week.