
May 23, 2024 34 mins

This week, Mark and Shashank sit down with Kobie Crawford, a developer relations expert, and Nishant Deshpande, a specialist solution architect, to explore the impact of Databricks' acquisition of MosaicML. The guests share insights on the importance of proprietary data control, the benefits of customizing AI models, and the efficiency improvements in model training. The discussion covers the architecture and advantages of the new DBRX model, the shift from traditional NLP tools to LLMs, and the expanding applications of generative AI in various industries. Tune in to learn about these groundbreaking innovations and hear about upcoming events like the Data and AI Summit.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Let's get started.

(00:02):
We have Kobie Crawford here at the Databricks Mountain View
offices.
Kobie is a developer relations expert.
Welcome.
And thanks for joining us.
And Mark is here too.
And hello everybody.
Do you want to tell us a little bit about yourself?
What brought you to Databricks and what's on your mind?

(00:23):
Sure.
Well, I'm in developer relations at Databricks.
I came over through the acquisition of MosaicML.
I was working with MosaicML, doing technical marketing
type things, connecting with the research community, as well
as connecting with practitioners,
to try to help get the word out
about MosaicML and what we were doing,
and get people interested in scalability

(00:45):
and the efficiency things that could be done around how
to improve where ML was headed at the time.
Since we got acquired, we've been continuing that mission,
making sure that people can appreciate
the idea that they can actually get AI that works specifically
for them, works well for them, by customizing it for their own

(01:07):
data.
We see in general that people have--
essentially, there are two narratives that tend to go
out there.
The first one is essentially, whatever you're
going to build, you can get a general-purpose model that's
exceptional.
The top models out there, GPT-4 and GPT-4o,

(01:28):
and things like this are really amazing in terms
of what they're able to do in a general-purpose sense.
But what we find in general around how people may ultimately
want to see their application development go
is that they may want to be able to customize.
There are companies that have data that is intrinsically

(01:49):
proprietary.
There's also data that's protected by regulatory concerns
and things like this, privacy concerns.
And in each of these cases, you want to have the ability
to keep control of your data.
You want to be able to make sure that what you're doing
with your data is something that you have full end-to-end
control over, with that in mind--
and also with what the model should be able to do for you in mind.

(02:13):
Customizing your own model becomes something
you actually expect to do--
these are pieces that companies all should want to do.
So that is leading us down this path.
What we have is a platform on which people can do all
of the things around their data, around every aspect of what
they want to do, including training AI models.
And so the Mosaic acquisition fits into the larger
Databricks picture, to give people the ability
to do large language models and other
really high-demand capabilities
as part of what they put together for their applications.
Pretty cool.
I'm actually not familiar with MosaicML.
Can you maybe describe what Mosaic did?
And then maybe highlight what Databricks does

(02:57):
just for listeners who aren't familiar with that?
Sure, sure, sure.
So starting with MosaicML, the company
was founded with the idea-- in fact,
our co-founder and CEO, Naveen Rao,
wanted to call it Mosaic's Law-- that we
could make model training four times more efficient

(03:21):
year over year, every year, to make efficiency
the key thing that you pursue.
That way, we'd have a platform that was giving us
very efficient model training, as we saw
models getting larger and larger
and the computational needs just ballooning,

(03:42):
that would be a really important thing.
So that was really how the business got started.
And we worked on delivering efficiency
through algorithmic improvements in software,
and wanted to show people that there were actually
ways to go about that.
There was a thing that our chief scientist,
Jonathan Frankle, talked about.

(04:03):
He said,
the math is not sacred.
The idea behind how ML works, how you actually build
these models-- there are ways to look at what we could do,
actual algorithmic changes, that would still
yield models that deliver the performance that you're after.
And so that was where things got going.
And as we went forward, we ended up

(04:24):
training a 7-billion-parameter model, MPT-7B,
which we released and made open,
so people could actually go ahead and leverage
and use it.
And the performance that it delivered at the time
was a big boost over what people had with public models
at the time, at the size that we were delivering.
Yeah.
And I think that was one of the leading open source models
at the time.
Yeah.

(04:45):
Since then, we have had a couple of other models.
And of course, Databricks has its own model.
Yeah.
The pace at which people are delivering and
releasing new models that do just that much better
is dizzying.
It's really wild how many new models are coming out.
And I think maybe we kind of touched on it.
But Databricks came out with a really cool state

(05:08):
of the art model, the DBRX model.
And also, I want to just thank you for making that open source.
I think the world needs more open-source models,
and now DBRX is out there.
So yeah, thanks for giving that back to the community.
So maybe, I don't know, would you be able to talk a little
about the DBRX?
Like how does it compare to maybe some other models?

(05:31):
Like what are maybe some of the use cases you've seen people
using it for?
And I'm also curious.
I assume it builds on a lot of the work
from Mosaic's MPT model, right?
It does in some ways.
Yes.
In fact, one of the things that we have talked about
is that our approach to this model was
able to get more efficiency-- speaking of efficiency again--

(05:53):
more efficiency in terms of how much model improvement
we could get per token applied to training.
And so we were able to even do some comparisons
that showed that if we had gone back and used some of the techniques
from DBRX to train a model that was the same size
and same architecture as MPT-7B, even that model,

(06:13):
with our improvements in efficiency
that we developed since then, even that model would be better
at the same size, because of the way
we improved the efficiency of how we process and do things.
So those efficiency improvements,
plus a non-trivial architectural shift,
which was the move to mixture of experts.
And for people to--

(06:34):
I'm trying to do my best to give a quick 30-second version
of how MoE, the mixture of experts, works
within the context of the model. At a high level,
take one step back: the thing still is one input, one output.
Output tokens are still being generated.
Standard auto-regressive behavior of a language model.
So that hasn't changed.

(06:54):
None of that has changed.
But within the model itself, what you do is you make it so
that for any given token production step,
in terms of the prediction being done,
you don't actually invoke all of the neurons.
You invoke some subset of them.
And this is where this notion of experts
comes into play, where there's an initial routing step

(07:15):
that happens that actually makes a choice of a particular subset
of the rest of the neurons in the model that are going to be fired.
And then those additional ones are fired at that level.
So in our model, there were 16 experts.
And for each token, four of them would be invoked.
So the model's total size is 132 billion parameters,

(07:36):
but for any one token being processed,
only 36 billion parameters were invoked at that time.
It doesn't come out to exactly four out of 16--
it doesn't come out to exactly 25%, since parts of the model
are shared across experts; 36 of 132 billion is closer to 27%.
You can go into details about the architecture
to sort of fill in the blanks.
But at a high level, you're getting a model
that can deliver performance closer
to that 132 billion parameter size,

(07:58):
while only using 36 billion parameters worth of computation
per token generation.
So it becomes much more efficient, again,
fewer computations per token generation,
while at the same time still being able to deliver
performance comparable to larger models.
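For reference, here is a minimal mixture-of-experts sketch in Python/PyTorch. It is an illustrative toy, not DBRX's actual implementation: the layer sizes and the softmax router are assumptions, and only the top-4-of-16 routing mirrors the numbers Kobie mentions.

```python
# Minimal MoE layer sketch (illustrative only, not DBRX's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    def __init__(self, d_model=512, d_hidden=2048, n_experts=16, top_k=4):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts)  # the initial routing step
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, d_hidden),
                nn.GELU(),
                nn.Linear(d_hidden, d_model),
            )
            for _ in range(n_experts)
        )

    def forward(self, x):  # x: (n_tokens, d_model)
        scores = self.router(x)                         # (n_tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # choose 4 of 16 experts
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        # Only the chosen experts run for each token; the rest stay idle,
        # which is why active parameters are far fewer than total parameters.
        for slot in range(self.top_k):
            for e in idx[:, slot].unique().tolist():
                mask = idx[:, slot] == e
                out[mask] += weights[mask, slot].unsqueeze(1) * self.experts[e](x[mask])
        return out
```

Per token, only `top_k` of the `n_experts` expert MLPs execute, which is the sense in which a 132-billion-parameter model spends only about 36 billion parameters of compute per generated token; attention and embedding parameters are shared, which is why the ratio isn't exactly 4 out of 16.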
So that's probably the most important thing

(08:20):
that we've actually made a path through which people
can have a similar MoE development and training experience,
where you can actually take an MoE and train it on our platform.
And we've shown with DBRX that this can be done,
and that you can deliver a very high-quality model.
Yeah, it's very exciting that you were able to make

(08:44):
such big performance gains.
And honestly, I think that the DBRX model kind of came out
of nowhere.
I don't think that Databricks, in terms of like
ML and AI was--
it wasn't really on my radar.
Like, everybody knows OpenAI, and how
Microsoft, Facebook, and everybody's doing stuff.

(09:06):
And then you look at the benchmarks
and go, wow, what is this DBRX model?
It just came out of completely left field.
So it was really incredibly exciting.
Have you noticed any particular use cases
people have had, maybe with the DBRX model,
or just with the Databricks platform in general?

(09:29):
The use cases are kind of still--
there's growth in a lot of areas, of course.
But like, a lot of things that people kind of typically expect,
you know, making chatbots and things like that,
are still by far the most popular thing
that people are doing with them.
I don't have like a set of things that I know I can talk about
in terms of like, what our customers are doing,

(09:51):
that I know is fair game to be discussed.
So I'm not in a great position to do that.
We do have a number of customer stories that
are published on our blog.
And I would end up referring back to those
to make sure to just safely talk about the right use
cases.
But it is actually really nice to see how the use cases are

(10:12):
starting to diversify.
You know, the chatbot is still popular,
but it's like there are more things that people are coming out with.
And as we start to see other things--
and I'm going to segue right into talking about compound AI
systems here--
as we start to talk about what else
people want to see generative AI contribute to.
A lot of the use cases are now talking about like,

(10:32):
what do we want to have happen as a result of having an LLM
involved?
We take natural language.
We have it come in.
We turn that into SQL and now query databases.
And so like that's an example of something
where you're going to do a standard SQL query on the result
of this, that the LLM has been able to take
a person's spoken input and generate that call.
And then go get the data that way.
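For listeners following along, here is a minimal sketch of that natural-language-to-SQL pattern. The `call_llm` helper is a hypothetical stand-in for whatever model endpoint you use, and the schema and question are invented for illustration.

```python
# Hypothetical text-to-SQL flow (a sketch, not a specific product API).
import sqlite3

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM endpoint call (hypothetical)."""
    raise NotImplementedError

SCHEMA = "CREATE TABLE orders(id INT, customer TEXT, total REAL, ts TEXT);"

def answer_question(question: str, conn: sqlite3.Connection):
    prompt = (
        f"Given this schema:\n{SCHEMA}\n"
        f"Write a single SQLite query that answers: {question}\n"
        "Return only the SQL."
    )
    sql = call_llm(prompt)  # the LLM turns the person's input into SQL
    # A real system would validate the generated SQL before running it.
    return conn.execute(sql).fetchall()  # then a standard SQL query runs
```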

(10:53):
And that's a thing that's becoming more common,
for people to do that kind of work.
Similarly, from there, you can imagine sort of more generic
function calling, not just using SQL language,
but other things where people want to take in some inputs
and pass in a call to another thing that might be,
you know, a JSON block that gets generated by the LLM
and then make that call to another API

(11:14):
that's going to generate some response,
just based on a standard REST API.
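A sketch of that more generic function-calling shape, under the same caveats: `call_llm` is a hypothetical stand-in, and the endpoint URL and argument names are invented.

```python
# Hypothetical generic function calling: the LLM emits a JSON block, which
# is parsed and forwarded to an ordinary REST API. All names are invented.
import json
import urllib.parse
import urllib.request

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # stand-in for any LLM endpoint

def run_tool_call(user_input: str) -> str:
    prompt = (
        'Emit only JSON like {"city": "..."} describing the weather lookup '
        f"the user wants: {user_input}"
    )
    args = json.loads(call_llm(prompt))  # the JSON block the LLM generated
    url = "https://api.example.com/weather?city=" + urllib.parse.quote(args["city"])
    with urllib.request.urlopen(url) as resp:  # a standard REST API call
        return resp.read().decode()
```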
Yeah, I mean, I think with that kind of workflow,
you could really imagine some really powerful things
being built.
Because if you can have an LLM which
can call an API, and then that API can return a response,
and the LLM can go do something based off of the response,

(11:34):
I mean, the sky's really the limit there,
right?
I mean, these are very exciting times we live in.
And I think that the models just recently
have become good enough to actually make
that type of workflow possible.
So I feel like I'm really excited for the next six months,
when we start to see people actually leveraging these models

(11:58):
to make, like, agentic workflows.
And also, I think because the models are getting smaller
and also good, you're going to be able to do that
much more cheaply than having to pay for an API
call every time.
So really exciting stuff.
Yeah, so I think I want to be respectful of time,

(12:18):
because we are hosting an event here at Databricks' Office
in Mountain View.
So that's coming up soon.
So I want to wrap this up relatively soon.
But are there any kind of events or things
that you want to plug that maybe we didn't talk about?
Yeah, absolutely.

(12:39):
We've got Data and AI Summit coming up June 10th to the 13th.
That's going to be hosted at the Moscone
Center in San Francisco.
We're really excited about that.
It's a big thing.
We're expecting something like 14,000 people.
And it's already sold most of the way out.
So for people who are interested and want to come out,
you should really go ahead and jump on and get on board.

(13:01):
So we're really fired up about that event.
People are going to be there to talk about all things
Databricks, across the full spectrum of what Databricks
offers, and everything else
that goes into what the company does, which involves
sort of a unified governance platform for everybody's
data.
So you talk about any given organization

(13:22):
has a variety of sources of data.
Say it's your manufacturing space.
You've got all of your inputs that are going into how
your product is being built and whatnot.
That may be internet of things data as well as a bunch
of other things.
You could have just standard financial data things.
Companies aggregate so much data when they're
doing all their business processes. And the idea behind Databricks

(13:45):
as a company is that once you have all of that data
managed in a single place, in the Lakehouse--
or the Data Intelligence Platform,
as we talk about it now--
you have the opportunity with that to be
able to get insights that cut across departments,
that cut across the different functional areas
that you work in, and you're able to yield the kind of insights

(14:06):
that are going to help companies really succeed.
And the way our CEO, Ali Ghodsi,
talks about it, he's like, the best company in every sector
of the industry is going to be an AI company.
Using AI is how they're going to get ahead in the world.
So that's what we put on display for people
at Data and AI Summit, the full spectrum of things,
including the parts that get done with LLMs.

(14:29):
So they all get put together into a large package
and it works pretty well.
So that's a good--
OK.
Great.
Yeah, that sounds good.
Yeah, we'll try to put the link to that, maybe in the show
notes,
so that listeners that are interested
can come and just click on that.
So anyways, yeah.
Thanks so much for the time.
I really appreciate it.
Mark and Shashank, it's been great to be with you.
Thanks for making the time for this to happen.

(14:51):
Of course, this was a pleasure.
Thank you.
You're welcome.
All right, Mark.
Yeah, let's get started.
So we're back.
So we just interviewed Kobie.
But now we have a new person from Databricks, which
is--
maybe would you be able to introduce yourself
and talk about what you do at Databricks?
Yeah, sure.
So, my name is

(15:12):
Nishant Deshpande.
I'm a specialist solution architect, called SSA internally.
And what that basically means is I help our customers
use Databricks, and help them when they come up with problems.
I particularly work on--
so, I've been at Databricks two years.

(15:32):
I mostly have been working on optimization problems,
Spark architecture problems.
But then lately, since we've had a lot of people
trying to use Databricks for gen AI stuff,
I've also been helping people with gen AI.

(15:56):
And so I have some background in machine learning.
I've gone between data stuff and machine learning stuff
throughout my career.
So yeah, I'm getting back into the gen AI phase of machine
learning, I guess.
So unlike Kobie, you didn't come from the Mosaic acquisition.
You were at Databricks for a while.

(16:18):
Maybe I'm curious about how that acquisition and merging
of two separate companies is panning out,
because I've worked at Google for a while
and I've been through merging with Nest and Fitbit
and the hardware team.
And I think like acquisitions and merging cultures,
engineering, backgrounds, and priorities

(16:40):
and stuff is a little challenging.
And as far as I know, I don't think Databricks
had much of a focus on building ML models.
And it seems like a relatively new thing,
especially with the new DBRX model,
where you guys are building models yourself
and releasing them open source or having bigger models,

(17:01):
which are closer to the state of the art.
So how's that focus changing after that acquisition?
I don't know if I'm going to be able to enlighten you
that much to be honest, because I haven't been
interacting a huge amount with the Mosaic team, et cetera.
So the simple answer is I couldn't give you much insight.

(17:29):
But you mentioned that even your focus has shifted from working
on Spark and stuff like that to gen AI work.
Yeah, I don't know.
It shifted more.
Like obviously a year ago, I guess, a year or a bit ago,
we didn't have everyone trying to use gen AI
in some capacity.
I think ChatGPT was out in November.

(17:53):
Something like that.
Like a year and a half ago.
And I think it's only in the last year that everyone
has used GPT personally.
And everyone's like, we need to use this for XYZ
for our professional work.

(18:14):
So everyone's trying to do that.
So the point being that I still do a lot of performance
engineering and optimization, stuff like that.
But yes, there's some more focus, I would say,
on people coming with gen AI and trying

(18:34):
to work out how to use Databricks.
Especially people that already do a lot of their data
processing in Databricks.
So it's the place where they're already doing a lot of data
stuff.
And then it's a natural extension to say, OK, how do we
take that and use it for gen AI?

(18:56):
So those customers that I worked with before
on data pipelines and BI, and how to use Spark well for all
of that-- now those things extend into, OK,
we have all this data.
How do we fine-tune a model?
Or how do we use RAG with all that data?
So that's-- I don't know if that answers your question, but--

(19:18):
Yeah.
I guess so.
Can you take us through an example of like a customer
archetype that has data in Databricks and that's doing one
of the things that you mentioned, either fine tuning or building
a RAG on top of it, and how the Databricks suite of tools
helps with that?
Or just anything that people have
been able to take advantage of.
Or maybe not something people have already done,

(19:40):
but something based off of everything
new that has been built, any kind of future use cases
that you think you would like to see built.
I think the fun thing is almost seeing what
people are doing.
So they had-- I don't know, let's say--
I won't give out names of companies,

(20:01):
because I don't know which companies are confidential,
and so forth.
So just a word on that.
But I worked on something where this company
was ingesting PDFs.
And they were doing that with more traditional tools--
OCR-type tools scanning the PDFs,

(20:22):
getting the text out.
But then the images would cause problems.
When the documents had different formatting,
it would cause problems, on and on.
And now with gen AI, they just switched.
And they're like, we're going to use these multimodal models.
We're not going to use the old thing anymore at all.
Or take entity extraction.

(20:44):
This was something that we had open models to do.
And there are lots of them out there.
And now people are just like, I'm going to use some LLM.
And I'm going to just prompt it to say, extract entities
from this, and tell it what I want.
So all of this NLP stuff that we used before to do like entity

(21:05):
extraction, which was a very common use case,
has now shifted-- or, I should say,
I don't know if it's all shifted,
but a lot of it has just shifted to LLMs.
Or take classification.
There are lots of classification models out there--
machine learning models that people have developed over a
long time and so on.

(21:26):
And now you see people literally just calling an LLM saying,
here are two or three examples of classification.
Now classify this.
So I guess a few-shot prompt with an LLM is better
than traditional NLP classifiers or entity extractors?
I mean, I couldn't say better in the sense that I have data

(21:50):
to say it's better.
But I'm saying that I have seen people switch to using LLMs
for that type of task, where before they were using more
traditional models.
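For listeners following along, that few-shot pattern looks roughly like the sketch below. The labels and examples are invented, and `call_llm` again stands in for any model endpoint.

```python
# Few-shot classification by prompting (sketch; examples are invented).
def call_llm(prompt: str) -> str:
    raise NotImplementedError  # stand-in for any LLM endpoint

FEW_SHOT = """Classify each message as 'billing' or 'shipping'.
Message: "I was charged twice." -> billing
Message: "My package still hasn't arrived." -> shipping
Message: "{text}" ->"""

def classify(text: str) -> str:
    # Two or three labeled examples go in the prompt, then the new input.
    return call_llm(FEW_SHOT.format(text=text)).strip()
```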
So it probably-- or it's simpler.
And that's what I was going to say.
I wonder if it's just a lot easier to use an LLM as like a
shotgun approach to just cover every single NLP use case

(22:11):
as opposed to figuring out this specific NLP tool to
extract entities and so on.
Even though the NLP approach might be a lot more cost
effective and more targeted with these LLMs, I think you
have to format the output a little bit to parse specific
data points that you're trying to extract.

(22:32):
Because there's a lot of variation in the output.
So despite NLP approaches being a lot more robust, I think
LLMs are a lot simpler to use.
Yeah, I think there are a lot of terms being thrown around
here.
We've got-- just for those listening, we talked about entity

(22:52):
extraction, classification, and also NLP.
Maybe can we try to define some of these terms so
everybody's able to catch up to the conversation.
So maybe let's start with just like what is NLP?
Yeah, I can try to take a stab at it.
NLP is, like, traditional natural language processing,

(23:14):
where you try to understand what someone is saying.
This can be Google trying to understand what the user's
query is when they're typing something in the search
engine, or someone searching for something on Facebook,
eBay, whatever.
You're typing a phrase and you want to extract the intent
behind that.
You want to extract some meaning out of it.

(23:35):
And within NLP, there's a lot of different tasks or focus
areas.
So one of those is entity extraction.
So for example, on Twitter, someone posts something, you
kind of want to look at a bunch of tweets and find out,
OK, how many people are talking about Elon Musk, for example,
or how many people are talking about Trump and whoever
else that you're interested in.

(23:56):
And moving on to some of the other focus areas within NLP,
there's classification.
So I guess maybe if you're a marketing agency,
or say you're an advertising platform.
You want to classify content in different areas.
So if you want to show ads within your platform,

(24:19):
you want to show ads that are within a specific category
next to that specific type of content.
So you maybe want to classify tweets as related to fashion
or finance or health.
That way, if you have advertisers, you
can show those ads next to those tweets.

(24:40):
And there's a bunch of other NLP workloads that have been used.
And these use parts of speech to break down a sentence
into its semantic structure with subject, object,
verb, articles, nouns, pronouns.
And there's a lot of traditional ML that is focused specifically

(25:03):
for this type of task.
And then you break the text down into a bag of words,
and use those bags of words to see which words
occur most frequently with a category-- for example,
shopping-- so you can classify:
OK, since there are a lot of words related to shopping,

(25:23):
this would be classified as shopping.
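For contrast, that classical pipeline-- bag of words plus a simple classifier-- looks something like this with scikit-learn; the miniature training set is invented for illustration.

```python
# Classical bag-of-words text classification (sketch, toy data).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "great deals on shoes and handbags",
    "stock prices fell sharply today",
    "weekend sale on sneakers and jackets",
    "bond yields rose after the announcement",
]
labels = ["shopping", "finance", "shopping", "finance"]

# Word counts become the features; the classifier learns which words
# co-occur with which category.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["discount sale on shoes"]))  # -> ['shopping']
```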
But now with LLMs, I think there's
a lot of magic happening with these transformers.
And they do a much better job at understanding
the meaning of--
a phrase, a sentence, a paragraph--
much better than any of these manually constructed rules

(25:46):
that have been used in NLP.
I don't know if that makes sense, or if you--
Yeah, no, I think that makes sense.
Yeah, like I said, I don't know if it does a huge amount better.
I honestly don't know that.
But it is so accessible to anyone, since we're
talking in English, not in a programming language.

(26:09):
And so people clearly prefer to just tell a prompt, extract
me any names you see in this text, than working out,
oh, I need to download this spaCy thing,
or this, I don't know, pick one of them,
and then I need to code it.
And then how do I now call it?
Oh, that's dumb.

(26:29):
Yeah, that makes a lot of sense.
To me, just thinking about--
you mentioned entity extraction.
When you said extract names in the text, right?
I mean, you could go and read a PDF,
and then pull out the interesting parts of the PDF.
And then I would imagine--

(26:49):
I'm actually curious.
Were you able to do entity extraction on data
that you would store within Databricks as well?
So if I have a bunch of data stored,
in any one of your types of data stores,
would I be able to maybe run some queries and do some LLM

(27:09):
workflows on top of that?
Yeah, so we've had workloads that would go over data in a data
lake and tables or whatever, and call out to APIs, which

(27:30):
formerly were our own created UDFs, which
would import some LLM library--
not LLM, I'm sorry, I mean an NLP library--
do a bunch of work, and spit out whatever it was doing.
PII is another good use case, right?
There were all these toolkits that were trying to find PII.

(27:54):
And now people just call an LLM and say,
from all this text, tell me if it has PII, or extract it.
But point being, you can call--
so with Spark, which many of our things run on, that runs over lots
of other data.
It can take pieces of data and make function calls, make

(28:15):
API calls.
So before, those API calls were to some NLP library;
now those API calls are to some LLM endpoint.
So yeah.
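Shape-wise, that pattern looks like the sketch below. `query_llm_endpoint` is a hypothetical client for whatever serving endpoint you deploy, and the `documents` table with a `body` column is invented; this is not a specific Databricks API.

```python
# Sketch: a Spark UDF that calls out to a model endpoint per row.
from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

def query_llm_endpoint(prompt: str) -> str:
    raise NotImplementedError  # stand-in for a real model-serving client

@udf(returnType=StringType())
def extract_entities(text):
    # Formerly this body would import an NLP library and run it here;
    # now it just prompts an LLM endpoint instead.
    return query_llm_endpoint(f"Extract the named entities from: {text}")

spark = SparkSession.builder.getOrCreate()
df = spark.table("documents")  # assumed table with a `body` column
df.withColumn("entities", extract_entities(df["body"])).show()
```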
Yeah, that is a really exciting use case.
You mentioned identifying PII.

(28:36):
So for those that don't know, PII is personally identifiable
information, so it'll be things like names, emails, addresses,
credit card numbers, bank account details, whatever.
All the stuff that you may want censored,
so I think that's a really interesting use case to say,
hey, find me this, then take it out of the response.

(28:57):
So then you don't accidentally, inadvertently
start saving information that you don't want.
Because oftentimes this data could be really unstructured,
and maybe hard to know exactly where the PII is.
And yeah, I think that's a really interesting use case
for LLMs.
Maybe there are tools out there that
potentially do it better, but those may be more expensive.

(29:19):
And LLMs would be just really easy: hey,
find the PII and take it out.
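A minimal sketch of that prompt-based redaction idea-- again with a hypothetical `call_llm`, and not a substitute for purpose-built PII tooling.

```python
# Prompt-based PII redaction (sketch). Real pipelines would validate the
# output and likely combine this with dedicated PII detection tools.
def call_llm(prompt: str) -> str:
    raise NotImplementedError  # stand-in for any LLM endpoint

def redact_pii(text: str) -> str:
    prompt = (
        "Replace every name, email address, phone number, street address, "
        "and account number in the following text with [REDACTED], and "
        f"return the text otherwise unchanged:\n{text}"
    )
    return call_llm(prompt)
```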
Yeah.
And again, I don't even know if it's necessarily
better, honestly; maybe LLMs do it better.
But if you think about--
and yeah, you've all heard about this,
there's actually regulation in many industries
that says you have to--
you can't keep PII for X time, or you have to remove it

(29:42):
if people ask for it at all.
So that's number one.
But then think about, like, if you're fine-tuning--
in some way, training an LLM, right?
You're feeding it information.
Imagine if like your address and social security number
and stuff is in that information.
And then someone asks a question, and out comes

(30:03):
like your address and social security number
as an answer to all kinds of stuff, right?
So yeah, I think it's becoming-- as people are working it out,
more and more people are training or fine-tuning. Maybe six
months ago, right,
there were, I don't know, in the US, maybe 20 companies

(30:24):
that were fine-tuning an LLM, right?
And now there are hundreds, maybe thousands.
Like we've seen lots of use cases at Databricks.
So I'm imagining there's, I don't know, 100x that out there,
training, or at least fine-tuning, LLMs,
with data that they have.

(30:46):
And a lot of it's going to be internal data and so on.
So yeah, I mean, it is a systems problem.
And I think the challenge is, how do you make sure
that you train with the right data
and don't inadvertently introduce things that you don't want to,

(31:06):
basically?
So yeah, so I think we get--
a lot of people asking, how do we do that?
How do we make sure that happens?
And so that's not unlike other systems problems, right?
It's slightly different, but it's still data,
and we need to make sure

(31:28):
that it's used properly and goes through different checks
and filters and whatnot.
And you can see what data was used and all this type of stuff.
So a lot of the problems are similar,
but now they're being applied to feeding
an LLM in some way.
So--

(31:49):
Cool.
Yeah, I know we're running low on time.
We have an event at 6 o'clock, and it is now 5:53.
So I appreciate you letting us just pull you
into the conference room for us
to ask you a few questions about Databricks.
But before we kind of wrap up, is there
anything that you want to bring up that maybe we didn't

(32:11):
ask you about?
Maybe that you think that we should know.
Maybe the audience might be interested in.
Ah.
Not-- not-- I mean, I'm--
yeah, just looking forward to the event.
And yeah, I mean, this is--
I think it's like-- I've been doing this stuff for

(32:31):
20-plus years.
And I definitely think this is one of the bigger waves.
It's literally like mobile, and almost
the internet, and so on.
So it's-- yeah, I mean, I think it's pretty crazy what we're
going to see, I think, in the next three, four years.

(32:53):
So it's going to be fun.
It's exciting.
It's awesome.
I think that's a sentiment I've heard unanimously from everyone
that we've spoken to.
Yeah.
What do you think?
Do you think it's a bubble?
Do you think it'll keep going?
No, I mean, we just had a crypto bubble, right?
Like three years ago, maybe, or whatever it was.
But if you went and asked people--

(33:16):
not even random people-- I mean, even random people
in the Bay Area, right?
Have you used crypto?
Mostly, it was like, no.
And right now, if you ask a 14-year-old, have you used ChatGPT?
They're like, yeah, I do my homework with it.
All the way up the age range-- if you ask everyone,

(33:36):
they're like, yeah, of course I've used it.
So at least that's true out here.
So I think it's a very different quality
to other hype that we've seen.
So there's hype, but I don't think there is any doubt
that there will be millions of LLMs deployed.
And things will be using them.

(33:59):
It seems like databases.
Like if I say databases, yeah, of course every company
uses databases.
And I think in X years, if you have companies that say,
we don't use an LLM anywhere-- I really doubt it.
So--
Yeah, that's true.
It does feel kind of like a fundamental technology

(34:20):
that maybe everybody's going to adopt, right?
Or crypto, you mentioned--
I think that has its place.
But it does seem like everybody's using LLMs.
It's kind of like everybody using
the cell phone, the internet.
So I don't know.
Maybe it's a-- maybe it's appropriately hyped.
Yeah.
Yeah.

(34:41):
We'll see.
OK.
Perfect.
Well, yeah.
Thanks again for your time.
Yep, thank you so much. Cool.
[Music]