Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 2 (00:12):
Hey, people with a coat, Mel Herbert here. We've been out for a while, but we're getting back to it. Before we get started, a couple of words about The Pitt. You've been watching The Pitt? The Pitt's pretty amazing. It's gotten such great reviews and it's so intense, and, no spoilers, but boy, it gets pretty intense and it's real. Right, it's real. A lot of stress, a lot of PTSD, and I've said on a lot of
(00:35):
programs, for those of you that are clinicians watching this, nurses, docs, and you've got that PTSD feeling: please go get help. Please go and get help. There's a lot of stuff that works. We've talked about it on Dirty White Coat here. We've talked about things like rapid alternating currents and movements and eye movements and ketamine and psilocybin, and
(00:55):
there's a lot of stuff, because the job is not normal. And just watch The Pitt, if you can, to remind yourself of the ridiculous nature of the work. It's just crazy. I'm super excited to be... you know, I was a consultant in the first season. Now I'm actually in the writers' room for season two, hanging out with Joe Sachs and Noah Wyle and Scott Gemmill and
(01:16):
all the rest. It's an amazing experience. It is amazing, and the plan, of course, is to make season two even better. I'm not sure how you can do that, because this has been quite remarkable. But we're going through a lot of cases and story arcs, and the idea, continuously, from everybody there, is: how do we tell the real story of the docs and nurses that work in the
(01:36):
emergency department, of the patients, of the stressors, and what it looks like in 2025, in 2026? Not what it looked like 30 years ago, but what does it look like now? What does it actually mean? So, if you're wondering whether the people that make this show actually care: they care so deeply about the work that they're doing, to represent the work that you do, the real work,
(01:57):
the work that's actually in the real world on real patients, but they're getting to tell this dramatization of that. And it's been incredibly well received, because, you know why? The work that you do really does matter. And now let's have a discussion about search and AI in CorePendium. This has much bigger implications, and we're
(02:18):
going to be talking more about this, about where AI and agentic AI is going in emergency medicine and medicine in general, as we continue this discussion on the coat that is quite filthy. Stuart, last night on The Pitt, yeah, there was a shout-out to EM:RAP.
Speaker 3 (02:33):
EM:RAP. Yeah, it's pretty awesome. Do you have a clip? Do you have a clip of it that you can post?
Speaker 2 (02:37):
Yeah, I'm going to. I haven't ripped it yet. I'm going to rip it and put it in there.
Speaker 3 (02:40):
Rip it and put it in there.
Speaker 2 (02:42):
I'm going to eventually watch it. Dude, last night I was in tears. I was fucking PTSD-ing. Yeah, I don't like that. It was pretty bad, and I knew it, and I've seen it, and I made all the cases with it. I'm still losing my crap. Here's what we're doing. So we've got CorePendium, and you've got these LLMs, and there's lots of different ones.
(03:02):
And what we're trying to do is say: hey, LLM, large language model, use CorePendium and only CorePendium to do a search for us, and then give us the citations. And for some reason, it doesn't want to do that. So I've got Matthew and Stuart here with me, who are really working the problem. But one of the issues is that it
(03:25):
desperately wants to go use other shit. Is that what you've been finding? It wants to go outside of CorePendium. Even though you say, use CorePendium, it's like: no, I want to look over here.
Speaker 3 (03:33):
Which one do you want first? Do you want, like, the layperson's explanation
Speaker 1 (03:37):
or do you want?
Speaker 3 (03:38):
Matthew's explanation? Just tell us which one you want.
(04:05):
I mean, from my point of view, this is my understanding, okay? It has to work based on all of what it knows, based on the internet. That's the nature of what the AI is, that's the nature of what an LLM is. And so if you ask it to limit itself completely just
(04:25):
to what's in CorePendium, it would be illiterate, it would be unable to do it. And so what you have to do is, you have to allow it to be informed by all of the things that it learned in its internet searches, especially things like synonyms, like heart means cardiac, et cetera. Right, and then, when you introduce that intelligence, you
(04:45):
can get these fantastic answers that are generated from it. The only problem is that sometimes, and I don't know what percent, we're finding maybe 5% of the time, it gives you just garbage, because it's just desperate to give an answer.
Speaker 2 (05:03):
All right. So now, Matthew, who is our tech guy, he's a techie: give us the technical explanation. And what's the name of that? I've forgotten the name for when you ask an LLM just to use your internal data, or whatever it is.
Speaker 1 (05:13):
It's RUG or something, I can't remember. RAG, yeah, RAG. Retrieval-Augmented Generation. So RAG is a way of taking the internal information we have, let's say, all of CorePendium's information, right, the structured data, the
(05:35):
structured information we have, you know, in chapters and sections, and providing that in the system with the LLM, where we're telling it: we want you to reference primarily this CorePendium data, pull from that data set whenever we have a question that we're asking you. And so one of the challenges there is how we provide that data to the LLM. To work in concert with the LLM in this RAG model, we
(05:57):
need to chunk that data, and there's a whole system for how this works. So we first take all the CorePendium data and we chunk that data into sections that can then be turned into what we call embeddings, numerical representations that we can then search to find
(06:28):
similarities between these numerical representations, between different concepts, between data concepts. So we've got to take all that data from CorePendium, chunk it into chapters or sections, and turn them into these numerical representations that live inside of this vector space. And a vector space is kind of like a three-dimensional space where you can see how close terms like
(06:51):
cardiac and heart are together, versus just someone typing in a keyword that says heart. Right, heart's only going to find heart in a keyword search. But in a semantic vector search, you type in cardiac and it will return heart, because they are close together in that numerical space. Part of the challenge, where the LLM will revert to more of the information that it has, is when we can't provide
(07:14):
enough data from CorePendium that matches the incoming search term. So the incoming search term, or, if it's a question, right, what happens on that side is that it also gets turned into an embedding, which gets turned into this numerical format. And we try to pair the incoming search query's numerical format with what we have in this CorePendium embeddings vector
(07:38):
database, and we try to match those two. And then we provide the matched information from CorePendium along with the prompt to the LLM, and say: use this data in your response. So sometimes the LLM, I think, wants to provide such a robust answer; maybe it doesn't have enough data that it's being provided,
(07:59):
so it will reference its own data, more so perhaps than data we may or may not have in CorePendium, or be able to provide it with, that sort of thing.
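[Editor's note: to make that flow concrete, here is a minimal sketch of the retrieve-then-prompt loop Matthew describes: chunk the corpus, embed the chunks and the query the same way, match them by similarity, and hand the best matches to the LLM with a use-only-this instruction. The embed() function is a toy stand-in for a real embedding model, and CHUNKS, retrieve(), and the prompt wording are illustrative assumptions, not CorePendium's actual pipeline.]

```python
import math
from collections import Counter

# Toy corpus standing in for chunked chapter sections.
CHUNKS = [
    "Carpal tunnel compression test: press over the median nerve for 30 seconds.",
    "Rabies post-exposure prophylaxis: immunoglobulin is dosed by weight.",
    "Cardiac arrest: begin high-quality chest compressions immediately.",
]

def embed(text: str) -> Counter:
    # Stand-in embedding: a bag of lowercase words. A real system would
    # call an embedding model and get back a dense numerical vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Similarity between two "embeddings": higher means closer in the space.
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    # Embed the query the same way and keep the k closest chunks.
    q = embed(query)
    return sorted(CHUNKS, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

def build_prompt(query: str) -> str:
    # The matched chunks ride along with the question, plus the
    # "use this data in your response" instruction.
    context = "\n".join(f"- {c}" for c in retrieve(query))
    return f"Use ONLY the context below in your response.\n\nContext:\n{context}\n\nQuestion: {query}"

print(build_prompt("rabies immunoglobulin dose"))
```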
Speaker 2 (08:09):
So obviously this is an issue that everybody is going to have; it's not unique to us. Are there some of these large language models that are better than others? Who was it we talked to? I can't remember who you were talking to about it. Oh, it was actually a group that was working with us externally, saying that they change models every week, like their underwear, to try and find the best one. Are you doing the same?
(08:30):
What's sort of the industry doing?
Speaker 3 (08:33):
Do you mean in terms of changing
Speaker 1 (08:34):
our underwear every week? Do you change your underwear every week?
Speaker 3 (08:40):
I'll let Matt answer that.
Speaker 1 (08:42):
Yeah, we're not, currently. We found a specific model that seems to be working quite well for, you know, putting the textual information between the pieces of data that we're providing from CorePendium together, and then providing that in a summary to the user. But it's definitely something we could look at in the future,
(09:02):
and should look at as these models continue to improve and costs continue to come down.
Speaker 3 (09:07):
So, Mel, you know this whole issue practically. It's something that emergency physicians are really very familiar with. It's basically sensitivity and specificity, right? You know, we're struggling. On the one hand, we want a model that is really, really accurate to what is in CorePendium. We don't want it going off script at all. But if you go strictly with that model, you're going to get
(09:31):
tons of frustrating answers to queries. You know: I can't find this, I can't find that. And, you know, it's just not going to go that extra step that we expect, for it to help you to get your answer, right. And then, if you go to the other extreme and you dial it to, you know, let it run a little wild in terms of using its outside information, then you run the risk of what I think
(09:53):
some people might call hallucination, but which really is just an effort on the part of the computer to fill in missing gaps. It's not, you know, a psychiatric condition on the part of the computer; it's just the way these things work. And so there is no perfect setting, right? You're always going to get a little bit frustrated, because it's going to say stuff isn't there, on the one hand, or run
(10:13):
the risk of having to be, you know, a little more skeptical of the answer on the other, to make sure that everything is there; you're going to have to double-check. And so we're really struggling with where to lie on that scale.
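[Editor's note: Stuart's dial maps naturally onto a minimum-similarity cutoff on the retrieved matches. Here is a small sketch of that tradeoff, with invented scores and a hypothetical filter_matches() helper: set the cutoff high and queries come back empty (the frustrating "can't find it"); set it low and weak matches slip through (the off-script risk).]

```python
def filter_matches(scored: list[tuple[str, float]], min_score: float):
    # Keep only matches at or above the cutoff; None means
    # "nothing in the corpus matched well enough".
    kept = [(chunk, s) for chunk, s in scored if s >= min_score]
    return kept or None

# Illustrative similarity scores, not real output.
scored = [("carpal tunnel compression test", 0.78),
          ("cardiac tamponade", 0.21)]

print(filter_matches(scored, min_score=0.90))  # strict: None, frequent misses
print(filter_matches(scored, min_score=0.15))  # loose: both kept, risk of garbage
```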
Speaker 2 (10:24):
So that's a really good analogy. You know, for every patient you see, well, you think it could be a PE. So I could just dial up the sensitivity and scan everybody, and then there'll be lots of downstream effects of that. Or I can say: I only want to scan those people who are clearly dying, and it's clearly a PE.

Speaker 3 (10:39):
Yeah, absolutely, that's exactly it.
And you know, you can imagine, Mel, that in our community there are users that are going to want it to be stricter, and they're not going to want AI to play any role in making anything up on its own. So, residency directors, people that are starting off learning:
(11:01):
we're really, really concerned about that. On the other hand, very experienced practitioners: they're asking a lot of questions, and when they get something that seems a little bit sus, it's okay, because they're seasoned practitioners and they know: no, no, no, that's totally... it misrepresented that, that's off. And the other thing that I've noticed is that, when you do get an answer that's off, it's almost always just a matter of
(11:24):
rephrasing the question, or making it a little more specific, to fix it, and that's something that we all need to know. And so at some point we have to... you know, this has been really... we've been consternating and consternating together. We meet about this every week: you know, should we launch? And at some point we're just going to have to say: look, we're going to have to release this thing, and at the same time,
we're going to have to releasethis thing and at the same time,
(11:44):
we have to educate ourselves onhow to use it properly, because
otherwise, you know, we'llnever end up going anywhere.
We have to become AI literate,so the whole world is learning
this as we go.
Speaker 2 (11:55):
It is magical when it works, the answers. It takes a little while, it's got to think and it's got to look, but the answers are often quite magical. But, Matthew, what Stuart's really been trying to work on is, like: hey, now embed the reference so that there can be a second check. It says, you know, rabies, you should use immunoglobulin at this dose. Now I want to click on that reference and I want to find
(12:16):
where Sean Nordt and his team of humans actually wrote that down. Is that difficult? It seems difficult, because I'm looking at you guys testing, and sometimes it's again choosing the wrong thing, even though it comes up with the right answer.
Speaker 1 (12:31):
Yeah, and there's actually a case that we came across recently that I think really is a good representation of some of this. Stuart, I'm referring to the Durkin test, right? So that term, Durkin test, we found doesn't actually exist in the corpus of information, but it was coming back. When you typed in Durkin, what is the Durkin test, it would come back and it would give you an AI search summary result saying: hey, this is from CorePendium. So what's happening in this vector space that we're talking
(12:52):
about is: the term Durkin test, when we create this numerical embedding, right, is very close to the term carpal tunnel compression test, which is what this is. So what will happen is, it will find a match in our data, because it understands the distance between Durkin test and carpal
compression.
Carpal tunnel compression testis very small, so that's that's
what it returns, so it wouldgive you a whole summary around
the carpal tunnel compressiontest when you asked about Durkin
.
So these are some of thechallenges we've been having,
where we're saying, hey, thisdata word for word or even
character for character, isn'trepresented, but when we go
through the whole search process, the search system would say,
(13:34):
yes, it is.
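[Editor's note: the Durkin anecdote is the keyword-versus-semantic contrast in miniature: an exact-string search finds nothing, while a nearest-neighbor lookup in embedding space lands on the closely related entry. A small sketch, with the similarity scores invented purely for illustration:]

```python
docs = ["carpal tunnel compression test", "Tinel sign", "Phalen maneuver"]

def keyword_search(query: str, docs: list[str]) -> list[str]:
    # Literal substring match only: "Durkin" appears nowhere, so this misses.
    return [d for d in docs if query.lower() in d.lower()]

# Pretend similarities to the query "Durkin test" from an embedding
# model (hypothetical numbers, not real model output).
semantic_scores = {"carpal tunnel compression test": 0.93,
                   "Tinel sign": 0.61,
                   "Phalen maneuver": 0.58}

print(keyword_search("Durkin test", docs))            # [] -- no literal hit
print(max(semantic_scores, key=semantic_scores.get))  # nearest neighbor wins
```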
Speaker 2 (13:34):
That's a great example of sort of the magic of this, and we're going to have to get used to this tension, because I didn't know what the Durkin test is. If you had asked me to search through CorePendium, I never would have found carpal tunnel compression. So it is a genius in some ways, like that, that it would even find that.
Speaker 1 (13:49):
Right, exactly. And of course, then, you know, throughout all of our testing, the question is: are there cases where something's being returned that's beyond the threshold where we'd say it shouldn't return anything? It shouldn't return, you know, B for A; really, it's determining A for A, right, and it shouldn't return B for A. And what's that threshold, and how do
(14:11):
we want to set that, and how does that look over time, and making sure we get users working with this and giving feedback, so we can set those thresholds appropriately?
Speaker 2 (14:19):
So tell us about those references, now, that we're trying to pick out as that sort of secondary check that you can do. How is that going? I think you're on version 857 of this thing right now.
Speaker 1 (14:33):
It's very important that we've got citations that people can look at with this summary that's returned from our AI search, and then be able to click there and go and read exactly about that information. What we do is, as a part of each of these embeddings, we break down the chapter information into these chunks, right? We also store... we store the embedding, the numerical value, as well as the chapter information, as well as the metadata, which is this
(14:54):
reference information that allows people to be taken back to that chapter section where we say this information came from. So that's what we're providing at this point. And for any search, we're pulling in up to 15 chapters of information, you know, the most relevant 15
(15:16):
chapters of information, and using that for any specific search, giving those citations, and continually checking. We have humans in the loop, actually; we're doing this every single day, making sure that we're being pointed back to relevant sections of information that are listed for any search in the citations.
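[Editor's note: one way to picture what Matthew describes is a chunk record that carries its citation metadata alongside the embedding, so the summary can always link back to the chapter section it came from. A sketch under assumed field names; this is not CorePendium's actual schema, and the anchor_url scheme is made up:]

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str               # the chunked chapter section
    embedding: list[float]  # numerical representation used for matching
    chapter: str            # citation metadata stored next to the vector
    section: str
    anchor_url: str         # deep link back into the chapter section

def citations(top_chunks: list[Chunk], limit: int = 15) -> list[str]:
    # Cite up to the 15 most relevant chapters, skipping duplicates,
    # so every summary points back to human-written source sections.
    seen: set[str] = set()
    cites: list[str] = []
    for c in top_chunks:
        if c.chapter not in seen:
            seen.add(c.chapter)
            cites.append(f"{c.chapter} > {c.section}: {c.anchor_url}")
        if len(cites) == limit:
            break
    return cites

chunk = Chunk("Immunoglobulin is dosed by weight.", [0.1, 0.7],
              "Rabies", "Post-Exposure Prophylaxis", "corependium://rabies#pep")
print(citations([chunk]))
```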
Speaker 3 (15:32):
Well, I mean, yeah, exactly. I was just going to say that what we're experiencing in general, just to take a step back, is that information is cheap, and anyone can type anything into Google. They can, you know, pretty much, I think at this point, even ask for a summary of studies and what the studies' conclusions were. You can do all that kind of stuff.
(15:53):
The coin of the realm is really what our experts have to say, and that's what's becoming so much more important. And so the other thing, Mel, that we're really trying to get to is, when we craft statements, like when we all get together and say: hey, what are we going to say about controlling the heart rate in aortic dissection? How are you going to say this in a way that's most helpful to
(16:14):
our practitioners and doesn't corner them? You know, we really craft a statement that's helpful. The last thing in the world we want is for an AI engine to rephrase it, or to mess with it, or to make its own statement. And so we're also really focusing on the ability to have sacrosanct text boxes that the AI can identify
(16:38):
as: hey, look, the CorePendium team has already sat down and discussed how to deal with the situation where a patient has both heart failure and thyroid disease, and this is the way that they've decided they're going to present it. There's going to be all kinds of variations of this all across the internet, and lots of engines can put this stuff
together, but people want toknow what their editorial team
(16:59):
has to say about it, of EMpractitioners that are their
trusted source, and so that'sreally such a struggle, because
what AI is so good at is makingwords and making statements, and
we don't want it to make themost sensitive statements.
We want those to be ours.
Speaker 2 (17:15):
Just to jive on that: you can go to any ChatGPT right now and ask it medical questions, and it will give you some pretty good answers. But it has no subtlety there, and there is no human necessarily behind it. And in most of medicine we are unsure what the right answer is, and so what you really want is not a computer that says, here is the answer, but an expert with
(17:36):
experience to say: I don't know the answer, but this is what we do right now, given all the evidence. So we thought that AI would sort of be the end of CorePendium, but it's actually just the beginning of it, because the humans, you now realize, are more important than ever in a world where people can just put this stuff in. So are you using this now as a feedback loop? I did these searches, or your team did these searches;
(17:58):
it didn't quite come up with the right answer, because we didn't quite put it in the text the right way; and you're rewriting chapters because of that?
Speaker 3 (18:13):
Yes, that's the answer. The short answer is yes, but it's a manual process and it involves a lot of people, and so we can't sustain that. And as the usership goes up, and more and more people are writing in with their comments and suggestions and possible corrections, we're going to have to have some sort of an AI-integrated approach to this, where it's sort of feeding us information like: hey, five users have said they've had
(18:35):
trouble finding this piece of information, or take issue with this piece of information; and it feeds it up to us as editors and says: hey, listen, you've got to address this, it has to be changed. And so what's really exciting, Mel (and we've talked so much about the user end of this), is that we haven't even touched on the editorial aspects of it, which is that, as editors, we spend so
(18:56):
much of our time, even as an editor-in-chief, on grammar, commas, formatting, just lots of stuff like that. AI can make all that easier for us, and so we're hoping that the editor's time will be much more just coming up with these special statements that I'm telling you about, weighing in on
(19:18):
controversies, helping users resolve their issues, you know, and conundra that they come into, and much less just worrying about the commas and the text and the formatting and all that stuff. And so that's what I'm hoping is going to happen after we get an adjustment period here.
Speaker 1 (19:34):
I'd also add into that, into the loop, that for editors to make critical changes that we need to then publish, going back through, you know, the embedding process to then show up in these searches is basically near immediate after we publish. So that's really the great part about this: we can receive feedback, we can respond to feedback, especially in these early stages. That really helps us to refine how all of this is delivering
(19:55):
value to users.
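[Editor's note: that near-immediate turnaround is essentially an upsert into the vector index: when an editor publishes a fix, only the changed chunk gets re-embedded and overwritten, so the next search already sees the correction. A minimal sketch under that assumption, with embed() as a self-contained stand-in for a real embedding model and the dict standing in for a real vector database; the chunk IDs are made up:]

```python
# chunk_id -> (embedding, text); a real system would use a vector database.
index: dict[str, tuple[list[float], str]] = {}

def embed(text: str) -> list[float]:
    # Toy embedding so the sketch runs on its own.
    return [float(len(word)) for word in text.lower().split()]

def publish(chunk_id: str, new_text: str) -> None:
    # Upsert: re-embed just the edited chunk and overwrite it in place,
    # so the correction shows up in search right after publishing.
    index[chunk_id] = (embed(new_text), new_text)

publish("rabies-pep-1", "Rabies immunoglobulin: infiltrate around the wound.")
publish("rabies-pep-1", "Rabies immunoglobulin: infiltrate around the wound; give remainder IM.")
print(index["rabies-pep-1"][1])  # the latest published text wins
```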
Speaker 2 (19:56):
Okay, so what if I say, I don't trust it, or, I'm a new user, I'm just not sure if I'll be able to pick up when it's making a mistake? Can I go back to the old search, and what is that old search based on?
Speaker 3 (20:08):
What's it based on? Matthew, tell us, what's it based on? I'll tell you, I can't figure out what it's based on with some of the results that we got out of that thing, but maybe you can tell us.

Speaker 1 (20:22):
Yeah, so the old search also has this semantic capability, right? So you're still going to get results that are returned, and when I say results, I mean the old search will
give you direct links to chapters and sections and other types of content. You know, we have images and media that we think are related to the search term that you put in. So, instead of giving you a summary of a result that maybe is directionally actionable, it will point you to those areas in
(20:44):
CorePendium that, then, would give you a broader scope of the information you might be looking for.
Speaker 2 (20:50):
Does all search at this point involve some form of, in air quotes, AI? This has always been a mystery to me, how Google can possibly do what it does, and so for me it's artificial intelligence. Are we just living in a world where it's just about where you define the line? Or is this other search just stupider, and you're like: I told
(21:11):
it to know the difference between heart and cardiac, and that's actually the same thing? Or is even basic search now using some form of LLM in the universe out there?
Speaker 1 (21:20):
Yeah, so artificial intelligence is this very large umbrella, which includes machine learning and then things like LLMs, and there's many, many... You know, artificial intelligence, really, you could date it back to sometime in the 60s or 70s, if you want to define it as such. So, searches: you know, you can still have today what is known
(21:42):
as a keyword search, so just literally searching for direct keyword matches. But most things, including what you're going to find with Google, are using kind of these semantic capabilities, where it's determining what your intention is behind your search, and then it's going through and it's pulling back these similarities within the semantic space, right, to provide you with
(22:04):
the most relevant results for what it believes you're looking for.
Speaker 2 (22:07):
We've used this example a few times, but I think it's a really important one. I think it was Mike Weinstock who was using a different search, and he asked: what's the best muscle relaxant in pregnancy? And it came back with rocuronium, which is technically true. It is an incredibly good muscle relaxant in pregnancy, but it's also a paralytic and will kill you. So I think what we're all learning is, we still have to go
(22:27):
to med school or nursing school or wherever it is, and we still have to learn this information. It is a great tool, but it can make profound mistakes. Have you had any funny errors recently that are similar to the rocuronium case, or are you getting it so good that it doesn't make that kind of mistake anymore?
Speaker 3 (22:44):
You think that's funny, Mel? I mean, it's so scary to me. I mean, you know, for someone to give rocuronium to a pregnant patient? I mean, not in the context of intubation, I don't think so. And so, I mean, for the time being... I mean, we're going to release this on beta, for sure. I mean, there's no question about that, and we want people to
(23:06):
have the ability to turn it off, and I completely get that. And the risk is real; like you just mentioned, the risk is real. I want people to think about it, for the time being, as just an advanced search, just a way to get you to the material in CorePendium. I'm really emphasizing that, and everything after that is beta
(23:26):
to me. And I do believe that, when you look at the trajectory of how fast we've progressed on this in just a few months, I'm pretty sure we're going to get to the point where it's 99 point something percent reliable in terms of the AI answers, with the cross-checking and with the verifability. Is that a word? Did I just make up a word? The verifiability.
(23:48):
We're going to get there, and that's the time when we would take off the label. But I think, just for the time being, everyone should just think about this as a really, really good search to get to the content in CorePendium. Not a thing to answer your clinical questions just yet.
Speaker 2 (24:06):
All right. So I've got a question for Matthew, because people almost certainly have no idea of how complicated this is, because we have web-based search, we have iOS search, Android search, and we have offline search. So, Matthew, how do they all differ? Can you give us a quick summary, particularly of the offline mode of CorePendium? These large language models do not live on that phone; they
(24:26):
live out there in the universe. So if I'm offline, what kind of search can I expect to get?
Speaker 1 (24:31):
Essentially, different types of searches have to be well tuned to the hardware they're going to be running on. So there are different search algorithms that we use for offline mode, which can't provide you this type of AI summary that we're getting with these LLMs, because they require massive amounts of compute infrastructure to do that kind of, what we call, inference, right? So you will get kind of a basic experience out of the
(24:55):
search in offline mode, and then, as you move up, the semantic search we have when you're online, and then, of course, this AI search summary we're talking about rolling out here, which partners our CorePendium data with that LLM to give you that summary response.
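[Editor's note: for a sense of what an on-device fallback can look like without an LLM or embeddings, here is a sketch of a tiny inverted index with token-overlap ranking. It illustrates the general idea only; it is not CorePendium's actual offline algorithm, and the document IDs are made up.]

```python
from collections import defaultdict

def build_index(docs: dict[str, str]) -> dict[str, set[str]]:
    # Inverted index: token -> ids of documents containing it.
    index: dict[str, set[str]] = defaultdict(set)
    for doc_id, text in docs.items():
        for token in text.lower().split():
            index[token].add(doc_id)
    return index

def offline_search(query: str, index: dict[str, set[str]]) -> list[str]:
    # Rank documents by how many query tokens they contain; cheap enough
    # to run entirely on a phone, but no synonyms and no summary.
    hits: dict[str, int] = defaultdict(int)
    for token in query.lower().split():
        for doc_id in index.get(token, ()):
            hits[doc_id] += 1
    return sorted(hits, key=lambda d: hits[d], reverse=True)

idx = build_index({"ch1": "cardiac arrest compressions",
                   "ch2": "rabies immunoglobulin dose"})
print(offline_search("rabies dose", idx))  # ['ch2']
```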
Speaker 2 (25:11):
When you release this (and I won't ask you when, because we keep moving the goalposts here), will people be able to give feedback internally? What I mean is, I can do a search and say, tell me about measles, and it comes back as chickenpox, and I can say: hey, fools, this is wrong. Are we going to release that ability for people, or is that going to be so much information your heads are going to explode?
Speaker 3 (25:30):
Yeah, Matt, we're going to let everyone give feedback; of course, except for you. We're going to block it. We're going to block your channel. We just, you know, we've got to draw the line somewhere.
Speaker 2 (25:42):
Yeah. I say rude things when it gets things wrong.
Speaker 1 (25:45):
No, no, I think you did. I think you gave that. You absolutely want to get user feedback and create user feedback, and it's so important to really delivering and helping to continue to shape this product in a way that people are going to get maximum value from it.
Speaker 3 (26:02):
What Matthew has to put up with, just so everyone knows, is, like, you know, Mel and I on a bender: like, you know, in the middle of the night, you wake up and you're like, okay, I've got to ask, I've got to ask the AI a bunch of test questions, right? And then you just go on and on and on and you're giving feedback, and then in the morning, you know, Miranda and Matthew have this inbox full of, like, you know, a hundred responses from Mel and myself, basically, you know, with a
(26:25):
million different contradictory pieces of feedback, saying: no, I checked it this way, I checked it this way, I got that, I got that. And I'm like, oh my God, I'm so sorry.
Speaker 2 (26:33):
I am impressed with how quickly it's improved. Even a month ago it was like, oh boy, this is a real problem; and now it's like, oh, this is really good, we've got to release this soon. It's getting that good, but it's still not perfect and, as we said, it'll never be perfect. But I'm itching to get people to start playing with it. I think that's coming soon-ish.
(26:54):
Any final statements, any words of caution for people, whenever we release this, or whenever you're using any search like this? When you're using these RAG things, you now understand that, if you want it to be magic like an LLM, it has to use LLM content. So, any other words of caution for the world?
Speaker 3 (27:11):
I mean, I would say that, you know, this is the most incredible genius tool that we've ever had. But, like every human genius, the AI has its faults; namely, it lies, it cheats and it steals. It does all the things that it learned from us, and so that's
(27:31):
always going to be there at the back of my mind.
Speaker 1 (27:35):
Yeah, you made a really great point, Mel, about, you know, this doesn't replace the learning that we go through to get to this point, right? And so I've found that AI is just exceptionally helpful in the hands of experts, right? Where, let's say, as an example for this AI search summary, it's giving you back this summary and it's like: wow, I know
(27:56):
immediately, that's the right answer. Or it triggers you; it might trigger something in your mind that you didn't quite recall. You look at it and you say, oh, that's absolutely the right answer, and that's a genius answer that I might not have necessarily come up with myself. But I would say, yeah, it's very important to scrutinize. Always scrutinize.
Speaker 3 (28:15):
I love the way you said that. No, I was going to say, I love the way that you said that, Matthew. And what I was thinking was, there's a reason why you can't take a high schooler, put a medical textbook in front of them, and expect them to execute on that material; in the same way, you need to be an expert to use this type of a system. That's an assumption that we make. It's just not for lay use.
Speaker 2 (28:38):
So I want to thank you, gentlemen. Now get back to work, because we want to get this out to the people, and we're really very excited about it. But again, it all comes back to the humans, and Stuart has, I think, 700 humans that are working on this little project, and if you haven't seen it recently, it continues to get better. So it's just one aspect, but it's sort of the most exciting
(28:58):
thing right now. There's so much in there. Now we've got a much better way to find it.