Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:01):
Imagine this: a corporate lawyer drowning in document review, spending weekends combing through contracts, looking for inconsistencies and responding to repetitive FAQs.
It's a grind that feels endless, and it's all too common in the legal profession.
Bryan Lee (00:19):
At Google, I think early on it was literally like two lawyers when the whole thing started out. I would say the problem starts at a hundred to one; that's the ratio where it starts to break down for an attorney. If you don't have a good filter coming in, that's where that problem really exacerbates.
That's the spark behind today's conversation with Bryan Lee, co-founder of Ruli AI.
(00:40):
Bryan's journey from lawyer to tech innovator at Google has uniquely positioned him to tackle one of the legal industry's biggest challenges: inefficiency.
We're trying to reimagine legal work with an AI teammate. That's our North Star.
In this episode, Bryan shares how his AI-powered solutions are helping lawyers reclaim their time.
(01:02):
Whether it's streamlining document reviews or just automating FAQs.
So we have a legal platform that creates an AI teammate experience where lawyers can pair their knowledge with AI, and it helps you automate some routine tasks like answering legal questions, as well as this copilot partner that helps you with summarizing documents,
(01:25):
interrogating, asking questions of documents.
Stay tuned to learn how Ruli AI is redefining legal workflows, and hear Bryan's take on the future of AI in law.
Shikher Bhandary (02:13):
Talking about just lawyers, right? And the legal space. So a friend, a really close friend of mine who works within a tech company, so, in-house legal counsel: he was telling me how on average he's spending about 15 to 20 hours on the weekends reviewing cases, because the legal team is small.
(02:37):
He ends up spending hours, like 80-hour weeks, almost every week, working through contracts; not creating them, just, like, combing through these documents and looking for errors and inconsistencies, basically document review. I know the three main buckets of a lawyer's job are drafting, review, and research.
Bryan Lee (02:59):
You nailed it.
Shikher Bhandary (03:00):
Yeah.
I was just talking to him about it yesterday, and he was talking through some problems, where he's like, I'm doing so much review that we made a pitch, and we now have external partners, external companies, to take some of the excess workload from him. But even then he's working these long hours, right?
(03:22):
So there are potentially just a ton of efficiencies here, because when he was talking about document review, he's like, if we had a database and it was fine-tuned really well, I think this could be a great place for driving those technological efficiencies that we are here today to talk about, right? So now imagine if my friend had a way to leverage an AI
(03:46):
law partner to take on some of that repetitive work, right? Document inconsistencies, summarizing key points, some fixes here and there, flagging issues. It would really make his job easier and more efficient. So that's where our guest today, Bryan Lee, steps in. Bryan is the co-founder of Ruli, a legal AI company.
(04:09):
Bryan, great to have you on.
Bryan Lee (04:10):
Yeah. So at Ruli, we're trying to reimagine legal work with an AI teammate. That's our North Star. I'm classically trained as an engineer first, and then a lawyer: went to law school, did big law, doing capital markets, securities law, basically. And then did a little bit in-house, doing a lot of reviews,
(04:31):
contracts and whatnot, at General Electric. I wanted to get back to my first passion in tech, and then just lucked out a little bit, joining this stealth project at Google, which turned out to be the Google Assistant team. So I was able to work on AI in the early days of the voice assistant. When it came out, I was on the Google Assistant home team in business roles, then moved over to the product side.
(04:55):
And then, 'cause of my background in legal, a lot of the work shifted to product trust, safety, privacy, where you're developing these feature sets within products. But a lot of it was also working with internal in-house counsel teams, privacy teams, policy teams, et cetera, to make those features right. And then that also led me to working on early LLMs at Google, like Transformers, which were
(05:18):
invented there. I was actually using them on the ads privacy, safety, brand safety team. Yeah, before this whole OpenAI explosion...
Shikher Bhandary (05:25):
When was this, if I can ask? Like, when?
Bryan Lee (05:28):
Yeah, it was like, I joined that team 2019, 2020, 2021, effectively, right? And then 2021 was when the sort of ChatGPT explosion came out. That's actually when I actually left Google to go to Meta. I then was working on augmented reality glasses; you probably saw some of those. That's my co-founder, our CTO:
(05:49):
he was also on that team effectively, and he was a machine learning engineer. And that's his background; he's built things like trust and safety models as well, interestingly enough, at LinkedIn and other places, and also done a lot of machine learning work at Airbnb and AWS before. And then that's how the team came together. We both left, we found each other afterwards through the YC
(06:12):
matchup program. And then we started Ruli from there earlier this year.
And we really focused on, 'cause he had similar experiences working with in-house legal from the engineering side... so we realized that, outside of a lot of contracts where there are tools: often people are reviewing sales contracts, and there are tools for that,
(06:33):
Salesforce connects to an Ironclad, and a CLM is what people call it: contract lifecycle management. I'd say some CLMs are really ancient; they're basically just Google Drive or SharePoint, they don't do anything. Some of the more modern ones are saying, we can help you review terms and things like that. Usually it's still very hard-coded to a template.
(06:55):
So I think where people get tripped up is when it's what they call third-party paper, or something that's not your standard sales contract; people are trying to review, like, procurement contracts, obviously the contracts of the other party, typically, and that's where people get tripped up.
But then we also realized that all our engagement with most in-house counsel was just through email, pinging all this
(07:18):
stuff, because there's all this privacy review that has to happen, from PRD-level engineering design docs, to post-ship, to reviewing PR statements, like post-launch marketing, all that stuff, right? And a lot of it's driven by... not all, privacy is one big factor, usually. But it's just because, like, now there's so much risk management
(07:38):
across this product development that there's no tool for. And sometimes there's a lot of FAQs and whatnot. That's actually what we built first with Ruli, and that's what we were funded on. That was our prototype, which was, like, we built... think of it as an AI paralegal that sits in Slack or Microsoft Teams or email, and you can ping
(07:58):
this paralegal, Ruli, first, and it can answer, like, the standard process questions, or check on light-level things for you, for a pilot. Right now, we're automating some lower-level compliance work, effectively: we're just reviewing the content that comes in, and in certain conditions, we're auto-approving it.
Shikher Bhandary (08:17):
It's pretty incredible that you have sat in engineering, big law, in-house counsel, and then product management, right? So you've got, literally, this set of skills that is probably super important for the role that you are leading right now, which is co-founder and CEO of Ruli, a legal
(08:40):
startup. At least for in-house legal counsel, a lot of the work, maybe 60, 70 percent of the work, is actually just responding to FAQs, questions that are asked all the time. So that's number one. But also realizing that there are tons of additional efficiencies here in the space
(09:03):
that you can just have maybe an AI law partner to help with. Is there a specific case, a specific instance that you remember at either Google or Meta where you actually thought, hang on, this is far too process-heavy?
Bryan Lee (09:21):
Yeah, I think that's a great question. I remember back at Google, initially, when we were this assistant, Google Home team, and then they greenlit this very large effort to build out all of consumer hardware from there. And that's the product area I slotted into afterwards, and, like, the teams overnight
(09:42):
went from, like, a 500-person team to 5,000 in the team directory.
And so it was actually me and a legal director, because I was at the time running the product strategic deals team. We would just get questions all the time, just flooded as new people came on, and they were like, do I need
(10:03):
to sign an NDA to talk to this person? When does legal need to review X, Y, Z? I had to write policy guides with the legal person to say, this is what slots in on the business side or product side, and this is what slots in on the legal side. This is when you... here's, like, things we've built that are routine: check this thing first, and then go into this section. We had to, like, manually assign.
(10:23):
People have to go in, and there's, like, a light macro here, but you basically have the legal director go in a few times a week and just figure out what she could answer herself, and then what she could assign to a team member with some context. Actually, that was a lot of the inspiration for the first part of our platform.
Shikher Bhandary (10:40):
So you were the first AI partner?
Bryan Lee (10:42):
I was like, we fashioned up this spreadsheet. And ironically, it's crazy, this thing is still in existence. Google still uses it to run a lot of this engagement in this part of consumer hardware, is what I found out.
Jed Tabernero (10:57):
Pitching to them soon?
Bryan Lee (10:59):
Yeah. We've heard from some people still there; they're looking at things with Gemini, which is a different topic. There is low-level work where, if you think about the current state, either the lawyer who needs to review it is not getting to it, or sometimes the business or engineering or marketing just has to run without even any legal guidance, just to make timelines anyway.
(11:20):
So if you're really looking at the current state, it's actually... I think people get caught up on the error rate a little bit, in an isolation box, but I always think of it as, what's that human error rate today, too, right? Today it's clear that lawyers also make mistakes. I remember, in my experience as a lawyer, I made mistakes. Also, I worked with a lot of in-house counsel because I have
(11:40):
that legal eye, I would call it. But even if you don't, a lot of people who pay attention to detail will see their typos, their mistakes, right? Because volume spikes up, and during certain times of the year, people are rushing, like...
Shikher Bhandary (11:53):
Yeah.
Bryan Lee (11:54):
Lots of people too. And so I think the benchmarking question is really important; people keep chasing some sort of mythical hundred percent accuracy.
Shikher Bhandary (12:03):
What was the size of your team? The organization went from 500 to 5,000. How many lawyers were actually responsible for those 5,000?
Bryan Lee (12:14):
I can't remember the exact number, and I think at Google it's probably better staffed now, but early on it was literally... I think early on it was literally like two lawyers when the whole thing started out. And then they expanded out to other lawyers within the team. But the point was, literally, you had two or three people, and then for this entire product area on the Google Home side,
(12:37):
maybe one person on, like, the assistant side, and that's how all these things start out. And then you get to this inflection point, and it does expand out pretty quickly to more support. But I would say 200 to one, or the problem starts at a hundred to one; that's the ratio where it starts to break down for an attorney. If you don't have a good filter coming in, that's where that problem really exacerbates, right?
Shikher Bhandary (12:58):
That's a great explanation of the problem space, where you have these multiple orgs, suddenly they have different requirements, and two lawyers in the in-house counsel is not going to cut it: just varied requests, but still similar in many ways.
Bryan Lee (13:17):
Yeah. You'll find right now, 'cause we're targeting more the mid-market and kind of late-stage startups for early customers, you'll find that our sweet spot is three to five lawyers, where we find the problems really start compounding, and then up to a ten-or-more team. And at that point they have the muscle memory as well as some resources to write the knowledge documents
(13:37):
that they can put into our software, which makes things repeatable. But it also takes this effort where maybe your hair's on fire and you have to actually find a little bit more time, or maybe carve out a little more time, so you can document some of that to tell the AI what to do.
Sometimes the hard part is just convincing folks that, like, okay, you spend literally a few days, not even... a few hours a day for
(14:01):
maybe just a few days, talking through a few of these things. And it can be in plain language; I think that's the power of AI now. Some lawyers, older lawyers, have PTSD from integrating with last gen's tools, where you have to spend six months with a vendor and, okay, try to do machine learning, some sort of, like, hard-coded programming rules, no-code for your own stuff.
And then they have that PTSD, but I try to show them that,
(14:23):
look, you can literally just write a Q&A document the way that you would for a person on the team; just give us existing policies that you try to share among the team already, but nobody's really reading them, or they're misinterpreting them in the wrong way. And that's the sort of plain-language document. When you onboard Ruli, think of it as giving it
(14:44):
onboarding documents the same way you would to somebody new who joined your legal team, right? And that's just a more human, intuitive way of looking at it.
And then I would say that we actually got a lot of interest after we launched the initial part of our platform, what we call Legal Hub, this legal efficiency part that we just discussed. We actually got a lot of interest in a product we didn't have at the time, which is copilot.
(15:05):
So we built this copilot for lawyers, because that's actually a much more one-to-one relationship of assisting the lawyers themselves. And so we just launched that in October, and so now we have both sides of our platform. The way to think of it is: copilot is like this personal assistant for each lawyer, and then the Legal Hub helps scale your support knowledge to
(15:26):
your clients internally at the company. And that's really how our two halves come together. But it's all powered by a knowledge base that you can basically add files into, so that you're making the experience work for you.
Jed Tabernero (15:38):
Bryan, do you find that your target customers have a pretty robust knowledge base? Or is that something you have to force them to go and develop and create?
Bryan Lee (15:48):
Yeah, it's very interesting. You'll find that, as I said, around the three-to-five-lawyer base, people have this repeatable knowledge. They've probably taken a step... I remember back when I was doing in-house counsel work, I'd get the same questions. Even, this is like over 10 years ago, I would put things in a Word
(16:09):
document, and then when I'd get emails coming in or things like that, I'd just copy-paste the answer. And this is actually quite common today, still, for many attorneys.
Jed Tabernero (16:18):
Oh yeah.
Bryan Lee (16:19):
When I uncovered it, I realized, oh yeah, I keep a Word document and, without making use of it, I just copy-paste out my answer. And so you'll find that, around, again, that kind of three-lawyer range, people now have repeatable documentation they want to put in, and that does exist. But for copilot itself, it's such a good assistive tool that we've had
(16:40):
people who are, like, solo GCs that are interested in this, because if you add just your templates, or the things that you're already working with, that can make it helpful; but it can work without a knowledge base too. We just have this fine-tuned copilot with some legal shortcuts built on top. And so you'll find usually there's no issue there. I think on the larger enterprise side, what we've discovered is,
(17:03):
it actually manifests in a different way. You'll find some companies that we've talked to are like, oh yeah, we have all this documentation. We've actually talked to other AI vendors; we don't like how they're implementing it, because they give us standard model answers. We actually like what you have, because you can put in the documentation and then it runs that way, which is actually
(17:25):
we run as an organization.
But then you'll also find thereare some legacy companies that
I've talked to where But we have200 lawyers and actually like
the problem, it's we don't evenknow how they're answering it
like themselves.
So there actually is noorganizational consistency.
And so that's actually adifferent problem at that point.
That's actually a, like changemanagement.
(17:46):
Like, you have 200 human lawyers who actually don't have an SOP, and everything's in their head, and it's just based on their experience and what they agreed upon with the business unit that they support, and there's no documentation even for themselves, which is a greater organizational problem. That's why with
(18:07):
those organizations, usually legacy companies, a lawyer leaves, all the knowledge is gone, they don't know, and a new person comes up and they have to almost start from scratch, re-engage the business and engineering teams and figure out what to do. That's something I don't think we are trying to solve today, but maybe AI in general, I think, will improve it. We have systems in place that retain that knowledge for a customer. I think that will make a difference for enterprise down the road.
Actually, privacy is a big issue. So we actually do a unique instance of our product for each customer, so that it learns more specifically for that customer. We don't train a custom model, though; we're using OpenAI's reasoning model, with our own sort of machine-learning-level RAG on
(18:49):
top for each customer, and that's how we're addressing that issue as well.
Shikher Bhandary (18:54):
Jumping into the Ruli solution. You have pointed out so many inefficiencies in the space, from a small, medium, and large law firm, or a large company with legal counsel inside of it. So can you explain to us, can you peel the layers of what Ruli is
(19:17):
now focused on, what the solution is? You had mentioned the copilot and different verticals within Ruli, but can you give us a broad overview of what Ruli AI is currently designed to do?
Bryan Lee (19:31):
Yeah, that's right.
So we have a legal platform that creates, like, an AI teammate experience, where lawyers can pair their knowledge with AI, and it helps you automate some routine tasks, like the answering of legal questions, for example, at that intake stage, as well as this copilot partner
(19:52):
that helps you with, say, summarizing documents, interrogating, asking questions of documents. You can do light legal research on it as well. We also have the ability to go to external search. And we have citations built in; that's something new that we've just launched that allows you to zoom into the specific section of a document that we've cited, and
(20:14):
that really mitigates that hallucination factor, to give you the ability to audit really quickly how we come up with responses.
And we've also built this interesting new feature called Magic Prompt, which launched yesterday, 'cause a lot of lawyers have this
Like page problem, which is whythey have trouble using co pilot
(20:35):
for example, or even chapter BT.
It would magic prop.
You can type in almost like somechicken scratch or an idea of
what you're trying toaccomplish.
And then we actually expand theprompt to be more comprehensive,
robust to get you a more legaloriented answer that you're
looking for but really we'refocused on this you call it like
the frontline.
(20:56):
support that in-house counsel teams deliver to their stakeholders.
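For the technically curious, here is roughly what a prompt-expansion feature like the Magic Prompt Bryan describes could look like under the hood. This is an illustrative sketch, not Ruli's actual implementation; the model name, instruction text, and function name are all assumptions.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

EXPANDER_INSTRUCTIONS = (
    "You are a legal prompt engineer. Rewrite the user's rough note into a "
    "comprehensive prompt for a legal AI assistant: state the task, the "
    "document type, the desired output format, and any caveats that should "
    "be flagged for attorney review."
)

def magic_prompt(rough_note: str) -> str:
    """Expand a terse or messy note (the 'chicken scratch') into a full prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system", "content": EXPANDER_INSTRUCTIONS},
            {"role": "user", "content": rough_note},
        ],
    )
    return response.choices[0].message.content

# The expanded prompt would then be sent to the assistant as the real query.
print(magic_prompt("msa indemnity cap? flag anything weird"))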
Jed Tabernero (21:02):
It's interesting, because you keep mentioning that your target focus is these smaller firms. But I guess, me being in a large company with quite a bit of resources, it just seems like something that would be extremely useful for us, because we also have a ton of lawyers,
(21:24):
right? But our correspondence is all via email. There is no tool that I can consult. There is no FAQ that I can go to when I have legal questions. The stuff that we work on is typically P&C stuff, right? Privileged and confidential stuff. And we always have questions for legal counsel, right? We have external legal counsel, and for tax reasons
(21:45):
especially, we reach out a ton to external legal counsel. And it takes a while to get an opinion back, man. And dude, we have so many questions that there is, like, this massive log that they'll share with us: hey, listen, we actually got 85 questions right now, inquiries, and we can't get to your question until next month. So I guess all that to say, like, why not?
(22:08):
Why not large organizations?
Bryan Lee (22:10):
I think part of that is actually more just, like, startup go-to-market, right? The interesting thing here is the problem actually exacerbates usually on both ends of the scale, when you have not a lot of lawyers and when you have a lot of lawyers, because typically then you're a large organization with more volume too. So ironically, you'll likely serve either very small customers or very large customers.
(22:30):
I have very large customersites.
It's just more down to the go tomarket efforts of sales cycles.
I think with larger companiesthe self sectors are just going
to be, nine months plus.
And, as an early startup, weraised our.
Like our large precinct justclosed in June.
We have to be focused on cellcycles that are shorter.
(22:50):
And so I think that's the byproduct of just, like, startup life, right? I also think the budgets are less constrained with large companies still. So the traditional tendency is to go to outside counsel, and then you still have budget to pay for outside counsel to do that. And so there's definitely more resource constraint for smaller
(23:13):
companies that do have to maximize efficiency as much as possible. So I think there's still that budget shift that is happening. And there are more senior legal leaders that I think have not made that shift to, what can I use AI for? So their default muscle is, I'm going to go to the person I used to work with at the law firm and then use their services as
(23:35):
outside counsel, right? Which is the typical legal path of how that's solved.
Jed Tabernero (23:39):
It's funny, because what you're mentioning is the same. So I worked in an accounting organization for a long time, and it's pretty similar in accounting organizations. We have all these complex use cases that we need to think through. And instead of just relying on internal resources, we'll hire PwC or we'll hire Deloitte.
Bryan Lee (23:56):
Yes.
Jed Tabernero (23:56):
It works for us, because a lot of our people come from there.
Bryan Lee (23:59):
So there's this interesting psychology where, I think... I don't know how this plays out exactly, because humans still rely on, like, this comfort, security, or whatever, this notion of relationships with other humans. So I don't know if that's still this sort of linchpin or whatnot, until we actually fully trust
(24:22):
and rely on AI. In particular, I think there's still this human in the loop in a lot of things, and... yeah. So I'm curious what your thoughts are on that.
Jed Tabernero (24:34):
I think there always will be. Humans will always be there, and we'll probably always lack trust in AI for these really big projects. So I think it's really beneficial that you're always calling out that the job is more... it's a draft, right? For example, the drafting piece, right? It's not creating a formal document for you, because I would never, for myself... even if legal counsel didn't review this
(24:57):
document that they had made through our internal, maybe GPT, tools, right? I'm not going to trust it. I want a lawyer to look at this, right? Not just AI, right? So I think it's a trust break, which I'm curious to hear your thoughts on: just the whole humans-having-trouble-trusting-AI thing, because I feel like that's a huge barrier that you have to get through in convincing these very traditional roles.
(25:21):
And you mentioned earlier some lawyers have PTSD from getting on board with these projects that take six to eight months, right? What are your thoughts on how you get them comfortable? This is a really traditional industry.
Bryan Lee (25:31):
So what we actually do is, at different events, like more recently, just in Sacramento for the California Lawyers Association, we teach a complimentary prompting-for-lawyers class. We start with the basics: what is hallucination, what are LLMs? We spend a few minutes just using really elementary examples of probability, like, this is how models are probabilistic, not
(25:52):
deterministic, examples, and then we just show them what you can do in terms of prompting and chatting. And I actually break down prompting to, what is prompting? It's really just mastering communications with a computer. So if you're a good communicator... and all the lawyers in the room were like, yeah, we're good communicators.
(26:12):
I'm like, then you should have no problem prompting. I think people have been, like, tipped off or pushed away by some of these technical terms. 'Cause prompting is, like, natural speech, natural-language instructions in this chat interface, and then people get put off a little bit. They get a little timid. And so I just break that down to, you're just...
(26:35):
you're a master communicator, which I think everyone in this room is, and that starts to break things down.
I'd also say that I think you have to really break down the nuance of certain types of legal work, because not all of it will be fully end-to-end automated, because the nature of the legal work is just not entirely binary. And so the example of that is, if I have two lawyers in a room and
(26:57):
ask them about these questions, I'll actually get four answers, because it depends, and then it depends on interpretation. That's just the nature of how law is constructed, particularly in, like, commonwealth countries and the U.S., on the common law systems, and so interpretation is a big part.
And then the next question there is, will you trust an AI
(27:18):
interpretation over a human's interpretation, or can it be weighted enough? I think time will tell, once you have more proof in the pudding, because right now we don't have enough use cases. What it will take is, it'll have to take an AI interpretation that gets litigated, and then enough AI interpretations get litigated.
(27:39):
And they're like, look, the AI interpretation was right. It got litigated, and the result is what the AI also said. But that's going to take, like, years from now, because of the litigation path, to get lawyers comfortable with, because usually that's what happens. If you look at the human way, it's like, law firm partners are saying, look, here's my advice. And then it turns out to be true or whatever.
(28:00):
It keeps them out of trouble, or it goes to court and this is what it is. And then now you have more trust in that law firm partner, because you're like, okay, he knows what he's talking about. Look at his 10-year track record. But, like, right now AI doesn't have that track record yet. So I think once you start getting there, then you might get into a situation where humans still have opinions, but AI's opinion is probably better than some measure of some
(28:23):
humans.
And it may still not be as good as some other humans whom you hold in that higher esteem, basically. That's my thinking of how this sort of plays out.
Shikher Bhandary (28:32):
The current cohort of lawyers is probably harder to change their thinking, right? But then you get a new set of lawyers who are a bit more open, and then suddenly you get broad-based adoption. It reminds me of the whole, like, Waymo thing, right? We were seeing Waymo in San Francisco in 2015.
(28:52):
And now suddenly you've got this wave of... maybe it's also age, it's being comfortable without a driver, just things that take time.
Bryan Lee (29:06):
Yeah, and a lot of younger folk, like Gen Z folk, don't have driver's licenses.
Shikher Bhandary (29:10):
Yeah, exactly. And I was just looking at the stats: the waitlist for Waymo Austin is, like, over a million people.
Bryan Lee (29:19):
I'm on it.
Shikher Bhandary (29:20):
Yeah, I'm on it too. And a friend of mine got to ride it last night. He shared it with the group chat; everyone's like, even we need to do this.
Bryan Lee (29:27):
So we find that, even in our user base, it's the Gen Zers and the millennials that really can use our copilot immediately and just run with it. And then they do the testing, and they then often give feedback back to their boss, who might be the general counsel or a law firm partner, who's typically
(29:48):
probably the older generations, Gen X or Boomer, and then they're the ones who are the decision makers, and they actually don't go into the app to test it.
Shikher Bhandary (29:59):
All you need is a Gen Z lawyer to become partner, and then you're set. Seems like that's the adoption curve.
Bryan Lee (30:07):
So you need... yeah, that's the offshoot. So it's likely 10 or 15 years from now for some Gen Z partner. But it's funny you mentioned that, on adoption. 'Cause, and I'll share this on this pod because some listeners will find it interesting: because of the early startup phase of where we are, ironically, I was showing our product to finance friends, and
(30:29):
they were like, wow, I would use this immediately. And I was like, what do you mean you'd use this immediately? And then, 'cause a lot of... so, a combination of finance friends, corporate finance, for example: it's one use case where they often have to review a contract to just pull out
(30:49):
limitation on liability terms.
And then they model the risk.
This is my, some of my friendsback at some of the big tech
companies, and they're notlawyers.
Interpretations contract lookslike I just need to pull out
these terms and then put it intoa spreadsheet model basically
and they're like your tool doesthis for me and so Whereas the
lawyer is still going to readthrough the whole contract to
(31:09):
all this other due diligence Andthey're like, I just need a
better control f That's likereliable and I need to pull it
out.
And then your tool could eventell me in plain English what
does this actually mean?
And so actually we're chasingthis down as an experiment.
And then we actually talked withmy finance friends a little bit
more.
And cause we were actuallyexploring this on putting the
(31:31):
SEC database in there, with regulatory filings. And then he said that he, and some investment banking friends that we were talking to, have said, oh yeah, folks like that, as well as investor relations, would use this multiple times a day to basically pull 10-K filings, 10-Q filings, and then look for the right documentation, pull it out, like, I need to know what the RQ is, or I need to know who
(31:53):
owns 5 percent of this company, et cetera. And what we've built is actually able to do that really well, 'cause we started with legal documents. And we could actually then look at all these documents, like regulatory filings or other things like that, or earnings call transcripts as well, and
(32:13):
for a lot of financeprofessionals.
And this approach was looked at,but there's still like a legal
community, like M& A lawyers andcapital markets lawyers that
look at this information too,but actually that solves a very
interesting problem.
And that's something that we'reactually chasing down right now.
Jed Tabernero (32:28):
That's interesting, because I was going to bring this up later on. I looked at one of the products that you have, called DataGrid.
Bryan Lee (32:34):
That's right.
Jed Tabernero (32:36):
I work in this space. I mentioned I worked in manufacturing, right? One of the things that we do is we model what we're supposed to make and what we're supposed to buy. And guess what I spent a shit ton of my time on with the suppliers? Supplier contracts, right? These procurement contracts, looking for the damn terms. Sometimes I'm just literally looking for net 30, net 45.
(32:57):
Like sometimes that's it.
Bryan Lee (32:59):
No, you nailed it. You nailed it. So I actually think that's a very interesting learning that we've had: most lawyers know roughly where to look in that agreement, and I can look at those payment terms and know exactly what they say. But there's this lawyer-adjacent group which also has to look at documentation,
(33:20):
or regulatory products, or other things. And they don't have that same training, or don't need to look at the whole contract. They just need to have a better Ctrl+F, basically, and ideally one that's interpretive. That's exactly what DataGrid does. So maybe I can show you this after the call, and you can play around with it.
Jed Tabernero (33:37):
I think that would be interesting. I was looking at the columns, and the first thing I asked myself was, why can't Excel do this? Because I'm spending six hours making a database in Excel on these documents that I have to look at one by one. Okay, what column do I look at now? Okay, that's actually the payment structure. Okay, this is actually limited liability. Okay, I'm looking at all these terms.
(33:57):
The lawyers ain't helping me, 'cause they're expensive. Okay, that's expensive time. I can't get their time. I gotta go by myself and pull all these contracts.
Bryan Lee (34:06):
Yeah, I love that, Jed. So for us, DataGrid is actually really intuitive. You can ask a high-level question: I need to pull finance terms out of these contracts. Then it starts writing the column-level spreadsheet questions for you. So all the prompts are already pre-written. Then you can just go in and edit them; it'll probably understand what you're trying to pull out already: payment terms,
(34:28):
et cetera. You can actually tune the questions to be very exact. You can have a column that says, pull out exactly what the clause is; you can have another column that says, I just want to know the net days, and it'll just say 30 days or whatnot. And then I've done this with earnings calls as well, where I just say, how many times was AI mentioned in these earnings calls?
Shikher Bhandary (34:50):
Short that stock.
Bryan Lee (34:51):
No, it's very funny. I've got... when I run into things like Intel and Nvidia...
Shikher Bhandary (34:55):
Nvidia.
Bryan Lee (34:56):
Yeah, don't show that on video, and we know where Intel's going right now. But yeah, and it's really good with sentiment analysis too. You can just say, what was the sentiment analysis for the CEO: was he bullish, bearish, neutral, et cetera? And then I have another one that says, aggregate and summarize the top analyst questions so that I see a pattern.
(35:17):
But again, this is where I think DataGrid is pretty powerful, in a way where it goes from as precise as you want for extraction all the way to the other end, sentiment analysis, which is super cool.
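To make the pattern concrete: the column-per-question workflow Bryan describes, one high-level request fanned out into pre-written, editable column prompts run against each document, can be sketched as below. The column questions, model name, and file name are illustrative assumptions, not DataGrid's actual design.

import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical pre-written column prompts; a user would edit or tune these,
# from exact clause extraction down to a single number like net days.
COLUMNS = {
    "payment_clause": "Quote the payment terms clause exactly as written.",
    "net_days": "State only the net payment days (e.g. '30'). If absent, reply 'n/a'.",
    "liability_cap": "Summarize any limitation-of-liability cap in one sentence.",
}

def extract_row(contract_text: str) -> dict:
    """Run every column question against one contract; return one grid row."""
    row = {}
    for name, question in COLUMNS.items():
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model choice
            messages=[
                {"role": "system",
                 "content": "Answer strictly from the contract text provided."},
                {"role": "user",
                 "content": f"{question}\n\nCONTRACT:\n{contract_text}"},
            ],
        )
        row[name] = response.choices[0].message.content.strip()
    return row

# Illustrative corpus: one row per contract, one column per question.
contracts = {"acme_msa.txt": open("acme_msa.txt").read()}
grid = {doc: extract_row(text) for doc, text in contracts.items()}
print(json.dumps(grid, indent=2))

The same loop works for earnings call transcripts: swap the column prompts for counting mentions or grading sentiment, as in the examples above.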
Jed Tabernero (35:29):
Does it also contextualize, the DataGrid, for example? Beyond, let's just say, I'm looking for payment terms and I want to give it a bunch of procurement contracts; let's say that's a data set I'm training it on. But also, at the same time, maybe as an accountant, I'd like to understand, what is our fixed asset policy, capitalization-wise? Like the things that are
(35:51):
happening which rely on maybe a per-unit price negotiation or a bulk-purchase type of policy. And you're saying some of these tools are providing context, and some of these tools can actually give sources, right? And for us, we have a shit ton of documentation on just rules on everything, just common practices that we do.
(36:14):
And maybe, I don't know, maybe this is a step further, but just to understand, okay, is this stuff capitalizable, the stuff that I'm buying from the supplier? So is that data contextualized? Can that be trained on some local data sets?
Bryan Lee (36:25):
Yeah, I think that would have to be the next phase of where we are with DataGrid, but I think that is entirely possible. What you're describing is basically, run this analysis, but here's, like, a baseline of knowledge documents that I want to baseline against. It actually is part of this data room due diligence use
(36:47):
case that we're looking at, and yeah, so I think that's, like, the 2.0 of where this would go. But right now we're finding a lot of demand on the finance side, where we are exploring this use case, as well as putting copilot directly on the SEC database and earnings call transcripts to see how we can better improve that
(37:08):
finance research use case. I think there is a stronger demand and appetite that we've seen from that space, so we're just leaning into that, which is just what you have to do as a startup. And yeah, we're excited to see where that goes. I'm excited to explore with you, Jed, your latest project and how we can make that better.
Jed Tabernero (37:25):
Yeah, there are probably two general places that I would offer, maybe just as high-level suggestions, if you were looking at other areas to go and sell to. One is procurement. That's one big thing. I know we've mentioned finance professionals this whole time, but procurement will benefit a ton from that, at least for us, like, really large organizations. We have a really complex supply chain.
(37:47):
All of them have different terms. So I think that's one space. And then the other space I lightly mentioned earlier is accounting. That's a huge practice, a huge practice where a ton of folks require a lot of context. They have their internal policies that you can feed to the LLM. But there's also general policies that come out of PwC, Deloitte. These are all public stuff, stuff that you can train the model on.
Bryan Lee (38:09):
Yeah. I love it. I love it, Jed. Thank you.
Shikher Bhandary (38:13):
We need a place to... Jed was thinking of actually flying out to Austin. So we should just hang out and just talk through all the...
Bryan Lee (38:20):
Oh, for sure,
Jed Tabernero (38:20):
It'd be interesting to just talk through use cases.
Bryan Lee (38:22):
Yeah.
Let's do it.
Jed Tabernero (38:23):
And there...
Shikher Bhandary (38:24):
People, we are... we could be sitting here for six hours and just keep going. So we want to be respectful of your time. Coming to your technology stack, right? The word on the street is you have the big LLM companies: you have OpenAI, you have Anthropic, you have Cohere, you have all
(38:44):
these big players, Gemini and stuff. What stack are you using? And do you see the need to build a custom solution? I was just reading yesterday where, building a custom solution, a lot of folks are going that way, but it costs a lot of money.
(39:05):
It's $2 million plus for a custom-built LLM, because data is an issue and engineering talent is an issue.
Bryan Lee (39:15):
I think it does depend on your application. But even last night, I think I was reading something where, initially, there was a lot of flak on people building on these original models, and now they're looking at the analysis, and it's, look at Perplexity: it's built on these big models and it's done really well, I would say. And in the legal space, you want to map things back to the product
(39:36):
experience for the user and what they're looking for. And I think we thought about doing a custom model. The reality is we don't need a custom model to achieve the end result for the user. It would just, it actually adds other complications, where, in a lot of companies, they're actually very comfortable with OpenAI because their infrastructure teams have
(39:58):
evaluated it, for example, or maybe they're already using Microsoft Copilot internally. And so we're using the developer enterprise APIs, where OpenAI doesn't train on your data. That gets people very comfortable.
Whereas we've actually heard people will have more pushback if you have a custom model, where they don't understand how to test it, and they don't know...
(40:19):
and then, what's the value benefit of that custom model? It's very hard to prove what that is, because OpenAI, for legal use cases, does a pretty great job at reasoning. And then I think the extra effort we do is more on the RAG layer, plus classic machine learning that we can do for each customer, which is actually where you see more of that
(40:39):
last mile of results, and better UI is probably where we want to invest. So we have a full-time designer on the team, which was an investment we made early, and we always get compliments that our UI is really intuitive. And so I think that's where we're making those trade-offs. But, as my CTO mentions,
(41:00):
at one point it will make sense for us to build a custom model, when costs get too high for maybe using the OpenAI model, or there are performance issues, because it's just too big of a model for us and we don't really need it. And that would probably be the main reason we'd switch; that's the rationale. But for now, I think it's okay. And even as we're looking at custom data sources, we don't
(41:22):
have to put that at the model level. You're just putting it on this middle layer over the model, effectively.
Shikher Bhandary (41:28):
That's great to know. And just to give the audience insight into the RAG technology that Bryan mentioned: RAG is retrieval-augmented generation, and you can think of it as a way to make AI more grounded, so the answers are more accurate. With RAG... and Perplexity is a big user of RAG,
(41:49):
because it actually uses, like, a search database, like how Google Search works. So it's grounded with reality, and that way you reduce the risk of the hallucinations and the guesswork that has gone very viral on social media. So RAG is something that's probably here to stay.
(42:11):
And it's pretty cool; it's probably critical for you to have RAG in your solution.
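For listeners who want the mechanics, the retrieve-then-generate loop Shikher describes reduces to a few steps: embed the knowledge docs, find the ones closest to the question, and make the model answer only from that retrieved context. Here is a minimal in-memory sketch, assuming OpenAI's embeddings and chat APIs; a production system would use chunking and a vector database instead, and the docs and model names are illustrative.

import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Toy knowledge base: the plain-language policy docs a legal team onboards.
docs = [
    "NDAs under 2 years with standard terms can be signed by any director.",
    "All marketing copy naming customers requires privacy team review.",
]

def embed(texts: list[str]) -> np.ndarray:
    out = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in out.data])

doc_vectors = embed(docs)

def answer(question: str, k: int = 1) -> str:
    """Retrieve the k most relevant docs, then generate a grounded answer."""
    q = embed([question])[0]
    # Cosine similarity between the question and every doc vector.
    scores = doc_vectors @ q / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q)
    )
    context = "\n".join(docs[i] for i in np.argsort(scores)[-k:])
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "Answer only from the provided policy context; "
                        "say 'escalate to counsel' if it is not covered."},
            {"role": "user",
             "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(answer("Can I sign a standard one-year NDA myself?"))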
Bryan Lee (42:19):
Yeah. Because even if you train a custom model here, I think RAG will probably deliver more accurate results than your custom model, because of the nature of the LLM: it can still hallucinate, even if you've customized it. Yeah, I agree with you that RAG is here to stay, until we maybe figure out something better. What's always fascinating to me is this idea of using AI as
(42:43):
judges on outputs and things like that. We have a little bit of that in our systems, but I'm interested to see more development in that space, from folks that are pushing the limits there, because I think that will continue to be really helpful: AI as judges in the system as well.
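The "AI as judge" idea Bryan touches on is typically a second model call that grades the first model's draft before it goes anywhere, which pairs naturally with the conditional auto-approval mentioned earlier in the episode. A minimal sketch follows; the rubric, score threshold, and JSON shape are assumptions, not how Ruli's system actually works.

import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

JUDGE_RUBRIC = (
    "You are a reviewing attorney. Grade the draft answer against the source "
    "context. Return JSON: {\"grounded\": true or false, \"score\": 1-5, "
    "\"reason\": \"...\"}. Mark grounded false for any claim not in the context."
)

def judge(context: str, draft_answer: str) -> dict:
    """Second-pass model call that scores a first-pass answer."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": JUDGE_RUBRIC},
            {"role": "user",
             "content": f"CONTEXT:\n{context}\n\nDRAFT ANSWER:\n{draft_answer}"},
        ],
    )
    return json.loads(response.choices[0].message.content)

verdict = judge(
    context="NDAs under 2 years with standard terms can be signed by any director.",
    draft_answer="Yes, any director may sign a standard one-year NDA.",
)
# Route ungrounded or low-scoring drafts to a human lawyer instead of auto-approving.
if not verdict["grounded"] or verdict["score"] < 4:
    print("Escalate to human review:", verdict["reason"])
else:
    print("Auto-approve:", verdict)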
Shikher Bhandary (42:58):
That's a really interesting takeaway: that commonly used LLMs like OpenAI's, with a custom RAG for your purpose, in this case Ruli AI, might in many ways be better than or equivalent to a custom-built LLM. This conversation has been amazing, Bryan.
(43:19):
Thanks for walking through the pain points within the legal system. We are not experts; you're the subject matter expert, and thanks for walking us through where those friction points are, where the inefficiencies are, and how your solution can actually be used to make lawyers' lives not as miserable as they were. I think as we wrap this call, I'll give the founder the stage to
(43:42):
talk about the team, talk about where the audience can find them and learn about the new product features.
Bryan Lee (43:53):
Awesome. No, thanks to you both for having me. Really enjoyed this call. It's just been super fun and thought-provoking. Yeah, where we are: we've raised a really great pre-seed, and I'm probably going to raise a seed sometime next year, a larger seed. We've got a pretty good handle on the legal use cases that we've built out, and we're supporting in-house counsel.
(44:14):
And we also have some large law firms that have been interested in using our copilot side. But I think, as we touched on a little bit with Jed, we're really going hard in uncovering some of these finance professional use cases. So if you're in investment banking, private equity, or maybe corporate finance, let us know. I would love to get in touch.
(44:35):
Take a look at the DataGrid feature at ruli.ai, as well as the copilot side, and let us know what you think. And if there's something we can help you out with, we'd love to just jam more and brainstorm with folks. If you're in Austin, give a shout-out to me. I'm based in Austin. My co-founder is in SF. If you're in either of those places, reach out; we'd love to
(44:56):
get in touch.
Jed Tabernero (45:01):
Sweet. That wraps us up. The information and opinions expressed in this episode are for informational purposes only and are not intended as financial, investment, or professional advice. Always consult with a qualified professional before making any decisions based on the content provided. Neither the podcast nor its creators are responsible for any
(45:24):
actions taken as a result of listening to this episode.