
June 6, 2024 33 mins

In this episode we talk about litigation automation, and another case in which innovators are using artificial intelligence to transform legal operations.

We also speak with our guest about his transformation from a litigator to a tech entrepreneur, and how the company he co-founded is using modern tools to do in minutes what used to take hours. These tasks include responding to demand letters, complaints, and discovery requests, and executing matter profiling and data analytics, all of which are traditionally rote and repetitive and time-consuming undertakings.

He is James M. Lee, co-founder and CEO of LegalMation. James conceived the idea behind LegalMation -- which is to leverage the power of generative artificial intelligence to transform litigation and dispute resolution -- while managing a litigation boutique.  An experienced and recognized litigator and trial attorney, James received his J.D. from Stanford Law School. 

Also joining me, I’m pleased to say, is the ever-inquisitive and always attentive Sara Lord, legal analytics professional extraordinaire, who raised questions from the litigator's perspective. 

I hope you enjoy the conversation!

*******

This podcast is the audio companion to the Journal of Emerging Issues in Litigation. The Journal is a collaborative project between HB Litigation Conferences and the vLex Fastcase legal research family, which includes Full Court Press, Law Street Media, and Docket Alarm.

If you have comments, ideas, or wish to participate, please drop me a note at Editor@LitigationConferences.com.

Tom Hagy
Litigation Enthusiast and
Host of the Emerging Litigation Podcast
Home Page
Follow us on LinkedIn
Subscribe on your favorite platform. 



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Tom Hagy (02:49):
What is your background and what brought you to LegalMation? What's your path there?

James Lee (02:56):
I've been a lawyer for close to 30 years. I started my career at Morgan Lewis and Quinn Emanuel, and then, after about five years of being an associate, I started a litigation boutique called LTL Attorneys with a few friends. That boutique grew to about 40 lawyers and we were doing some

(03:16):
really cool stuff.
We had developed a great reputation for patent defense cases, alternative fee arrangements on business contingency cases. We were doing high-profile cases like the Snapchat co-founder dispute case, which was actually quite fascinating. And then, I guess about five, six years ago, I attended a one

(03:41):
week program at Harvard Law School with Scott Westfall and David Wilkins. I don't know if you know either of those guys, but you should have them on your podcast for sure. What this one-week program does is invite law firm managing partners and practice heads to essentially

(04:01):
learn about all the challenges of managing a law firm. We're talking about most of the AmLaw 200 sending their managing partners and practice heads to this program. One of the days was focused on how AI had been infiltrating the medical field and that it was

(04:24):
just a matter of time before it was going to sort of penetrate the legal industry. I was looking around at all the other managing partners and I could just see the look in their eyes. It was a combination of fear and greed that I saw. And I was fearful. I was fearful that the larger firms would basically steal our

(04:44):
lunch on this. So I went back to my partners and I said, hey, we've got to do something about this. Our firm had done a number of software development cases, so we had some in-house expertise, and we just started to experiment with the proper applications of AI at that time. As a result, LegalMation was sort of born

(05:07):
out of that concept, where we were looking at ways of automating certain brute-force activities in litigation.

Tom Hagy (05:14):
That leads to my next question. Can you elaborate a little more on the initial problems you were trying to solve, and the challenges you faced in trying to solve them?

James Lee (05:27):
Well, at that time I'm talking about older AI, machine-learning approaches, which are still quite valuable today, by the way; we'll talk about that later, I'm sure. The beauty of these automated approaches is that they target brute-force activities

(05:47):
, stuff that people are doing over and over again, very repetitive. As you guys know, in the practice of law half of the work is just procedural, stuff you just have to get through, and that's perfect for AI and

(06:15):
machine learning. And so we were asking ourselves, okay, from our perspective as litigators, where can we point this awesome technology? Our philosophy was, well, let's start at the very beginning: when you get a lawsuit, can we answer it, and then can you draft targeted discovery questions

(06:36):
based on the allegations of the lawsuit. That was sort of our first project. And sure enough, after about three months, we were able to get a pretty good alpha version working. One of our firm clients was Walmart, so we went to them and showed them what we had, and they were just astounded. And you know Walmart has so much employment and personal

(06:59):
injury litigation.

Tom Hagy (07:00):
They do yeah.

James Lee (07:02):
Yeah, they were a great first. They were our client from a firm perspective, but they were also LegalMation's first customer. So we rolled it out for their employment and personal injury cases. And this is work that takes eight hours or so. When you look at a lawsuit you have to read it, digest it, answer it, think about the affirmative defenses

(07:25):
you want to assert, and then you want to draft targeted discovery questions. That's work that takes six to eight hours, maybe more, maybe less, but with our system we were doing it in two minutes. And when I showed litigators, their jaws would drop and

(07:45):
you could just see that they knew the world was going to change at that point.

Tom Hagy (07:49):
Right, yeah. Really hit them. I'm looking at your team on your website. First of all, why did you shave your beard? Secondly, who is at LegalMation? Who are the professionals that make up your company?

James Lee (08:08):
Yeah, right now there's a group of lawyers. I'm proud to say that we have a number of JDs on our team, and that's always very helpful because they do have some subject matter expertise. They can talk shop. They understand the pain points.

(08:41):
They would say we need more data, and I would tell them no, you're just going to get the same signal-to-noise ratio at that point, because what they don't understand is that they somehow think law is a science where there's a right and wrong answer. But, as you guys know, there are ranges of right and

(09:03):
there are ranges of wrong, and you have to architect a system that captures these ranges, these gradients of right and wrong, and classifies them properly in order for a model to work. So I find it immensely helpful to have lawyers on the

(09:25):
team. Then there's our AI data team. It's headed by Martin Will, who was actually one of the early Amazon Alexa architects and was also heading SAP's new venture group for a while, so we're very fortunate to have him and the people he manages. We have a development team. They're mostly all US-based, which is important, I think, and

(09:48):
they're really good about maintaining the nuts and bolts of our platform. I think we have 99.9% uptime, which is really good, so it's really stable for the clients that use our system. And then we have a great product and customer success team. They're really bright, really sharp guys and gals, and they make sure that the trains are running on time for

(10:09):
our clients.

Tom Hagy (10:12):
Good. And Sara, feel free to chime in. I wanted to, but I'll keep talking so you can't. I wanted to go through your offerings, not necessarily do a commercial, but these things sound really fascinating and useful, so we can just go through them one by one, and you tell me if it gets repetitive. Complaint response. How does this work?

James Lee (10:36):
Well, we make it as easy as possible because we're dealing with lawyers and paralegals. The beauty of our system, I think, is that we provide an end-to-end platform to produce a response or an answer. If you were to ask me what part of our system is really AI and machine-learning focused, I'd say 20, 25%. A lot of the other stuff that we had to develop was really

(10:56):
designed to address a lot of feedback that we got from users, because if you make it even a little difficult to use, lawyers and paralegals will revert to their old ways. There's no doubt. So on our system, the idea is that you take a PDF copy of the lawsuit and drag and drop it onto our system.

(11:18):
There is a confirmation page where we do want you to confirm spelling, possible OCR issues from bad copies. But after that preliminary step, which takes 15 seconds at most, you just press go and boom.

(11:40):
Now you have a productivity tool with the ability to pre-answer all of the allegations based on historical data that a client gives us. We provide training models for them, and then they're able to point and click, press go, and get draft documents in Word, or final draft documents that they can just print to PDF and file at that point. But we're really trying to expedite the entire

(12:03):
workflow in a way that I think makes paralegals and lawyers more likely to use the system.

Tom Hagy (12:10):
OK, yeah, ease of use is huge. In the crush of a day, any of us who gets some new system or new app, you get in there and you're like, oh, I've got to do this, I've got to do that. If you're 20, 30 minutes in and still haven't gotten anywhere, you say the hell with it, just move on. But, Sara, that's a good point for you.

(12:32):
You were asking me about some of the data behind the platform.

Sara Lord (12:38):
Yeah, a couple of questions. You can't really talk about a tool these days that drafts responses, drafts motions, whatever it's writing, without hitting on hallucinations. How do you address the hallucination issue so that lawyers are confident using this tool and in the results they're

(13:03):
receiving?

James Lee (13:04):
Yeah.
I'm very proud to tell you guys, I like to brag, that we're never wrong when we provide answers to our clients, because we are only using their training data, their data, to build out models that identify the allegation, comprehend it, and then provide them what the response is.

(13:25):
Now let me back up and give you more context on that. When we were initially developing our platform, what we discovered is that lawyers think of themselves as artists. This goes back to the gradients of right and wrong.

Sara Lord (13:45):
They tend to have art style degrees, I mean.

James Lee (13:56):
Yeah, well, actually, one lawyer complained about the transition words between objections that our system was providing; he thought it should use some other transition words. And this is where it gets kind of, I won't say silly, but the reality is that because lawyers want to put their personal touch on a lot of the responses, you have to build biases into the model.

(14:22):
I've said this before publicly, and I'll say it again: we build bias into our systems, because if we did not use the types of preferences or decisions that an organization likes to use, they just wouldn't use it. In other words, if you built a compromise model, which is really what I think people are talking about, where you have

(14:45):
different organizations contributing, quote unquote, data, what you end up having is a compromise model that no one's going to be very happy with. So what we do is actually build out specific data warehouses for the clients. For instance, we have a number of auto manufacturers as clients, and they face the same opponents.

(15:07):
What's really interesting is that they respond very differently, philosophically, from one organization to another. If I were using some sort of joint community model, no one would be happy. So we're building out specific data warehouses for specific clients, and whenever they are facing

(15:28):
particular allegations or discovery requests, we're pulling from that pool of possible responses. So the risk of hallucination is almost zero at that point.

Sara Lord (15:40):
Okay.
So then, if you're utilizing that contained data set to train your model, it feels like it's really going to work best for firms that have a recurring case type. So when I get a novel case in something I don't generally work on, how well does this tool work for that?

James Lee (16:03):
Yeah, we are primarily priming this for specific domains, so there's a certain swim lane. In other words, it works really well in that swim lane, but if it's another case type, say a personal injury case, it goes into another swim lane because it has different models that it's pulling from.

Sara Lord (16:23):
Okay, and because you're using only that firm's information, that firm's source data: if you have another client that does a lot of work in personal injury and I don't, that knowledge from that personal injury data set doesn't flow into the LegalMation that I use

(16:48):
It doesn't.

James Lee (16:49):
And here's something else that is not surprising, I think, for all of us on this call and the people who may be listening. Lawyers think that their own work product is the best, and I get this all the time from certain lawyers and law firms that are worried that their competitors are going to steal from them.

(17:09):
They're not going to steal from them. They don't believe that any other firm's work product is going to be better than theirs. It's really incredible. Whenever we try to introduce this concept of a community model, they reject it almost instantly. They don't want that.
They want their own stuff, they want their own language, their own tone, their tenor.

(17:29):
It's just that they're very particular in terms of how they want their work product to look.

Tom Hagy (17:34):
In defense of people with low opinions of themselves, I do know a lot of lawyers who would grab pleadings and things and borrow generously from them. Sara, did you have a follow-up to that?

Sara Lord (17:47):
Yeah, just within a firm, especially a larger firm. Larger firms may have the mass of data required to effectively train the system. Admittedly, there are smaller firms that work in one swim lane and will also have a good set of data, but for firms that

(18:10):
are large enough that you have different partner personalities, all of whom have their own language that they like to use, their own style, does the tool have any ability to provide results customized to the actual lawyer? Yes? Awesome, tell me about that.

James Lee (18:31):
Yeah. So essentially, when we collect data and store it, we're adding a metadata feature label for the group or the lawyer. And when the user comes in and wants the knowledge management selection of a particular group or lawyer,

(18:55):
you can basically switch that on, and it would then pull only from that selection.

Sara Lord (19:02):
Great. Tom, I'll let you talk again.

Tom Hagy (19:04):
Oh no, thank you, Sara. I appreciate it. I just want to ask you about one thing you said earlier. You used the word bias. You said you build bias. Could you elaborate? For people who hear the word bias, it immediately suggests negative things. What did you mean when you said that?

James Lee (19:23):
Yeah, like I said, there are leanings that certain lawyers have about the way they may want to answer certain requests, certain perspectives. And
like I said, having worked particularly with lawyers, but I also believe this to be true of general users across the board, I think you need to take into consideration their

(19:49):
style, their preferences, their views, their worldview of how they want to express themselves. In order to do that, you need to build out these mini models, which is actually what's happening in the enterprise world, just so you guys know: the use of these large language models is probably going to give way to

(20:10):
smaller, domain-specific models. I think what's going to end up happening as you move forward is people are going to begin to realize that in order to have more successful results or outcomes, you're going to have to build out more preferences that a model will have to pick up in order for a

(20:31):
user to start using it. Otherwise, like I said, a compromise model is going to seem novel at first, there's no doubt, but when companies and enterprises try to implement them, they're going to be very frustrated, which is actually what's happening with a lot of the large language model experiments that are going on

(20:52):
right now.

Tom Hagy (20:53):
Yeah, it just reminds me of a couple of things, as I've written for other people for a lot of my career. Learning individual attorneys' styles is tricky. I've gotten things back like, "I would never talk like that."

James Lee (21:10):
No, I mean, look, as an associate it was very difficult drafting stuff for different partners. What I ended up doing mostly was looking at exemplars from the partners so that I could conform the style and tone to what he or she might expect or want.

(21:31):
People don't realize it, but that sort of work process has been going on for decades.

Tom Hagy (21:39):
Now you have these other services: discovery response, matter profiling. I guess it's all basically the same idea, but tell me how they're different, or if they are.

James Lee (21:49):
Yeah, they're all basically the same in the sense that we were starting at the start of a lawsuit and asking, as the dominoes begin to fall, what's the next natural process? All of them basically do the same thing. There's an input demanding something. You have to respond to discovery questions.

(22:09):
You have to respond to a demand letter, you have to respond to a lawsuit. There's some input document or documents that you need to respond to. We tie that into knowledge management data warehouses of the historical decision-making that organizations have, which enables a user to

(22:40):
very quickly go through and respond to the input document, so that at the end of this process you have a fully fleshed-out document to serve or file.

Tom Hagy (22:45):
Matter profiling. I think that's extremely important. Could you talk a little about how matter profiling is done traditionally, and with your service?

James Lee (22:55):
Traditionally it's a highly manual process, and I hear horror stories of organizations that try to do it manually, maybe in order to build out some training data, and what they find out is that, number one, it's inconsistent. Number two, it's hard to scale because of the turnover

(23:17):
on these human teams, because it's a low-skill task and people get very bored. What we do is use a combination of fully automated processes: we're extracting the data and then building out models to determine what each data field is. It could be incoming forms, it

(23:41):
can even be documents like settlement agreements. We're able to figure out from the document itself how much was actually paid in settlement. We're working on a project right now where we can determine the arbitration rules that are involved in an arbitration document, and whether it's one arbitrator or three arbitrators,
(24:03):
or whether it's silent. We can do all those types of things that again take a lot of time manually and now can be done automatically. And that's really encouraging to see, because now you can start building what Martin Will likes to call a data flywheel, where

(24:25):
you can imagine, as a case or document comes in, you collect all that data, and then it's pushed to the next series of activities throughout the entire life cycle of that lawsuit.

Tom Hagy (24:40):
You all just came out with something new; a lot of excitement around demand letter responses. Tell me about the problem you're trying to solve there, and again, how it's been done traditionally and how you do it with your service.

James Lee (24:55):
Yeah.
So again it's the same sort of thing: it's an input that requires a response, and we want to capture the knowledge management stores of an organization and then help the user generate an output. The next thing in the litigation lifecycle is demand letters, and, by the way, the number of demand

(25:18):
letters that an organization has to respond to is much higher than the number of lawsuits filed in America. So, for instance, one of the solutions you're talking about is the ability to answer an EEOC administrative charge document, where the output is basically a letter, a letter
(25:40):
form of a response. On our system, the user is able to upload not only the charge document but any supporting information, and then we're able to process that, identify all the key issues, again a version of the matter profiling, and then offer suggestions about what the legal arguments

(26:02):
ought to be. This is another good example of where an LLM actually is appropriate, where you begin to provide short summaries of certain key aspects of the case so that it really speeds up the process of responding to one of these letters. The feedback that we've been getting is that this is the type

(26:23):
of work that may take four to eight hours to draft a letter, and on our system they're getting it out in less than an hour. So it's working out great. But again, demand letters come in many flavors. It's not only EEOC documents; it can be things like insurance

(26:44):
denial letters, all those types of things that make this a very robust area for automation.

Tom Hagy (26:53):
So you mentioned Walmart, you mentioned the insurance industry. Who are the best targets you see for this kind of thing? Walmart obviously has a lot of cases. Insurance companies have a lot of cases. Who else?

James Lee (27:07):
I have a funny story about this. It has to do with me washing dishes. I was washing dishes one night, and my wife makes fun of me and says, hey, you run an automation company, why aren't you using the dishwasher? And for two weeks I could not answer this question
(27:27):
and it really bothered me. I was like, why am I not using a dishwasher? And I realized I just don't do enough dishes. It's not enough of a pain point for me to bend down and stack the dishes in a dishwasher. I tell you that story because for any automation to work, an organization has to have a lot of dirty dishes.
(27:48):
Who has a lot of dirty dishes, i.e., lawsuits and these types of claims? Well, insurance is the largest by far. You talk about the number of personal injury cases they have: insurance, motor vehicle, premises liability. They are by far the number one spenders. I think the next tier is large manufacturers, with their warranty litigation and consumer litigation, another large group.

(28:12):
Again, just a lot of dirty dishes there. And then I would say any large organization, any Fortune 100 company that has a lot of interactions with consumers, with government, anywhere they have a lot of litigation. The financial industry, the services industry has a lot.

(28:33):
Those are the primary types of organizations where automation works really well. Right, you've got a lot of dirty dishes.

Sara Lord (28:41):
When you're evaluating fit, i.e., when you're talking to a new potential customer and trying to decide if this tool would make sense for them, are you looking for a specific amount of data? Is there a specific mass that they should have when it comes

(29:02):
to getting started with your tools?

James Lee (29:05):
Yeah, I think that's a great question, because we did this experiment with one pharma company where the question was, how many cases or how many examples of a lawsuit do we need to see before the system will perform in, I'm going to call it, a 90%-plus accuracy range? And what we

(29:26):
found out was, if we're talking about one pharmaceutical drug, in other words the contours of that dispute are very well defined, because it's one drug with very well-defined issues, as little as 100 lawsuits and answers was enough to get that level of accuracy.

(29:47):
For other domains, let's say employment, where you have up to 20 different causes of action and within each cause of action many subtypes, you're going to require a lot more than that, maybe 500 to 1,000 minimum, before it really starts acting in a way that is fairly

(30:09):
predictable. Otherwise you end up with some wonky results and some inconsistencies. A sliding scale is maybe the best way I can describe it.

Tom Hagy (30:18):
Good. Well, that's all I had. I just want to know what's next. Is there anything coming out that we should be looking for from you guys?

James Lee (30:28):
Yeah, we're building out a subpoena response tool that should be coming out in a couple of months. We're also releasing a deposition assistant tool, and I realize there are a couple out there in the market already. But one of the great things we discovered about our system is that all the know-how we've built in on

(30:50):
responding to discovery, for instance, and being able to find semantic similarities, means that we can increase the signal and decrease the noise when sending materials out for analysis. One of the exciting things about this is the ability now to look across dozens and dozens of depositions for consistencies

(31:11):
or inconsistencies. And one of the things I'm really excited about, you'll be the first to know about this in public, is that we have a project with the California Appellate Project this summer. Who are they? Well, they are the death penalty appellate lawyers that

(31:31):
the state of California funds. In the state of California, every time there's a death penalty conviction the defendant has an automatic appeal, and this organization, which is basically a law firm, is tasked with providing an appellate defense. That means going through thousands and thousands of pages of

(32:00):
transcripts for potentially relevant testimony that can help in overturning some of those death penalty convictions. We have a couple of fellows from Stanford and Georgia Tech who will be helping with that, and, apart from anything we do on behalf of enterprises, this is just a really cool, interesting use case where, to

(32:21):
the extent that we can find real evidence of discrimination and bias in testimony, in closing arguments and opening statements, and in voir dire, it's going to be a great way for us to help out marginalized people. It's just one of those things that I think is pretty cool to

(32:44):
do.

Tom Hagy (32:44):
That's important work. Sara, did you have anything else? I'm going to let James get back to it.

Sara Lord (32:49):
Sounds good. This has been a great discussion, so I appreciate you inviting me to participate.

Tom Hagy (32:55):
Well, James Lee, thank you very much for talking to us.

Sara Lord (32:57):
All right.

Tom Hagy (32:57):
Thanks.