
May 15, 2023 47 mins

On this Rebroadcast episode of ADJUSTED, we welcome a special guest, Matt Murphy, Vice President & Managing Actuary at Berkley Industrial Comp. Matt discusses all things data and decision-making: risks, rewards, and nuances.

Season 5 is brought to you by Berkley Industrial Comp. This episode is hosted by Greg Hamlin and guest co-host Mike Gilmartin, Area Vice President of Sales & Distribution for Key Risk.

Comments and Feedback? Let us know at: https://www.surveymonkey.com/r/F5GCHWH

Visit the Berkley Industrial Comp blog for more!
Got questions? Send them to marketing@berkindcomp.com
For music inquiries, contact Cameron Runyan at camrunyan9@gmail.com

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Greg Hamlin (00:12):
Hello, everybody, and welcome to ADJUSTED. I'm your host, Greg Hamlin, coming at you from beautiful Birmingham, Alabama, and Berkley Industrial Comp. I'm excited to share this rebroadcast with you. We did an episode over a year ago on data-driven decision making with Matt Murphy. Matt Murphy is our actuary, and I often joke that he's a bit of a unicorn, in that a lot of the

(00:35):
actuarial people I've met speak at a level I'm not able to understand with my claims background. Matt does such a great job breaking down complicated ideas and explaining them to those who may not have a graduate degree in mathematics. I really loved this episode, because Matt

(00:56):
talks about the importance of data-driven decision making. I have the opportunity to work with Matt on a daily basis here at Berkley Industrial Comp, and I feel very privileged to have learned from him in my time here at Berkley. I hope you do too with this episode. So with that, we'll move to the episode. Hello, everybody, and welcome to ADJUSTED. I'm your host, Greg

(01:17):
Hamlin, coming at you from Berkley Industrial Comp and Sweet Home Alabama, and with me as my co-host for the day.

Mike Gilmartin (01:27):
Yep, Mike Gilmartin here, coming to you from Greensboro and Key Risk, and excited to be here, Greg.

Greg Hamlin (01:32):
Awesome. Always glad to have Mike along with us for the ride. We have a special guest today, Matt Murphy, who is the Vice President and Managing Actuary at Berkley Industrial Comp, and we're excited to have him with us today. Matt, why don't you say hey to everybody?

Matt Murphy (01:51):
Yeah, hey. Good to be here with Hambone, the Pod Father himself, and glad to be on here.

Greg Hamlin (01:57):
Oh, it's great to have you. We're glad to have you back. So today we're going to be talking about data-driven decision making. But before we get too far into that, Matt, I wanted to ask you: how did you get into the industry? Did you know when you were a small child in kindergarten that you were going to be a managing actuary at an insurance company?

Matt Murphy (02:19):
Yeah, good lord, no. Having been in the industry for a while, I find that the vast majority of us fell into insurance, right? None of us were really sitting there as kids, like you said, imagining studying for actuarial exams and a nice rewarding career in insurance. So I actually started, I went to college and,

(02:40):
being from New Orleans, I went to college in the Bronx, New York, at Fordham University. And I was a student who was good at physics. In high school, I think a lot of people found the subject difficult, whereas I felt like I could do the work. So I decided to make that my major in undergrad and graduated

(03:01):
with a physics degree, headed back to New Orleans, where I kind of had this summer after I graduated to figure out what the heck I was going to do. Was it going to be grad school, which really was where I was sort of tilting? And then, just serendipitously, my father went

(03:22):
out to a dinner and was seated next to the president of an insurance company, and basically said, yeah, I've got a deadbeat son at home who has no idea what to do with his life. The president of the insurance company said, well, why don't you have him come in and do an interview? I think, in hindsight, somebody had just

(03:46):
impressed upon him that they needed an in-house actuary. So I think he saw my physics degree and the advanced math classes I had already taken and said, hey, this might be our key to getting that. And you guys might not know, but the actuarial field is replete with these really onerous exams in

(04:07):
order to get credentialed. So it was something I didn't even really know about, the industry or the actuarial profession, until this. And then I said, okay, yeah, I'll sit down, I'll take a couple of tests on these mathematical concepts. And I obviously did a little

(04:28):
bit of reading up on it and saw that if you can pass these onerous exam requirements, it's a very lucrative and rewarding career. So I jumped in from there and somehow made it through all the exams. And about mid-2017, I jumped from New Orleans and joined Berkley Industrial Comp,

(04:53):
which was American Mining at the time. But that's kind of my story, how I got here, and obviously I've been steeped in data along the way.

Greg Hamlin (05:04):
So how long did it take you to pass all those exams? That seems like a nightmare.

Matt Murphy (05:09):
Yeah, it is a nightmare. I definitely failed a few of them. I know there are some folks who never fail a single one, so I guess it was good character building to learn to fail exams again. I'd say I got my FCAS, which is the fellowship, in 2016. So, all told, it was

(05:33):
about eight or nine years from first exam to last, which was pretty terrible.

Mike Gilmartin (05:42):
My brain hurts just thinking about what goes on in your brain. I can't comprehend what you think. I say that to Doug Ryan all the time: you guys just think on a different wavelength. For listeners who maybe don't know, and I know we're going to dive into the data aspect of it, but what do actuaries do? What is your day-to-day job?

Matt Murphy (06:05):
Yeah, yeah, I get asked this a ton, especially living in the Southeast, where people have almost never even heard of the title "actuary." I like to just say, hey, we're the glorified mathematicians of the insurance companies. I would say there are really two main prongs to being a property casualty actuary: the reserving side and the

(06:27):
ratemaking side. We're in an interesting industry where, a lot of times, you don't know the cost of goods sold. If you're serving up hamburgers, you know the cost of the meat, the cost of the buns, the overhead, and so on before you go in. Ours is an interesting industry where we don't know what the hamburger is going to cost when we sell it. So you really do need the actuary to come in and

(06:48):
estimate, based off of historical data, and maybe industry data if you're getting into a new line of business, what that cost is going to be. So we're instrumental in setting those rates, or the premium that we're going to charge to take a risk on. The other side is the reserving side. Greg Hamlin and I talk quite a bit about this in our company,

(07:12):
but it's basically, once we've had the claim: okay, how much is it going to cost us now? How much do we need to hold on our balance sheet as a liability to eventually pay this claim at ultimate? That's what we call it, we say "ultimate." Those that have been in claims know that we may put up a reserve on a claim when we first see it, and several things can

(07:34):
happen to the claim: it can go south, or we can get a favorable result. But in the end, it's the job of the actuary to take an inventory of all of our claims, our open claims, and estimate, hey, this is what it's going to ultimately cost us at the end of the day, when it's all said and done. So those are the two main prongs, I would say: ratemaking and reserving. And then, as Greg can also attest, we also just fill in

(07:57):
the random in-between. Any sort of data request from any department usually falls on the actuary to figure out how to do, or at least it does in my experience.
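To make the reserving idea a bit more concrete, here is a minimal, purely illustrative sketch of one textbook approach to estimating "ultimate" losses: the chain-ladder method applied to a tiny made-up loss triangle. The numbers and the choice of method are assumptions for illustration only, not a description of how Berkley Industrial Comp actually reserves.

```python
# Chain-ladder sketch: project each accident year's paid losses to "ultimate"
# using average age-to-age development factors. All figures are invented.

# Rows = accident years; columns = cumulative paid losses at 12, 24, 36 months.
triangle = {
    2019: [1000, 1800, 2100],
    2020: [1100, 2000, None],   # not yet evaluated at 36 months
    2021: [1250, None, None],   # only the first evaluation is known
}

def age_to_age_factor(tri, col):
    """Average growth from evaluation `col` to `col + 1`, over years where both are known."""
    num = sum(v[col + 1] for v in tri.values() if v[col + 1] is not None)
    den = sum(v[col] for v in tri.values() if v[col + 1] is not None)
    return num / den

factors = [age_to_age_factor(triangle, c) for c in range(2)]

for year, vals in triangle.items():
    latest = max(i for i, v in enumerate(vals) if v is not None)
    ultimate = vals[latest]
    for f in factors[latest:]:          # apply the remaining development factors
        ultimate *= f
    reserve = ultimate - vals[latest]   # what still needs to be held on the balance sheet
    print(f"{year}: paid {vals[latest]:,.0f}, estimated ultimate {ultimate:,.0f}, "
          f"indicated reserve {reserve:,.0f}")
```

The output is the "what is this ultimately going to cost us" number per year; the indicated reserve is the gap between what has been paid so far and that estimate.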

Mike Gilmartin (08:11):
It does here too. And the fact that you guys do math is a little scary.

Greg Hamlin (08:23):
Yeah, you know, that's why I picked claims. Math is important, and we have some piece of that, but I'll say that wasn't my first strength, although you can get good at anything if you put your mind to it. I definitely would not be able to be in your shoes, though, Matt, with the amount of high-level math that you're doing on that end. So, one of the challenges, I assume,

(08:47):
is that as an actuary, in a lot of ways you're trying to predict the future, or get a good idea of what that's going to look like. What are the challenges that come along with that, for you to really be able to provide a good analysis of what to expect going forward?

Matt Murphy (09:07):
Yeah, so I think it's a process. I think everybody likes to jump to the end really quick, right? And say, oh, great, why don't you just get out your crystal ball? Why don't you augur the future for us here, read the bones, the tea leaves, right? It's not that simple. Maybe when I jumped in, I thought that's what it was going to be:

(09:27):
all of a sudden I'm just going to get this data and I'm going to tell you what's going to happen tomorrow. I think now I view it more as a process. A company's relationship with data needs several tiers before we can get to that sort of Holy Grail of predictive analytics. And I think the first thing to do is

(09:50):
just to get a culture that enjoys data, right? That isn't scared of data. I remember as a kid that I loved data, because I would open up the sports page and go to the stats pages, right?

(10:12):
And see who's leading in home runs, what the records are in all the sports, who's got the most assists in the NHL. It was things like that that first drew me into statistics. And I think everybody has that sort of cursory curiosity. Even if you're not into math, I think you can find some stat nerds out there. And I think the first goal in this process is to get a few people in your company starting to want

(10:34):
the data, to see the data, right? Because first you have to measure it. I do like the saying that you can't manage what you don't measure. So measuring is great, but then you need that second tier on top of it, which is actually analyzing it, and doing it at a frequency,

(10:58):
like you do, Greg. We have dashboards that we spin up for Greg, and he's looking at them at bare minimum weekly, but sometimes daily, to see what's happening. If we get that sort of iteration, where I throw up data and you take a look at it, what happens quickly is you find out when things are going awry, or, hey, this looks

(11:20):
really wrong. And so I think that frequency, and getting that culture churning over to opening that sports page every day and telling me when blank values are coming over, for example, or "wow, this is 1,000%, that's way off, I've never seen higher than 20%." When you get that iterative feedback

(11:41):
from people, and you get that culture that wants to look at these things, I think you're then at the next tier. Now we're starting to feel comfortable with our data. We're starting to have some people ask for more, and get a little bit excited about it and say, well, hey, Matt, this is cool, I really liked this part, it gave me an idea to go a little deeper. So I think

(12:03):
what I see a lot of times is folks in the industry wanting to jump to the very end, taking that flag out and running as fast as they can before the army is really in lockstep behind them. And I think it's much more powerful when you get this wave of, hey, we're coming at this as a group. And that's not to say, guys, that I

(12:27):
think we've made it to the very end at Berkley Industrial. I think we still have a lot further to go, and as new technology comes out, it's a moving goalpost. But I think we have come a long way in that middle tier, to getting a lot of senior leadership, everybody in the company, starting to say, okay, I like the data, I want to see the data more, and I want

(12:48):
that to push me to ask for more things. So at the end of the day, and I'm kind of rambling here, I think predictive analytics, the crystal ball part, is what everybody wants, and they think, hey, I'll hire this data scientist and I'm going to have this tomorrow. I think there's much more of a process involved.

Mike Gilmartin (13:13):
Well, I think you hit on something interesting, and it's something that I've talked to Doug Ryan about a number of times: it's hard to start if your data isn't credible to begin with. And there's so much in our industry, and I would imagine other industries, that is human input. What you get is what people are putting into a claim; you get out what adjusters are clicking and

(13:36):
checking. And those are some of the fields we utilize to drive some of our decisions and look at data and look at different reports. How do you, as an actuary, handle that? Maybe it goes back to culture, right, making sure everybody is in line with the fact that even the littlest thing you do affects the data we have on the back end. But how do you make sure you get to that first step with your data,

(13:56):
the credibility piece? That's a huge thing: the report you look at is great, this report is awesome, if it's right. So that's where I find what you guys can do fascinating. But how do you make sure data is credible?

Matt Murphy (14:12):
Yeah, okay, so first thing: to use the word "credible," or credibility, in actuarial circles, we think of it slightly differently. Usually, when we're thinking about credibility, we're thinking of just volume. How much of it do you have? Are you trying to make inferences off of a small amount of data? That's not credible enough, even if the data are clean, completely clean. I think what you're more getting at, Mike,

(14:34):
is dirty data. And I'll give you a little anecdote on this. At my first actuarial conference, I don't know, I might have had one exam under my belt, I might not have even had any, but I remember going off to it with my CFO at the time. And I believe the title of the talk was something like "Bad Data: Anathema to Actuarial Estimates,"

(14:58):
or something like that. And what they did is this woman, who was the speaker, asked everybody in the audience a quick poll: what percent of your data would you say is clean? And here I am, like, oh, hey, my CFO, he's great, he's preparing these reports, these reports are clean, they're great. And so I threw something out there, like

(15:21):
90 or 95 percent or something. And then when all the results came back in, I think it ended up at something like 10%. And I was gobsmacked, like, oh my goodness, I'm giving the data a lot more credit for being clean than it deserves. And now, where I am, 15 years into this

(15:42):
industry? Oh yeah, that 10% may even be an overestimate. So, great point to bring up there, Mike: there's a lot of dirty data out there. And a lot of it comes back to that feedback mechanism I just talked about. It's been a mantra: we've got to go out

(16:04):
there, we've got to collect, we've got to hoover up all this data in insurance, right? Just get it, log it. And I think a lot of times people don't see the point, or they see, man, I keep checking this box, it's onerous; hey, guess what, I can actually skip this, I can leave this blank. And look, if you're not managing that, if you're not analyzing that and noticing when folks do that, they will do it for so long

(16:26):
that you'll look back when you finally want to analyze that data in three years and say, oh my Lord, we never put anything in this field, and now we've got to get somebody to try to backfill it all, which is never as good. So the important part of keeping data clean, I think, is constantly analyzing it, going over it, and

(16:48):
giving the people who are inputting this data feedback. And that's not just browbeating them to say, hey, you missed this, you missed this, but giving them a sense of the fuller picture about what every click they're doing feeds, maybe giving them some of the summary results that are built off of those clicks, so they can see

(17:10):
how important they are to the whole analytic enterprise. But yeah, data's dirty.
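As a concrete illustration of the feedback loop Matt describes, here is a minimal sketch of the kind of routine data-quality check that can flag blank required fields and wildly out-of-range values back to the people entering claims. The column names and figures are invented for illustration; this is not any carrier's actual validation logic.

```python
# Simple data-quality checks on a tiny made-up claims extract:
# flag blank required fields and values that cannot be right.
import pandas as pd

claims = pd.DataFrame({
    "claim_id":       [101, 102, 103, 104],
    "injury_code":    ["strain", None, "laceration", "fracture"],
    "impairment_pct": [5.0, 12.0, 1000.0, 8.0],   # one obviously bad entry
})

# A field that should always be filled in at intake.
missing = claims[claims["injury_code"].isna()]
print("claims missing an injury code:", missing["claim_id"].tolist())

# Percentages should live between 0 and 100; anything else gets flagged for review.
out_of_range = claims[~claims["impairment_pct"].between(0, 100)]
print("claims with an impossible impairment %:", out_of_range["claim_id"].tolist())
```

Run on a schedule and fed back to adjusters, checks like these are the "you can't manage what you don't measure" idea applied to the data itself.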

Greg Hamlin (17:17):
Man, I think you hit on something that's really important when you talked about involving the people who are doing the work, because I think that's important for two reasons, and you hit on both, but I want to make sure I emphasize them for people who are listening. One is, if the people who are going to use the data in the claims department or the underwriting department have

(17:39):
buy-in, they're more likely to work to get the right data in there. And if you're asking them what they want, which is what you mentioned earlier, you're also more likely to have an end result that is functional and actually works for the people who are using it, as opposed to building something in a vacuum that you don't have the buy-in for. And so I think that's

(18:01):
really important. I've seen that in our department, and I appreciate the way you work with us on that piece to get it right, because I think that makes a big difference.

Matt Murphy (18:11):
Yeah, and I think when you're analyzing, too, you're going to find that some of the data are extraneous. Yes, when we set this up, we said, I want you to do X, Y, and Z. But if we're analyzing it later and saying, wow, this is an onerous thing for these folks on the front lines to be putting in, and do you know what? Why are we even asking for this?

(18:32):
We've got another way to get this data over here, right? So maybe we're being redundant or duplicating this entry. So I think looking back on it, again with that feedback, that analysis, to say we don't need that. The job isn't just, we say this on day one and you're going to put all this information in and don't worry about it. That process

(18:55):
can change as we continue to evolve and to go through and comb through what's being swept up in the data.

Greg Hamlin (19:04):
I think that's great. So, we may have some people who aren't familiar with predictive analytics, or what it even means, and how it applies to insurance. I know some folks have probably seen the movie Moneyball, which is maybe a good example of using statistics to drive outcomes. But when you think of predictive analytics, and you're explaining it to somebody who really has no experience

(19:25):
with it, how would you do that?

Matt Murphy (19:27):
Yeah, I mean, unfortunately, it's one of those terms, like "analytics" or "AI" or "predictive": it means something different to different people. To me, when I think of predictive analytics, I think of looking out of the windshield of the car, like we mentioned earlier, and saying what's going to happen tomorrow. When

(19:49):
people use "analytics," it can be the simplest thing, right, that doesn't involve any machine learning. But it's gotten a lot of hype, not just in the insurance industry but in a myriad of use cases, from image recognition, which is huge, to, man, the speech-to-text on your phone. I don't know if you guys

(20:11):
have noticed, but in the last three or four years it's phenomenal compared to where it used to be. So how does it apply to insurance? It does rather well in what I would call narrow use cases. That's the one thing: when people hear, say, AI or artificial

(20:33):
intelligence, they're thinking there's this brain, you remember IBM Watson, right, that's going to win Jeopardy. But it was designed for one purpose, and that was to win Jeopardy. You couldn't give it a different task and say, oh, Watson, you're so smart, here's this other task, how are you

(20:53):
going to do on that? It would say, well, I'm not trained on any of it, I would do terribly. So I think people need to keep in mind, with AI and predictive analytics where we stand today, there was a very famous example of the Google team creating an algorithm called AlphaGo. I don't know if you guys have heard of this, but Go, apparently, in the world,

(21:15):
you've got chess, right, which is more complicated than checkers, but then there's this supremely complicated game called Go. It's very popular in Asia. And they had a Korean player who was the world's best Go player play against this algorithm, AlphaGo. And I think a lot of people in the world thought that AI was still very far away. And I think this was

(21:39):
2016, so about five years ago at this point. I'm not sure of the specifics; I think the pro beat AlphaGo on the first run, and everybody was feeling good about themselves, but over the next few games the algorithm just crushed the best human player. And a lot of people take these things and make headlines

(22:00):
that, oh, here we are, we've passed the singularity, right? It's going to take over. And I would caution that and say, yeah, playing the game Go. This thing doesn't have a use case where Google can take it and immediately apply it somewhere else; it's going to need tweaking, it's going to need a whole new set of data. So I think, when we talk predictive

(22:23):
analytics in insurance, in some small, narrow use cases it does really well, kind of like the game Go. What I've seen do pretty well is what we call a claims triage model. In claims, we get a first notice of loss, and if we just have one rule, all

(22:46):
we're asking this algorithm to do is take these inputs and tell us: is this claim going to be above X or below X? Now, where you set that level makes it harder or easier for the model to get it right or wrong. But if you set it at a low level, maybe $2,500, then any claim that's really

(23:07):
just going to be allocated loss adjustment expenses, with very little indemnity on it, should be an open-and-shut case. It does rather well at distinguishing those claims. I think that's a great use case. So, on intake of these claims, we can shunt them: if this is going to be above X, hey, we

(23:29):
can give it to a more experienced adjuster; we need humans in on this. If it's not, we might be able to program some automated rules to just deal with the claim and take it to the finish without any human intervention. I think that's possible, and I think that's here today. I think when you start to get into, look, what we really

(23:50):
want, what I think the Holy Grail is, is: you give me an insurance application, and I know what the loss cost is, I know what the cost of the burger is, from what you gave me in your application. I think that is a little harder to do nowadays, even with all the datasets we have. Because this term gets thrown around in insurance

(24:12):
a lot: it's a fortuitous business. And I think a lot of people think the word "fortuitous" means fortunate; it does not. It's more like happening by chance, or random. In our industry, actuaries love to talk frequency, which is how often a claim happens, and severity, which is, when it does happen, how bad is it. I think when you get into areas that are very

(24:35):
high severity but don't happen a lot, low frequency and high severity, it's very hard for these predictive analytics to get it right. So we've got a roofer, somebody who falls off a roof. Well, we can write a lot of roofing accounts; how do we determine the one policy that's going to pop and have a very complicated

(24:57):
multimillion-dollar fall claim? The truth is there's just too much randomness in it. With the inputs we're giving the model, it can't do it; it would have to have so many more inputs than we could ever fit on an application for insurance. So, and I'm being long-winded here, I think

(25:21):
predictive analytics is here already, and it does great in some narrow use cases, much better than I think we would have thought five or ten years ago. But I think it still has limitations. I mean, you will notice sometimes your speech-to-text goes wrong, and it goes wrong big

(25:41):
time, right? Or the Google algorithm that wants to predict, oh, hey, I can tell you what a cat is, I've seen a cat in this photo a million times, but you put a cat in a swimming pool or something and it's like, well, I have no idea what that is. There are certain little tweaks that we can do to fool them, which makes a lot of people still say that

(26:05):
predictive analytics and machine learning for industrial applications, while very good in narrow use cases, still have a long way to go. And actually, that's exciting, to know that there's still much more we can do. But I think it will hit a limit, if that makes sense, because of the fortuitous nature of some of these

(26:26):
high-severity, low-frequency lines. It's not going to be a black swan predictor. Well, I say that, but who knows? Who knows where we'll be in 10 years.
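For readers who want to see the claims triage idea in code, here is a minimal sketch of an above-or-below-a-threshold classifier trained on made-up first-notice-of-loss features. The feature names, the $2,500 threshold, the simulated data, and the choice of scikit-learn are all illustrative assumptions, not a description of any carrier's actual model.

```python
# Sketch of a claims triage classifier: predict whether a new claim's ultimate
# cost will land above or below a dollar threshold (here $2,500), so it can be
# routed to an experienced adjuster or to an automated track.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000
# Hypothetical intake features: claimant age, weekly wage, an injury-severity
# code, and days from injury to report. All values are simulated.
X = np.column_stack([
    rng.integers(18, 65, n),     # age
    rng.normal(900, 250, n),     # weekly wage
    rng.integers(1, 5, n),       # injury code (1 = minor ... 4 = severe)
    rng.integers(0, 60, n),      # report lag in days
])
# Fake "ultimate cost" loosely driven by injury code and report lag.
ultimate = np.exp(6 + 0.9 * X[:, 2] + 0.01 * X[:, 3] + rng.normal(0, 0.7, n))
y = (ultimate > 2_500).astype(int)   # 1 = likely above threshold, send to a human

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("holdout accuracy:", round(model.score(X_test, y_test), 3))
```

The narrowness Matt describes is visible here: the model answers exactly one yes-or-no question about one book of business, and nothing else.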

Mike Gilmartin (26:37):
Honestly, yeah, you said a lot of things there, so I'm trying to digest all of it and figure out what my next question is. It's interesting that you talked about predictive analytics being good for the one thing it's assigned to do. I've never really thought of it that way, and it's a really interesting way to think about it. So a lot of people, I think,

(26:58):
would ask: okay, take that small claim model. We have one at Key Risk, and I think you guys utilize one. How do you confirm, and this is just more for people to know, how do you confirm it's working? Or how do you know you picked the right data elements to make it a successful model? I was involved in how we picked ours, but it's still interesting. You say, okay,

(27:18):
these 12 data points will get us to a comfort level to say this claim is going to be over X or under X. How does that process work?

Matt Murphy (27:28):
Yeah, I mean, honestly, there are tests we can do with mathematics, and we can look for certain types of errors. So, number one, if it is a small claim in our training set, we want to get it right. That's one way to look at it: if there are 100 of these small claims that we trained it on, how many did it find and say were

(27:48):
small? That's one thing. The other side is, how many false positives does it have? Because I could easily make a model that can predict every one of the small claims. I can do it; here's what I'll do. My model is a simple algorithm: every claim is a small claim. So I got every small

(28:09):
claim. I know that sounds ridiculous, but you do have to take this to the extremes to sort of elucidate it. So the other thing is, how many false positives did you have? And I think when you're talking about claims that can potentially get really big and bad, and Greg, you would know, it's, wow, how did this claim ever get filtered off to this

(28:34):
automated algorithm with these inputs? This is a multimillion-dollar claim; we should have been on this from the first 30 minutes it was in the door. I think that becomes the bigger test on something like the small claim model: which ones does it miss that go

(28:56):
above? So there are two main ways to look at a lot of those classification problems: your accuracy and your false positives, that sort of thing. And mathematically, we can quantify those and talk them over with senior leaders such as Greg and say, what's your tolerance? Because if we want to use this, you're going to have to have some tolerance for mistakes. Let's be honest, we make mistakes as humans, sure. But

(29:18):
it's about drawing that line of where we're comfortable, knowing that there will be some mistakes here, but maybe it surpasses what a human could do. Still a little bit scary, right, to take your hands off the wheel. But I think that's the way you would measure it. And also

(29:40):
it depends: we're talking about a classification problem here, above or below, and those are nice in predictive analytics, but sometimes we're also asking for a number. What's the loss cost? What's the premium I need to charge on this policy? That's one where I'm never really going to get it exactly right, but how far off was I? So there are all sorts of different analyses you can do to test these things,

(30:04):
depending on what type of predictive analytics problem you're trying to solve.
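The two checks Matt names, how many true small claims the model catches and how many big claims it wrongly waves through, fall straight out of a confusion matrix. Here is a tiny sketch with invented labels; it also shows why the degenerate "every claim is small" model looks good on one number and terrible on the other.

```python
# Evaluate a small-claim classifier on a holdout set: recall on true small
# claims, count of large claims mislabeled small, and overall accuracy.
# The labels below are invented for illustration.
from sklearn.metrics import confusion_matrix

y_true = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]   # 1 = actually a small claim
y_pred = [1, 1, 0, 1, 1, 0, 0, 0, 1, 1]   # 1 = model called it small

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("small claims caught (recall):", tp / (tp + fn))
print("large claims mislabeled small (false positives):", fp)
print("overall accuracy:", (tp + tn) / len(y_true))

# The degenerate model: call everything small. Recall is perfect, but every
# large claim becomes a false positive, which is exactly the failure Matt flags.
always_small = [1] * len(y_true)
tn, fp, fn, tp = confusion_matrix(y_true, always_small).ravel()
print("degenerate model false positives:", fp)
```

Where to set the tolerance between the two error types is the business conversation with claims leadership, not a purely mathematical one.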

Greg Hamlin (30:13):
We've talked a lot about insurance. I know you're pretty plugged into how data is changing the way we live, and for those who maybe haven't had as much experience on the data side of insurance, what are some of the things you've seen overall that have changed the way we live because of analytics?

Matt Murphy (30:32):
Yeah, yeah, good question. I read some statistic recently that in the last two or three years we've created more information than we had from the dawn of time up until two or three years ago. So the first thing you need to know is we are just spinning off data everywhere, each one of us. Look at all of your smart

(30:55):
devices; we call it the Internet of Things. All of these devices are putting off data. Your phone, whether you know it or not, is pinging your location to all of your apps. So we are just shedding information at an incredible rate now. That, I would say, right off the

(31:16):
bat, is the biggest thing that has changed: there's just so much more. And with so much more data comes some trouble, because now there's a lot of noise. What's good, what's bad? If I was to talk to anybody who's getting into this or thinking about data, what I would say is there are some sexy parts to it, right?

(31:38):
Like when you put up this visualization, when you put up your model results at the end and you show people the end product. What you don't see there is that that is the last 5%. It's like an iceberg. Most of your time is spent with the menial: I've got to clean this data; oh my gosh, can you believe this, why are these results

(31:59):
coming in here, look at all these outliers. And so before you can even get to the last part, which becomes so much harder when you have so much more data, when there's just a sea of it: how do I clean it, transform it, mutate it, load it in, and make sure it's all in the right hoppers before I do my analysis? I think that takes up most of the time. And I

(32:23):
think now, with the surfeit of data we have out there, that is that much harder to do. Now, we do have technology that's coming along and making things a lot easier, too. I don't know if you guys have heard of Moore's law; it's a famous thing where, basically, I think it's the number of transistors that will fit on a

(32:44):
computer chip doubling every year or two. So things progressively get smaller and smaller, to where in the sixties MIT, I think, had a computer that took up a whole warehouse, and now you've got so much more power in your pocket. I think the same thing is happening for data and the ability to deal with large amounts of it and to sort

(33:04):
of automate a lot of this cleanup process. So that is happening in lockstep. But you've got to beware, because there's so much more out there. If you're not using these things properly, you could get some very, what we would say, spurious results that aren't what you imagine and that purport to be something, but,

(33:26):
well, you made one mistake here, you missed a comma, and that made the result look very different. So I think the stakes have gotten higher for doing sloppy data work, just because of how much more information there is out there. And there's going to be more this year, and more next year.

(33:48):
I don't see that pace slowing down, similar to the Moore's law thing: hey, man, we keep doubling. And you know it. You probably have an Amazon Alexa at home, or an Echo, or whatever it is for Google. It's recording, and all of it is being stored. That's a scary thing. And I should touch on this too: the other very scary

(34:10):
thing about having all this data is privacy, right? Especially with respect to how I deal with it. I'm dealing with a lot of data, and some of it is personally identifiable information. So we need to be very careful; we need to be very good stewards of the data we're given and adhere to all of these regulations coming out.

(34:33):
Really, I think Europe has led the way in data privacy and governance, but California is right there and New York is right there. So with all of this data comes a lot of responsibility to use it in an ethical way and with respect for individuals' privacy. I think that's the other thing that has changed a lot, Greg: a lot

(34:56):
more consideration with respect to privacy.

Greg Hamlin (35:05):
Awesome. So when you're working with people who haven't used a lot of data, what are some of the challenges in getting them to adopt the use of data in their decision making? I know for somebody who's recently out of college that's maybe not as big of a hurdle, but for somebody who's been doing insurance or been involved in the industry

(35:28):
for 20 or 30 years, maybe longer? What are some of the challenges in getting others to adopt the use of data in their decision making?

Matt Murphy (35:39):
Yeah, I think that's a great question. I have seen a lot of different folks in my time in the insurance industry. The thing I think of immediately when you say that is bias, narrative bias. The thing we all need to know is that we're human beings, and we are full of biases. We may think we're special, but no,

(36:01):
our human brains are broken in slight ways that make us very biased. So I've seen a lot of this, Greg. I've seen folks who say, you know, the data can't tell me anything I don't already know. And what you see in a lot of those folks is they do what you call cherry-picking data: if this

(36:23):
confirms my narrative bias, if this adheres to my narrative, I'm going to take that data; if it doesn't, I pretend it doesn't exist and put it away. We talk a lot at Berkley Industrial about evidence-based decision making, and I think that's where you have to start. And honestly, there's a humility to knowing that you make mistakes in your

(36:45):
brain. We have intuition, and intuition is amazing; the human brain's intuition is simply outstanding. It works well 95% of the time, and 5% of the time it gets you into a lot of trouble. And I think being aware of that is great. And as you said, I think the younger folks in this industry

(37:06):
are much more receptive to evidence-based decision making, as counterintuitive as that may sound. Some of the older folks in the industry have been doing this for 30 years, and so they've gathered what we call a lot of priors: okay, this happened this way back when I first started, so I'm going to see this again. And that's not always the case. And I think

(37:28):
that can be very dangerous, when people try to use data only to confirm their biases, rather than starting off with a clean slate of, okay, well, first, what does the data tell me, before I start crafting a narrative? Let's look at the evidence first. I think that's big. So how do you get

(37:50):
there? It's tough sometimes. I think what you do is you try to bring people along, like I said, in these tiers. I'm not going to take you all the way to "this model is going to give you, the underwriter who has been doing this 30 years, the answer." I think we need to walk that underwriter through every tier in this process, get them

(38:12):
comfortable with just analyzing the data that we do have, or the history, looking in the rearview mirror as opposed to out the front. I think if we bring them along, we can start to move them towards the evidence-based decision making that suppresses those biases. But hey, I'm human, I do it too. Honestly, I fall for a lot of these still, and I know that I

(38:37):
need to watch out. So it's tough. I do think that the younger minds are more receptive to it, though, to your point.

Mike Gilmartin (38:50):
First of all, "sexy data" is a new term I'm going to use everywhere, so that's done; I'm very excited. (You're welcome.) There are a couple of things here that I find really interesting. I've been kind of one of the guinea pigs, curious and getting involved in the data and some of our data visualization tools and how far those have come. But I think you said some people can spin data in a way that either confirms or

(39:14):
negates an argument, and that can get kind of scary, right? Instead of looking at the data for what it is, they kind of look at it as, yeah, it confirms my story, or no, it doesn't. Or maybe they're not even looking at the right thing. And one of the things that Doug and I talk about all the time is: is the data that you're visualizing actually telling the story you want it to tell? Meaning, is it accurate?

(39:34):
If you throw up a graphic, is it actually driving to the point you're trying to show, and do you have the right elements in there to get there? And I think so many times nowadays there's so much visualization out there and there's so much data, you have somebody put up a graph or a visual and it has like 700 different things on it, and it's like, I don't know what you're trying to tell me, I'm not

(39:56):
really sure what story is being told. How do we continue to manage that as we get more and more data involved in the decisions we're making on a daily basis? Because sometimes there's just too much. So I don't know, from an actuary's standpoint, how do you kind of

(40:18):
dumb it down so that what you're trying to show tells the story you want it to? If that question makes any sense at all.

Matt Murphy (40:21):
Oh no, it's a great question, Mike, and I see it all the time. I think USA Today does a great job of giving you terrible visualizations that may look pretty but don't convey to the audience any sense of what they were trying to say. It's everywhere; it really is ubiquitous. I see it all the time, where somebody's using the

(40:43):
wrong visualization, or they're trying to display something but they use the wrong type of chart: hey, this is a time series, you should really be using a line chart. And yes, again, some of these visuals are sexy. I've seen a lot of them and I'm like, wow, that looks really cool. And then, just like you, I spend 30 seconds and I'm like, but

(41:04):
I have no idea what they're trying to tell me in this visual, other than look how cool this looks. So that is definitely something we have to measure against; we have to weigh the pros and cons. Because on the one hand, the marketing is great, and honestly a great visual can bring in an audience and get them to

(41:26):
the next level of understanding the data. But at the same time, you can confuse a lot of people. And it's very difficult if you're an actuary, say like me or Doug, who's like, oh, you guys just don't get this? Come on, it's three axes here, you don't see the Z axis? No, some people don't see it in the same way. So

(41:47):
I think it's about level-setting, putting yourself in your audience's shoes. If it's the USA Today readership, you had better get to the lowest common denominator; we might just want to use a simple bar chart, and that's it. And I've seen some of the best examples of, here's this great sophisticated one, and here's what you really should have done, and it looks so simple, and it's lost all of

(42:10):
that visually stunning appeal, but man, does it get the point across so well and so cogently. So it's a trade-off, because we do still want to bring in audience members, and we want to wow them. But I think you're right: the prime directive here is to give them

(42:31):
the right data. And if we go off of that, then anything we can do underneath it to make it look better, cool, approachable, I think that's the next layer on top. But yeah, I see it all the time, Mike; it's a great question. Some folks just start using a lot of words, a word soup, to describe something, when

(42:52):
you really could have said it in three words that the layman could understand. So, great point. It's a tough trade-off, I think, between the appealing look and sounding smart, and

(43:12):
actually getting something across to your audience that they understand.

Mike Gilmartin (43:16):
That's what I'm navigating all the time: the level of chart that he understands versus the level of chart that I understand. But I think it's just a great point when we're talking about data-driven decision making. To me, the people we put this in the hands of are not actuaries; they're not folks who generally spend eight hours of their day digging into data and really understanding what goes into it.

(43:37):
They need a simple, here's-what-the-information-is view, and then they need to act on it. And I think sometimes we miss that mark: the folks who are actually going to utilize this data, or this chart, or this visual, need it as simple as possible, because that drives the next decision they're going to make. I just think it's an important point to make, because you see it all the time,

(44:01):
everywhere: I don't even know what this is telling me, and I'm not sure I understand it, and I'm a fairly smart guy.

Matt Murphy (44:09):
Yeah, I think that's definitely one of the skills that I wouldn't have thought, coming into this industry, an actuary has got to have. A lot of people think of actuaries as these eggheads who nerd out about everything and can get really narrow and focused on something and go to these deep levels. But I think what makes you a really good one

(44:29):
is if you can come up a level, know your audience, and convey complicated, complex information to a regular layman, and do it so that they come out of it thinking, yes, maybe I didn't go completely deep and understand everything you were talking about, but I got the gist of it. I think that's a really important skill to have, especially when you're

(44:51):
dealing with data, even if you're outside of actuarial work.

Greg Hamlin (44:55):
I agree completely. And I think, from my experience, what has really added a lot of value to my job is the ability to use that information to make everybody's jobs easier and make better decisions. I think about some of the things we've worked on together, from managing claim counts, to making sure they're balanced, to making sure the right claims

(45:16):
are with the right people, to being able to dive deeper into some of our more difficult claims and analyze them in new ways. It's been game changing. And so for those who are thinking about a career in data, I think there's a bright future, and I think there's so much that can still

(45:37):
be done. So I want to thank you, Matt, for joining us today and going over this topic with us. I really appreciate all that you've added. And I hope that those of you who are listening will continue to join us for future podcasts, releasing every two weeks. If you can't

(45:59):
get enough ADJUSTED in your life, check out our ADJUSTED blog from our resident blogger, Natalie. It drops on the opposite Monday of the podcast, and it can be found at www.berkindcomp.com. One thing we're doing differently: if you have questions regarding this episode or previous episodes, we'd love to hear from you, so please send

(46:20):
your questions via email to marketing@berkindcomp.com. We read everything you send us, and we'll try to address questions in future episodes of the podcast. So especially if you've got questions for Matt, on this podcast or previous ones, send them our way. And if you liked your listen, please give us a review on Apple's podcast platform. We want to

(46:41):
express special thanks to Cameron Runyan for our excellent music. If you need more music in your life, please contact him directly by locating his email in our show notes. Thanks again for all your support. And remember: do right, think differently, and don't forget to care. That's it for today, folks.