June 18, 2025 • 38 mins

Summary


In this episode of Insurance Unplugged, host Lisa Wardlaw engages with Coley Perry, VP of Insurance at Coforge, to discuss the transformative role of AI in the insurance industry. They explore the importance of building trustworthy AI systems, the challenges of integrating AI into existing infrastructures, and the need for a paradigm shift in how the industry approaches technology. The conversation emphasizes the necessity of designing systems with accountability and transparency in mind, as well as the evolving landscape of distribution in insurance. Coley shares insights on operationalizing AI, the role of regulators, and the importance of being proactive in adopting new technologies.


Takeaways


AI in insurance requires a strong infrastructure of trust.

The industry must rethink how it approaches AI integration.

Trust and auditability should be built into AI systems from the start.

Regulators play a crucial role in the adoption of AI in insurance.

Event-native systems are essential for effective AI integration.

Operationalizing AI requires a shift in mindset and operating models.

Distribution in insurance is lagging behind technological advancements.

AI is a leveler that can disrupt traditional distribution models.

Complacency in adopting AI will hinder progress in the industry.

Design AI systems with the assumption they will be scrutinized.


Chapters


00:00 Introduction to AI in Insurance

05:52 Rethinking AI: From Trust to Prove It

11:57 Event-Native Systems and AI Integration

18:03 The Future of Distribution in Insurance

24:00 Conclusion and Final Thoughts



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:01):
Insurance Unplugged in the hot seat where the complex world of
insurance is laid bare. Hosted by Lisa Wardlaw, this
podcast promises an unfiltered glimpse into the industry like
never before. Each episode invites you to
listen in on the candid conversations that usually
happen behind closed boardroom doors.
From deep dives with industry leaders and thought leaders to

(00:23):
innovative discussions with minds shaping the future of
insurance, we bring the most genuine talks directly to your
ears. Our guests take the hot seat
alongside me to explore the inner workings, challenges and
triumphs of the insurance world. If you've ever wondered what
goes on in the shadows of the insurance industry, from the
boardroom banter to the behind the scenes strategies, this is

(00:46):
your chance for a front row seat.
Prepare for unguarded, enlightening, and engaging
discussions that cover every angle of insurance presented in
a way that's both insightful and accessible.
Welcome to the conversation. Welcome to Insurance Unplugged
in the hot seat with Lisa Wardlaw.
Welcome to today's episode of Insurance Unplugged, proudly

(01:09):
sponsored by Iris Insurtech, your gateway to the future of
insurance distribution. At Iris, we harness the power of
generative AI to revolutionize data processing and decision
making across the distribution spectrum.
Our platform integrates Gen AI to provide not just insights,
but actionable intelligence, configurable workflows, and

(01:29):
dynamic form generation, all underpinned by continuous data
quality management. Discover how Iris is pioneering
smarter, more efficient operations in the insurance
industry, paving the way for a new era of distribution
excellence. Let's dive into how Gen AI
is transforming the landscape of insurance distribution today
on Insurance Unplugged. Welcome to another episode of

(01:53):
Insurance Unplugged. I'm your host, Lisa Wardlaw, and
this time joining me in the hot seat.
I'm so excited for this guest because he and I have lots to
talk about. Can't wait to share it with all
the listeners. Today I have Coley Perry, who
works as a VP of Insurance focused on core transformations, and
he's just recently joined Coforge.
So Coley, welcome, welcome to the hot seat.

(02:16):
If you don't mind giving everyone in the audience just a
little bit of background about yourself because our topic today
is going to get spicy and hot: if it's not auditable, it's just
a toy. Real AI in insurance, with Coley Perry.
Welcome to the show. Awesome.
Thanks, Lisa, for having me today. And I love the hot seat.

(02:37):
It's a place that I've spent most of my time from grade
school until now, so it is not unfamiliar to me.
So yeah, a little bit of background that might be
relevant if we're going to talk about AI and insurance today.
My background's fairly broad, but from a technical
perspective, it started in telecom and I grew up building
old voicemail machines and putting Octave voice cards in

(02:57):
them and have kind of followed the progression through the
beginnings of the Internet, computers, the dawn of the PC, all of that
stuff. So I've kind of seen the last 20-plus-year
arc going from the emergence of the personal computer
through Internet to cloud and now the emergence of AI.
So it's an exciting time, but I believe there's a lot of

(03:20):
opportunity if we look at it the right way.
I love that and I love all of our banter.
We've, we've been meaning to do this podcast for quite a while
because every time you and I get to it, I'll say, like, a really
deep, you know, polarizing topic on LinkedIn, you and I seem to
jump at it. And occasionally, Roy will join

(03:40):
in as well. Well, I think so.
And I think it's important as we go into this topic of AI in
insurance, right? I, I think there needs to be
a strong voice on this in regulated industries.
I think it's critical for the ability to have it adopted
and for it to scale in a meaningful way.

(04:02):
Well, let's get right into that, because I think you're so
right. Like everyone focuses a lot on
which is fine, you know, the what or the outcomes or the use
cases. But I think most people
genuinely forget that there's so many layers of architecture that
need to be considered when you're doing things at scale.

(04:24):
And by the way, I'm not saying the CIOs aren't thinking about
these things. I'm saying more of the people
that, like, like to talk about AI, you know,
which is fine: this is what it can do for
me. I, I totally agree.
It's like, you know, but when you build a car, you don't just
focus on, like, the destination where you're going to take the
car; you focus on the engineering and what the car is
meant to do and where it's meant to drive.

(04:44):
So I think of AI and in particular, one of the subjects
that I know you and I both are passionate about is the
trustworthiness of AI, which to me really gets into the
infrastructure and how you're thinking about infrastructure.
So let's kind of step back, you know, I'll say like, let's talk

(05:06):
about the myth of AI and the readiness of insurance.
You know, you, you recently did a post, you know, not sales and
slides, real world application. Maybe let's start there.
Like what do you think is broken or misconstrued in our industry
in the way we think about AI? And I'll just say maybe our

(05:27):
muscle memory has been conditioned incorrectly because
of the way we thought about other things, i.e., digital.
I think that's a great place to start and I think you hit on it
a minute ago. I made a few notes and
comments here. It has become a bit of theater,
meaning you're talking about infrastructure.
I could probably make the case that I could put on a four hour

(05:50):
filibuster on electricity alone, meaning I could probably make a
case for or against scaled AI in insurance based on that
single element. And no one's really walking
around thinking about that in their boardroom right now.
So I think part of the problem is it's theater due to the
ambiguity of it. Number one, what do I do with it? Number two, is it trustworthy? Number three,

(06:14):
curiosity comes: how does it work?
So when you mentioned trust, I think of trust not in our
traditional sense of we have long running programs, projects
and processes and bolting it on. I think of it as a rethink.
It's an opportunity to re-platform that layer, and I

(06:35):
believe it has to be re-platformed.
It's not a bolt-on for AI. Like, I don't think trust,
auditability or any of those things will be afterthoughts.
I believe they have to be built into a new way of working, or a
new system of working if you will, if AI is going to deliver
its promise. Well, I mean, you, you know,
clearly I, I agree on that. I, I spent a lot of time

(06:58):
provoking and prompting the industry on that specifically
because the foundation necessary to create a trustless
architecture is very different for AI than it was for all the
other things. And, and you know, Coley, like
clearly for people that listen to the show, right, we, we

(07:19):
started with, you know, traditional AI, we moved into,
I'll call it generative AI. We, we started hyper indexing on
LLMs and, and all the capabilities, which is of course
amazing for our industry; we're a heavy, language-rich, dense
industry. But where it starts to all
really come together is as we start to take this next layer,

(07:40):
which is agentic AI. And when you start thinking of
headless architecture and you start thinking of decisioning
and rules and codification of behaviors, of actions being
taken, next best actions, you get into a world where audit and

(08:01):
verify after the fact, reconcile and report, just doesn't hold anymore.
I mean, Coley, I grew up as an auditor, right?
I spent a ton of time actually oddly talking to chief risk
officers, which I find fascinating because I think
CROs are actually leading this, not from a do-we-use-AI, but
like how and where are the fault lines?
And some of my, some of my greatest advocates right now are
actually on the chief risk officer side.

(08:21):
But how do you think about that? Where are you seeing people
think about that? Because I think to me that's
like a hallmark sign of maturation when they're coming
to me and they're saying this is beyond what our business wants
to do with it. How do we embed organically and

(08:43):
natively? I'll use the word native Coley,
the fluidity of immutability and trust in the actual fiber of the
processing itself. That is, by the way, not the way
systems are designed today. Well.
Maybe you can give some overviews of, like, what people
are thinking, how you're seeing that and what are you seeing

(09:05):
mature clients maybe think about when it comes to that.
Yeah, let's put a, let's put a pragmatic lens on that.
We're talking about a 1,000-year-old industry that's currently
regulated, typically by governments and entities, right?
So that history tells us that regulated industries suffer when

(09:28):
innovation comes. So my first question would be
the chicken or the egg. Does the auditor get on this
platform and drive the change within the ecosystem, or does
the ecosystem force the auditor to respond, or do they work
together? So insurance is loaded with
inability to consume, due to tech debt and operational model.

(09:52):
So I think as AI continues to come, that's a question that has
to be answered and our pace of adoption is not high.
I would say, here in the hot seat, it needs a voice; a strong
industry voice needs to adopt. And if they had agile,
cloud-based models, they would have experimentation labs and

(10:14):
wouldn't be afraid to put $10 million into something like
this. But that's the barrier I see is
nobody ever wants to be first. This is a full ecosystem
challenge, meaning whoever figures it out, it's going to
matter, and then everyone else will follow.
So I think it's a matter of somebody putting the flag in the

(10:35):
ground and it's probably a combination of ecosystem
participants, could be platform providers, could be carriers,
could be regulators, DOIs. But I assume we're all thinking
about this and we saw what's going on with the government and
there, you know, the current change and the approach

(10:56):
to streamlining operations in the government, for example.
Well, what if that actually worked and regulators could
respond? Now the tables are turned and
I'm pretty sure sovereign AI around the world has every
government contemplating this. So what I would say is I don't
know who owns it, but it's coming somewhere and I'd rather
be involved than not involved. That's what I would say.

(11:19):
I like that, yeah. I always say it's like the best
time to be an innovator and a true business architect, because
the people that can think through this now clearly see the
layers at which you have to think through.
And here's a, here's another problem or opportunity.
The layers that you have to think through aren't the

(11:41):
traditional layers that we've thought through in the past.
So I want to kind of get into this next segment.
And it's, it's really, like, for people who have
constantly been frustrated because they see beyond the
people that they're talking to, or whatever, and they design
for the future and they can bring it back to the current.
I think this is you; like, this is like your Nirvana, like this is

(12:02):
like your, your time to shine, like the rays should come out.
And clearly, I don't think people have yet
realized that. So, you know, maybe they're
being dimmed, but I wanted to go into a little bit, because we
hear so much about explainable AI, explainable AI, trustworthy
AI and I think provable AI. So let me frame this for you,

(12:24):
Coley, because I want to hear your blasphemy clearly.
We call for ethical, auditable AI.
Like I would just say, like, duh, that's like table stakes, like
getting out of bed in the morning, right?
So it's Anthropic's reason for being.
It's the entire reason Anthropic exists, sure.
Okay, got it. Check.
I'm not taking it lightly, but yes, okay, but let's go deeper.

(12:46):
What does that mean technically? How do we go from trust me,
which is where we've been with systems and architecture to
prove it? What do you see people doing
differently in that space, and how do you think about that
differently? And I think immutability
etcetera. Perfect.
So yeah, the way I think about that is I think if you put a

(13:07):
pragmatic lens on it, right? Humans are first.
Humans have created the machine. So you need a layer between the
human and the machine in its instantiation, right?
The machine just didn't wake up one day.
There's a human at the beginning of it somewhere, and I believe
the trail starts there. So my belief is just like when
we're talking about a good old fashioned data migration, say 10

(13:30):
years ago, and we wanted to move our data from the mainframe into
a new platform. If I couldn't understand it
manually, if I could not reconcile the transaction, if I
could not do the math, if you will, and do it manually, how in
the world would you suppose I would ask an engineer to
automate it? I believe the same pragmatism

(13:53):
holds. So you need a natural language
processing interface to the machines that levels the playing
field. It's not a machine language,
it's natural language. Number two, you need to know what that
natural language is as it is coming in.
You need to know what model it went to, with what request, and

(14:15):
what the return response was. And you need to know it always.
That is the simple answer I have for you.
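To make "you need to know it always" concrete, here is a minimal sketch of that idea in Python. It assumes a hypothetical call_model client function passed in by the caller; all names are illustrative, not any particular vendor's API.

    import json
    import time
    import uuid

    def audited_completion(call_model, model_name, prompt, log_path="ai_audit.jsonl"):
        # Capture the full exchange for every model call: the natural-language
        # request, the model it went to, and the response that came back.
        record = {
            "id": str(uuid.uuid4()),      # stable handle for later reconciliation
            "timestamp": time.time(),
            "model": model_name,          # what model it went to
            "request": prompt,            # with what request
        }
        response = call_model(model_name, prompt)
        record["response"] = response     # and what the return response was
        with open(log_path, "a") as f:    # append-only JSON Lines audit trail
            f.write(json.dumps(record) + "\n")
        return response

The point of the sketch is that logging is in the call path itself, not a bolt-on afterward; every request is captured, not sampled.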
And I'm not sure there's a single carrier, platform,
consultant or anybody that can deliver that Nirvana in an
enterprise, unified way yet. And that's what I meant earlier:

(14:36):
there is not a packaged, perfect answer for AI yet for
anybody, and certainly not in insurance.
I believe your operating model and mindset will be your
biggest differentiator in being a part of this problem solving.
So I believe it's pragmatic really in that it has to be

(14:56):
complete. It cannot be after the fact, and
it must be scaffolded into your construct.
You can't make it up and then try to catch it later.
The technology is moving too fast.
Well, and I think, so let's take that down another notch.
You know, I think everyone thinks, in our current, in

(15:17):
our former eras, we could bolt things on, we could do
reconciliations, we could do provability, we could do
explainability, right? We, we could do all that in this
world. I think we have to get to true
event native systems, which, by the way, Coley, we are not event

(15:39):
native today. Like, sorry, I'll ask you in
question form: are we event native today?
I don't. I don't think so.
Well, so let's take... now you've hit the core
of this issue, really, for me personally. About 2018, when the
cloud became important to insurance, according to the

(16:01):
marketing news, I took a role to stand up AWS services for a global
consulting company for insurance.
Do you know how many carriers bought cloud in 2018?
None. So, so what?
So what did we do? Hold on, what did we do?
We took the technical debt and stuck it in this cool thing

(16:21):
called Kubernetes because somebody figured out how to use
that infrastructure with that old stuff.
So now I picked up that old stuff and I now run it over
there and I told my board that I'm in the cloud.
I'm still not event native yet, am I?
But I get to say I have some events that manage IT
operations work. So now I'm in the cloud.
This is not to say it's a rub, it's learning.

(16:44):
It's the right motion. But the pace has not been there.
And cloud makes this all work with electricity and GPU.
And this is a pure cloud native construct.
It is not a construct of mainframe, client-server, Wyse
terminals, Visual C++ with SQL databases and Compaq servers,

(17:06):
Kubernetes boxes sitting on Azure.
That's not what it is. It is a redesign of the system
for which you work. Other industries will get there
faster that are either unregulated, more consumer
oriented, meaning insurance doesn't even own the sale in
their ecosystem typically. So it's a very interesting

(17:29):
dynamic and it seems to be centered on our old ways a
little bit. Point solutions process large
chunks of data. A good example might be Coforge,
which has a lovely story of building our Quasar AI platform
backwards, and it comes from doing real work in the field.
So I've learned in my time here, our platform, we've got 5 or 6

(17:52):
things, one of which is submission intake for commercial
submissions. Everybody has one, everybody
thinks they have one, everybody needs one.
Well, we have one that was codified back in the day when
you had to spend 400 hours just to do OCR.
But guess what? When you've been building that
workflow, that machine learning for four or five years, and the

(18:14):
tooling all of a sudden changes, your ability to move up the
chain is dramatic. It's one of the better AI
solutions I've seen that really has workflow where it's moving
through steps, it's got a confidence rating, it's
measured, and it has enough training data to be trusted, at
least in today's construct. That's what I think I'm saying

(18:37):
is, that's a four-year journey, say, for Coforge, from there to
right now in 2025. But it's a very smart journey,
right? It marched the term, and there's
five or six other things codified.
That's how insurers should start thinking about it.
As small as it may be. They need to build a library,
and this is new muscle memory and a new mindset and operating

(18:59):
model, even if it doesn't bolt onto your current one.
Maybe it doesn't. You see what I mean?
I don't know. Yeah, and there's
definitely... you know, many, many of my other peers, we, we
debate about this, right, because you're going to have to
like, in essence, run down your current tech debt and your

(19:22):
current architecture while you're scaling up your new. But,
but I really do believe in this. And I, I debate this with a lot
of people, about APIs not being event native, about how you just
can't handle the load of a truly headless agentic AI system.

(19:43):
Like it, it doesn't need to wait for call and response; like,
it's there, like, in the moment, drinking from the fire
hose. So your, your decision logic,
your aptitude, your agility.
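As a toy illustration of the distinction, assuming "event native" means consumers react to an append-only event stream rather than waiting on synchronous call-and-response, a sketch in Python (the bus and event names here are invented for illustration):

    from collections import defaultdict

    class EventBus:
        # Toy event-native core: producers append immutable events to a log;
        # consumers react as events arrive, with no call-and-response cycle.
        def __init__(self):
            self.log = []                       # append-only event log
            self.subscribers = defaultdict(list)

        def subscribe(self, event_type, handler):
            self.subscribers[event_type].append(handler)

        def publish(self, event_type, payload):
            event = {"type": event_type, "payload": payload, "seq": len(self.log)}
            self.log.append(event)              # the log doubles as an audit trail
            for handler in self.subscribers[event_type]:
                handler(event)                  # decision logic fires in the moment

    bus = EventBus()
    bus.subscribe("submission.received", lambda e: print("triage:", e["payload"]))
    bus.publish("submission.received", {"broker": "J. Smith", "line": "property"})

In a real system the list would be a durable stream, but the design point is the same: the producer never waits on the consumer, and the ordered log itself is the system of record.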
So for me, let me give, let's go with a use case.
I speak to my phone, to an AI application.

(20:05):
It's processed by Siri. Siri screws it up.
Siri sends it along to the natural language processor on
the other side of whatever I'm using.
They screw it up some more. I put in a couple keystrokes, it
gets screwed up some more. Maybe I lost a couple of packets
along the way. Is it AI?
You see what I mean? And we, we haven't even
mentioned cybersecurity. Let's pretend we stand up all of

(20:27):
this infrastructure. Who's making sure that they
haven't stuck the thing in the ATM to grab my magnetic
stripe, right? Who's watching every single
packet? Who knows how to explore a GPU
and understand if there's a bad actor in there?
Yeah. So Coley, that's why I started
thinking, I mean, humor me. That's why I started hitting

(20:49):
mine on an immutable database, on a cryptographic (not crypto, like,
the cryptographic processing). Like, I mean, clearly I went
way further. I get it.
But what I started thinking about, and I've been lucky
enough to have some of like great minds on this show with
you. Not at the same time, but if you
start to think about this logically, you have to have, in

(21:14):
essence, a zero-knowledge proof of trust.
Where did it come from? How did it act?
And you need that done at scale, like at a scale that
exceeds what we've done when we had
humans doing workflow processing, right?

(21:35):
Like we're at a different level now.
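A full zero-knowledge proof is far beyond a snippet, but the simpler building block being gestured at here, an immutable, cryptographically chained record of where an action came from and what it did, can be sketched in Python (illustrative only, not anyone's production design):

    import hashlib
    import json
    import time

    class ChainedAuditLog:
        # Append-only log where each entry commits to the previous entry's
        # hash, so tampering with any record breaks every hash after it.
        def __init__(self):
            self.entries = []

        def append(self, actor, action, detail):
            prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
            body = {"actor": actor, "action": action, "detail": detail,
                    "ts": time.time(), "prev": prev_hash}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            self.entries.append({**body, "hash": digest})

        def verify(self):
            # Recompute the whole chain; any edit to history returns False.
            prev = "genesis"
            for e in self.entries:
                body = {k: e[k] for k in ("actor", "action", "detail", "ts", "prev")}
                recomputed = hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest()
                if e["prev"] != prev or e["hash"] != recomputed:
                    return False
                prev = e["hash"]
            return True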
And I don't know, I love your point and I want to come back to
this. How do you get people to think
principles-first redesign? Because I think most people
think incremental improvement; that's one of the biggest
hurdles that I think we face.

(21:57):
I mean, I think I have an easy answer for you there.
There is a company that everyone knows and most people use.
Their name is Amazon. They're the reason we're having
this conversation. They couldn't sell enough books
on Black Friday back when they worked in an office park with

(22:17):
grainy videos. Yeah, the team got together
and said, we need to build to take every order on Black
Friday. And they said, well, we'll be
out of business. That's too expensive.
They said we need to build elastic compute capability to
handle Black Friday, but not put us out of business on a
Wednesday in June. Well, that's why we're all here.

(22:38):
I'll ask this question, Lisa. Do you think Amazon could tell
you how that package got there? Do they have an image of it?
Do they have all the GPS data on their truck?
Do they know who the driver was? Do they know which warehouse it
came out of? Do they know where it came from?
Do they know which supplier in China sent it, and on which
container it came? Yes they do.
Yes, they do. Yes.

(22:59):
There is your business model.
By the way, there is your business model.
They build, they use, and then they give it to the world and
nobody has figured it out yet. They all run their business the
old way. In the meantime, people like
Amazon are taking over the global economy.

(23:22):
The cloud substrate is running the world.
There's a mine in North Carolina that is more guarded than
anything in the world. It's where all the GPU chips
come from. You know what I mean?
People are not thinking about this, but what I would say is
cloud substrate is running the global economy primarily,
whether anybody knows it or not. Those are the principles that

(23:46):
should be deployed in your thinking and operating model.
That's the easiest answer I can give everybody: go study what
Amazon does, not literally but conceptually, and repeat it.
So let's take that right. Let's take that point that you
bring up, because the problem with, I'll say, people that are

(24:07):
not like you and me is they don't have conceptual frameworks.
Like, I think in patterns, you think in patterns.
We see things, we conceptualize them, we relate them, we
associate it, which is fabulous for a new evolving era.
It's also how LLMs work, by the way.
It's the... it's the only way they work: in
patterns. I was actually having a debate

(24:28):
with a product engineer that couldn't understand how to use
LLMs to map ACORD forms. I'm like, well, it's conceptual,
it's not literal. And they're like, I need the
relational mapping. Anyway, it doesn't matter.
You and I can talk about that on a different day, but I was
like, it's conceptual, it's conceptual, it's conceptual.
So my point in this is my call to action.

(24:49):
If you're hearing this, Coley's not saying literally copy
Amazon, he's saying conceptually learn the pattern, understand
the pattern. And, Coley, to your point about how people just
basically moved mainframes into Kubernetes and said, bam,
we're in the cloud, right? We do a lot of emulation in our

(25:12):
industry. We don't do a lot of creation in
our industry. Yeah, we have missed the sunk
cost theory class in economics in insurance; that one
has been missed completely, often.
I wonder about that, but that's OK.
I mean, it is what it is. So let's go into, like, the
how do we make AI operational. So what use cases would you say

(25:38):
are actually getting to broader first-principles design and use?
Are you seeing it get beyond the substrate level?
I think a lot of people are playing in the POCs, but have
you seen those POCs permeate down into scale and operational
scalability and is it coming into that substrate level?
I think you see some of it and certainly at Coforge we have

(26:00):
some of it for real. Look, like I mentioned our
Quasar platform and the commercial submission example is
a good one, meaning that one is mature.
It started as something else. But again, it's a workflow
issue, it's an unstructured data issue, and it's a very linear
agent job, in the realm of, you know what I mean, simple AI

(26:21):
processing, and it's a repeatable process with lots of training
data. So that's a use case that I
think, like, any and every insurer should look to something like
that around their intake. Regardless of the method,
whether it's from a web interface, a broker, an agent,
handwritten on a napkin, like that, you have to contend with

(26:43):
it. And it's a ways-of-working
issue. And if you can contend with
it, you have two choices: let the behavior continue, or don't
let it continue. And interestingly, I was at an
Atlanta insurtech event not long ago.
There was a guy, I can't remember his name, but he spoke
about this. Fascinating how he's using AI on
the broker side in a specialty. Let's go into

(27:07):
specialty industry, because everything we think we have on
the, like, reinsurer or the carrier side, the broker distribution
space is just, it's interestingly been lagging the
technology even more, right, because it's so CRM focused.
And I think the enterprise things that we were doing at

(27:28):
those levels, it just, it wasn't an investment that was
affordable, and it just wasn't mass deployed like we had at the
carrier and reinsurer level. I mean, it's no fault of
anybody's. It was a, it was thought of as a
CRM-only business model, which of course it wasn't.
How do you see distribution principles being stress tested

(27:49):
first? Because it's a total pressure
cooker for margin, sale and scale.
Wow. Well here, here's the way I
would think about it from a human perspective.
First, a lot depends on the construct, right?
A captive agent, a group of independents, large, you know
what I mean? A large, you know, family-owned one
where we've got 40 offices, or, you know, Joe Smith, independent

(28:12):
broker. That's where this action's gonna
start. Everybody needs to remember AI
is a leveler. It's a playing field leveler,
meaning if you're slow to adopt, there will be native companies
and people that come from nowhere with capabilities that
you have no idea were coming. So my guess is that
disaggregated ecosystem of quote distribution, whether you think

(28:37):
you control it or not, you're not connected well.
Your systems have not historically been warm and
fuzzy. And that tells me people will go
rogue. They will build their own.
You will not have control of that data set before it arrives
to you as the carrier, and it may go through a bunch of AI
maturations before you ever even see it.

(29:00):
Now what happened? The broker's fault.
They gave me the data and didn't audit it.
I just processed what they gave me.
Uh oh, you see what I mean? This is an ecosystem-wide truth,
accuracy, auditability issue. Now, it exists today, when the
broker sends the napkin and it gets lost. Ten years from now, when

(29:23):
this is more mainstream, the auditor's not going to let you
off the hook for the loss, you know what I mean?
You see how the system will self-correct?
Yeah, as it develops. It is a continuous improvement
game in AI when you enter machines that do the job well.
Just like an SRE tune at Google: you can say run it at 92, and it

(29:44):
will run at 92. Yeah.
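That dial can be pictured as a simple confidence gate; a sketch in Python, assuming extraction results carry a confidence score (the field names are invented for illustration, not any product's schema):

    def route_extraction(result, target=0.92):
        # Tune the target the way an SLO is tuned: raise it and fewer
        # items auto-process, with more routed to a human review queue.
        if result["confidence"] >= target:
            return "auto_process"
        return "human_review"

    # A submission-intake field extracted with 87% confidence goes to a human.
    print(route_extraction({"field": "insured_name",
                            "value": "Acme Co", "confidence": 0.87}))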
And I think that's fascinating, right?
Because... and the only layer, because I, I've spent so much
time in the distribution space over the last, you know, 18
months. The one thing that I want to
just emphasize is that many people in distribution do not
come from enterprise architecture and, and big
enterprise technology backgrounds.

(30:06):
I don't even care if you're a technologist.
No, they come from hustle and sales and community engagement
and relationship and caring and empathy and insurance.
They come from there. Exactly, which is phenomenal.
But to your point, like, as that shift of verifiability and
accountability happens, you need to be mindful of what your

(30:29):
people, what your teams are doing when it comes to that
leveling of the playing field. So, Coley, like, as an example, I
see a lot of people on LinkedIn a lot, you know, and there are
many agents, I'm, I'm just going to say
generically, that will say things like, well, we can just use open

(30:50):
source AI and we can load all of our contracts in and do all this
stuff. And I'm like, I hope you're not
doing that in, like, actually open source AI. Like, I hope you're
doing that in your, like, ring-fenced version of it.
And I'm only saying that, just, like, more as an
awareness. You don't have to be big and
bureaucratic. You have to be super stealth and

(31:13):
smart and aware; like, technological literacy, to me.
I mean, ask your ChatGPT. You don't even really need a
complex enterprise architect. That's a great place to use an
open source model, by the way: what are the five things I
should have in place as I think about this? You know, like that,

(31:33):
that's a great use. Use it as a sparring partner and
intellectual thought partner, etcetera.
But, Coley Perry, I would love to know, like, are you seeing an
increase in consulting requests by what would seemingly
have been maybe somebody that wouldn't be needing that level

(31:53):
of consultative advisory services?
Because, you know, maybe those sorts of distribution players
kind of were DIY, and now they're coming to the table and they're
like, well, I still want to DIY it, but I need a little bit of
framework. I need a little bit of this.
Like are you seeing any evolution there as well?
Yeah, it's funny you mentioned that, completely randomly:
I had a colleague mention to me an opportunity.

(32:15):
He's working for a small distribution player in the
ecosystem, basically rethinking... like, small enough to rethink
their entire model. Like, that's the request.
You see, that's what I mean. I'm getting back to what I said
in 2018, nobody could buy the cloud.
Like, if we look across insurance today, I see the stories.

(32:38):
I know some of the carriers. I don't know all of them, but
cloud as infrastructure is one thing, cloud native application
at scale is another. And this just leapfrogs
that. Like, I agree with you. I don't believe there's gonna be
a wrap-and-renew here that's very meaningful, unless it becomes the

(33:00):
promise of generative AI, and everybody's talking about SaaS
eaters. Well, if it actually works, I
believe that is what would happen because I would only
build for need and deploy cloud native, not build 7000 workflows
and 58 modules that my customer may not need.

(33:22):
And then if I leave the construct behind, they just
build closer to real time when they need it.
And cloud substrate supports that.
But somebody has to make sure our new collaboration partner,
the AI, is auditable and trustworthy, and I can talk to
it like a human. That's kind of the problem.

(33:42):
Yeah, no, I love that. Well, I could seriously talk
to you forever. I love having you in the hot
seat. Maybe we could do this more
frequently. But you know my, my
always: if you get to sit in the hot
seat, you have to answer these three questions.
So I can't let you go without this.
So as we think about wrapping this up, what is your call to

(34:04):
action? Let's bring it home for our
listeners. If they're trying to move
forward with AI and distribution, and they're trying
to think about trust in AI as we've described it, what should
they start doing, what should they stop doing, and what should
we continue to do? Yeah, that is a good one.
And I thought about this one for a minute because I knew you were

(34:25):
going to ask. I'm that predictable?
I wrote down: start designing AI systems like they will be
subpoenaed. That is the first thing I wrote
down. Assume they will.
It is insurance. That was the first thing I
thought that would bring it to everybody's mind.
If you assume it'll be subpoenaed, you know what goes

(34:46):
into that discovery. Assume your system log will have
to participate in your LLM log, or your open... whatever you
want to call it. You will have to have it: who did what
with whom, what happened, when and where, just as if it was humans.
You see what I mean? There's really no difference.

(35:06):
You have the same constraint to contend with, except these are
machines using zeros and ones versus English or some other
spoken language. Fair.
Very. That's the first one.
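One way to picture a subpoena-ready trail is a fixed record schema covering exactly those fields; a hypothetical sketch in Python (the field and identifier names are illustrative, not a standard):

    from dataclasses import dataclass, asdict

    @dataclass(frozen=True)  # frozen: a record should not mutate after the fact
    class DiscoveryRecord:
        actor: str         # human user or service identity ("who")
        action: str        # what was done ("did what")
        counterparty: str  # the model or system it was done with ("with whom")
        occurred_at: str   # ISO-8601 timestamp ("when")
        system: str        # originating system or log source ("where")

    rec = DiscoveryRecord("uw-analyst-7", "quote.priced", "gpt-4",
                          "2025-06-18T14:02:11Z", "policy-admin")
    print(asdict(rec))

The same schema applies whether the actor is a person or a machine, which is the point being made: there's really no difference in the constraint.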
Stop. Stop confusing explainability
with accountability. If it cannot prove what it did,
it cannot be trusted. That was, like, don't, don't

(35:30):
let the AI theater and every, you know, start-up announcement of
$8 billion fool you. Like, I still need to see more
than demos from the majority of these players. And then, continue:
what I thought was smart was, kind of like we talked about,
continue building where it's hard, where the real insurance

(35:50):
decisions live. That's where you'll find
whatever trust means to you; in that, in that way, you'll find
it there, right? So if you're not ever expecting
a blind answer or you're never trusting an answer as you build
from scratch, it, it, it brings that pragmatism.
It's the AWS mindset and operating model.

(36:12):
It's an understanding of what this is.
Everybody doesn't need to be an LLM PhD engineer.
You need to understand how to orchestrate the components and
how they work, and then you need to have subject matter expertise in your
industry. Those are kind of the three
tickets to ride here. I think that's all you really
need to do is jump in. But complacency will kill you.

(36:35):
Like, doing nothing is the absolute worst possible thing right now.
Doesn't matter if you're taking a seminar, going to a training
class, or upgrading your account from $3 to $20, or $20 to $200.
Like, you have to get in the game and see what happens, to be able
to understand what happens, I think.
I love that, and I love your lens and your spirit.

(36:59):
Thank you for having so much energy for our industry, for
technology and for what's ahead. You're definitely somebody that
I put on my, like, you know, list of people...
I'm like, have I gone too far? I don't know, Coley.
I like the hot seat. I like to stay at the edge.
If you're not at the edge... It's the Howard Stern principle.

(37:21):
If you want 100% listeners, you have to make sure 50% of them
hate you and 50% of them love you.
It's a great model. Exactly. Well, thank you for
being part of the hot seat, Coley. And for those of you who may not
know Coley or may not follow him, he does great posts on
LinkedIn. I, I love your Rottweiler and,
you know, examples, and all the things you do;

(37:43):
they're super informative, they're very engaged, they're
very passionate. So definitely follow him on
LinkedIn and all the great work he's doing.
And to everyone out there, stay informed, stay curious, and stay
plugged in. Thank you, Coley.
Thanks, Lisa, that was fun. Today's episode of Insurance
Unplugged, the AI and distribution series, is proudly

(38:07):
sponsored by Iris Insurtech, your gateway to the future of
insurance distribution. Iris harnesses the power of
generative AI to transform data processing and decision making
across the distribution landscape.
The Iris platform integrates AI-driven decision engines, dynamic
form generation and configurable workflows, all underpinned by

(38:29):
continuous data quality management.
Discover how Iris is powering smarter operations and more
efficient distribution with cutting-edge AI, setting a new
standard of excellence across the entire industry.