Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
Hey everybody,
fascinating chat today on AI governance with a true innovator in the space at ModelOp.
Jim, how are you?
Speaker 2 (00:10):
Doing good.
Doing good today.
How are you doing?
Speaker 1 (00:12):
I'm doing great.
Thanks so much for joining.
From your off-the-grid location, I see the solar in the background. We've got Starlink going. Really intriguing.
But before all that fun stuff, maybe introduce yourself, and what's the big idea behind ModelOp?
Speaker 2 (00:28):
Sure, yes, I'm Jim
Olson.
I'm the Chief Technology Officer of ModelOp, and I actually did the architecture and design of the original system, so I know a lot about the space. Myself and some of my colleagues have been working in this space through previous efforts for a long time, 10-plus years, so we have a lot of knowledge about not only the
(00:51):
newer generative AI, but also traditional AI, traditional statistical techniques, et cetera, and how they affect your business. We created the ModelOp solution to bring in full lifecycle management of all kinds of models, everything from an Excel spreadsheet to an LLM foundational model and now
(01:12):
agentic AI solutions as well.
So we put that together to make that process a lot easier, because we found a lot of companies are struggling getting their solutions using these technologies out to production.
Speaker 1 (01:26):
Fantastic mission.
And you released a governance benchmark report on AI recently. Ideal for this audience.
So let's kind of start with thebig picture.
What was the big idea, the motivation behind this benchmark study, and what did it tell us?
Speaker 2 (01:40):
Well, a lot of it was understanding, really, where
companies are at with their AI solutions.
You know, there's a lot of disparate information and articles out there. If you listen to the Silicon Valley digital-native companies, everybody's using it for absolutely everything and it's the next biggest thing. Then you talk to some of the enterprises, and they're more hesitant about,
(02:03):
like, how does this impact my business? And what am I willing to put into place?
So there wasn't a lot of great clarity into what the plans for an enterprise business actually are across a variety of different spaces. And what we found is that a lot of companies are struggling to build trust within their organizations about these solutions, because they don't have the insight.
(02:25):
We're seeing a lot of IT departments pushing back because they're finding shadow AI. We've heard of cases where hospital staff were posting customer data into ChatGPT to get summarizations, going around IT, which is obviously a huge risk. I mean, it's breaking several laws.
(02:45):
So how do they get these processes in place? That's where we saw that a lot of companies are just struggling with those concepts as a whole, and in the report you can see a lot of our findings. A lot of people are playing with it; they're not getting the solutions out there quickly, so they're losing on business value, but understandably they also want to
(03:09):
make sure they have trust in these solutions.
Speaker 1 (03:10):
Well done. And one of the headline stats from the report: 56% of Gen AI projects take 16 to 18 months to reach production. So how do we get out of this kind of quagmire?
Speaker 2 (03:23):
Well, that's the foundation of why we built the company. Think of software in the 90s. People would develop it on their desk and throw it out into production. There weren't really processes, and because of that, things broke. And at least programming is deterministic in nature, so
(03:44):
you're able to put processes in place. Now, what company would not have some kind of CI/CD pipeline nowadays? Common-practice stuff. But back then those didn't exist.
What we found is that those same kinds of processes, for enabling more efficient deployment of these solutions, and understanding and insights into those solutions, and
(04:04):
reproducibility, didn't exist for the model world: AI models, vendor models, foundational models, et cetera.
So what we created was a process where you can automate a lot of this to make it easier. Software is very deterministic in nature; AI models, et cetera, are the exact opposite.
(04:25):
So what can you put in place to provide those insights and build that trust within your organization, so you don't hit all of these roadblocks? And we found that when we deploy our solution at customers, they were easily cutting that time in half, if not more, depending on how sophisticated the company was to start with, and creating a formalized repository where people can find
(04:49):
out who's using this stuff, for what use cases, and what is already approved out there. Could I leverage this, et cetera? So: having that centralized inventory and an automated lifecycle process to drive the software out to production.
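As a purely illustrative sketch of the centralized inventory and automated lifecycle described here (this is not ModelOp's actual data model; the states, fields, and names are hypothetical), the idea could look something like this in Python:

```python
from dataclasses import dataclass, field
from enum import Enum

# Hypothetical lifecycle stages; a real governance platform defines its own.
class Stage(Enum):
    REGISTERED = "registered"
    UNDER_REVIEW = "under_review"
    APPROVED = "approved"
    IN_PRODUCTION = "in_production"
    RETIRED = "retired"

# Allowed transitions: a model cannot jump straight to production.
ALLOWED = {
    Stage.REGISTERED: {Stage.UNDER_REVIEW},
    Stage.UNDER_REVIEW: {Stage.APPROVED, Stage.REGISTERED},
    Stage.APPROVED: {Stage.IN_PRODUCTION},
    Stage.IN_PRODUCTION: {Stage.RETIRED},
    Stage.RETIRED: set(),
}

@dataclass
class ModelRecord:
    name: str
    use_case: str
    owner: str
    stage: Stage = Stage.REGISTERED
    history: list = field(default_factory=list)

    def advance(self, new_stage: Stage, note: str) -> None:
        # Refuse transitions the lifecycle does not permit.
        if new_stage not in ALLOWED[self.stage]:
            raise ValueError(f"{self.stage.value} -> {new_stage.value} not allowed")
        self.history.append((self.stage, new_stage, note))
        self.stage = new_stage

# Every model, from spreadsheet to LLM, gets a record in one inventory,
# so anyone can see what exists, who owns it, and how it got approved.
inventory = {}
m = ModelRecord("claims-summarizer", "claims triage", "data-science")
inventory[m.name] = m
m.advance(Stage.UNDER_REVIEW, "submitted for risk review")
m.advance(Stage.APPROVED, "reviewed by risk team; mitigations documented")
m.advance(Stage.IN_PRODUCTION, "deployed after approval")
```

The point of the sketch is the audit trail: each transition is recorded with a note, so the "story behind the model" discussed later in the conversation accumulates automatically instead of living in scattered spreadsheets.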
Speaker 1 (05:05):
Well done. And you surveyed a number of sectors: financial services, pharma, manufacturing. Did any particular vertical stand out in terms of maturity or challenge with AI governance?
Speaker 2 (05:18):
Financial services is by far the most mature, because they're forced into it. Some of our very first customers were obviously financial customers, heavily regulated. You can't have models making trades or predictions, or deciding whether somebody gets a loan or not, that aren't well scrutinized and understood.
(05:39):
So that was obviously the first place that had the most challenges, because they could be audited constantly and had to have everything documented. I don't remember which bank it was, but there was a multi-billion-dollar fine for not doing this properly, so there's a lot at stake. So obviously, for them, that created the necessity.
(06:02):
Necessity is the mother of invention. So you'd see a lot of homegrown processes in there that weren't always so effective, because they weren't stepping back from their own business to develop them. Having a solution like ours that is more neutral and takes all of those ideas into account creates a more efficient solution. We have several firms in the financial sector.
(06:24):
What we're starting to see now, though, is AI coming into the healthcare industry, and we're literally talking life-or-death decisions when we get into healthcare. So I see that as a space with even more challenges, because they weren't born and bred in the statistical nature of a financial institution, where these things are well laid out and well regulated.
(06:44):
There's patchwork legislation across different states as to what AI can be used for within healthcare, and of course there are just the very real concerns. Nobody wants to have one of these models blow up and be a stain on their reputation. There was an example in healthcare where long-term care was being recommended less often
(07:06):
for minority groups than non-minority groups, and it actually resulted in a lawsuit. So there are really real-world situations that come up when you bring these solutions into life-or-death settings. So we've definitely seen a lot of interest there as well.
Speaker 1 (07:23):
Yeah, we all saw what the challenges were with IBM Watson many years ago, an early stab into the healthcare space with a lot of challenges. I think we've matured a lot since then, but there's still a lot of work to do. And you mentioned the report shows 50-plus generative AI use cases at many companies, but only a handful make it into production.
(07:47):
What's that disconnect?
Why the drop-off?
Speaker 2 (07:49):
Well, to be fair, there is a natural drop-off. Everyone's got great ideas, and then when you bring them to production, there's always some attrition. But what's driving it even more now is this lack of trust. People are more skeptical of these solutions because they are non-deterministic in nature.
(08:10):
You know, if you have a model that predicts whether a cell is cancerous or not, that's fairly readily verifiable and testable to a degree. You have known, labeled use-case data and things like that.
But when we get into even something as simple as summarize this patient record into a recommendation, or this
(08:33):
prospectus from a company into a summary that I can use to make quick decisions about whether we should be investing or not, that's not deterministic in nature. So people are naturally skeptical, because they can't look at it and say for sure: it sounds good, but is it right? That's one of the challenges, especially with foundational models.
They're known for sounding very professional and intelligent,
(08:57):
et cetera, but not always quite so factual. So they can be very convincing while giving you the wrong information and convincing you it's right. That erodes trust, because one bad recommendation or something from one of these situations is a
(09:18):
thousand correct ones.
You know, people tend to remember where it went wrong. So how do I build that trust? By holistically looking at not only the foundational model itself but its applicability to the use case I'm doing, and what risks and mitigations I've put in place for the different tools that are used, et cetera, all
(09:39):
in a single pane of glass. An inventory like we provide helps give that clarity: oh, it's also been used over here, it worked really well over there. Okay, now I've got a little more confidence.
(09:59):
You can start to build that trust: these six people reviewed it, these risks were identified, and they said, yeah, this will be okay because of this. You need that kind of a story behind the model getting out there to build that trust.
But likewise, you can't have building that story be a manual process on an Excel spreadsheet or a SharePoint file
(10:20):
or just a bunch of Jira tickets scattered all over; that doesn't give you the story.
So that's where our software helps: it pulls that information together into documents, with risks and mitigations and findings, et cetera, all in one place, and you can make sure all those Jira tickets are tied back and actually happened.
(10:42):
So it's about automating all that, making sure it's there, and making it readily available. You've got to do that, whether you build it yourself or buy a solution like ours, because otherwise the process of running the model lifecycle becomes overwhelming.
Speaker 1 (10:56):
Amazing. Spreadsheets for AI governance? What is this, 1999? I mean, come on, we need to up our game a bit. And that's for you, healthcare, with your fax machines and your email; it's like a zombie that just won't die.
The other challenge in the enterprise, as you know, is fragmentation: lots of silos, lots of technical debt.
(11:20):
What does that look like in thereal world in terms of
impacting AI at scale?
Speaker 2 (11:27):
Well, obviously, if you have different people taking entirely different approaches, using different technologies, et cetera, without any consistency, it creates more of a burden: not only in getting these things deployed, because that's just work, but also in doing the reviews of the technologies.
(11:48):
And a lot of that just naturally happens, because these are largely grassroots efforts that we're seeing. Initially it doesn't tend to be centralized at a higher level, so water finds its own level: the individual groups are picking their best-of-breed tools and solutions and running at it without the knowledge of the other teams and what they're doing, because
(12:08):
they can't find them.
When you get to very large companies, that's just a reality; collaboration across business units, teams, et cetera, is a challenge. You do want to have a centralized process and understanding, and then the ability to automatically generate findings: hey, have you thought about this, this, and this, because we've already seen this be a situation
(12:29):
in another organization using these. So have you taken this into account? Or who are the responsible people to talk to, et cetera?
So that's the challenge: without some kind of centralized, understandable, and automated process, there's an inconsistency even in the process itself, which becomes frustrating to all these individual teams, and you're
(12:52):
not standing on the shoulders of giants within your own company. You're instead all trying to forge it on your own, and we know that never works out as well.
Speaker 1 (13:00):
It's not.
So let's talk risks.
There's still lots of landminesto avoid out there on the
regulatory side, lots ofcompliance risk and fines and
other challenges.
What do you advise customers to be aware of when it comes to real-world exposure?
Speaker 2 (13:17):
Regulations are definitely important, because obviously if
you're not compliant with a regulation, that's pretty cut and dried: you're going to get in trouble. And what degree of trouble you get into will also depend on how much process you can show, because nobody's going to be perfect. If you did nothing and just ignored it, they're going to be a lot harsher on you than if you tried
(13:37):
your best and tried to do everything right. Things are still going to go wrong; that happens. And if it goes wrong just because of a black swan event or something like that, you're probably not going to get in that much trouble from a regulation standpoint.
But what we really talk about, more importantly, is your brand. So it's not even just about
(14:02):
getting a regulatory fine. Many, many products, especially in the consumer product space, and we work with several of those, have their whole value in the customer's perception. Buy one toilet paper versus another: yeah, there are some differences, but maybe the way you make money isn't by making the better toilet
(14:23):
paper; you make it by having the brand that has the name recognition and whose quality customers trust. If you put AI solutions out there that have a blunder, like when McDonald's put out its automated ordering scheme, there are tons of videos posted online: yeah, I'll take one fry. Okay, add 11 fries. Oh no, remove that.
(14:44):
Only one. Okay, we have 12 fries. And it kept going on like that. That was a hit to their brand; it made them look foolish. Now, is that going to destroy McDonald's? No, probably not. But those things do have impacts, real financial impacts that are even harder to measure in the long term, and that can cause you maybe even more problems than a
(15:04):
government fine.
Speaker 1 (15:06):
I bet. So you're in a very hot space at the moment. A third of companies, evidently, are budgeting $1 million or more annually on AI governance software, so congratulations on being in a hot market segment. Maybe talk to us about your space in general, where it's headed, obviously up, and how you see yourself competing
(15:27):
versus other players out there?
Speaker 2 (15:30):
Well, one of our biggest strengths is that we're always staying ahead. To us now, just a straight RAG-architecture foundational model is kind of yesterday's news. Yes, everybody's doing it, and yesterday's news doesn't mean it's not relevant to an organization; we still have a great focus there. But obviously all the buzz right now is around agentic AI and what that means, because that has even larger
(15:51):
implications. You're literally giving autonomy to these foundational models to make decisions about actually changing data within your database, or sending emails, or any of these kinds of things. So that's what we've been working on specifically: how do we bring agentic AI solutions into the model lifecycle process? And we've done a bunch of work there.
(16:11):
We actually have webinars on it on our website. You can start to manage these, and things like MCP tools; everybody's talking about those now. Anthropic's MCP, the Model Context Protocol, is kind of the standard, for lack of a better term, for how LLMs communicate
(16:31):
with actual things that can effect change or read specific data. So we've incorporated agentic tools right into our solution, so you can actually use agentic AI to do model governance itself.
But more importantly, we also have ways of handling questions like: how would you approve an MCP tool for use, and know which use cases are allowed to use it, and what filters can you put in place, like PII
(16:55):
protection? Maybe I know that this particular model may have access to PII data, so I want to block any PII data coming out of it, et cetera.
So we've been building things in that space, knowing that agentic AI solutions are going to literally change the landscape, and that, as these companies put these in
(17:15):
place, how do they know what they're doing? How do they know where they're used? How do they protect against an agent deciding all of a sudden to sell all of its stock or something? Truly, with autonomy comes greater danger.
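The outbound PII filtering described here could be sketched roughly as below. This is an illustrative assumption, not ModelOp's implementation: the regex patterns, the `guarded_tool` wrapper, and the `lookup_patient` tool are hypothetical stand-ins, and production systems use far more robust PII detection than a few regular expressions.

```python
import re

# Hypothetical PII patterns; real detection is much more sophisticated.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace anything matching a PII pattern before it reaches the LLM."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

def guarded_tool(tool_fn):
    """Wrap a tool so every response is filtered on the way out."""
    def wrapper(*args, **kwargs):
        return redact_pii(tool_fn(*args, **kwargs))
    return wrapper

@guarded_tool
def lookup_patient(record_id: str) -> str:
    # Stand-in for a real data source that may expose PII.
    return "Jane Roe, SSN 123-45-6789, jane@example.com, diagnosis: ..."

print(lookup_patient("42"))
# prints: Jane Roe, SSN [REDACTED SSN], [REDACTED EMAIL], diagnosis: ...
```

The design point is that the filter sits on the tool boundary, so no individual agent or prompt has to remember to scrub the data; the governance layer does it uniformly for every approved tool.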
Speaker 1 (17:29):
Got it, for sure, including personal danger. Getting into these robo-taxis now all the time, I'm always scratching my head: how's this going to go?
But I digress.
So when you talk to a customer,maybe they're a little
skeptical or uncertain as towhere to start, how to
prioritize this journey.
What's your advice to them?
Speaker 2 (17:48):
Well, what we suggest, and we have a thing called minimal viable governance, is: here's the minimum you need to do. If you try to start out doing it all, you're never going to get there. It's just like coding: we use more of an iterative approach now, as opposed to the waterfall design
(18:10):
approach of the past. Same thing with governance: get started, start small. Get the things in place that you absolutely need; that's going to vary by your business. If you're a financial institution, your minimum levels need to be a little higher than if you're just protecting your brand. Get the processes in place, understand what's there, and treat it iteratively;
(18:31):
continue to grow and add.
And that's where our solution, providing a configurable approach to the model lifecycle that doesn't require writing code or changing the product itself, really enables you to do that iterative process and even version the process to carry forward. That way you can evolve, and if a new regulation comes up tomorrow, you can plug it in, or adapt
(18:52):
as your business changes. But the important thing is, don't wait; the problem's only going to get worse.
Get started now, because getting any process in place means there's a process, things are identified, and you know what's going on, versus burying your head in the sand and waiting until it bites you, because it eventually will. And it's going to get harder to unravel later, when there's a
(19:13):
whole bunch of them out there, than if you get started now, when, as we see, there are only so many in production. You have that big backfill sitting behind, per this report. You want the process in place to help that backfill, not only to make sure it's governed and doing the right things, but also to help identify those efforts and push them out into production, so you don't lose the good ones buried within your company.
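The configurable, versioned lifecycle described above could, speculatively, be represented as data rather than code. This is a hypothetical sketch of the idea, not ModelOp's format; the step names and structure are invented for illustration:

```python
# A hypothetical, versioned lifecycle definition expressed as plain data.
# Steps can be added or reordered without changing product code; bumping
# the version lets in-flight models finish on the process they started.
LIFECYCLE_V1 = {
    "version": 1,
    "steps": ["register", "risk_review", "approve", "deploy"],
}

# A new regulation tomorrow? Plug in a step and version the process.
LIFECYCLE_V2 = {
    "version": 2,
    "steps": ["register", "risk_review", "bias_audit", "approve", "deploy"],
}

def next_step(lifecycle, completed):
    """Return the next required step, or None if the process is done."""
    for step in lifecycle["steps"]:
        if step not in completed:
            return step
    return None

print(next_step(LIFECYCLE_V2, ["register", "risk_review"]))
# prints: bias_audit
```

Keeping the process as versioned data is what makes the iterative, "start small and grow" approach workable: governance requirements evolve without anyone rewriting the system that enforces them.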
Speaker 1 (19:34):
Great advice.
So we're halfway
through the year.
I can hardly believe it, but what are you up to in the second half? Any travel or events beyond the summer?
What's on your radar?
Speaker 2 (19:49):
Well, we're attending a whole bunch of different things. I'll be honest, I don't know all of them, because I don't go to all of them. We just recently went to the CHI conference out at Stanford and participated in that, talking specifically about AI usage within the healthcare industry. We've got CDAO conferences we've been going to, and we're constantly doing webcasts.
(20:10):
We do our own webinars; I just presented one last week on agentic AI and what we're doing there. A lot is still virtual nowadays, but we're doing some in-person events as well, with conferences et cetera that are going on and starting to pick up.
But really, we're kind of participating everywhere in a
(20:35):
lot of different things. Again, this is kind of more of an iterative space, so things come up and you never know where you're going to go next week, potentially.
Speaker 1 (20:44):
Exactly.
Well, speaking of virtual, I'm admiring your real background, not a virtual background. What's up for the summer in Colorado? Any hiking or fishing or hunting or birdwatching? What do you get up to there in the woods?
Speaker 2 (21:02):
Well, yeah, the wildlife we get to watch right from the deck. We get moose and elk and marmots and everything coming right up to the deck. We have 14 acres here with a lot of beetle kill, so I'm always working on cleaning that up, unfortunately. So I don't need a gym membership; I get it by moving trees around and things like that. But yeah, we get out into the woods and hike and all kinds of
(21:25):
things as well, take our UTVs around, et cetera, and just enjoy nature where we can.
Speaker 1 (21:32):
Fantastic.
Well, thanks for joining and taking some time away from all that, and congratulations on all the success. Onwards and upwards.
Speaker 2 (21:40):
Yeah, absolutely, and
thank you for taking the time
to talk with me today.
I really appreciate it.
Speaker 1 (21:44):
Thanks, everyone, for listening, watching, and checking out our new TV show, TechImpact TV, now on Bloomberg and Fox Business. All right, take care. Thanks, Jim.
Speaker 2 (21:52):
Thank you.