
February 12, 2025 48 mins


Navigating AI Implementation with Sultan Meghji

In this episode, we welcome Sultan Meghji, co-founder and CEO of Frontier Foundry, to explore the nuanced progression of AI implementation in organizations. Sultan shares his extensive experience in artificial intelligence, cybersecurity, and financial technology, highlighting the risks of rapid AI adoption, the importance of security, and the transformative potential of quantum computing. The discussion covers the evolution of AI in businesses, from simple automation to AI-powered autonomy, along with insightful use cases such as AI's role in combating fentanyl trafficking. Sultan and the host emphasize the importance of proper metrics, governance, and organizational commitment to change for successful AI integration. This episode offers valuable insights for tech enthusiasts, entrepreneurs, and businesses looking to stay ahead in the rapidly evolving AI landscape.


00:00 Introduction to The Human Code 

00:50 Meet Sultan Meghji: AI Pioneer 

01:57 AI Implementation in Organizations 

02:47 Challenges and Failures in AI Adoption 

05:00 Metrics and Governance in Digital Transformation 

10:00 AI in Counter-Narcotics: A Case Study 

22:02 AI in Legal and Compliance 

26:11 AI in Private Equity 

35:43 Future of AI and Business Innovation 

43:02 Red Flags in AI Projects 

47:59 Conclusion and Sponsor Message

Sponsored by FINdustries
Hosted by Don Finley


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Don Finley (00:00):
Welcome to the Human Code, the podcast where technology meets humanity, and the future is shaped by the leaders and innovators of today. I'm your host, Don Finley, inviting you on a journey through the fascinating world of tech, leadership, and personal growth. Here, we delve into the stories of visionary minds who are not only driving technological advancement, but also embodying

(00:23):
the personal journeys and insights that inspire us all. Each episode, we explore the intersections where human ingenuity meets the cutting edge of technology, unpacking the experiences, challenges, and triumphs that define our era. So, whether you are a tech enthusiast, an aspiring entrepreneur, or simply curious about the human narratives

(00:44):
behind the digital revolution, you're in the right place. Welcome to The Human Code. Today on The Human Code, we are honored to welcome Sultan Meghji, a pioneer in artificial intelligence, cybersecurity, and financial technology. As the co-founder and CEO of Frontier Foundry, Sultan has spent more than 30 years driving AI innovation, working in senior

(01:05):
government roles, and shaping the future of technology. In this episode, we explore the progression of AI implementation in organizations, how businesses evolve from simple automation to complex AI decision making, and eventually to AI-powered autonomy. Sultan shares his insights on the risks of rushing AI adoption and the importance of security and decentralization, and how

(01:26):
quantum computing and AI will transform industries. We also dive into AI's role in human longevity, financial systems, and cybersecurity, and what individuals and businesses need to do today to stay ahead of the AI curve. Join us for a fascinating discussion on how AI is shaping the future and why understanding its progression is key to successful implementation.

(01:47):
This is one conversation you won't want to miss. I'm here with a good friend of mine. It's probably a fair statement now. I feel like

Sultan Meghji (01:55):
Yeah, I think so.

Don Finley (01:55):
just a really nice experience. So Sultan Meghji is here with me and we're going to dive into basically implementation of AI in your organization. And really going from that zero to one perspective, but like things that we're seeing in both of our businesses. Sultan is the CEO of Frontier Foundry. They've got a great product that I'm absolutely loving on.

(02:18):
And additionally, it works in a variety of spaces. So Sultan, I just want to jump in and get your insights and thoughts and kind of, we'll freewheel this for a while about what it takes for companies to go from that zero to one. And additionally, what are some of the failures that you're seeing in that, the aspect of I'm sitting here going, I got to get AI implemented.

(02:38):
The boss is harping on me. Where do you see the success and where have you seen the failures? And I'll jump in and drop some of our insights from FINdustries as well.

Sultan Meghji (02:48):
Yeah, this is, I think, one of the biggest challenges that most organizations are going to face for the next few years. Everybody got so excited about AI over the last 18 to 24 months, and a lot of people just jumped in with two feet without really thinking about it, in terms of digital transformation or in terms of what the actual results of that work would be.

(03:08):
And you and I have both spent time in digital transformation, and if you don't know what the end result is and you don't, in essence, map backwards from the end result, you're not going to get there. And I think there are a lot of organizations that have gone down this AI path and either are going down and they wanted A, and they're getting B, and B is not as much value, or they're

(03:29):
going into it, and it looks like every other enterprise IT program where it's a three-year journey and they never quite get there and the CIO leaves after two years and, we've seen this, rinse and repeat. And then finally, they don't understand that it's not something you get to get wrong fundamentally. Whether it is a competitor if you're in the private sector, or it's another nation state that might be hostile to the United

(03:52):
States if you're talking about the government, AI and the subsequent automations that are coming from it are non-negotiable in terms of how you operate. And if you are thinking that the way you've always done it before is going to be true two years from now, three years from now, at the rate that this technology continues to grow and expand, you're wrong. Or you're building a three-year program against what the

(04:12):
technology looked like five years ago, which is where a number of the name brand AI firms live, you're really making a huge mistake. And I think we're at a moment where it's time to stop playing around with it and actually think about it cohesively relative to the overall business that you're running.

Don Finley (04:27):
It definitely resonates, as far as like the companies that are successful in this transformation so far are looking at it from the standpoint of use cases. The

Sultan Meghji (04:35):
Yeah.

Don Finley (04:36):
getting some quick wins, knowing that you can get that feedback loop tightened around, Hey, is this implementation actually helping us, hurting us? Are we engaging our customers in a positive way with this technology? Or even internally, do we have some sort of metrics that showcase the result of it? And lacking that has been a key indicator of whether

(04:57):
somebody is going to fail.

Sultan Meghji (04:59):
Absolutely. The metrics comment is, I think, a great place to start. I think a lot of people are inventing new metrics around artificial intelligence projects in their organization, and that's a mistake. You should be looking at how your AI projects are actually working against your existing metrics, so that way you can actually study longitudinally if you're having a positive impact.

(05:19):
Whether it's on the bottom line or wherever you measure your organization. And that to me is one of the first red flags when I talk to a customer as to whether or not I think they're going to be able to be successful, whether they're using us or anyone else: are they thinking about it backwards from a metrics perspective? And secondly, do they have an existing governance mechanism

(05:39):
for that change that they can then lean on? And I'm sorry, like we're talking, we're supposed to be talking about AI, but now you and I are fully in the vernacular of digital transformation. It's metrics and governance. And it's not sexy. It's not interesting, but you have to have that.
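
That emphasis on existing metrics lends itself to a small worked example. The sketch below is an illustration rather than anything from Frontier Foundry or FINdustries: it compares a KPI the business already tracks before and after an AI go-live date, and the metric name, column names, and date are all hypothetical.

```python
# Minimal sketch: measure an AI project against a KPI the business already tracks,
# rather than inventing new AI-specific metrics. All names here are hypothetical.
import pandas as pd

def kpi_impact(weekly_kpis: pd.DataFrame, rollout_date: str,
               metric: str = "cases_closed_per_analyst") -> dict:
    """Compare an existing KPI before and after the AI go-live date."""
    df = weekly_kpis.copy()
    df["date"] = pd.to_datetime(df["date"])
    cutoff = pd.Timestamp(rollout_date)
    before = df.loc[df["date"] < cutoff, metric]
    after = df.loc[df["date"] >= cutoff, metric]
    return {
        "baseline_mean": before.mean(),
        "post_rollout_mean": after.mean(),
        "relative_change": (after.mean() - before.mean()) / before.mean(),
    }

# Usage with hypothetical weekly snapshots: kpi_impact(weekly_kpis, "2024-06-01")
```

Tracking the same metric longitudinally is what lets you say whether the project actually moved a number the organization already cares about.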

Don Finley (05:53):
And I think you're hitting on this, we're both like tooting each other's horns here, 'cause we are looking at these as digital transformations. And if you look at like where the last time we probably all saw metrics being created, it was around the two thousands in the internet bubble that was created, and like companies being valued on views, clicks, everything else. But it comes down to fundamentals.

(06:14):
You know your business and you know the metrics that are going to move that business. Just because you're adding AI to the equation doesn't really change what those metrics are. And at the same time, yeah, digital transformation isn't sexy, but it's about how do you bring the organization along with you? And I think that's one of the areas that I resonated with what you're doing at Frontier Foundry, is that it's a solution

(06:38):
that allows you to bring the organization along, do it in a safe way, create the governance structure that's necessary for an organization to succeed. And additionally, it crosses that chasm of the AI solutions that are out there that you're going to go buy and integrate with, are cool consultants. And I don't mean that to be disparaging, consultants have a

(07:00):
place, but how do you transition that knowledge and that infrastructure to be part of your core team as a member of your organization?

Sultan Meghji (07:09):
Yeah,

Don Finley (07:10):
that's what Limney and Kuhnney offer to the
environment.

Sultan Meghji (07:14):
Thank you. The ability for anyone to be able to understand that there's a difference between a consultant and a valued member of their team is a big deal. And that's a great way to think about it, because so much of the artificial intelligence that's out there right now is a consultant, as you've rightly pointed out. It comes in and it throws a PowerPoint at you or throws some generic answers at you.

(07:34):
But because so many of these systems aren't tuned on your data in your environments and are not, in essence, walked from generic to specific by people who are domain experts from your own organization, you don't get to that level of value. You don't get to that AI becoming a team member that really does radically

(07:55):
accelerate the best of your organization and help manage the worst of it out. And that's really what we're trying for. And it's a fascinating moment to realize that this technology is at a point where you can, and we do it with Limny and Kundi often, and I can talk about some specific use cases, where you put this technology in the hands of a very

(08:17):
small number of people inside an organization to take it that last five yards in the build process, and that turns that generic tool into something that is uniquely yours as one of our customers, and you just have it and it's your own tuned version. It's private. It's secure. It's a black box to us, but it's entirely explainable and

(08:38):
transparent for you. And it can fit inside of regulatory environments. That's one of the key features that we built into the system out of the box. And so it can live in a variety of different regulatory regimes. And the idea is that you get all of that and then it becomes an expert in your business. And it's like you've grabbed that really bright young kid and given them the three-year tutelage they need to

(08:59):
become an absolute expert, and you get to do it over the course of a few weeks or a few months. And on the output of that, you have a trusted partner. You have someone that you can be talking to interactively, daily, and getting direct value out of daily, because you're building it with metrics. You're building it backwards from that, and you're building it

(09:21):
with the domain experts on your team. You're not trying to get rid of them. You're trying to augment them. You're trying to make their decades of experience available to everyone else in the organization, but guided in a way and automated in a way that humans just can't do.

Don Finley (09:35):
What does it look like getting from that day zero to two months down the line when you have that person in your organization? Because I can go to one of the main frontier models today, sign up for an account and, I'm spitting out new emails, I'm spitting out presentation decks. Like, I'm getting that consultant sort of flavor right away. But how do we get like somebody inside the organization?

(09:57):
Yeah.

Sultan Meghji (10:00):
I'll tell you about one use case we did earlier in this year that I can talk about now. And it's a project we did with the Department of Homeland Security on counter-narcotics. We were specifically looking at fentanyl interdiction. And what we were trying to do was identify ways to basically make it riskier for drug traffickers to cross and how to, in essence, guide their behavior.

(10:21):
And out of that, what we were able to do is start with an end result, which was we knew we wanted to have a map of the southern border that looked at every single border crossing and basically rated it in real time, trailing a couple of years, to say, hey, red, yellow, green. If I was a drug trafficker and I was looking at that crossing,

(10:43):
would I want to cross there? And when I say cross, I mean like a significant amount of fentanyl. I don't mean like a backpack or a bag. Something that would kill thousands of people. And that map is what we needed the output to look like. And that's what they asked for. They wanted something that the domain experts could look at. So then we worked backwards and worked with a bunch of domain experts from a variety of different organizations, both

(11:04):
current and historic, and came up with, in essence, a tuned artificial intelligence that had a built-in volatility model, which anyone from the hedge fund universe would know what a volatility model is, but we actually built a custom volatility model for drug traffickers that would be as if you were the head of the Sinaloa cartel looking at the volatility risk of crossing the border.

(11:27):
And so we started at zero, we took our technology out of the box and we applied specific humans and specific domain data and created a custom AI platform that would do that. And, by the way, we did all of this: the first version of this went from zero to first test in less than six weeks. So just to give you a sense of the time scale. And we were

(11:48):
looking at the last couple of years of data, and that gave us the ability to say, Hey, listen, if you look at Yuma, Arizona, and compare it with, insert some other place, I can tell you that for a certain window of time, Yuma was the hardest place in the United States to get fentanyl across the border. And there are a variety of reasons why that's true, which

(12:09):
our system called out, some of which was obvious and straightforward, and some of it was subtle and nuanced. And then there are others where you look at, in the San Diego region, or you look at kind of Western Texas, and there are places where, in that same window of time, it was far easier. And at one point there was one, and I'll just hold back which city this is because I prefer not to name it,

(12:31):
there was an over 93 percent chance that if a drug trafficker tried to take a truckload of fentanyl across the border, they would get across.

Don Finley (12:39):
Woo!

Sultan Meghji (12:40):
Yeah.
And you go to Yuma, and the likelihood of that getting across the border is like 2 or 3 percent, depending. And that was a pretty consistent number for a while there. And kudos to whoever was running the operations in Yuma, as I've said. So that's what the result looked like. And that's a really interesting and sexy result, but how would that apply to someone who isn't trying to counteract the

(13:03):
Sinaloa cartel? What is it? It's having a small number of domain experts with decades of experience being brought into a conversation with a system that's 95 percent of the way there already, and then you add the relevant data, you add the domain knowledge, you add their domain experience,

(13:23):
and you get to a unique custom AI model that, as new data comes in, it just keeps running through it, and you have two interface points. One is, you can interface with it like you do with any of the large language model systems, and you can ask it questions. You can say, hey, what was going on? I saw a spike on this day. What was happening in the news that day, as an example? Was

(13:45):
there something in the local news that relates to this? Or you saw a drop-off, and then that allows the domain experts to say, oh, when we see something like this, like we see a drop-off in seizures over a two-week period, maybe there was a shift change. Maybe one of the senior guys was off training somewhere else. Like, you can find causal factors, and you shorten the cycle time very quickly to get to a place where the system is

(14:07):
just sitting there automated and can send alerts and say, Hey, listen, so-and-so is going on vacation. Maybe you want to shift the duty roster around and make sure the next most senior guy is taking over, because you don't want anybody to, because they've tagged that guy's phone so they know he's off. So maybe there's a vulnerability window that's opened up. Those kinds of explorations become almost too many to

(14:28):
handle early on. And so that's why, again, focusing on really narrow use cases with the domain experts as you're doing the tuning and training is important. But that gives you a sense, I think, of kind of one example of how we've used this technology.
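
For readers who want a feel for what a trailing-window, red/yellow/green rating might look like in code, here is a minimal sketch. It is not the actual DHS or Frontier Foundry model; the column names, the simple volatility-style score, and the thresholds are all assumptions.

```python
# Hypothetical sketch of a red/yellow/green crossing rating built from trailing
# interdiction data. Illustration only, not the system described in the episode.
import pandas as pd

def rate_crossings(df: pd.DataFrame, window_days: int = 730) -> pd.DataFrame:
    """Assumed columns: date, crossing, seizure_rate (share of attempts interdicted)."""
    df = df.copy()
    df["date"] = pd.to_datetime(df["date"])
    recent = df[df["date"] >= df["date"].max() - pd.Timedelta(days=window_days)]

    # A crude volatility-style score per crossing: how strong and how stable
    # enforcement looks over the trailing window.
    stats = recent.groupby("crossing")["seizure_rate"].agg(mean="mean", std="std")
    stats["risk_to_trafficker"] = stats["mean"] - stats["std"].fillna(0)

    # Higher risk-to-trafficker means a worse place to cross; thresholds are arbitrary.
    stats["rating"] = pd.cut(
        stats["risk_to_trafficker"],
        bins=[-float("inf"), 0.2, 0.5, float("inf")],
        labels=["green", "yellow", "red"],
    )
    return stats.sort_values("risk_to_trafficker", ascending=False)
```

The version described in the episode layers an interactive, language-model interface on top of scores like these, so domain experts can ask why a crossing's rating moved on a particular day.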

Don Finley (14:41):
It's a great one, it's a use case that clearly has benefit. Like, we all understand the challenges that we're going through with fentanyl. And then additionally, the speed at which you're able to get the system up and going and trained. And then, in today's world or yesterday's world, we would have seen that probably take a couple of years to actually go through, as well as not even being able to handle the amount of data points

(15:04):
that you're talking about on a reliable basis. And

Sultan Meghji (15:07):
Yeah.

Don Finley (15:08):
the old solution would have been to do that basically as an audit, once every year,

Sultan Meghji (15:14):
Something like that.
Right.

Don Finley (15:15):
and then basically have another year that goes by as far as that feedback goes. But now you're talking about a feedback loop of a couple days in regards to, Hey, Steve's going on vacation. Let's bring in Emily to ensure that we actually have some stake.

Sultan Meghji (15:29):
Yeah.
The interesting thing about this, and I'm glad that you're highlighting the age of some of the technologies and kind of these cycle times, these kind of batch cycle times that we're used to historically, because that's just all humans can handle. Very rarely will you be able to have a person getting an email at eight o'clock in the morning every day saying, here are the four things you have to be better at. That's a tough work culture.

(15:49):
And I wouldn't suggest anyone use technology to do that. But you get to turn around and create systems that do that. And so then it's a recommendation. It's a highlight. And it's basically saying, Hey, listen, Joe over here happened to notice this one thing this one time. Now the system can check for that every single time with every single data point. And you don't

(16:12):
need to build it against a specific technology set at that point or a specific data set at that point. It becomes a dialogue and it's interactive. So it's not just that you're going from batch to real time or batch to iterative. It's batch to interactive. And I think that evolution is something that, if the organizational culture and the organizational processes can

(16:33):
support it, then that is a green flag for me, in terms of them asking those kinds of questions as a potential or current customer.

Don Finley (16:42):
So what are some of those green flags around going from batch to interactive? Because I think if you sit down with any executive, they would love to go in that direction. And like everybody that I talk to wants to be there. But few are. Like, it does take a special case to be fully ready to handle this amount of transactional information.

Sultan Meghji (17:02):
Yeah.
Again, it's funny. Let's take the metrics and governance stuff and just take that as done. Those are kind of base stakes. The next step becomes organizational will to change, and at-scale change. And that's, again, you narrow the use case to make that a less difficult conversation for organizations, but fundamentally,

(17:23):
from board members to the C-suite all the way down, and translate that to whatever your organizational model looks like, you have to be willing to take people out of the normal processes, the normal daily processes, and start chipping away at that from a human perspective. If organizations

(17:43):
don't, for example, have a built-in model that allows for their staff to have time to get comfortable with the technology, and have time to get to learn about these kinds of technologies, and to create training and educational resources, availability and time, just time in the day, so that people, when they're in their critical thinking bands of high effectiveness, can actually do this, and have staff that

(18:04):
are capable of doing it. That becomes a huge green flag for me if they have those, or if that becomes part of the conversation. If we get lots of questions like, Hey, can you come in and do a town hall? We've signed this deal with you, we were going to do this thing. Or, Hey, can you come into the office? We're going to do a brown bag lunch once a month and X percent of the company is going to be there, and we'd just like you to be in the room and have the

(18:26):
conversation and talk like you and I are. Again, a huge green flag. So that's on the human side. The other big green flag for me is that this isn't a big enterprise IT project, that this is fundamentally a business project. I find that the vast majority of people who come to us thinking about us, or thinking about any AI project, think of it as a big enterprise IT project. That's more of a red flag for me.

(18:47):
If it comes in as a business value proposition conversation, then I'm more likely to, I would say, entertain the nuances, because, you know, there are only so many hours in the day for us.

Don Finley (18:58):
I 100 percent agree. And I think, along those lines, I do enjoy when people want to have the whiteboard sessions, the bag lunch sessions as well. Those are great. And then the other side of this is, I completely agree that if this is being driven by technology, your adoption is likely going to be low. When it comes in from the business,

(19:19):
those are the projects that you know they're buying into making those changes, and they want to see that ROI come through.

Sultan Meghji (19:27):
Yeah.
I'll give you an absolutely fantastic example of a customer where it was a mind-blowing discussion, because maybe the second conversation they brought in their chief risk officer. And I immediately downgraded the likelihood of that organization doing anything, massively, 'cause I'm just like, oh, bring in the risk guy. What it ended up doing was just creating a third use case for us

(19:49):
to talk about. Because the risk guy was thinking about this in the way that you and I are thinking about this. And so the chief risk officer of a big financial services firm is saying, wait, this is going to make us better at this, so that my people don't have to bug, and he pointed to the guy sitting to his right, who was purely on the business side. He's like, if my guys don't bug his guys, then that's a better day

(20:09):
for everybody. Because my guys can sit and stare at their screens and not walk over to Joe and tap him on the shoulder and say, Hey, why did you do this? And that turned out to be, I would almost call it, a green flag. Now, if someone thinks about a risk-management-oriented use case or compliance use case or something like that, that to me becomes really interesting, because then it's not only just

(20:29):
a business decision, but it's a risk management conversation. And our company is not fit for everybody, but we're really designed for organizations that have really strict regulatory or compliance burdens, or you worry about senior members of your organization being called up in front of Congress or something like that, and we want to be focused on that. I guess it makes sense in hindsight that I was definitely

(20:50):
apprehensive, and now I think of it as a green flag.

Don Finley (20:54):
And I love that, because we were also talking about some use cases pre-show, and the idea that risk could be a hindrance to the implementation. And it's along the lines of, there's use cases that I think we see coming from the banking sector that people get excited for, or even from the legal sector, where I was talking to a

(21:16):
family office and they were looking for AI to help them with legal solutions. And my first take on that was, I don't know if you really want that today, from the standpoint of even the best LLMs. And, like, the hallucination that we see is going down, but we're still not 100 percent there. And I don't think that anybody's really accepting AI-written

(21:38):
contracts as, like, pure, good legal standing. But there is a place for AI in the process, and it could be used as, I think we were talking about, a review, or identifying places where there is human bias in these natures, and that could be part of the process and part of the compliance framework going

(21:58):
forward, to double-check the efforts of what you, the human, are doing.

Sultan Meghji (22:02):
Absolutely. I was very surprised to watch our legal business grow this year, because I approached it with very similar skepticism, I would say, to what you just described. And it wasn't just a technology quality issue. I think that the legal industry as a whole is very broad, obviously, but also it's a very human part. It's a very human process. And I look at our first couple of legal customers, and

(22:24):
they all came in, I would say, looking quite similar in terms of what their asks were. It was fundamentally advanced document management, let's call it that, and every single one of them has ended up in a completely different place. We're working on a project now to identify basically ways to optimize discovery processes in certain categories of court cases for a fairly significant legal organization at the

(22:45):
government level. And you're talking thousands of different documents, thousands of different cases, and simply giving them the ability to basically highlight subsections of documents. So that instead of reading 150 pages to get to the one answer they need, they can read four paragraphs and get to the answer they need, with a source that they can click on so they can

(23:06):
see the actual document, and things like that. And it's absolutely fascinating to look at how the fundamental process of shortening the human intellectual effort, from a lot of low value activity and a very small amount of high value activity, to removing as much of that low value activity as possible. So that, again, if you've got a prosecutor who knows exactly what they're looking for, that person who's got decades of

(23:27):
experience should not be sitting there, like, scrolling through a screen, or finding a junior person to scroll through a screen for four hours. It should be a, Hey, here is the tuned multimodal system for this case. It has all 3,286 documents or whatever the number is. I need you to tell me, with all the history and everything you've been trained on, where are the places that similar

(23:49):
cases have started? Where are they going? What documents are missing? What haven't we asked for yet? What hasn't been disclosed yet? And you can do all of that out of the box, basically, with these technologies now. But you have to be able to be in a compliant framework and
data model because you can't put3000 documents from a

(24:10):
prosecutor's office into a cloudenvironment beyond it just being
unfeasible.
It's just, no one should dothat.
That's a terrible idea and youshouldn't put it into a generic
model because it won't be ableto give you any useful
information.
But if I say, Hey, listen, Ihave 47, 000, and this is a real
case, police reports of acertain type used to train a
system.
guess what?
That AI is going to be prettydarn good at understanding that

(24:33):
specific police report, and the police officers, and really being able to find ways to drive out the low value human activity so that the humans can focus on the high value stuff.
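
The "read four paragraphs instead of 150 pages" workflow is, at its core, retrieval with citations. Below is a minimal sketch of that general pattern, not Frontier Foundry's product; the embedding model and the data layout are assumptions, and a production system would add the compliance and deployment constraints discussed above.

```python
# Hypothetical sketch: rank document paragraphs against a question and return the
# top hits with their source locations, so the reader can click through to the page.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed small local embedding model

def top_passages(question: str, documents: dict[str, list[str]], k: int = 4):
    """documents maps a document name to its list of paragraphs."""
    passages = [(doc, i, text) for doc, paras in documents.items()
                for i, text in enumerate(paras)]
    corpus_emb = model.encode([text for _, _, text in passages], convert_to_tensor=True)
    query_emb = model.encode(question, convert_to_tensor=True)
    hits = util.semantic_search(query_emb, corpus_emb, top_k=k)[0]
    # Each hit keeps its source document and paragraph index, which is what turns
    # an answer into a clickable citation rather than an unsourced summary.
    return [(passages[h["corpus_id"]][0], passages[h["corpus_id"]][1],
             passages[h["corpus_id"]][2], h["score"]) for h in hits]
```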

Don Finley (24:44):
I love this example, because you're also talking about organizations that have gone from 0 to 1, but are now going for that next phase of the iteration. We talk about it from a crawling-to-walking perspective. And what are you seeing as the questions that successful organizations are asking themselves as they get the first batch done and then they're looking at that second batch?

Sultan Meghji (25:05):
The single most interesting thing is every single customer that has graduated from zero to one onward, they have one common trait, which is, before they even get the very first version of the MVP of the tuned, that very first tuned model in the zero to one process, they are already adding things to a parking lot of what they want to do next, because just the discipline that you have to go through to do the

(25:29):
tuning and training. We ask a lot of very specific questions. We have people who will go and literally sit and be like, okay, we're going to write a script for you, and this is how you interact with the system. And we do this kind of interesting way of doing it. Just that process of asking those questions exposes what they want to try next. And so, to me, I would say everyone that I see as

(25:51):
successful in going from one to two, you can see the key indicators of it, because the creativity, the key players, the domain experience, to use a slightly technical term, they grok that there are orders of magnitude more value to be captured. And you see it in the zero to one process. And here is a great example.

(26:11):
So we work with a couple of private equity firms, and we've applied the same model, this exact question. They are obviously looking at managing long-term investments and things like that. And you can absolutely see, in due diligence documentation for an acquisition, the key indicators of success for the sale six years later, four to six years later, depending on the PE firm. That's a super interesting, very early use case for us.

Don Finley (26:34):
Say that again? You're talking about, you have an early indicator, in your early discussions with them, of something that is going to come out six years later.

Sultan Meghji (26:42):
Yeah.
So when a private equity firm acquires a company, they go through a due diligence process. They look at documentation, they interview people, et cetera, et cetera. I've actually written a bunch of Substack articles on the specific applications of AI to private equity. It's an area I actually really love and I've spent a lot of time in, and I think it's a fascinating area that people should look at. It hasn't really used AI as much as it should.

(27:03):
And I'm trying to get more awareness, which is why I mentioned this use case. But you can look at the due diligence activity that was done for a private equity firm to acquire one of their portfolio companies, and you can see key points that will make the sale either more likely to succeed or less likely to succeed when the private equity firm decides to sell the company.

(27:23):
You know, four, six, seven years, whatever the term of the fund is. And we're starting to see some really interesting data that is very easy to get in those environments. 'Cause, you know, these private equity firms will have tens, if not hundreds, of portfolio companies in their different funds. And you can absolutely, without a whole lot of heavy lifting, identify places where the guys on the ground, on the banking

(27:46):
side, will want to get a deal across the finish line, because they want to get the sale done because they want their commission. And then they will come back after the fact, and you will see, they're like, oh yeah, this due diligence document was slightly off, or whatever, the consulting firm did something wrong, or whatever. And you can find very easily a bunch of places where the emotional will to get the transaction across the finish line actually violated the scripture, if you will, of the

(28:08):
spreadsheet that it had to be sat inside of. And you can see it.

Don Finley (28:11):
That is intriguing, because I'm also thinking about this from the standpoint of, you're looking at what due diligence showed you, and then additionally when you're going to exit from that company as well in the future. And so if you have those highlights, it's also another point of information for the dealmaker to basically say, is this deal looking any more, you know,

Sultan Meghji (28:31):
Attractive.
yeah

Don Finley (28:32):
yeah and

Sultan Meghji (28:33):
That's it.
And that's exactly what it's being used for.
Yeah.

Don Finley (28:36):
Yeah, that's an exceptional piece of information
because, just coming back to this, whenever we help companies do that due diligence, they're looking for something that is going to give them that alpha on that company, what makes it attractive for them to buy. But emotions always come into deal making.

Sultan Meghji (28:52):
Especially, especially in that universe.
Yeah.

Don Finley (28:54):
Yo, yeah, absolutely.
And I'm still processing this, because I love the aspect of what you're doing. And when we look at what AI can help you with, that's probably not a zero to one kind of, like, implementation. That's really that one to two, that crawl-to-walking type of space. You're comfortable with the technology, you have an understanding or familiarity.

Sultan Meghji (29:17):
I would say that on the private equity side, no, that's more of a zero to one kind of thing. I think the real value, yeah, the real value gets much more, I don't know. Private equity is early. And when I say early, for us, it's less than two; the company is not two years old yet. So you just have to allow for that. But I would say that, on the private equity side, there's just so much data out there. And so many of these funds have so much of this data that it's

(29:37):
really not a challenge, because they've already self-selected down to a narrow subset because of how the funds are structured, whether it's banks, by market, or portfolio company target size, et cetera, et cetera. A lot of the documentation that the consultants do in the due diligence process is incredibly similar. So this is the old McKinsey joke, the photocopied McKinsey PowerPoint joke. The fact is there is a lot of that, and they're just changing a few numbers, which makes it easier

(29:58):
for the systems to identify issues and find commonalities and create structured data out of it that they can use as an intermediate step in the quantitative analysis. But as you go through that process, I think a lot of people don't appreciate about private equity, A, just how much awesome data they have, because they have amazing data and it's very reasonably structured, and they don't have big, lumbering

(30:19):
enterprise IT systems in the way. A lot of the time it's just a folder of documents; it's not big databases or anything. And we're past the point where there's a meaningful difference between those two. That's number one. Number two is, it is absolutely clear going into a transaction what the exit has to look like, because you can't spend the money to buy the company unless you know how you're going to make whatever you need to make on the back end of it. And so

(30:41):
you have your metrics out of the box. You have that result out of the box. You have a timeline out of the box. You have an overall risk model for the fund out of the box. It is an absolutely target-rich environment. And I'm just incredibly excited about it, because I think we're going to see a bifurcation in two markets, and this is one of them, between the nimble, fast, data-driven guys and the others,

(31:03):
as I would call them. And I think we're starting to see glimmers of that. I think, in the hedge fund space, we already see that bifurcation occurring. And that's why you saw so many, especially multi-strat, hedge funds in 2024 struggle in an absolutely amazing market environment, and you saw funds that would have 150 people, billions of dollars, struggle to compete with a similarly sized firm that had 30

(31:24):
employees or 25 employees. And that's just the value of what that technology can bring and the automation that it brings. And so I think hedge funds got into this a little faster, but I would say that the other side of it is a little slower.
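
As a rough illustration of mining diligence documents for signals that correlate with the exit years later, here is a minimal sketch. The fields, regexes, and model are placeholders; in the setup described above, a tuned model, not hand-written regexes, would do the extraction, and the exit labels come from the fund's own history.

```python
# Hypothetical sketch: pull crude signals out of diligence documents and fit them
# against known exit outcomes. Illustration of the general idea only.
import re
import pandas as pd
from sklearn.linear_model import LogisticRegression

FIELDS = {
    "customer_concentration_pct": r"top\s+customer[^%]*?(\d{1,2})\s*%",
    "revenue_growth_pct": r"revenue\s+growth[^%]*?(\d{1,3})\s*%",
}

def extract_features(doc_text: str) -> dict:
    """Extract a few assumed numeric signals from a diligence document."""
    features = {}
    for name, pattern in FIELDS.items():
        match = re.search(pattern, doc_text, flags=re.IGNORECASE)
        features[name] = float(match.group(1)) if match else None
    return features

def fit_exit_model(docs: list[str], exit_success: list[int]) -> LogisticRegression:
    """Fit a simple classifier: diligence-time signals -> successful exit (1) or not (0)."""
    X = pd.DataFrame([extract_features(d) for d in docs]).fillna(0.0)
    model = LogisticRegression()
    model.fit(X, exit_success)
    return model
```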

Don Finley (31:37):
I think we're definitely going to see a pretty strong bifurcation, as you pointed out, between the ones that actually can act upon this and the ones who are just deciding not to, or, like we talked about before, not doing this in a way that is a digital transformation project. 'Cause the farther you get away from that, or just implementing tech for technology's sake, you're not going to see the results, or

(31:58):
you're not going to see the impact that you want to see in the business. And along the lines of, like, where we see companies succeeding, we've touched upon the crawling aspect of this, the getting the basics done. We've talked about a couple of use cases around, like, the actual successful implementations, or like how people can look at this, the augmented intelligence side of

(32:20):
this, not relying on it to go. Are there other areas that you're seeing successful organizations, like, what are some other green flags that are coming out?

Sultan Meghji (32:30):
The other one that I think is probably worth mentioning is that there are a subset of organizations that never are comfortable with a status quo, culturally. And that gets expressed in a bunch of different ways, whether it's efficiency, bottom line, whatever. There's the old Ford model where you fire the bottom 5 percent of your employees every year, no matter what, and eventually, just iteratively over time, you

(32:51):
end up just pushing the whole group north. Whether that's true or not is a separate issue, but that kind of cultural process is in place. What's interesting is that we are now far enough down this technology discussion, for the last 30 years, that people need to do the same thing with technology. And so, to me, I look at a green flag being that people know that technology is always a federated architecture.

(33:13):
There are always going to be multiple systems. They're always going to be talking to each other. And so, if you are talking like that and acting like that, and have a built-in, okay, every three years we know we have to put some energy into modernization of function X, whatever that is, and you have this nice little pretty Gantt chart that just shows how every year there's, like, X percent of your tech operating budget that goes

(33:35):
into doing that, that to me is a big green flag. There's a huge law firm that we're finalizing, I would call it, the phase two of, and that entire discussion was a green flag for me from the first time I talked to their CIO, their head of tech. And that's how he was talking about it. And it was just a freaking fantastic conversation, and he's

(33:57):
a funny guy too, which made it even better. But it was amazing, because his view was absolutely focused on, listen, there is a longevity that most people ignore in technology, and we don't want to be 10 years out of date on something, or five years out of date on something, and then have a compliance issue come across or a cybersecurity issue come across.

(34:18):
So that's one other big green flag. The other green flag that goes in parallel with having that kind of longevity view around humans and technology is also understanding that there are new opportunities that those technologies automatically give you that you have not thought of yet, and to have a process in your organization to say, listen, we know every three years function

(34:42):
X is going to get a technology upgrade, so how are we going to find another value that it creates? And so that inherent improvement model, that's, in essence, greenfield. And so it's the rarest; the top 1 percent of the top 1 percent of firms and organizations do this. But when I hear that, I know they're going to be successful.

Don Finley (35:00):
You know what's funny is I never put two and two together on that one, and thank you for enlightening me on this side. So, selfishly, this whole conversation has been worth it just on that one idea. But I think you're hitting a really strong note that we're in the middle of what some people are calling an AI bubble. And at the same time, I don't agree with that. I think we're just at the beginning of the implementation

(35:23):
of the models that we're seeing today, and people aren't really fully grasping what can be done with it yet. Like, we haven't hit that stride amongst the use cases. But having that concept, that idea that innovation is happening and that you're going to see something changing in the next year, two years, three years.

(35:43):
And I'm specifically thinking about, we're just at the beginning of, like, agentic AI inside of organizations, and we're just seeing, like, reasoning models at the beginning of where they're at. And so what we can do in the future is going to be a lot more than what we can do today, and to have the idea that, like, you're building a foundation for what you're looking to expand

(36:06):
upon, and knowing that you don't know exactly what that AI will be able to provide you in three years, is a pretty strong moment.

Sultan Meghji (36:14):
Yeah, and it's also, the way that I think a lot of firms, I get a lot of questions from firms about not just the risk of AI, but the risk of their competitors getting AI and doing a better job than they are, or new companies getting started that are hyper competitive to them. In the post-Silicon Valley Bank collapse venture capital universe, a massive percentage of the venture capital money

(36:36):
that's been spent over the last few years has gone to a very small number of companies. It's a massive consolidation that I don't think is quite well understood. But what it means is that a lot of the air in the room for a lot of AI companies got sucked out, because it's gone into a very small number of firms who are then investing in creating their own ecosystems. They're hyper-verticalized ecosystems, and they're doing what Amazon and Salesforce did over the last 15

(37:00):
years in the cloud and enterprise application space. And they're just copying that, just doing it much faster. But what it means is that the innovation cycles are actually hyper-accelerating in very narrow areas in those environments. But for the average organization, what it actually means is that, since all that investment is happening over there, the actual place where AI money is being well spent in the

(37:22):
early stage is being spent on businesses that are going to compete with existing businesses, not on tech stacks. And so I would say that a red flag for me is if an organization doesn't recognize that every organization is at risk. I've publicly said I think at least 50 percent of the S&P 500 is going to change in the next three to four years.

Don Finley (37:41):
Oof.

Sultan Meghji (37:42):
We'll see. Maybe I'm probably on the aggressive side, but the fact is it's not impossible, and we're seeing more companies stay private longer. I think there was a joke that the Series J is the new IPO, not too long ago. And as you step back and realize, if I am an organization that is a human-process organization, and you look at agentic AI, you look at multi-modal AI, you look

(38:04):
at privacy-centric AI systems that operate in those environments, you are now at a point where, if I was running a venture fund, I wouldn't invest in technology. I'd be investing in businesses that are basically photocopying an existing business, and instead of having 5,000 employees, have 50. Or instead of having X customers per unit of work,

(38:26):
have 50X customers per unit of work. And this is where I think SpaceX is probably a really early and really good example of that, right? They fundamentally created a factory for building rocket engines. That's what SpaceX did. And then they built the rest of the organization around it.

Don Finley (38:40):
That's a fantastic way to look at it.
Yeah.

Sultan Meghji (38:44):
SpaceX is an awesome company. It's like every couple of months they do something that I'm just like, wow, cool time to be alive. And that's great, but, like, SpaceX shouldn't be the exception. That should be what every business should be doing. And, going back to the PE example, in our conversations

(39:04):
with some of the PE firms, there are a number of portfolio companies where I've said, you know, here, I will show you 10 companies that are going to try to disrupt that business. If this company loses 3 percent of its market share in the next two years, you're toast. So you should sell that company pretty quickly, unless you think that their leadership team is in the top 1 percent of your leadership teams and can go through this journey in 18 months, or have

(39:27):
enough of a balance sheet that they can buy that competitor and take them off the market, which is the normal historic way that's worked.

Don Finley (39:32):
I have a billion dollar product that was basically squashed, like the big pen moment, by a large competitor that just wanted to take the technology off the market.

Sultan Meghji (39:43):
You see that in banking. There are three companies that control the vast majority of the technology in the banking system, and that's why 80 percent of banks in the United States have tech platforms that are more than 10 years old. And that has been their MO for 20 years: anytime an interesting tech company comes along that looks like it's going to hit a reasonable inflection point, they get bought out or

(40:04):
commercialized out. One of the two.

Don Finley (40:06):
Yeah, I've seen that as well.

Sultan Meghji (40:07):
Yeah.
Oh yeah.
None of these are new conversations, just new tools.

Don Finley (40:12):
I think that's exactly it. It's a new tool in a new space, and I love the concept of, A, you need to be looking at the innovation that you're driving year over year inside of your own organization. Additionally, there's a lot of hope and glimmers that we can identify from this conversation around how the value of AI isn't going to be driven by the tech platforms. It's going to be driven by the businesses that are actually

(40:34):
doing something with it to serve more customers, to lower their operational costs, to grow their reach, to hyper-personalize.

Sultan Meghji (40:44):
Yeah. And this is why the generic large language models, especially the non-agentic ones, I think are on a really limited window of success time. And I would compare it to the late '90s. Google was not the first search engine, not remotely, and there were a bunch of them that raised lots and lots of money that disappeared quite quickly. And if I were to talk about Ask Jeeves or AltaVista, some of

(41:05):
those, no one would have any idea what I'm talking about, but those were all early ones. I think a number of people out there will be very surprised when some name brand AI firms that are talked about in the news a thousand times a day just flame out, and there'll be some pretty irritated investors, I'm sure. But the fundamental notion that generic AI, with the current state of technology, and by current I mean the next 10 years, is

(41:25):
going to work is, I think, a challenge for most organizations. They need something hyper-customized for them, and they need dozens if not hundreds of those, based on the different use cases, the different value creators inside of those organizations. And it is absolutely a waste of electricity to throw a 10,000 GPU farm at something like that.

(41:45):
When in reality, what you need is a 24-CPU, 4-GPU server solving one specific problem on a narrower case. Get 10 of those and you're not going to break the electric grid. We don't need, you know, this whole thing. This whole race to nuclear, just as a total tangent, I think is long overdue, because it's fundamentally very reasonable, it should be a

(42:07):
piece of our electrical infrastructure. But the notion that we need billions of GPUs out there in order to do the vast majority of things that businesses need to be successful, I think, is just people who don't understand computer science hyping this up a lot. And that's the hype cycle of AI, I guess, that we're in.

Don Finley (42:22):
Yeah, I would definitely agree with that. We're looking at an efficiency metric on AI that is, your cost per intelligence is going to actually rival what it is to put a human in the seat, with

Sultan Meghji (42:33):
Oh yeah.

Don Finley (42:34):
at it from that standpoint of building out the supercomputers, 10, per setup. And for most aspects, I think you're 100 percent correct that we don't need that. All right, so I think we've hit a nice, like, natural point in the conversation to drop this, and I think we've covered a good zero to one and one to two a little bit.

(42:54):
Is there anything else that you want to drop into the conversation as we wrap it up?

Sultan Meghji (42:58):
No, we've covered quite a bit. I think the only thing I would say is, you talked about green flags. I would say there are a couple of red flags that we haven't explicitly talked about yet, and those are, I would say, maybe not quite as important as green flags, but there are a couple of red flags that certainly give me pause. And probably the biggest one that's worth mentioning is, if I ever hear about a data consolidation program going on

(43:20):
or that needs to be finished in order for the AI project to get going, it's a huge red flag, because 99 percent of the time it is a three-year program with a CIO who's a year into the job, who will be gone within one to two years, and has nothing invested in getting it actually finished. It's a way of tapping the brakes on anything getting done.

(43:42):
And if we go out two years into the future, and we look at 2027, we can, with a reasonable degree of certainty, know that there are a variety of other things that people are going to be caring about in 2027. And they're going to be different business drivers than there are today, in a lot of ways. The competitive environment is going to be different. Just basic operations of cybersecurity are going to be

(44:02):
different. Every organization in the world over the next two to three years is going to have to replace non-trivial amounts of their infrastructure to become quantum resilient, just as one great example. So if you're two to three years into a data consolidation program for AI, and all of a sudden you have to take the next phase of money and spend it all on quantum resilience, what have you got? You've put all your data in a single basket and can't do anything with it. And you're paying a very large monthly operating expense fee

(44:25):
because you've probably put it in a big enterprise cloud, and you're just gonna be spending a lot of money on data that's not actually doing anything. Because very rarely are you actually also changing the systems of record for that data. So you actually have a second, you're basically paying for a hot backup, and that's really all you're getting out of it. So that's a pretty big red flag for me.

Don Finley (44:42):
Okay.
And I think that falls into the bucket of technology for technology's sake. Right, like, data consolidation projects, I've rarely seen them tied to ROIs, or having a strong use case of, like, why you're doing that. If it comes to my table as a data consolidation project. If it comes as, hey, we're doing X, Y, and Z in order to get

(45:06):
this ROI, and part of that is data consolidation, that typically has a nice metric to it. On a corollary that's similar to this, I sat on a nonprofit board as a volunteer, and all the board members were volunteers as well. And what I can tell you about that: we had one of our members

(45:27):
who was also an accountant who specialized in nonprofits, and he said every nonprofit board with nonprofit members suffers from a lack of a cohesive vision that everybody can get behind. When you get into these organizations that have a very high drive as to what their vision is, but the implementation of that vision can happen 12 different ways.

(45:49):
You tend to lack the focus of how to get to the next stage. And so I think you're hitting a great point on the consolidation piece, that it's coming back to the green flag, the opposite of the red flag here.

Sultan Meghji (46:01):
Technology for technology's sake, I think, is something that a lot of people really should take to heart. And I have this conversation in the crypto universe more often than I like, which is, what's the actual thing that's being done better, what's the use case? And the great thing is we're at a point now where we are seeing positive use cases in crypto. There are places where it's great. There are, I think, other challenges there, but

(46:22):
like, technology for technology's sake is a huge problem. And now, especially since 2008, and especially since 2020, we are struggling with a cohort who just like to implement technology because they want to implement technology. They want the shiny new thing or whatever, on one side. And then the other side is, it's worked for the last decade.

(46:42):
I'm not going to touch it because I don't want the risk of touching it. And in both cases, those are extremes that aren't the right answer. Yeah, it's designing for organizational longevity, designing for technology longevity, right? That's where people need to be putting their energy in. And this, to me, becomes the antithesis of the technology for
technology's sake, it becomesthe, I'm going to build the best

(47:05):
company or best organization Ican.
And then I want to build anorganization that outlasts me,
that is systemically relevant,et cetera, et cetera.
And we're just not hearing asmany people talk like that in
the last 15 years as we did, in,let's say the better part of the
20th century.

Don Finley (47:18):
Yeah, which is interesting because we were, I'd say, blessed with the research of Jim Collins, and Good to

Sultan Meghji (47:25):
Totally.
Yeah,

Don Finley (47:27):
I think it was chapter seven where he was
basically like, technology is the match that lights the fire. And if the process is defined, technology can be great. If it's just thrown in there, that's usually where money is burnt.

Sultan Meghji (47:40):
that's exactly right.

Don Finley (47:41):
Yeah, Sultan, thank you so much again.
It's been an absolute blast having you on.

Sultan Meghji (47:46):
You too, man.

Don Finley (47:46):
really enjoy the time that we get to spend with
each other.

Sultan Meghji (47:49):
Nah, this has been great.
I'm, 2024 was, there were ups and downs in 2024, but getting to call you my friend in 2024 was a definite up.

Don Finley (47:57):
Absolutely, my friend.
Thank you.

Sultan Meghji (47:59):
Awesome.

Don Finley (47:59):
Thank you for tuning into The Human Code, sponsored by FINdustries, where we harness AI to elevate your business. By improving operational efficiency and accelerating growth, we turn opportunities into reality. Let FINdustries be your guide to AI mastery, making success inevitable. Explore how at FINdustries.co.