
June 30, 2025 24 mins

Interested in being a guest? Email us at admin@evankirstel.com

From cutting-edge experimentation to business-critical infrastructure, the AI landscape has undergone a dramatic transformation. Ron from KungFu.ai shares an insider's perspective on this evolution, drawing from his experience dating back to AI research in the 1990s.

The conversation reveals the pivotal factors driving AI's enterprise breakthrough: exponentially increased computing power, unprecedented data availability, and the democratizing effect of open-source libraries. These elements have converged to create capabilities early researchers could scarcely imagine, requiring millions of times more resources than initially anticipated.

What distinguishes KungFu's approach is their unwavering focus on production-grade AI systems that deliver tangible business value rather than impressive but unreliable demos. Ron shares a striking success story of a financial services client whose AI implementation reduced loan decisioning time from 48 hours to just 9 seconds while simultaneously reducing fraud rates – all without eliminating human jobs but rather redirecting human attention to the complex cases requiring judgment.

The discussion tackles the profound challenges enterprises face during implementation. AI systems differ fundamentally from traditional software in their probabilistic nature, making human-like mistakes that can be difficult to predict or debug. Data quality emerges as the critical determinant of success – "garbage in, garbage out" applies more powerfully to AI than to any previous technology. Ethical considerations, especially regarding bias and explainability in regulated environments, demand sophisticated approaches that go far beyond typical software development concerns.

Looking ahead, Ron provides a sobering yet optimistic assessment of agentic AI systems, suggesting that failure rates may exceed Gartner's 40% prediction while maintaining that these technologies will ultimately revolutionize business faster than most anticipate. For companies navigating this complex landscape, the talent equation remains daunting – building effective AI systems requires a blend of mathematical expertise, domain knowledge, and hard-won intuition that remains in critically short supply.

Ready to transform your business with AI that delivers real results rather than just impressive demos? Connect with Ron at ronallen@kungfuai.com or explore their "Hidden Layers" podcast for deeper technical insights into the future of enterprise AI.


Support the show

More at https://linktr.ee/EvanKirstel


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Hey everybody, excited for this chat today about AI and enterprise transformation with a true expert and innovator in the field at KungFu.ai, Ron. How are you?

Speaker 2 (00:14):
I'm great Thanks for having me on.

Speaker 1 (00:15):
Well, thanks for being here. Let's kick off with introductions and also your origin story. What inspired the founding of KungFu.ai? Great name, by the way.

Speaker 2 (00:24):
Oh, thank you, thank you, thank you. Yeah, so you know, I did a master's in artificial intelligence way back in the 90s, and I've been working in AI for, you know, decades now. I'm one of the old guys, and I just sold my last company in 2017. And I knew I wanted to do something in AI, but we felt it

(00:47):
was just a little early for products. Things were moving so fast (they still are), but it was definitely clear that if you spent a bunch of time and money on a product, 18 months later it could just be washed away by some new breakthrough, and it wasn't really a great time, I think, for product development. What we believed companies would need eight years ago was

(01:09):
guidance, strategy, help understanding, implementing, and building custom solutions, you know, based upon their proprietary data, and it's worked out great. And you know, since ChatGPT has come out, the world has woken up to the full promise of AI, and we're just having a great time helping companies build AI solutions.

Speaker 1 (01:29):
Fantastic. So, as you know, AI has gone from being a kind of cool science project to mission critical, seemingly in a year or two. What's changed, in your opinion? What's driven that sea change?

Speaker 2 (01:43):
You know, it's several things. You know, in the 90s we knew we needed more data, we knew we needed more compute. Honestly, I think if you'd asked us, we'd just have said, oh, 10, 100, 1,000 times more. We did not realize we needed like millions of times more, and billions of times more compute. So it's really a combination of, I think, a few things. One, we just didn't have the compute necessary for the type

(02:06):
of capabilities that we see today. We were literally off by many, many, many orders of magnitude. The other really big element is the data. Even if we'd had the compute back in the 90s, we didn't have the data. Most AI systems today are supervised learning systems, meaning they are

(02:27):
trained on really large amounts of data, and until that digital data existed, we wouldn't have gone anywhere even with the compute. And then the other really big part of this is the fact that there are these open source libraries. All the best libraries in the world around AI, PyTorch and TensorFlow and NumPy and all this sort of stuff, are open source, so anybody can get involved and stand on the

(02:50):
shoulders of those who came before us. And there's many other factors, but I really feel like those are the top three.

Speaker 1 (02:57):
Interesting.
So it's been fascinating towatch all of the big tech
services firms jump into AI,from Accenture to PwC and the
long, long laundry list.
How do you define your role asa services company, helping
businesses become, I guess, sortof AI native?
What's your perspective there?

Speaker 2 (03:20):
Yeah, that's a really good question. When we started eight years ago, we were absolutely adamant that we were going to build production-grade AI systems, not

(03:44):
toy systems that demoed really well but had so many reliability issues or sort of edge cases that you just couldn't put them in production. They couldn't deliver real business value, and I think that that decision back in 2017 was critical for us becoming who we

(04:05):
are today, and I think it's more important now than ever. So the way we help companies is, you know, it's really through this journey, and part of it is understanding that just doing AI for AI's sake is probably a waste of time and money. You have to have real business ROI associated with it, and just

(04:30):
because you identify an initiative and you see that, if it's successful, it'd be worth the effort, that's just part of the battle. Do you have the data? Do you have the buy-in? Do you have the ability to deploy and manage those types of solutions?
It's not a 180 from software, but artificial intelligence as

(04:56):
it exists today is so data dependent that you have to start with the data. You have to start with an analysis of the quantity, the quality, the distribution of that data. It will determine your success or failure more than any other aspect of an artificial intelligence engagement, and

(05:18):
that's very different than traditional software. In software, it's just your ability to execute. With AI, you are beholden to the data: garbage in, garbage out.
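Ron's "start with the data" checklist (quantity, quality, distribution) can be made concrete in a few lines. A minimal sketch of such a first-pass audit, using invented field names and synthetic records standing in for a real loan dataset:

```python
import numpy as np

# Synthetic stand-in for a client's loan dataset (field names invented).
rng = np.random.default_rng(1)
n = 5000
loan_amount = rng.lognormal(9.0, 1.0, n)
label_fraud = rng.choice([0.0, 1.0], n, p=[0.97, 0.03])
loan_amount[rng.random(n) < 0.02] = np.nan  # simulate missing entries

# Quantity: is there enough labeled history at all?
print(f"rows: {n}")

# Quality: missing-value rate, a first "garbage in" check.
missing = np.isnan(loan_amount).mean()
print(f"loan_amount missing: {missing:.1%}")

# Distribution: heavy class imbalance (rare fraud) changes the approach.
fraud_rate = label_fraud.mean()
print(f"fraud rate: {fraud_rate:.1%}")
```

On a real engagement the same three checks would run over every field, with the distributions also compared between historical training data and live traffic.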

Speaker 1 (05:31):
Fantastic, and you've worked with Fortune 500,
startups, everything in between.
What's a big misconception yousee in working with your clients
across the board?

Speaker 2 (05:55):
who will come in, who thinks that you can just take
AI off the shelf and you justsort of point it at the data and
it will just go figure thingsout?
We do occasionally get clientswho think you know, ai may be
almost sort of mistake-free oromniscient.
No, these are probabilisticsystems.
They can absolutely makemistakes.
One of the challenges withmodern AI is that they make very

(06:17):
human-like mistakes, right? So they can be unpredictable in the way that a human can be unpredictable. For example, you could be an expert speaker, but that doesn't mean you won't misspeak occasionally, right? Well, we see that with AI systems, and that's not really something we had to worry about with traditional software. They were much, much more deterministic, less

(06:40):
probabilistic. And there's also sort of an underestimation, as I already mentioned, about the data. A lot of companies have really, really, really strong intuitions now. Things have changed quickly. They have strong intuitions about how AI could help them, but they underestimate the need for the data.

(07:02):
So, for example, they may have some process that's highly, highly manual and they want to automate that, but they haven't been collecting the data; they haven't been collecting the inputs and outputs that the humans relied upon to accomplish that task. And so it means it can be automated, but they need to do the data collection first, and so that will often mean,

(07:25):
well, we'll have to put that project on hold for a year or two or three as we collect that data, and then we can build a system to mimic that capability.

Speaker 1 (07:41):
Got it. So, you know, these systems are gaining traction, autonomous systems. I've been driving around in Waymos. It's been super exciting. I have a couple of apps that are sort of agentic in nature. But what about real business problems? Are you seeing a lot of them being solved with AI, like today, and not, uh, in the lab?

Speaker 2 (07:58):
Oh, absolutely, absolutely. We just wrapped up a project for one of our clients, a publicly traded company that does billions in loans a year, has a lot of fraud, a lot of manual processes. For non-disclosure reasons, I won't go too deep, but we built...

(08:18):
This is a great example of kind of coming full circle on what I was mentioning before. They had decades of data collected about loan decisioning from their experts, and so we were able to build a system that could mimic those capabilities. And the beautiful thing about this system was, they're doing

(08:40):
billions of loans a year, but that decisioning was prone to fraud, and the system that we trained allowed them to move from a 40-hour-a-week business stance to 24/7.

(09:03):
It reduced their turnaround decisioning from 28 to 48 hours to nine seconds. Fraud dropped dramatically, chargebacks are reduced, and all

(09:26):
of the hundreds of people that were doing that task are still employed; they've been redirected to the cases that the AI flagged as being suspect and that really require human intervention. It's really one of those examples of where you can leverage AI to automate parts of your business, and it's just a win-win-win across the board.

Speaker 1 (09:45):
Brilliant. So let's talk a bit about ethics and governance. And you have a unique position as an independent, not wedded to big tech that, you know, has an agenda. How do you think about walking that line between, you know, innovation and responsible AI use?

Speaker 2 (10:03):
That is one of our main offerings.
Actually, we started as just apure engineering firm.
So you would you know, all ofour early clients would come to
us and they would say, hey,would it be possible to solve
this problem with AI?
And we realized, over time Ithink we were about four or five
years old we realized thatthere was a sort of a missing

(10:24):
piece to our offering.
Businesses often needed helpfiguring out what to pursue and
they almost alwaysunderestimated the sort of
ethical and governance issues.
So, for example, on the ethicalside, as I mentioned earlier,
most artificial intelligencesystems today are based upon a

(10:46):
technique called supervised learning, where you train these models on a bunch of data. These models will soak up that data and they will get really good. They will get so good that they will mimic the bad, biased behavior in the data, even if you don't want them to, and we've seen this over and over.
You go to build some system to make predictions, and if there is

(11:08):
a legacy of racial discrimination, that model will soak up that behavior and replicate it. So it's really, really critical (and we do this with all our engagements) that you take the time to understand the nature of your data. It can be biased in many, many different ways, not just in, like,

(11:29):
sort of socioeconomic ways.
It can have bias in other ways too. I'll give you another example. We've built systems that can accurately predict the risk of breast cancer years in advance, but the model (these tricky little models are so smart) was actually learning to do things like accurately predict the patient's age and race and

(11:53):
weight, and it was even doing things like accurately predicting what model of machine the mammogram was done on.
And the reason that's problematic is that it's very common for sicker patients to go to higher quality machines, and so, if you don't take these types of data issues into

(12:18):
consideration, we could have easily built a model that we thought predicted the risk of breast cancer but was actually just really good at identifying the version of the mammogram machine that the mammograms were done on. So we did a ton of work to make sure that the model literally was no better than just guessing at any of those different areas,

(12:41):
and that led us to be able to build a model that is, you know, radically less biased than the traditional metrics out there, like the Tyrer-Cuzick metrics and things like that. This model was recently approved by the FDA. So that's sort of the bias.
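The check Ron describes, making sure a model is "no better than just guessing" at confounders like machine type, is often done with a linear probe on the model's internal representation. A deliberately simplified sketch on synthetic data (the embeddings, the leaked "machine" signal, and the single projection step are all invented for illustration; real mitigation work is far more involved):

```python
import numpy as np

rng = np.random.default_rng(0)

n, d = 1000, 16
machine = rng.integers(0, 2, n)   # hypothetical confounder label (machine A/B)
X = rng.normal(size=(n, d))       # stand-in for model embeddings
X[:, 0] += 2.0 * machine          # the leaked confounder signal

def probe_accuracy(X, y, steps=500, lr=0.1):
    """Fit a tiny logistic-regression probe by gradient descent and report
    training accuracy; high accuracy means y is linearly decodable."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        grad = p - y
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return ((X @ w + b > 0) == y).mean()

acc_before = probe_accuracy(X, machine)

# "Scrub": project out the class-mean difference direction, one simple
# concept-erasure step.
u = X[machine == 1].mean(0) - X[machine == 0].mean(0)
u /= np.linalg.norm(u)
X_scrubbed = X - np.outer(X @ u, u)

acc_after = probe_accuracy(X_scrubbed, machine)
print(f"machine decodable before scrub: {acc_before:.2f}")
print(f"machine decodable after scrub:  {acc_after:.2f}")
```

A probe stuck near 50% accuracy after scrubbing is evidence the representation no longer carries the confounder, which is the property the mammogram team was validating.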

(13:02):
The point there is, there are real issues that you have to be concerned around. And then governance: if you're in a regulated environment, there are lots of things that you need to think about, not just about data distribution skew and things like that. You may have explainability requirements. So, for example, if you're doing loan approvals, like I mentioned earlier, and you build a system that's a black box AI,

(13:22):
it may be really, really good at making predictions about loan payments, but if you can't understand why it's making those decisions, if it doesn't have explainability, it's not going to pass regulatory muster. You need to explain to somebody why their loan may have been

(13:43):
rejected. So there's all of these really, really complicated issues that come up with AI initiatives that a lot of companies are just starting to get their hands around.
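The explainability requirement described here is one reason regulated lenders often favor glass-box scorers: with a linear model, each feature's contribution to a decision can be read off directly and turned into the adverse-action reasons a regulator expects. A toy sketch (the feature names, weights, and applicant values are all invented; a real credit model would be trained and validated, not hand-set):

```python
import numpy as np

# Hypothetical glass-box loan scorer: score is a weighted sum of features.
features = ["debt_to_income", "years_credit_history", "recent_delinquencies"]
weights = np.array([-3.0, 0.5, -2.0])   # hypothetical learned coefficients
bias = 1.0

applicant = np.array([0.9, 2.0, 1.0])   # one hypothetical applicant
contributions = weights * applicant      # per-feature effect on the score
score = contributions.sum() + bias
decision = "approve" if score > 0 else "reject"

# Adverse-action reason codes: the features that pulled the score down most.
order = np.argsort(contributions)
print(f"decision: {decision} (score {score:+.2f})")
for i in order[:2]:
    print(f"  adverse factor: {features[i]} ({contributions[i]:+.2f})")
```

A black-box model can make the same prediction, but it cannot hand back this kind of per-feature account of why the loan was declined, which is exactly what fails "regulatory muster."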

Speaker 1 (13:55):
Interesting. So Gartner just came out with an interesting blog suggesting that 40% of agentic AI projects will be canceled by 2027, which probably has a lot of people scratching their heads about their own projects. How do you see the balance of experimentation with delivery,

(14:15):
real clients and real value in the enterprise? What's that look like?

Speaker 2 (14:23):
I will be surprised if the number is not higher than 40%, and the reason is, you know, all of the AI systems that I've talked about to date (and we've built well over 100 production systems), all of them, if you'll notice, are examples of narrow AI,

(14:48):
meaning they're domain specific. They have superhuman capabilities, but often along just one or two dimensions, right? Then these agentic systems (and it's still very early days), they're going to be incredibly powerful, but, just like I mentioned earlier about humans, they're going to have a

(15:10):
sort of a jagged edge along the frontier of reliability, and what that means is they're going to fail in unpredictable ways, and that is what's going to make them very difficult to deploy in enterprise environments.
We see the same thing with generative AI. If you are using generative AI on a personal basis and you're

(15:33):
interacting with Claude or ChatGPT, you interact and you check the prompts and you go back and forth until you get what you want. And sometimes you'll say, are you sure? You may massage the interplay a little bit, but you definitely don't just ask it a question and,

(15:53):
whatever it gives you, just take it as gospel and send it out into the world. And that's the challenge with generative AI, and that's the challenge with agentic systems right now.
Now let me be clear, so I don't come off as too pessimistic. These are just going to be the challenges along that jagged edge. Agentic AI is going to revolutionize business, and it's

(16:15):
going to happen a lot faster than people think. But there are going to be a lot of tears and a lot of wasted money as people realize that, you know, that 2% error rate is maybe not something that their enterprise can live with, that that probabilistic capability is

(16:38):
something they're going to have to get used to.

Speaker 1 (16:42):
Interesting. Let's talk about the talent equation that you see. Everyone's trying to chase the same talent pool out there and trying to upskill and build internal talent, but of course there's a big gap between, you know, that and the people that are available. You're helping partly bridge that kind of gap, but

(17:07):
how do we scale up when it comes to talent and know-how?

Speaker 2 (17:11):
That's a great question. I think it's going to be quite a while until we are at a point where the talent supply and demand has sort of equalized. And the reason is, you know, building state-of-the-art AI systems is significantly more complicated than traditional

(17:34):
software because of some of the things we've mentioned around the probabilistic nature. There's quite a bit more math involved than in traditional software.
But the other part of it is, you know, this is funny: I remember in the 90s, in college, one of my computer science professors saying something to the effect

(17:56):
of, like, you know, we're really struggling as an industry in software because you can be missing one character, like a semicolon, and your entire code base breaks, the entire program breaks. And he kind of, you know, wistfully said, wouldn't it be great one day if we had computer systems that were more like

(18:20):
biological systems? You know, they didn't just completely tip over from any part of them being damaged, and they had redundancy. Well, sometimes, you know, you've got to watch out for what you wish for, because we have that now.
These systems are incredibly redundant and resilient, but the black box nature and the complexity of these systems

(18:42):
means they're really hard to train. And, you know, we've been in many instances (I'm not afraid to admit this, and I work with some of the smartest people on the planet in AI) where we're training models and we'll get stuck and the model will stop learning and you can't figure out: is it a data issue? Do we have a bug?

(19:03):
Have we hit some sort of weird, you know, hyperparameter issue that's preventing us from going down the gradient further? And then you have to rely upon your heuristics that you've built up over decades to kind of get yourself out of these situations.

(19:24):
And so it's still quite a bit more art than science, and I think that's one of the reasons it's so hard to predict the future. We don't know where we're going to be in five years, because there are these sort of emerging capabilities. So I think it's going to be quite a while until we see a balance on the supply-demand curve.

Speaker 1 (19:43):
Yeah, I would agree with you there. Let's talk a little bit about your business model
and the industry. I've worked for several services consultancies and software services companies over the last 30 years, and not much creativity necessarily there. You know, we just had a bench of 10,000 engineers and we threw them at problems and, you know, did good work.

(20:04):
But are we heading into a new kind of services landscape when, you know, you're doing code creation with AI and you don't necessarily need the same kinds of skills? What does the future look like for services, for professional services?

Speaker 2 (20:17):
That is such a good question, Evan. I honestly don't know. I think that the AI coding assistants have matured faster than almost anybody anticipated, myself included, and I was extremely bullish about them from day one. I really expected them to make an impact.

(20:40):
Even I didn't think it would happen this fast, and so, if I was forced to place a bet, I would say this: I think closing that last gap without human oversight is going to take a while, meaning I think it'll be probably closer to 2030 than 2026 before we see

(21:07):
these coding assistants where you can describe what you need at the highest level and they just get it bug-free right on the first try. But I don't think

(21:27):
it's super far away.
And then that just begs the question: is this going to reduce the need for software developers? There was a lot of talk last year about the Jevons paradox: as something becomes more affordable, demand rises. And I think that that's entirely possible as well, that instead of this being the end of the software development lifecycle

(21:53):
for engineers, it could just be the early days, right? Demand could go through the roof. Honestly, though, I mean, that's one of those areas I'm probably most confused about. I don't have a strong opinion.

Speaker 1 (22:05):
Yeah, I think we have seen really weak demand for computer science and software engineering graduates, and that's a little bit of a red warning light. So we'll have to see. What about KungFu.ai? What are you focused on over the next year or two? Where are you building and, you know, marketing and selling?

Speaker 2 (22:25):
We... yeah, we're focused, as we have always been. We really want to do one thing, and I love being able to say this: we want to help our clients build real AI systems, things that actually go into production, things that actually work, that actually make their business better and stronger, and then we want to help them through that whole journey,

(22:45):
whether it's strategy or governance, roadmapping or literally hands-on-keyboard model building. That's where our passion lies, and that's part of the reason I think we're able to hire such elite talent, because people come to KungFu.ai because they want to build stuff that changes the world, that makes a difference, and they

(23:07):
can come here and do that.

Speaker 1 (23:09):
Fantastic. So you're in Austin, one of my favorite places.
Where can people meet you, either there or out and about? Any events this summer or the fall that you're excited about?

Speaker 2 (23:19):
Yeah, I'm not doing a bunch of events this summer. It is literally the case that we are so overloaded with work right now that I'm actually canceling some vacation plans, in fact, even to make this happen. But if you ever want to reach out to me personally, it's Ron Allen at KungFu.ai. And we have our own podcast.

(23:43):
It's called Hidden Layers. It's a little bit of a technical deep dive on AI. If you've ever wanted a little bit of a glimpse behind the curtain, I would encourage people to please check that out.

Speaker 1 (23:53):
Fantastic. Well, enjoy the summer. Stay cool, whatever it is, 100 degrees in Austin, but you guys are used to it. Thanks so much, Ron, for joining and sharing.

Speaker 2 (24:02):
Thank you so much. This was a ball.

Speaker 1 (24:04):
And thanks everyone, and be sure to check out our new show at techimpacttv, now on Bloomberg and Fox Business. Take care, everyone.