Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Jerod (00:04):
Welcome to the Practical AI podcast, where we break down the real-world applications of artificial intelligence and how it's shaping the way we live, work, and create. Our goal is to help make AI technology practical, productive, and accessible to everyone. Whether you're a developer, business leader, or just curious about the tech behind the buzz, you're
(00:24):
in the right place. Be sure to connect with us on LinkedIn, X, or Bluesky to stay up to date with episode drops, behind-the-scenes content, and AI insights. You can learn more at practicalai.fm.
Now, onto the show.
Daniel (00:49):
Welcome to another episode of the Practical AI podcast. In this Fully Connected episode, it's just Chris and I, no guests, and we'll spend some time digging into some of the things that have been released and talked about in AI news and trends, and hopefully spend some time helping you level up
(01:10):
your machine learning and AI game. I'm Daniel Whitenack. I am CEO at Prediction Guard, and I'm joined as always by my cohost, Chris Benson, who is a principal AI research engineer at Lockheed Martin. How are you doing, Chris?
Chris (01:24):
Doing great today,
Daniel. So much happening in the
world of AI. Holy cow. Yes.
Daniel (01:30):
Yes. Lots to catch up on. There have been a number of interesting developments in the news that maybe people have heard about, and yeah, it might be good to just kind of distill down and synthesize a little bit in terms of, you know, what they mean, what they signal, how people can think about how
(01:52):
certain things are advancing. So, yeah, looking forward to this discussion. Anything that has been particularly standing out to you that you've been hearing about, Chris?
Chris (02:04):
Sure. I think the thing that has really been noticeable in recent weeks is that so many people, both in the AI world and outside of it, but impacted like everybody is, are talking about jobs. And we've talked about the impact of AI on jobs many, many times on the show over time, but people are
(02:25):
really, really feeling it at this point. The job market is pretty tight. I've talked to lots of people out there looking, whether they're currently employed or whether they're out of school. And particularly, there's a lot of people in technology coming out of university who are really struggling right now. And
(02:46):
I believe there was a report recently from MIT that highlighted that.
Daniel (02:49):
Yeah. It's interesting that we spent a good number of years on this podcast, of course, occasionally talking about some of the wider impacts of this technology, you know, within a certain company or industry. Now this is kind of a global, across-all-people thing, you
(03:11):
know, all sectors. You see things being hit hard, especially in, you know, maybe it's sales and marketing, or kind of junior developer types of cases. If I remember right, Chris, with the MIT report that you're talking about, we can link to some of the news articles about it.
(03:33):
I don't think I've actually seen the actual MIT report yet, so I guess our listeners can keep that in mind. But one of the things that I've actually heard on multiple calls, and I was at an AI corporate-type event last week where there were a bunch of corporate leaders, and they were certainly talking
(03:54):
about this. One of the things talked about in the MIT report was that ninety five percent of AI pilots fail. And that, I think, has generally spooked a lot of business leaders, investors, lots of different people across industry, just this idea that
(04:16):
ninety five percent of AI pilots fail. What's your thought on that?
Chris (04:23):
I think it's a weird juxtaposition that we're in right now, in that that's accurate, that you're having a tremendous number of Gen AI efforts in particular fail. But at the same time, companies are holding back on hiring junior devs out of school. And so you have this weird mesh of people
(04:44):
going hardcore on trying vibe coding and things like that, but with very limited success, struggling to get it adopted. And at the same time, they're making bets on the future by not going ahead and bringing in the junior-level developers that they always have, which kind of leads to an
(05:04):
interesting what-if situation in the months ahead. You know, junior developers, historically, have eventually turned into senior developers. And right now, companies are betting on senior developers with these new AI capabilities from the last couple of years to make up for that deficit, hoping to save money. But if you're failing 95% of the time, it puts things into
(05:27):
an interesting place.
Daniel (05:28):
Yeah. One of the things... I'm just reading one of these articles about the AI pilots. And one of the things that it highlights is that, from the report's perspective, it's not that the AI models were incapable of doing the things that people wanted to prove out
(05:49):
in the AI pilots, but that there was a kind of major learning gap in terms of people understanding how an AI tool or workflow or process could actually be designed to operate well. And I
(06:09):
find this very, very prevalent across all of the conversations that I have, especially in workshops and that sort of thing. There's kind of this disconnect: people are used to using these kind of general-purpose models, maybe personally, and there's this concept that the way I implement
(06:34):
my business process with an AI system is similar to the way I prompt an AI model or a ChatGPT-type interface to summarize an email for me.
And that is always going to create some pain. Number one, because
(06:56):
these AI models only sometimes follow instructions. But number two, your business processes are complicated, right? They're complicated. They rely on data that is only in your company and probably has never been seen by an AI model unless you accidentally leaked it.
(07:16):
Jargon, all of those sorts of things. And number three, these business processes that people are trying to automate or create a tool around, often the best thing for that is not a general chat interface, right? It's not like you want to create a chat
(07:37):
interface for everything you wanna do in your business. No, actually, in one particular case it may be that you wanna drop a document in this SharePoint folder. And when it's dropped in the SharePoint folder, it triggers this process that takes that document and extracts this information and compares it to information in your database and
(07:57):
then creates and sends out an email to the proper people, or, you know, adds something somewhere. These sorts of processes are not general chat interfaces.
So people are coming at it like, "Oh, I know how to kind of prompt these models to do some things." And so they try to kind of build or prompt these models in a certain way without the
(08:20):
I don't know why there's such a disconnect, but without the understanding that really what they need is maybe a custom tool or automation. They need data integration, data augmentation to these models. They don't just need a model plus a prompt. And I think that's a pitfall that I see, unfortunately, very
(08:45):
often.
So it's not super surprising for me to see this kind of highlighted in the MIT report.
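To make that SharePoint-style workflow concrete, here is a minimal sketch of an event-driven document pipeline in Python. It is illustrative only: the watched folder path, the extraction prompt, the database schema, and the email addresses are hypothetical, and the language model call assumes a generic OpenAI-compatible chat completions endpoint rather than any particular vendor.

```python
# Minimal, illustrative pipeline: a document lands in a watched folder, an LLM extracts
# fields, the result is checked against an internal database, and an email goes out.
# Paths, prompts, table names, and addresses below are hypothetical.
import json
import smtplib
import sqlite3
from email.message import EmailMessage
from pathlib import Path

import requests

WATCH_DIR = Path("/mnt/shared/contracts-inbox")         # stand-in for the SharePoint folder
LLM_URL = "http://localhost:8080/v1/chat/completions"   # any OpenAI-compatible endpoint


def extract_fields(text: str) -> dict:
    """Ask a language model to pull structured fields out of the document text."""
    prompt = ("Extract the vendor name, contract value, and renewal date from this "
              "document. Respond with JSON only.\n\n" + text)
    resp = requests.post(
        LLM_URL,
        json={"model": "any-instruct-model",
              "messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    resp.raise_for_status()
    # Assumes the model actually returned JSON; real pipelines validate this.
    return json.loads(resp.json()["choices"][0]["message"]["content"])


def vendor_is_approved(fields: dict) -> bool:
    """Compare the extracted vendor against an internal record (hypothetical schema)."""
    with sqlite3.connect("internal.db") as conn:
        row = conn.execute("SELECT approved FROM vendors WHERE name = ?",
                           (fields["vendor"],)).fetchone()
    return bool(row and row[0])


def notify(fields: dict, approved: bool) -> None:
    """Send the result to the right people."""
    msg = EmailMessage()
    msg["From"] = "automation@example.com"
    msg["To"] = "contracts-team@example.com"
    msg["Subject"] = (f"Contract from {fields['vendor']}: "
                      + ("approved vendor" if approved else "review needed"))
    msg.set_content(json.dumps(fields, indent=2))
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)


def process_new_documents() -> None:
    """Run whenever a file lands in the watched folder (simple polling shown here)."""
    for doc in WATCH_DIR.glob("*.txt"):
        fields = extract_fields(doc.read_text())
        notify(fields, vendor_is_approved(fields))
        doc.rename(doc.with_suffix(".processed"))   # mark as handled


if __name__ == "__main__":
    process_new_documents()
```

Even this toy version is a triggered automation with data integration, not a chat interface, which is the gap the discussion is pointing at.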
Chris (08:51):
Yeah, I agree. I think, to your point, these chat interfaces are kind of becoming the universal hammer in most people's heads, and everything is starting to look like a nail for that hammer to hit, and they are neglecting a toolbox full of tools that give them the right software components for getting their workflows put together the way
(09:14):
they want. And so yeah, I think it's a certain amount of over-expectation that is then exacerbated by choosing the wrong software approach, or an incomplete software approach, to try to get the job done, that business workflow done. So
(09:36):
that's certainly my sense of it. I think a lot of these are driven top-down, you know, by excited executives who haven't taken the time to really understand how to optimally use these tools.
Daniel (09:49):
Yeah. Yeah. And another thing that was kind of interesting in the study is that this kind of just prompting the models generally failed. There's kind of a knowledge gap on how to integrate data and how to build custom solutions in a way that could succeed in a POC sort of thing. But I
(10:13):
actually don't agree with the premise that this should spook investors away from AI companies, maybe some AI companies, but not AI companies that are verticalized, in general.
I lead an AI startup, so I may be biased, but
(10:35):
our AI startup isn't one of these kind of verticalized, application-layer things. So I feel like I can maybe speak objectively with respect to those. It's really these kind of AI companies that are in, whatever it is, healthcare or the public sector or education or finance or that sort of thing. They are putting in the work, I think, at least many of
(10:58):
them are, to understand business processes and build robust AI workflows and tools that fit certain business use cases, that sort of thing. And I think one of the stats in the report was that a lot of the trials of these sorts of tools did actually succeed a majority of the time.
(11:20):
And so, I guess in summary, what I'm trying to say is there's this major gap between, on one side, this idea that all you need is access to a model, and on the other side, these kind of pre-built, purpose-built AI systems for particular verticals
(11:43):
that understand the business processes. There's a whole gap in the middle, because many companies, especially in the enterprise setting, will have to customize something. I don't think, in the end, they will always be able to use a tool off the shelf that works completely for them. If you look at any enterprise software, it's always customized, right?
(12:06):
At some level, whether it's manufacturing software, ERP software, CRM, whatever, it's always customized. I think that's where maybe this is highlighting that gap: companies not understanding the gap between having a model and a verticalized solution for their company. And that actually does
(12:30):
require significant understanding of how to architect and build these systems, which unfortunately there's a skills gap and a learning gap in terms of people who actually have that knowledge.
Chris (12:45):
I think you're drawing out a great point there. And I know we've talked a little bit about this in previous shows, where the model constitutes a component in a larger software architecture. And we know, as you just pointed out, the expertise of those business workflows being integrated into vertical software stacks, where it is designed to solve the
(13:09):
problems and not just be a chat box, is really important to getting to a good solution that works for your business. I think this is where one of those challenges that we're seeing in a lot of folks out there in the business world is kind of forgetting that core tenet and leaping straight to "the model will run my business from this point
(13:31):
forward," without that supporting infrastructure. So maybe there are some hard lessons to be learned in the days ahead for some companies, but hopefully that learning will happen.
Sponsors (13:49):
Well, friends, when you're building and shipping AI products at scale, there's one constant: complexity. Yes. You're wrangling the models, data pipelines, deployment infrastructure, and then someone says, let's turn this into a business. Cue the chaos. That's where Shopify steps in, whether you're spinning up a storefront for your AI-powered app or
(14:09):
launching a brand around the tools you built.
Shopify is the commerce platform trusted by millions of businesses and 10% of all US ecommerce, from names like Mattel and Gymshark to founders just like you. With literally hundreds of ready-to-use templates, powerful built-in marketing tools, and AI that writes product descriptions for
(14:30):
you, headlines, even polishes your product photography, Shopify doesn't just get you selling, it makes you look good doing it. And we love it. We use it here at Changelog.
Check us out at merch.changelog.com. That's our storefront, and it handles the heavy lifting too. Payments, inventory, returns, shipping, even global logistics. It's like
(14:52):
having an ops team built into your stack to help you sell. So if you're ready to sell, you are ready for Shopify.
Sign up now for your $1 per month trial and start selling today at shopify.com/practicalai. Again, that is shopify.com/practicalai.
Daniel (15:16):
Well, Chris, there's a couple of things that I've been following with respect to the model builders, but it does actually connect to this MIT report as well, because one of the other common things that I see, or a way of thinking that companies have when they
(15:39):
approach AI transformation, is they come at the problem of AI adoption with the question of "what model are we going to use?" Which I think is completely the wrong question to be asking, for a number of reasons. First of all,
(16:02):
if you're coming to AI for the first time with your company, and you want to transform your company with AI and build knowledge assistants and automations and adopt other tools and build verticalized solutions, the model actually will shift over time a lot. So that's, I think, number one:
(16:27):
there's no single one on the market, at least right now; there are certainly many providers of models, and there's a lot of good models.
No one knows who will have the edge on models in the future. And I think what we're actually seeing is that the model side is fairly commoditized. You can get a model from anywhere. The second reason I think that's the
(16:50):
wrong question to be asking is that if you're trying to build an AI solution within your company, again, think about that SharePoint thing that I talked about: I'm gonna process this document from SharePoint and extract these things and send it in an email.
You actually don't need a model. You need a set of models, and
(17:11):
potentially, you know, other things in the periphery around those models. So you likely need a document structure processing model, like a Docling. You need a language model to process pieces of that. You maybe need embedding models to embed some of that text or do retrieval.
You need a re-rank model, because you've gotta re-rank your
(17:33):
results after doing retrieval. You need safeguard models, because you want to be responsible and check the inputs and outputs of your model. So once you start adding these things up, even for that simple use case of processing this document through SharePoint and out the other end... If you're
(17:56):
coming to one of these proof-of-concept projects like we've been talking about, and you're thinking, "What is the model I'm going to use for this?", and you decide, okay, the model I'm gonna use for this is a GPT model or a Llama model or whatever, well, you're already setting yourself up for failure.
Because what you don't need is a model. What you need is a set of models. You sort of need an AI platform. You need an AI system
(18:19):
that gives you access to multiple different types of functionality, right? And so I think that kind of perspective plays into this POCs-failing thing.
And I have more thoughts about that related to the model builders, but do you think I'm off on that? Or
(18:39):
how would you correct me?
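As a rough sketch of that "set of models" idea, here is one way several distinct model services (document parsing, embedding, re-ranking, generation, and a safeguard check) might be composed in a single retrieval workflow. Every URL, model name, and response field below is a placeholder assumption, not a real API, and the vector store is a stand-in for whatever store you actually use.

```python
# Sketch of a multi-model pipeline; every URL, model name, and response field is a
# placeholder, and vector_store stands in for any store with add() and search() methods.
import requests

PARSER_URL = "http://doc-parser.internal/parse"       # structure-aware document parser (Docling-style)
EMBED_URL = "http://embeddings.internal/embed"        # embedding model
RERANK_URL = "http://reranker.internal/rerank"        # re-rank model
GUARD_URL = "http://safeguard.internal/check"         # input/output safeguard model
LLM_URL = "http://llm.internal/v1/chat/completions"   # OpenAI-compatible language model


def answer_from_document(pdf_bytes: bytes, question: str, vector_store) -> str:
    # 1. Parse the document's structure into text chunks.
    chunks = requests.post(PARSER_URL, files={"file": pdf_bytes}).json()["chunks"]

    # 2. Embed the chunks and the question, then retrieve candidate chunks.
    vectors = requests.post(EMBED_URL, json={"texts": chunks}).json()["vectors"]
    vector_store.add(chunks, vectors)
    q_vec = requests.post(EMBED_URL, json={"texts": [question]}).json()["vectors"][0]
    candidates = vector_store.search(q_vec, top_k=20)

    # 3. Re-rank the candidates against the question and keep the best few.
    ranked = requests.post(RERANK_URL,
                           json={"query": question, "documents": candidates}
                           ).json()["documents"][:5]

    # 4. Safety-check the input, generate an answer, then safety-check the output.
    if not requests.post(GUARD_URL, json={"text": question}).json()["safe"]:
        return "Input rejected by safety check."
    answer = requests.post(
        LLM_URL,
        json={"model": "any-llm",
              "messages": [{"role": "user",
                            "content": "Answer using this context:\n"
                                       + "\n".join(ranked)
                                       + "\n\nQuestion: " + question}]},
    ).json()["choices"][0]["message"]["content"]
    if not requests.post(GUARD_URL, json={"text": answer}).json()["safe"]:
        return "Output rejected by safety check."
    return answer
```

Even in this toy form there are five different model services involved, each of which can be swapped or scaled independently, which is the "AI platform" point rather than a "pick a model" point.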
Chris (18:40):
No, I have no correction. We've talked about this as well a bunch of times. I think we've had a lot of really in-depth conversations with people on the show in the past about the need for multiple models to tackle different concerns within your larger business focus and the
(19:00):
software architecture that supports that. And I think as you look forward, you know, we're seeing agentic AI, we're seeing physical AI developing more and more. And all of those require a number of different models to do, you know, obviously very different, distinct things that you can see in those spaces. And so there seems to be
(19:23):
this hang-up in the public about the model, which model, and how do I pick the model, and as you pointed out, that's completely the wrong thing.
It's: what does your model architecture look like as related to your business workflows and, you know, the job that you need doing, and how do you do that? And as you were describing some of those potential models that
(19:47):
one might have in a flow earlier, as I was listening to your examples, I was thinking, wow, that sounds a lot like software architecture, you know; it's just that each component is invested with one or more models now. But there are still many components that make up a full business workflow. And so I guess, maybe because we talk
(20:10):
about it fairly regularly on the show here, it seems quite obvious to me that that's the case.
But clearly it's not, if you look at the business decisions that are being made out there. There is clearly a need at this point. You may have a senior developer
(20:31):
type, by whatever title you're applying, working and kind of sort of knowing software dev at some level, and sort of vibe coding and putting their knowledge of architecture together for a solution. But if you're not going to bring in junior devs, then you're making a gamble that you're not going
(20:52):
to need that at some point. And yet what we're seeing is, you know, per that report, a ninety five percent failure rate on using these tools at the current point in time.
We had a recent episode on risk management. And from a risk management perspective, I think that there are a lot of very risky decisions being made by executives, largely in ignorance,
(21:16):
I think, of understanding how models and software architecture fit together, to your point. So no, I'm in violent agreement with what you were saying a moment ago.
Daniel (21:27):
Yeah. This idea that you kind of bring in a model also produces a little bit of problematic behavior around adoption of models, particularly in private, kind of secure environments. And I know this one from experience, where you
(21:51):
kind of think, well, which model am I going to use? And then you think, well, there's a couple of categories of models, right? There's closed models and there's open models, the closed models being the GPTs or Anthropic or etcetera,
the open models being the Llama or DeepSeek or Qwen or whatever. And you have smart people in your company, and
(22:16):
whoever, Frank over in infrastructure, is like, "Yeah, I can spin up a Llama model in our infrastructure." And there's innumerable ways to do that at this point. You can use vLLM or you can use Ollama or whatever it is. Right?
I can spin up one on my laptop. And so you spin up the model and
(22:38):
then you're like, all right, well, let's build our POC against Frank's model that he's deployed internally, because we now know how to do that. But again, it's not so much that Frank did a poor job, or the deployment is bad, or the tools are bad; something like vLLM is very powerful. But it's not a
(22:58):
proper comparison, because what you've deployed is a single model, not a set of AI functionalities to build rich AI applications. You now have a private model, which, again, only does what a private model does, that one particular model. It doesn't give you that rich set of AI functionalities. And so
(23:22):
it's not really a knock against open models.
What it is an indication of is that you maybe shouldn't roll your own AI platform. There's a lot of things that go into that, and there's various ways to approach it. But I think that
(23:42):
misunderstanding of "what model do I need" also impacts the perception of these open source models, because most of the time when you deploy that open source model, you're only getting a single model endpoint versus kind of a productized AI platform.
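For context, "Frank's model" in this scenario usually amounts to a single OpenAI-compatible endpoint. Here is a minimal sketch of what the application actually gets to call in that case, assuming a vLLM-style server on its default port with whichever single model happened to be deployed; the model name and prompt are only examples.

```python
# Calling a single self-hosted model endpoint. vLLM (and similar servers) expose an
# OpenAI-compatible API, so the application gets chat completions from that one model
# and nothing else. Port and model name depend entirely on how it was deployed.
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",      # vLLM's default port; adjust as needed
    json={
        "model": "meta-llama/Llama-3.1-8B-Instruct",  # whichever single model was deployed
        "messages": [{"role": "user", "content": "Summarize this email: ..."}],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```

There is no document parsing, embedding, re-ranking, or safeguarding behind that URL unless someone deploys and operates those services too, which is the point being made about a single endpoint versus a platform.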
Chris (23:58):
Yeah, I guess, I know that in your own business you do help bridge some of that gap with some of the things that you guys do. But in general, if you're talking about the broader market, people have service providers out there that can give them some of those services. But as they are
(24:21):
going and deploying, and in a sense, we've always encouraged open source and open-weight models out there. We've talked about that a lot, and we like to see that. And yet there is this skill gap, or understanding gap, that you've just defined in the business community of: yes, you have these capabilities, but
(24:42):
you've got to connect all of your resources, and I'm using that term in a very generic way, all of your resources together, to give you the capabilities you need for your business to operate the way you envision.
And, you know, there's definitely a falling-down in understanding within that gap space. What are some of the different options for
(25:03):
how people can get through that gap?
Daniel (25:05):
Well, I think one of the things that can be done is to approach this sort of problem from a software and infrastructure architecture standpoint, per what you were saying before. A lot of what we're talking about really falls on that architecting side of things. And so I think,
(25:25):
from the beginning, you don't come to it with the question being "what model are we gonna use, and who's gonna deploy it internally?", right? But you come to it from the standpoint of: we will be using models.
There will be many models. They will be connecting to many
(25:48):
different software applications. Okay, well, that changes the game a little bit in terms of managing it and making it robust over time. And there are very many capable engineering organizations that know how to scale bunches of services and keep them up and set up uptime monitoring around them and alerting and centralized logging and all of those sorts of
(26:09):
things. But you never get to have those conversations if you kind of cut it out before you get there by just saying, "we will have a model living somewhere."
And so you really need to approach it from this distributed systems standpoint. And once you start doing that, you start talking to the experts that are on your team. And there
(26:31):
are very many tools, depending on the standpoint that the company wants to approach this from, everything from the company still managing a lot of what they want to deploy and using orchestration tools, whether that be something like Rancher, which is generic and not AI-related,
(26:53):
right? But they're used to using it. Maybe they're already using it in their organization, and they can orchestrate deployments of various AI things.
And then there's AI-specific approaches to this as well. So I think it is really a perspective thing. And as soon as you get into that zone of, well, we do need this software
(27:14):
architecture, we need this kind of SRE and DevOps approach to things, then you really have to ask some of the hard questions like: can we vibe code our way through this? What kind of software knowledge do we need to actually support this at scale? And I think what people will find is you do, at the
(27:38):
minimum, I think, still need that software engineering and infrastructure expertise to do it at scale well, or at least to guide some of the vibe coding type of things that happen.
Right? So there needs to be an informed pilot to help guide some of these things and make sure the ship is going in the
(28:00):
right direction.
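As a small illustration of treating this as a distributed systems problem, here is a toy health-check loop over several model services. The endpoint URLs and the polling interval are made-up assumptions; in practice this is the kind of thing you would wire into existing uptime monitoring, alerting, and centralized logging rather than hand-roll.

```python
# Toy health-check loop over several model services; the URLs and interval are
# illustrative. In production this belongs in your existing monitoring/alerting stack.
import logging
import time

import requests

SERVICES = {
    "llm": "http://llm.internal/health",
    "embeddings": "http://embeddings.internal/health",
    "reranker": "http://reranker.internal/health",
    "safeguard": "http://safeguard.internal/health",
}

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")


def check_once() -> None:
    for name, url in SERVICES.items():
        start = time.perf_counter()
        try:
            ok = requests.get(url, timeout=5).status_code == 200
            latency_ms = (time.perf_counter() - start) * 1000
            if ok:
                logging.info("%s healthy (%.0f ms)", name, latency_ms)
            else:
                logging.error("%s returned non-200; page the on-call", name)
        except requests.RequestException as exc:
            logging.error("%s unreachable: %s; page the on-call", name, exc)


if __name__ == "__main__":
    while True:           # centralized logging/alerting would consume these records
        check_once()
        time.sleep(30)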
Chris (28:01):
I think that's very, very well put. And I think, you know, kind of combining this with the hiring decisions that we're seeing out there in the job market, and kind of the collapse of the bottom end of the software dev industry, there is a lot of developed expertise
(28:22):
over the course of a career. And while all of the discrete points of knowledge may be captured in various models out there, there's still the necessity of extracting what you need from those models in the right context and the right order. And
(28:45):
at least for the time being, in this kind of Gen AI-dominated world, you have to have somebody who can provide that kind of architectural view, who knows how to provide the context to get the things you need from your vibe coding. I think people are finding small successes there, where they say, "I want an
(29:05):
app that does this thing," and they describe the app in great detail, and the models that they're using will turn out kind of an app.
It may or may not be architected in a way for sustainability, and, you know, there's a whole bunch of issues, but it might make a very good prototype. But if you're not going to bring in junior-level coders that in the future will be your senior-level coders
(29:28):
that have this knowledge, then you're kind of betting on today's talent producing something, and you're hoping that your model gets the nuance of all of those components and is able to generate its own context without your expertise, which may happen. But it's a big gamble. If you're a company right now, it seems to me a lot less risky to go ahead and continue
(29:52):
to bring in some junior-level developers for the purpose of growing them over time and being able to have that. Maybe at some point that does change in the future. But I think the companies that are making that bet today are taking some fairly significant business risks that are largely invisible to their executives.
Sponsors (30:24):
Well, friends, you don't have to be an AI expert to build something great with it. The reality is, AI is here. And for a lot of teams, that brings uncertainty. Our friends at Miro recently surveyed over 8,000 knowledge workers, and while 76% believe AI can improve their role, most, more than half, still aren't sure when to use it. That is the exact gap
(30:48):
that Miro is filling.
And I've been using Miro for everything from mapping out episode ideas to building out an entire new thesis. It's become one of the things I use to build out a creative engine. And now with Miro AI built in, it's even faster. We've turned brainstorms into structured plans, screenshots into wireframes, and sticky-note chaos into clarity, all on the same canvas. Now, you don't
(31:13):
have to master prompts or add one more AI tool to your stack. The work you're already doing is the prompt. You can help your teams get great done with Miro. Check out miro.com and find out how. That is miro.com, M-I-R-O dot com.
Daniel (31:34):
Well, Chris, I do think that there are some consistent news stories from other sources outside of the MIT report that kind of reinforce some of what we've been talking about. And they also, I think, are just generally interesting as individual data points. I've seen a number of those
(31:56):
related to OpenAI specifically. If we just look at what has happened with OpenAI in the previous number of weeks, some interesting things have happened. And I think that they signal some things that are, like I say, very consistent with what we've been talking about, as prompted by this MIT
(32:18):
report. Just to highlight a few of those, and then we can dig into individual ones of them.
One of the things that happened was OpenAI released GPT-5, which we haven't talked about a ton on the show yet, but they released GPT-5. Generally, the reception in the
(32:40):
wider public has been that people don't like it. It's sort of fallen flat a bit, I guess, would be a way to put it. So that's kind of thing one. At the same time, OpenAI open sourced some models again for the first time in a very long time,
open sourcing a couple of reasoning models, LLMs that do
(33:05):
this type of reasoning, and they open sourced those. And also near the same time, I forget the exact dates, someone listening can maybe provide those in a comment or something, but the other thing that happened was that they opened kind of a
consulting arm of their business and are entering into these
(33:27):
services consulting type of engagements, which are not cheap. I think the minimum price for a consulting services arrangement with OpenAI was like $10,000,000 or something like that. So you've kind of got this thing that's happening, which is a model that's kind of in this area of what has been their
(33:51):
moat, these closed models, kind of falling flat, them giving out some of the model side, you know, publicly, openly, and then opening up the services business side of things.
Now, I've drawn my own conclusions in terms of what
(34:13):
some of those things signal and mean, but any initial reactions or thoughts that you've had as things have come out like that, Chris?
Chris (34:22):
I think Sam Altman, the CEO of OpenAI, is a curious individual. He noted in January that maybe OpenAI had been, and I quote, "on the wrong side of history," close quote, when it comes to open sourcing technologies. But it was Sam who
(34:43):
made those decisions. And I think what he's seeing now is the market is evolving, it's maturing, as you would expect, and, you know, the early phase of focusing on these kind of frontier foundation models that were driving the hype for the last few years might be producing diminishing returns. Even though
(35:05):
the GPT-5 model is more capable, and that's what I use for most of my stuff, maybe some of the nuances, for instance the interface itself, the way it works, the way the models work... You know, people were preferring the 4o model; there was quite a bit of personification of that model, I think, going on with the public.
(35:26):
And I think OpenAI realized that, you know, there are these concerns, in addition to the fact that they had kind of left the services market to others. And so, you know, my belief is that they are starting to open source
(35:46):
some of these models with, you know, open weights, for the purpose of supporting a solid foothold at, you know, kind of the premier end, the expensive end, of the services market. And I think that's the motivator right there: making sure that,
with their competitors all having open source models,
(36:07):
they can play in the space as well.
And they can go in with their services organization and make money on services and point to their own open source models to be able to support that services business model that they're doing. So maybe a little bit of a jaded answer from me, potentially, just having, like you, watched them over a number of
(36:31):
years, month by month. But yeah, I would definitely say they're trying to lean in, recognizing that the business of AI is both expanding and maturing into that area.
Daniel (36:39):
Yeah. I think if we combine this with the knowledge from the MIT report around these use cases in enterprises failing, what we know at this point, and what I would distill down as trends and
(36:59):
insights that are backed now by various data points, is that generic AI, so the generic AI models, just getting access to a model, does not solve any kind of actual business use cases and problems and, you know, verticalized solutions that are
(37:22):
needed. That's what we learned from the MIT report. And this would be true whether your company has access to ChatGPT or Copilot or whatever models you have access to. These are generic tools.
These are generic models. It's very hard to translate that into customized business solutions. Right? That is why insight
(37:48):
number one, from my perspective, is: just having access to a generic model or generic tools is not gonna solve your business problem. And that's partially why a lot of these POCs are failing.
Now, OpenAI offers those generic models and tools, right? Which is really great on the consumer side. But enterprise-wise, the
(38:10):
ones making the money, at least so far, it hasn't been OpenAI. They've been losing a ton of money. The ones making the money are Accenture, Deloitte, McKinsey, etcetera, the services organizations. Because really how you transform a company with AI and bring these models in and do something is by
(38:33):
creating custom data integrations, creating these custom business solutions.
And that is still really a services-related thing, or it's at least a kind of customization-related thing. There's data integrations there. So this is totally consistent for me with open sourcing models at the same time that they're creating the
(38:55):
services side of the business, because essentially, from the business or enterprise side, there really is not a moat on the model builders' front. It doesn't matter, from my perspective at least, and of course I'm biased, it doesn't matter if you're using GPT, it doesn't matter if you're using Claude, it doesn't matter if you're using Llama or DeepSeek or Qwen, it really doesn't matter. Any of those models can do perfectly
(39:19):
great for your business use case solution.
I think that's true. I've seen it time and time again. What makes the difference is your combination of data, your combination of domain knowledge, integration with those models, and creating that customized solution. And either you're gonna do that internally, or you're gonna hire a services
(39:41):
organization to do that. On the one front, you need software architects and developers.
And even if they are using vibe coding tools, they will need that expertise. On the other side, you can pay millions of dollars to one of these consultants, or to OpenAI and their services business, etcetera. And again, it's a hard thing, because those
(40:05):
resources are scarce, right? Which I think is why it is a good time if you're kind of providing that level of services around the AI engineering stuff.
Chris (40:15):
Yeah. I think you've hit the nail on the head. And I'll offer sort of a way of restating it with an analogy. You know, when you're going to have friends over and you want to have a magnificent dinner at your dinner party, you walk into the kitchen, and you may have a lot of great things
(40:36):
to make stuff with, and some of those might be big, expensive things that are raw materials. In our analogy, those things represent models and other software components. But there's some skill in putting that meal together, and going into the refrigerator and picking the right things out, and going into
(40:58):
the pantry and picking the right things out, and putting them together according to a recipe that is your business objective, and understanding how to produce that final dinner, which is maybe a little bit different from the way your neighbor would do it and maybe a little bit different from the way another friend would do it, to produce that fine meal that you are able to enjoy at the end of the day. That meal is a bit unique,
(41:20):
because in our analogy, your business is a bit unique. But it takes the skill. And we do expect technology to develop, those refrigerators to become smart refrigerators and other things to help in the kitchen.
And that might be represented in our vibe coding thought. But we might not be all the way there yet. So if you're kind of buying
(41:43):
your ingredients and thinking, well, I don't really need to have great skill in the kitchen, because I'm sure that some of this technology that's coming into play will take care of that for me... Maybe eventually, but I don't think we're quite there yet, is what we're seeing. And I think that report that we've been talking about has kind of provided some evidence of that
(42:03):
fact.
And so, yeah, there's still the need for nuance and complexity to be addressed, and the recognition that with these commoditized models, whether they be closed source or open source, either way, it's going to take more than one, and you're going to need to have the recipe to make it all come together the way you're
(42:25):
envisioning. So a lot of good lessons that hopefully might help out some of the managers and executives in these companies who are making some of these decisions going forward.
Daniel (42:38):
Yeah. Yeah. I love that analogy. And it fits so well, because you can develop that cooking expertise internally, or you can hire a professional chef into your house.
It's gonna be expensive, right? But you can do that, and it is a
(43:00):
necessary component. So I love that analogy. I do wanna highlight, we always try to highlight some learning opportunities for folks as they're coming out of this. Maybe you're motivated to not let your AI POC fail, and you want to understand what it takes to build these solutions.
There's a couple of things I wanna highlight. One is, I'm
(43:23):
really excited that we're having this Midwest AI Summit on November 13 in Indianapolis, and I'm helping organize that. It's gonna be a really great time. One of the unique features of this event, different from other conferences, is we're gonna have an AI engineering lounge
(43:45):
where you can actually sit down at a table with an AI engineer. Maybe you don't have that expertise in house, but you don't want your POC to fail; you can actually sit down with an AI engineer and talk through that and maybe get some guidance.
I haven't seen that at another event. I'm pretty excited that we're doing that. And you can always, as I mentioned in previous episodes, go to practicalai.fm/webinars.
(44:09):
There are some webinars there as well that might be good learning opportunities for you.
Chris (44:14):
That's awesome. And on the tail end, as we close out with learning opportunities, I just wanted to share one two-second thing here. My mother, who is in her mid-80s, once upon a time was a computer science professor at Georgia Tech. She also happened to work for the same company I work
(44:35):
for, Lockheed Martin, years ago. But she had retired and kind of moved out of the technology space. She is very aware of what I do in the space and our podcast and stuff. And she, in her mid-80s, reached out to me this weekend and said, "I'm thinking about going back to school for AI, and maybe even into a PhD program or something
(44:56):
like that. I don't know." And we talked about it for a while, and about starting small.
She's into some Coursera courses now. And as we're thinking about learning and ramping up, you know, we've talked about learning recently on the show; we had a couple of episodes where we talked about how it's never too late. We had a
(45:16):
congressman, who was not a spring chicken, not too long back, diving in; incredibly inspirational. And I want to say, if my mom, in her mid-eighties and decades out of the computer science space, is willing to dive in and do technical work on Coursera courses, I would encourage all of you to reconsider: you are never too old. And I just wanted
(45:38):
to leave that, as we talked about learning items, to say: go get it. The world is changing fast, and my mom in her mid-eighties doesn't want to get left behind and wants to be on top of it. And I think it's a good thing for all of us to take some inspiration from and go do.
Daniel (45:52):
That's awesome. Appreciate that perspective, Chris. It's been a fun conversation. Thanks for hopping on. Absolutely.
Jerod (46:06):
Alright, that's our show for this week. If you haven't checked out our website, head to practicalai.fm, and be sure to connect with us on LinkedIn, X, or Bluesky. You'll see us posting insights related to the latest AI developments, and we would love for you to join the conversation. Thanks to our partner Prediction Guard for providing operational support for the show.
(46:26):
Check them out at predictionguard.com. Also, thanks to Breakmaster Cylinder for the beats, and to you for listening. That's all for now, but you'll hear from us again next week.