January 9, 2026 51 mins

In this start-of-year FC episode, Chris and Daniel break down what really mattered in AI in 2025, and what to expect in 2026. They explore the rise of AI agents, the practical reality of multimodal AI, and how reasoning models are reshaping workflows. The conversation dives into infrastructure and energy constraints, the continued value of predictive models, and why orchestration (not just better models) is becoming the defining skill for AI teams. The episode wraps with grounded 2026 predictions on where AI systems, tooling, and builders are headed next.

Sponsor:

  • Framer - The enterprise-grade website builder that lets your team ship faster. Get 30% off at framer.com/practicalai


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Jerod (00:04):
Welcome to the Practical AI Podcast, where we break down the real-world applications of artificial intelligence and how it's shaping the way we live, work, and create. Our goal is to help make AI technology practical, productive, and accessible to everyone. Whether you're a developer, business leader, or just curious about the tech behind the buzz, you're

(00:24):
in the right place. Be sure to connect with us on LinkedIn, X, or Bluesky to stay up to date with episode drops, behind-the-scenes content, and AI insights. You can learn more at practicalai.fm.
Now, onto the show.

Daniel (00:49):
Welcome to a new year of Practical AI and an episode with just Chris and I, where we try to keep you fully connected with everything that's happening in the AI world, which is a lot these days, both last year and this year. But, I'm Daniel

(01:09):
Whitenack. I am CEO at Prediction Guard, and I'm joined as always by my cohost, Chris Benson, who is a principal AI research engineer at Lockheed Martin. Happy New Year, Chris.

Chris (01:20):
Hey. Happy New Year, Daniel. It's 2026, probably the fastest-moving AI year ever coming up here.

Daniel (01:29):
First, well, every new year, I guess, has been the fastest AI year. Well, I guess since we started the podcast, you know, whatever, eight years ago.

Chris (01:39):
It was a safe thing for me to say. There you go.

Daniel (01:42):
Yeah, yeah, yeah. Safe thing for you to say. I mean, granted, these last few years have been a little bit frantic in relation to the years prior to that with the podcast, which, you know, in retrospect, seem a little bit chill. Yeah. But it definitely seems like 2025 was a big year, 2026

(02:08):
will be a big year.
And so as we're coming into the new year, for our listeners, usually we try to do some type of, we don't have a strict format here because we're pretty casual, but some type of discussion of things that happened in 2026, you know, themes looking forward, or things that happened in 2025. I'm

(02:30):
already a year ahead, I guess. Things that happened in 2025 and things that may or may not happen in 2026. Usually, our predictions are wrong, as are all predictions.

Chris (02:41):
I'm okay with that.

Daniel (02:41):
All models are wrong, but hopefully this podcast will be useful. Yeah. So interesting, interesting times, Chris, interesting dynamics in our world in all sorts of ways. But if we hone in on AI, I think at the beginning of last year, if I'm remembering correctly, there were a couple of things we talked about. One of those

(03:04):
things, certainly at least a theme that we've talked about a lot this year, which, if we were to categorize the year 2025, I don't know if you would agree, Chris, but it does seem like the year that we transitioned to talking about AI agents. It was sort of like, for a while we talked about models. Yeah.

(03:27):
And then we kind of talked about assistants. And then we really kind of transitioned to talking about agents. Agents are autonomous AI.
That was a key theme of 2025. I guess one first question, Chris, did we actually... What did we actually do with AI agents in

(03:50):
2025? Overall, was it a positive and/or successful year of trying agents?

Chris (03:58):
Well, I think there was, like, untold levels of hype around agents, as there always is every time we hit a new thing. And I think a lot of organizations did try to dip their toe into it. Now, you know, as we're reading all the things that are out there, I've

(04:18):
seen some crazy things, like, from, like, nobody, you know, successfully using them, all the way to, like, 70% of all existing organizations are now using AI agents, which I'm, like, totally, like, BS. You know?

Daniel (04:31):
It's just not true at all.

Chris (04:33):
Not true at all. It's a long way from the truth. But I do think a lot of organizations are kind of whiplashed right now, kind of going, holy cow, what's this agent thing? I'm reading about it everywhere. And we're trying to figure out what to do. And that's happening at a moment of, like, you know, where the,

(04:53):
like, those who are diving in and finding a use case that they can find success with, which is not easy in all cases, are making some big wows within their little world. And then those who aren't are still, you know, kinda fumbling in the dark. And I think that's fair. Like, that doesn't mean that one

(05:14):
person is smarter than the other. It just means that looking into the right use case and getting the right people to address it and having a good business case for it makes a lot of difference.
And, you know, as we talk about that, I think one way of kind of leaping into that, you know, dichotomy is, as Andrej

(05:36):
Karpathy, you know, put out a post on X. Do we still call them tweets? I don't

Daniel (05:43):
I have no idea.

Chris (05:44):
I'm not sure. But on X, and I won't read the whole thing because it was a fairly long one. But he's basically acknowledging, I mean, you're talking about one of the world's preeminent AI researchers, you know, the kind, like, you know, within our little AI bubble world, you know, on the technical side, he is a superstar in every possible way. And he's kind of saying, holy cow, and I'm

(06:08):
totally paraphrasing. He didn't actually say holy cow.
He's kind of saying, holy cow, even I at moments am feeling a bit left behind with how fast this is changing. And in the context, he's kind of talking about, like, coding and stuff, is that, you know, after leading in the last few years and seeing models, you know, it seems quaint to talk about

(06:28):
models, as you pointed out now. But talking about these models that are getting better and better steadily, but they still weren't doing great coding, you know, in terms of that, and the need for senior engineers to kinda correct it. And was it more trouble to use the model and the agent to do the coding or not? And did it cause more... did you just

(06:51):
spend more time fixing errors?
Well, all that really changed at the end of 2025. And, you know, with Opus 4.5 and OpenAI's 5.2 model in particular, as well as several others, but those are the ones that are called out the most. Like, they got to where they

(07:12):
could do senior-level coding really well without mistakes. And I've griped, because I'm a Rust programmer, that, because that's such a small community of programmers overall, the models weren't as good as they would be in, like, Python and JavaScript. Well, guess what? It's kicking butt in Rust now. And so, like... No longer Rusty.

(07:34):
It's no longer Rusty. And so, like, I, for one, as I am upskilling, and as someone who has been using AI as we have gone forward in coding, like, my workflow has changed dramatically in the last two months in terms of understanding how to effectively use coding agents to do that.

(07:54):
And I think we're... I don't think coding is the only area that that's impacting.
I think there's a lot of areas where agentic AI, once you get a use case that is giving you some sense of success, is, like, changing the field, that small field that you're playing on. And that might be happening many times

(08:17):
over. What do you think about

Daniel (08:19):
Yeah. Yeah. It's interesting just to read a little bit of that tweet that you reference. Karpathy mentions, clearly, a powerful alien tool was handed around, except it comes with no manual and everyone has to figure out how to hold it and operate it while the resulting magnitude nine earthquake is rocking the profession. And he kind of ends

(08:43):
saying, roll up your sleeves to not fall behind.
So yeah, I definitely have felt this, Chris. Just from, you know, my perspective, we get to, you know, kind of wax poetic on these episodes where it's just you and I. But from my perspective in building a company over this past year, Prediction Guard, I'm reflecting on our last board meeting and

(09:10):
the reflection back to us as leadership was, wow, like, essentially making the note that, product-wise, you all were able to advance so much more quickly without expanding your team in those last two quarters of the year than, like, as the company

(09:34):
has progressed. So these are, you know, these are obviously investors, you know, not that they have no technology background, but they're not coders. But just from the output, right? The pace of development of the product and what we're able to achieve is significant. It's significant

(09:57):
enough to be noticed in that way, without the larger team that would typically have been kind of required to reach that scale or support what we're supporting.

Chris (10:10):
Yeah, I wanna relay a moment that I had. As you know, like, I sent you a text over the holidays saying, holy cow, we gotta talk about this at the beginning of the year and stuff. And I want to share with you now, because I did not relate kind of what happened to me that made me send that text, because it's very relevant to this. I had been

(10:33):
proposing a really complicated autonomy-based project for work. And I'm not going to get into specifics on what that is. But, you know, to take out the hype for a moment, I had spent a lot of time thinking and researching all the different things that had to go into that.

(10:54):
And there was a tremendous amount of complexity involved in that, and I developed a really complex and very detailed and specific prompt on how to get there at, like, a production quality, where it wasn't like what we would have talked about a year ago, where it was, like, AI slop code coming out. Mhmm. And so I finally got to this point where I tried that out. And, like,

(11:20):
in the matter of six minutes, it produced what would have been at least six weeks of work, at least six weeks of work, you know, in just a matter of a handful of minutes. Now, I knew what I needed to put in.
I knew a lot of that stuff, and I was able to get a really good prompt going. But the actual work, like, suddenly I had a

(11:41):
large project laid out in VS Code that had all these different things tied together. I was just, I really, I just literally, like, I don't think I've ever had that big of a moment in coding. And I was like, you gotta tell Dan about this. So that was what prompted it.
And it made me, like, I flipped over and realized this

(12:04):
is the way forward and I'm 100%in.

Daniel (12:07):
So just to pick apart a little bit of what you said, Chris, there's some highlights there that I think are takeaways from our agentic work in 2025. One of those is, with, I would say, no doubt at this point, these agentic workflows, especially driven by folks who have the relevant domain

(12:30):
knowledge, are transformative in ways that are legitimately transformative, multiplicative, however you wanna say it. Very much, I think we can confirm that. However, I think one of the things you highlighted is some of what was highlighted throughout the year, around, like, the MIT study of things, you

(12:53):
know, failing. Gartner says, you know, 11% of organizations have agentic AI in production, and that 40% of projects will fail by 2027. I think part of that is driven maybe by two things that we've seen over this year.
One is you do actually need to have a certain level of

(13:14):
expertise to know both how to prompt and configure these systems, but also what data sources to connect into them, how to utilize, as Karpathy puts it, this alien tool, how to hold it, how to add in, you know, an MCP server. Like, what type of automation am I really doing?

(13:37):
How should it run? How do I integrate it into my day-to-day workflow? If you have that expertise around the integration side and infusing the domain knowledge, that is a key piece of it. And without that, there can be a lot of failure.
Secondly, I think sometimes people are just trying to automate processes that are problematic because they're bad

(13:59):
processes, not because the automation is bad, but they're just bad processes to begin with. So, you know, AI doesn't solve that problem.
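The "model plus connected systems" picture Daniel describes can be sketched as a minimal agent loop. Everything below is hypothetical for illustration: the stubbed model stands in for a real LLM call, and the two tools stand in for real data sources or APIs.

```python
# Minimal sketch of an agent loop: a "model" repeatedly chooses a tool
# until it can give a final answer. The model here is a stub; in a real
# system it would be an LLM deciding which tool to call and with what
# arguments. Tool names and logic are made-up illustrations.

def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stand-in for a real weather API call

def lookup_order(order_id: str) -> str:
    return f"Order {order_id}: shipped"  # stand-in for a database query

TOOLS = {"get_weather": get_weather, "lookup_order": lookup_order}

def stub_model(goal: str, observations: list[str]) -> dict:
    """Pretend LLM: emits one tool call first, then a final answer."""
    if not observations:
        return {"action": "lookup_order", "arg": "A-123"}
    return {"action": "final", "arg": f"Done: {observations[-1]}"}

def run_agent(goal: str, max_steps: int = 5) -> str:
    observations: list[str] = []
    for _ in range(max_steps):
        decision = stub_model(goal, observations)
        if decision["action"] == "final":
            return decision["arg"]
        tool = TOOLS[decision["action"]]       # dispatch to external system
        observations.append(tool(decision["arg"]))  # feed result back in
    return "Stopped: step limit reached"

print(run_agent("Check on order A-123"))
```

The loop is the part that makes it an agent rather than a model: the model's output drives calls into external systems, and those results feed back into the next decision.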

Chris (14:08):
Yeah, there was one other takeaway that I'll throw in before we move on from this topic. That is, prior to developing the, I can't call it a stub for the thing because it was too much code, but prior to that kind of final prompt that got me well into the project, there had been literally

(14:30):
many hundreds of prompts before that, which got me ready for that. And I think a key thing that I came away from that with, and which I've been sharing with other people over the last few months, is that that expertise is important for how you shape prompts to get the thing you need, and you learn from it. So, like, at no point was the AI leaving me behind.

(14:55):
It would open up new doors, but I had to walk through those doors, take the learnings, and develop the next prompt from it. And I think, to your point a moment ago about kind of that expertise, it took that combination of domain expertise with how you prompt your way through a long sequence of prompts to finally get to the point where, like, you understood

(15:17):
the system well enough to where you could describe it in a prompt well enough so that a sophisticated agentic model could put the whole thing together in, like, nearly a production-ready mode. So there was a lot of learning involved in that. So it wasn't just magic in five minutes. I was just quite taken by, having gone through that

(15:41):
long process, being able to do that final prompt and have so much produced that was at the quality level that I would have demanded it be.

Sponsors (15:51):
You know, for most developers, you've had this call. Marketing calls, sales calls, and they want a new landing page. They wanna redirect. They want designs implemented. And, of course, engineering says, yeah, we'll get to it.
But that bottleneck is why thousands of businesses, from early-stage startups to Fortune 500s, are choosing to build their websites in Framer, where changes take minutes

(16:15):
instead of days. So our friends at Framer, they're an enterprise-grade, no-code website builder that gives designers and marketers the ability to fully own your .com without having to rely on the engineering team. It works like your team's favorite design tool, with real-time collaboration, a robust CMS with everything you need for great SEO, and advanced analytics that include integrated A/B testing. Changes to your Framer site go

(16:39):
live to the web in seconds with one-click publish, without help from engineering. That's priceless. That keeps you on task, on target, delivering features.
And that's how your team reduces dependencies and reaches escape velocity. And this isn't a toy. Framer is an enterprise solution with premium hosting, enterprise-grade security, and 99.99% uptime

(17:00):
SLAs. Companies like Perplexity, Miro, and Mixpanel trust Framer for their websites. Whether you want to launch a new site, test a few landing pages, or migrate your full .com, Framer has programs for startups, scale-ups, and large enterprises to make going from an idea to live site as easy and as fast as

(17:22):
possible.
Okay. So the next step is to learn how you can get more out of your .com from a Framer specialist, or get started building for free today at framer.com/practicalai for 30% off a Framer Pro annual plan. That's framer.com/practicalai for 30% off. Framer.com/practicalai. Rules

(17:45):
and restrictions may apply.

Daniel (17:51):
Chris, I think the other, or at least one other, theme that I know we highlighted kind of going into this year was multimodal AI. I don't think, unless I'm misremembering or not seeing the right transcript, that we predicted kind of this reasoning era with models. So,
(18:15):
like, if we just look at the progression of models, which is definitely not the whole picture, as we just talked about, a lot of what happened was around agents. Which, for those that are listening, maybe new, you know, parsing through these terms, an agent is not just a model. It is a model that is connected to various external systems, some of which could be

(18:37):
AI, some of which could not be AI, and actually interacts with those systems to accomplish a goal.
But, so we're not just talking about models anymore. We're talking about these systems. But in terms of the models, I think we predicted more multimodality kind of coming into this year, which certainly we have, right? There have been many different,

(19:00):
you know, vision-language models, video models, music models, all sorts of things, Sora, all of these, you know, things that we've seen over 2025. And then there's these other reasoning models. Starting with the multimodal ones, Chris, I think, at least where I'm sitting, and it could be just

(19:23):
in my role or the types of things that I'm seeing, but in the majority, actually, I think, yeah.
So I guess I would say all of the customer interactions that we are having, and the people that I'm talking to, really are using multimodal AI in terms of multimodal on the input side. I

(19:47):
still very much do not interact with people that are doing kind of multimodal on the output side. So what I mean by that is... That's fair. Certainly I see videos coming out of Sora as reels on social media. And so I know that that is happening, right?
But in terms of the business world, real business context, I

(20:11):
definitely see, you know, video, audio, image, and text going into models, but not so much coming out. Really, still coming out is either text or some form of text, like some structure, like a JSON structure, a tool call, some template, some, you know,

(20:33):
field, some whatever it is, not really multimodal output. Maybe the exception to that might be synthesized speech, which is pretty pervasive everywhere as a thing in and of itself. So that's maybe a standout.

Chris (20:50):
No, I think you're right. Like, that is a standout, but I would have thought of that as a single mode on the output side. And I think you're calling out, I think, that there is a big opportunity here, especially when you combine it with agents in different capacities, to have a richer output experience. Because at the end of the day, I

(21:12):
mean, I know that, like, whereas my non-AI-industry family members are more taken with the videos and things like that for entertainment, with the work that I usually do, it's more that text output.
And I could imagine a much richer output experience, to your point. It's easy to envision, especially when you think about,

(21:36):
you know, like, I'm gonna dump all the different things into my input that I want it to process and assess for the output. But the output is still pretty basic. It might be that real-time voice interaction that was so hot six months ago, you know, that everybody got into, and then it kinda passed, and, you know, we all got our expectations set, very, you know,
(21:56):
like, oh, okay. That's just real-time voice.
No problem. But if you were to put a bunch of things together on output, where you're getting text, you're getting that real-time voice in the conversational sense, you're getting supporting media, I think it could level up. So it probably will.

Daniel (22:13):
Yeah. And I guess maybe one of the themes from this last year that we can take away as well is this rising up of the reasoning era. So, reasoning models, we've talked about these on the show. Yep. Just as a reminder for people, in case you missed out on our discussions for this year, in some ways,

(22:37):
these reasoning models are mislabeled, because they don't reason about anything.
They just produce text. What is interesting is that they produce a segment of text that imitates or mimics reasoning, or a chain of thought, right? And it kind of, quote unquote, thinks through a

(22:58):
problem by generating text representing that thinking through of the problem. And then they generate a final answer, which has proven to kind of help pick through more complicated tasks, maybe do more orchestration or dynamic types of

(23:18):
workflows than what we were seeing before. And these models, I would say many of the models that I see being released now, at least in terms of that LLM flavor of models, are either straight-up reasoning models, which means they're always going to reason in this way.
They're gonna generate the reasoning tokens and then output

(23:39):
the regular tokens. Or they are kind of conditionally reasoning models, or hybrid models, that will do that some of the time and not other times. And there's various implications of that. Certainly, I think you see that driving certain of these agentic, you know, working towards these agentic workflows. It also, to be honest, is sometimes annoying, because often,

(24:03):
like, in real business applications like we're working on, if you dial in your workflow, you really don't want those reasoning tokens, because they take so dang long, right? You have to wait. There's so much latency introduced by waiting for these reasoning tokens that, unless you're

(24:26):
doing this sort of very, very dynamic workflow, it's kind of annoying that a lot of these later models have these. I would say, in general, it's a good thing. So we're definitely in the reasoning era, and it's been cool to see these models come about. But you don't get anything for free. There's a lot of latency that's developed, because these models stream

(24:49):
output, right?
Every token that is generated is an inference run of the model. Meaning, if you're generating 2,000 tokens of reasoning, that's 2,000 runs of the model that is operating on a computer with a GPU that is expensive somewhere.
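Daniel's point about reasoning tokens translates directly into latency arithmetic: each generated token is one decode step of the model. A rough back-of-envelope sketch, where the token counts and throughput are made-up illustrative numbers, not measurements of any particular model:

```python
# Back-of-envelope latency cost of reasoning tokens. Each generated
# token is one forward (decode) pass, so hidden reasoning tokens add
# wall-clock time proportionally. All numbers are assumptions.

def generation_seconds(num_tokens: int, tokens_per_second: float) -> float:
    """Wall-clock time to stream num_tokens at a given decode rate."""
    return num_tokens / tokens_per_second

answer_tokens = 200        # the part the user actually wants
reasoning_tokens = 2000    # hidden chain-of-thought tokens
rate = 50.0                # assumed decode throughput, tokens/sec

plain = generation_seconds(answer_tokens, rate)
with_reasoning = generation_seconds(answer_tokens + reasoning_tokens, rate)

print(f"answer only:     {plain:.1f}s")       # 4.0s
print(f"with reasoning:  {with_reasoning:.1f}s")  # 44.0s
print(f"overhead factor: {with_reasoning / plain:.0f}x")
```

The same multiplier applies to GPU time and therefore cost and power, which is why dialed-in workflows often disable reasoning when the task doesn't need it.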

Chris (25:11):
Agree with all that, but I think to some degree it's intentionally or unintentionally, and probably the former rather than the latter, being driven by the organizations hosting these models. Because, you know, as one example that most, you know, that everybody would know is ChatGPT, you go in and you have

(25:31):
a choice. You know, if I'm looking at their web interface, you have a choice between kind of instant or thinking. And of course, everybody wants thinking. Do you really want instant when you could have thinking? You know, and then if you choose thinking, it's either standard thinking or extended thinking.
And so, like, that plays to a human bias of, like, you're

(25:53):
gonna go for, well, yeah, I wanted thinking, and I wanted to extend the thinking. And to your point, like, the cost of that may or may not be to you as the consumer, but certainly the cost of producing that is much more expensive with extended thinking. Which kind of points out another thing that's happened over the last few months that we've all heard

(26:14):
about. And that is that, for years, we talked about the limitation of, you know, having enough GPUs being the limiting factor on moving forward. And now it is power. Because, you know, you can take the same GPU and use it for many inferences, but each one of those separate inferences is taking a certain amount of power. So as a consumer of that,

(26:38):
every prompt that I choose to make in a, quote unquote, reasoning fashion is going to be much more expensive in terms of power consumption. And we're hearing that in the news all the time these days.

Daniel (26:51):
Yeah. I guess that takes us to an interesting theme that we've seen develop around infrastructure, hardware, energy. It's interesting to see that so much of this discussion, as you've mentioned in recent trends, and I think this will continue into 2026, and it will create some friction and

(27:12):
opportunity and interesting dynamics in 2026, which is this limitation and opportunity around power. Just, you know, a couple of things anecdotally. So I went to Colorado School of Mines as my undergrad, which, as the name

(27:33):
indicates, still has a big tie to mining and petroleum and other things. And so I have friends in the energy industry, and I was talking with some of them about how there are now very much speculators going around and trying to purchase and get the

(27:54):
rights to power plants that were relatively newly constructed but decommissioned while, you know, people were moving away from coal, speculating that, you know, these power plants will necessarily need to be turned back on.

(28:15):
Right? And, you know, other anecdotes, like, in our town here, Lafayette, West Lafayette, there's this huge, I forget the number of billions of dollars, investment in a chip assembly plant here on the West Lafayette side. What's

(28:35):
interesting to see is all of the community back-and-forth to get the zoning approvals, and the backlash that is happening against this chip assembly plant. And I'm not taking one side or the other of that, but what I think is interesting is you see that dynamic here, right? In China, if you want to dominate

(29:01):
in the AI space and you need a bunch of power plants, right? No city is gonna say, no, we're not going to have our power plant here. They're just going to put a power plant there, right? And so that's how this has then filtered into this geopolitical space and environment that we're in, where power and AI and chip manufacturing and onshoring, all

(29:24):
of this is what's driving the political conversations now. And so, yeah, we've seen this trend from just having access to GPUs all the way flow to these discussions around energy, infrastructure, power, which I'm sure will just continue throughout 2026.

Chris (29:44):
And to delicately point at geopolitics and the implications, you know, some countries are now invading other countries and taking their oil. And, you know, regardless of which side you're on, like, that was a notion that was kind of inconceivable. But power is the thing that people are talking about, because

(30:08):
every nation, with its drive for more and more power consumption to support not only its normal things but AI growth, as is the United States, you see a lot of interesting things happening there. So, yeah. I'll just leave that one right there.

Daniel (30:23):
I think, like, to your point, this isn't a political show. We're talking about the practicalities of AI. But I think, in thinking about the trends of 2025 into 2026, you can't go into 2026 without noting that, when things happen

(30:43):
politically across the world, AI is being mentioned as a motivation for why these things are happening, regardless, again, of, you know, who's doing right or wrong, or your stance on something. We've moved from, I think, 2024 to now, you know, the end of 2025 going into 2026, where AI is the

(31:09):
topic that is driving some of those policy decisions. Versus, I think, last year, if I was to kind of summarize, we were talking a lot about, well, how might governments regulate AI as a kind of piece of their policy? Now it's almost driving the key pieces of policy in a lot of ways.

Chris (31:33):
Indeed. I guess, going forward, it will be interesting, as we go through '26, to see how policy continues to evolve in this, because this is a level of consumption that, you know, obviously is becoming a challenge to maintain and

(31:53):
even to initiate, because it's not stopping where we're at, it's going on. So infrastructure, hardware, energy, those topics will... it should be a volatile year in '26 to see where things go.

Daniel (32:08):
Well, Chris, there's, of course, many, many things that have happened in 2025. And the majority of those we've talked about so far are related to GenAI. I think, in terms of practicality, moving into 2026, we would not be Practical AI, I

(32:29):
think, if we didn't highlight the fact that, you know, we recorded another episode right prior to this, and I won't give away anything that's in that episode, other than there was one statement that said, hey, you know, one trend that's happening with AI models is that GenAI models have sort of plateaued on

(32:52):
this transformer architecture that most all of these models are based on. But predictive models still continue to advance, you know, at a quite rapid pace.
And what I mean by that, for those listeners kind of parsing through this jargon, is these generative AI models,

(33:13):
large language models, language-vision models, etcetera, generate tokens or certain output, like images or other things. Other models are discriminative or statistical, and make predictions of classes or forecasts or those sorts of things. And the reality is that, across industry, these models

(33:34):
still continue to provide amazing ROI and get better and better. And the tooling actually gets better around those. And actually, what's interesting to me, Chris, is, years ago, we talked about kind of this idea of AutoML, which is still a term that people use.

(33:55):
There's still some things out there related to that. This idea that we could maybe automate the parameterization of AI or statistical models, and that would kind of help us create these models better and faster. I think the reality, which is kind of interesting, is everyone is talking about GenAI now, but

(34:17):
there is actually this realization of maybe a better AutoML, or maybe a better way to put it is augmented analytics or augmented ML or something like that, where actually you have these highly capable tools under the hood, whether that's SQL queries, to non-generative AI models, to forecasting models,

(34:45):
to data science tools, that now can actually be tied in as tools into a generative AI model that orchestrates amongst all of those and reasons over how to use those. So, for example, I could have my e-commerce data in a SQL database. I could have a tool that uses Facebook Prophet to do time-series forecasting,

(35:07):
and then a generative AI model that can call those tools to pull the right data out of my SQL database, format it in a way, maybe with generated code that's executed, send it to my time-series modeling tool, which is good at time-series modeling, and then get me my forecast for 2026 for sales or something like

(35:30):
that.
So actually, I think it's interesting that all the discussion is really about that orchestrator model and not about these other things. Because actually, it's those things that are plugged into the orchestrator that are creating the real multiplicative effect, the power of that system.
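The e-commerce example Daniel walks through, an orchestrator calling a SQL tool and a time-series tool, can be sketched as a tool-dispatch pattern. Everything here is a stand-in: the SQL tool returns fake data, the forecaster is a naive linear extrapolation rather than a real library like Prophet, and the "plan" is hard-coded where an LLM orchestrator would choose the tool calls itself.

```python
# Sketch of the orchestrator pattern: a generative model routes work to
# specialized tools (SQL, forecasting) rather than doing it itself.
# Both tools are stubs for illustration only.

def sql_tool(query: str) -> list[float]:
    """Stand-in for pulling monthly sales out of a SQL database."""
    return [100.0, 110.0, 120.0, 130.0]  # fake monthly sales figures

def forecast_tool(series: list[float], horizon: int) -> list[float]:
    """Naive linear extrapolation standing in for a real time-series
    model such as Prophet: continue the average month-over-month step."""
    step = (series[-1] - series[0]) / (len(series) - 1)
    return [series[-1] + step * (i + 1) for i in range(horizon)]

def orchestrate(question: str) -> list[float]:
    # An LLM orchestrator would parse the question and emit these tool
    # calls dynamically; here the plan is fixed for illustration.
    sales = sql_tool("SELECT month, total FROM sales ORDER BY month")
    return forecast_tool(sales, horizon=3)

print(orchestrate("Forecast 2026 sales"))  # three extrapolated months
```

The point of the pattern is Daniel's: the orchestrator itself does no forecasting or querying; the specialized, often predictive, tools plugged into it do the heavy lifting.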

Chris (35:47):
I totally agree. And I think that will only get amplified, you know, as you go into kind of more of a physical AI future. You know, we've talked a lot about that in recent episodes, especially late in this past year. As you have these orchestrators with the tooling around them, with predictive models that are now kind of enabled through agentic

(36:11):
systems, there is so much capability out there that I don't think the public is really aware of it. They may see drones and robots, but they haven't really, in my experience, thought through what it takes for those things to come about. Yeah. Yeah.
And so, you know, there's definitely a place for GenAI

(36:32):
interactions that you're having with the human, in terms of how the human and the physical agent-driven platform are interacting. But kind of back to your point about predictive, you know, predictive models are going up and up. And I think one newsworthy event kind of illustrates that, and that's the fact

(36:54):
that one of what they refer to as the three godfathers of AI — which is, of course, Yann LeCun — has left Meta, otherwise known as Facebook to people, where he spent roughly a decade, maybe a little bit longer. And part of his tenure there was that he had kind of the academic

(37:18):
freedom to move forward. And one of the things that he has talked about for quite some time is the fact that transformers have a limited ceiling.
And I know we've had those discussions lately about, you know, the limitations of GenAI. But as he looks at the notion, along with a lot of other people in

(37:38):
the AI industry, of world models driving things forward, I think your predictive capabilities mixed with your agentic ones will really drive a lot of not only the capabilities that you just talked about with the tooling, but also things in the physical AI space. And so we may see a bit of a renaissance in those spaces going forward, as people start kind of going,

(38:02):
"I've had enough of GenAI; it's really awesome for what it does, but I can now finally see its limitations and ceiling."
So I'm interested in whether the upcoming year will kind of turn attention in that direction.

Daniel (38:16):
Yeah, I would say, I guess sometimes in these episodes at the beginning of the year we make predictions. In relation to all of what we just talked about, one of my predictions for 2026 would be that those practitioners that

(38:36):
have the capability and knowledge to build MCP servers, to connect tools that can be orchestrated to models, and to actually architect that agentic system, regardless of model, will be the ones in demand.

(38:57):
I think that is a wildly powerful combination. As we kind of started this conversation, we were talking about how that is part of how to get your agentic pilots and all of those things to not fail. So actually, I think, for the

(39:20):
data scientists, software developers, etcetera, out there that are listening — and take it with a grain of salt, I'm always wrong at predicting the future.
So, you know, don't trust me too much, but at least my own personal intuition is that focusing on — I don't even know what the term is that we'll use for this in 2026.

(39:43):
Maybe it's AI engineer or whatever. But whatever that role shapes into, it will be data scientists, software developers, whoever it is, that are able to come in and actually know how to spin up a system of services that are MCP servers, that are databases, that are RAG systems, and then

(40:04):
connect those things into an orchestration layer such that they can be used. I think that is shaping into a highly valuable role and something that I think will survive for some time, because, at least the way I would see it, those things that need to be connected in are so complicated across the

(40:27):
enterprise that it's going to take a very long time for that skill of AI integration, tool development, and tool integration to go away in any sort of meaningful way.
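For readers wondering what "building an MCP server" amounts to: the Model Context Protocol is essentially a JSON-RPC interface where a server advertises tool schemas via a `tools/list` request and executes them via `tools/call`. The sketch below imitates that request shape in-process with only the standard library — the `get_forecast` tool, its schema, and the placeholder logic are invented for illustration, and a real server would use an MCP SDK and communicate over stdio or HTTP.

```python
import json

# Schema the server advertises; an orchestrating model reads this
# to decide when and how to call the tool.
TOOL_SCHEMAS = [
    {
        "name": "get_forecast",
        "description": "Forecast the next N periods of sales",
        "inputSchema": {
            "type": "object",
            "properties": {"periods": {"type": "integer"}},
        },
    }
]

def get_forecast(periods):
    """Placeholder tool body; a real server would call a forecasting backend."""
    return [100.0 + 5 * i for i in range(periods)]

def handle(request):
    """Route MCP-style requests: list the tools, or call one by name."""
    if request["method"] == "tools/list":
        return {"tools": TOOL_SCHEMAS}
    if request["method"] == "tools/call":
        name = request["params"]["name"]
        args = request["params"]["arguments"]
        result = {"get_forecast": get_forecast}[name](**args)
        return {"content": [{"type": "text", "text": json.dumps(result)}]}
    raise ValueError("unknown method: " + request["method"])

listing = handle({"method": "tools/list"})
reply = handle({"method": "tools/call",
                "params": {"name": "get_forecast", "arguments": {"periods": 2}}})
print(reply["content"][0]["text"])  # -> [100.0, 105.0]
```

Once a tool is wrapped this way, any orchestration layer that speaks the protocol can discover and call it — which is exactly why the integration skill outlives any particular model.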

Chris (40:46):
I 100% agree with that. And I think possibly the secret sauce on trying to put that together as a human is — I'm gonna go back and reference the little experience I shared in the beginning — to learn how to use the tools that you have now well enough to create a

(41:07):
workflow that allows you to leverage those tools through prompts to get all of those systems up and running. So it's not all on your shoulders as the human.

Daniel (41:19):
Yeah.

Chris (41:19):
Yeah. You're the human at the center of a great symphony of AI agents, and you have to learn to conduct those agents in that symphony to produce way more than you could have ever done last year. And I think that's a doable thing, but it's a discrete skill set, and it takes a lot of flexibility

(41:40):
and thinking and moving out of your domain of comfort to do that. So, like, be super willing to try very, very uncomfortable things.
But I think that's a fantastic path forward.

Daniel (41:54):
Yeah. And especially if you can drive those things to be even more sort of niche or verticalized, for those out there trying to, like, start companies and that sort of thing. I think if there's a particular tool set within an industry that has not yet been, but can be, tied into this level of orchestration, and is

(42:17):
necessarily complex, whether that's in manufacturing or in finance or whatever it is, and you have that domain expertise, there is definitely a window of time where it's not about creating a single model that is able to do all of that thinking, but about being able to architect those tools into a

(42:39):
system — that is going to be really, really powerful. But, Chris, we're kind of coming to the end of our discussion going into 2026. I'm wondering if you have any thoughts on what we'll see in 2026.
Are we gonna see quantum computing tied in

(43:02):
with AI? Are we going to — you know, what's gonna happen?

Chris (43:09):
So on that one point, I don't think we're at quantum being a highly productive thing yet — and I follow quantum a fair amount. And I know it's common for people to say you're always ten years out or whatever, but we're still not seeing a fair amount of practical work on it. I'll

(43:32):
tell you what I think is gonna change in this coming year. And that is, as we are migrating into the era of physical AI, we'll have various types of platforms operating around us through agentic systems, with lots of models, both large and small, participating in those.

(43:54):
The cost of the average person being able to get in there — it used to be prohibitively expensive, and you had organizations that would drive those efforts. But the maker world is really starting to see that as a possibility, because GPUs and ASICs — that is, application-specific integrated circuits —

(44:15):
and such are able to start producing AI capability at a much cheaper dollar. And those are embeddable on smaller devices that you and your children will go to the store and buy, and you'll be able to implement things that just a year ago were unimaginable. They

(44:39):
would have been far outside the family budget. And so it's no longer a commercial-only interest, or an industrial or military-grade interest.
It's now something consumers have access to. And I think that as new toys develop that are built on this and are teaching kids, that opens up an entirely new world of

(45:00):
capability around your house, and you'll see consumer electronics reflect this in much less expensive things. Instead of just having, potentially, a robot vacuum, you may have many little robots that are very task-specific coming into your life. And if you're not finding the thing at your local store or

(45:22):
online, then you just go build it yourself with your maker kits, because that is becoming a real thing. It's becoming doable.
So my prediction is we see the very beginning of the AI maker era come about at a consumer level.

Daniel (45:37):
Cool. I'm excited for it. It definitely makes me think of all the news I've seen about CES recently. Lots of talk about robotics there, which is interesting. So I have a couple of predictions.
One of those, which we've talked about here before, and I think is consistent with what we're seeing, is, you know, models have been quite commoditized. Increases in performance of frontier models have plateaued. Open source models have essentially caught up. And so really now we're at a stage

(46:21):
where I think that moat of having the best model is, you know, not the most relevant thing. The most relevant thing is, you know, flexibility: not getting locked in, the ability for you to use a bunch of different models, the ability for you to, you know, construct a system.

(46:44):
I think also, kind of tied to that point, my second thing that I'm thinking of is just how fragmented and complicated the ecosystem is getting. I think that will carry on through 2026. We won't see, you know, the full consolidation of that in

(47:04):
2026. And so I think what you'll see is — it's no longer about "I'm gonna get the best model and now my company has AI and I'm set for the future." That's actually the easiest thing.
Like, you have a model, so what? I can get one on my phone, I can get one on my laptop. Doesn't mean anything. What is

(47:25):
problematic is if you say, okay, well, I want a system to do this. Now I need all these tools.
I need to connect them in a certain way. That becomes increasingly complicated. I need it to be compliant and work in a regulated industry. That becomes increasingly complicated. I need to tie in this type of data or that type of data — complexity.

(47:46):
And so you're just seeing this expansion of complexity in these AI systems, not because the models are not capable, but because the model is actually no longer the blocking point of the whole thing, or the single thing in the system. And so I think if

(48:06):
you look at something like NIST AI 600-1, the standard that NIST put out on how to run secure AI, right? I did a little bit of mapping — I tried to build up to 100% compliance with NIST AI 600-1 in Azure AI. And by the end, I got

(48:31):
up to nine different services that could get me 39% compliant with NIST AI 600-1 in Azure cloud. And so already you're managing all of these different services and all of these different things, and it becomes complicated.
It becomes a lot of labor to do that. So I think some of the winners in this space are gonna be those that come to that complexity and tell you: hey, well, rather than spinning up 37 different things in Azure and hiring 10 people to manage it, here's a consolidated, quick time-to-value way for you to get

(49:16):
X or Y, whether that be a verticalized AI solution, a secure AI solution, whatever that might be. So those are my thoughts going into the new year.

Chris (49:27):
Excellent, excellent guidance right there. For those who are not familiar with NIST, I just wanna point out that it is a US agency called the National Institute of Standards and Technology, and they put out standards, and the 600 series was the one Dan was referring to. So if you're outside the US, you can look that up. It's publicly available. But fantastic advice.

(49:48):
Thank you for sharing that.

Daniel (49:49):
Yeah. And looking forward to talking about all those things in 2026, Chris. It's gonna be a fun year for the podcast, and there are new things in the works. So thank you to our listeners for sticking with us another year. We very much appreciate you.
We appreciate you sticking with us for so long. I also appreciate

(50:12):
the new listeners — maybe this is your first episode that you're listening to. Welcome to the family. Please find us on the various socials, LinkedIn, etcetera. And, yeah, looking forward to continuing the conversation into 2026.

Chris (50:28):
It'll be a wild ride as always.

Jerod (50:37):
Alright. That's our show for this week. If you haven't checked out our website, head to practicalai.fm, and be sure to connect with us on LinkedIn, X, or Bluesky. You'll see us posting insights related to the latest AI developments, and we would love for you to join the conversation. Thanks to our partner Prediction Guard for providing operational support for the show.
Check them out at predictionguard.com. Also,

(51:00):
thanks to Breakmaster Cylinder for the beats, and to you for listening. That's all for now, but you'll hear from us again next week.