
December 27, 2025 • 46 mins

📢 Want to thrive in 2026?
Join the next AI Business Transformation cohort kicking off January 20th, 2026.
🎯 Practical, not theoretical. Tailored for business professionals. - https://multiplai.ai/ai-course/

Learn more about the Advanced Course (Master the Art of End-to-End AI Automation): https://multiplai.ai/advance-course/

Is your business ready for a world run by AI agents?

2025 changed the game. From reasoning models to real-time context-aware agents, AI didn’t just evolve—it exploded. If you lead a business and you're still thinking in "tools," you're already behind.

In this year-end solo special, we unpack the most critical AI breakthroughs of 2025 and the seismic shifts heading for business in 2026. Consider this your strategy briefing for the year ahead—without the fluff, hype, or hallucinations.

If you’re leading a team, a division, or a company, this episode will give you the competitive lens you must adopt before Q1 gains too much momentum.

In this session, you'll discover:

  • Why reasoning models will shape every AI strategy in 2026
  • How agents are becoming your next coworkers (or competitors)
  • The critical rise of AI infrastructure in China and what it means geopolitically
  • What agent-to-agent protocols mean for customer service, commerce, and collaboration
  • The real business risks of world models and continuous learners
  • How robotics leapt forward in 2025—and how close we are to robots on your payroll
  • Why AI is pushing businesses toward personalization at scale
  • What agentic browsers are and why they’ll soon take over how work gets done
  • The 2026 forecast: AGI, automation, job disruption, politics, and the global AI arms race

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
GMT20251225-175638_Record (00:00):
Hello and welcome to a special end of

(00:03):
year episode of the Leveraging AI Podcast.
This week we are not going to talk about the weekly news.
We are going to talk about the summary of 2025 and my projections, or if you want, my thoughts about what's coming in 2026 based on the trends that we have seen evolving in 2025.
In addition, I'm not going to do a news episode at the end of

(00:25):
next week.
I'm taking a week off with my family and spending some quality time together.
There will be a Tuesday episode in between with a how-to episode, as we release every single Tuesday, and we'll be back with a Tuesday episode in the beginning of 2026 and with a weekend news episode at the end of the first week of 2026.
Today's episode is brought to you by the AI Business

(00:47):
Transformation Course, which is the course that I've been personally teaching since April of 2023.
I've been teaching this course at least once a month, and in many months more than once, because I've done a lot of private versions of this course.
So if you have not yet taken structured training about AI beyond listening to podcasts and following people on YouTube, it is definitely the time to do

(01:10):
that.
In the beginning of 2026, we're launching another cohort that starts on the third week of January and goes for four weeks in a row, two hours every single week, plus a one-hour ask-me-anything session on Fridays.
So if you, again, haven't taken any such training, this is a great time to come and join us and dramatically accelerate your

(01:31):
AI knowledge when it comes to how to use it in a business context.
Thousands of individuals and business people and business leaders from all around the world have taken this course in the past two and a half years and have dramatically transformed their personal careers and/or their businesses using the knowledge they acquired during that course.
If you are a business leader and you're looking for structured

(01:52):
training for your entire team or your entire company, please reach out to me through my email or LinkedIn.
There are links to both of them in the show notes.
Most of what I do is private training for organizations through different kinds of workshops and courses and so on, so reach out to me for that.
We're also launching the second cohort of our more advanced introduction to workflow automation with AI, in which we

(02:14):
build on top of what people learn in the first course and teach you how to apply this and connect this to your entire tech stack and build actual automations that can do stuff in your business.
This will start immediately after the first course ends, so you can take both of them back to back, or if you have the basic knowledge, you can join just the second course.
And now let's dive into the summary of what happened in 2025

(02:36):
for a short little while; what is probably going to happen in 2026, or at least the trends that I can see right now; and then a little bit on how you can prepare for what's coming.
So just like the previous decade laid the foundations that made generative AI an option for us, such as the transformer architecture, such as distributed internet and huge

(02:58):
data centers, such as fast internet, and so on.
All these things played a role in enabling the AI we know today.
In the same exact way, the things that happened in 2025 are laying the foundations for what's most likely coming in 2026.
So the first thing that I'm going to talk about in 2025 is reasoning models.
While they were technically introduced in Q4 of 2024 with the

(03:20):
introduction of OpenAI's o1, they became a lot more available in the middle of the year when OpenAI released GPT-5 and baked the reasoning model into the regular model that everybody uses, because before that, according to OpenAI themselves, less than 10% of the population actually used their reasoning models.
So if you've been listening to this podcast, and you have been

(03:41):
following me and others talking about this, you might have been using reasoning models at the end of '24 and the beginning of '25, but the vast majority of the population did not use any of them before the models themselves started picking the reasoning capability as part of their workflow.
And so the jump in usage of reasoning models and the jump in the understanding of the labs on how to use them more effectively

(04:05):
has grown dramatically in the second half of 2025, and that's gonna play a very big role into 2026.
Another big aspect that happened in 2025 is the rise of China when it comes to AI capabilities.
For now, mostly software, but hardware is on the rise as well.
This started in the beginning of 2025 with the DeepSeek moment,

(04:25):
when suddenly there was a model from China that was at-par-ish with the leading Western models and really placed China on the map.
Since then, we have multiple really powerful, highly capable models from China that are not only really good, they're also, in most cases, much cheaper than the Western models.

(04:45):
The two models on the Western side that actually broke that equation to an extent are the most recent Gemini 3 Flash and the one before that, Gemini 2.5 Flash.
Both of them are extremely capable models.
Gemini 3 Flash is more or less as good as Gemini 3 Pro, and better than, or at least roughly equal to, the leading models out there, but for a fraction of the cost.

(05:08):
But there are many more like that from the Chinese side of the world, which has led to a global competition between the US and China when it comes to global domination in the world of AI.
More on that later.
The third big component that I want to talk about when it comes to trends in 2025 is agents.
So while the conversation in 2024 was about better and better

(05:30):
models, the conversation about agents intensified and became a lot more real in 2025.
In addition to the fact that more and more tools and more and more platforms enabled building, developing, and deploying agents, and beyond the fact that many more companies, most of them at the enterprise level, started testing, and some deploying, agents.

(05:50):
Three more big things happened that will enable the agent explosion we are most likely going to see in 2026.
One of them is the introduction of MCP.
We've talked about MCP multiple times on the show, and we even showed a few use cases on how to apply it.
MCP stands for Model Context Protocol, and it was invented by

(06:10):
Anthropic, but then open sourced so everybody can use it, and it was widely adopted.
What it enables you to do is develop an interface to an existing data set and/or an existing tool and connect it to any AI tool seamlessly in a few lines of code that you can copy and paste, which means you can connect to your ERP, to

(06:31):
your CRM, to your email platform, to your marketing platform, to any other tool that you want, and to relevant data sets, just by developing the MCP once.
Then that connector to that particular tool, let's say Salesforce, is available for everybody to use across any AI tool in a matter of seconds.
So this is a huge deal because it saves a crazy amount of work

(06:54):
across the board for many, many different companies, which drove adoption for agents and their connectivity to existing tech stacks.
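As a rough illustration of the idea (a toy sketch in plain Python, not the real MCP SDK; the function names and the fake CRM data here are hypothetical), an MCP-style connector wraps an existing tool once and then exposes it to any AI client through one uniform discover-and-call interface:

```python
# Toy stand-in for the MCP idea, NOT Anthropic's actual SDK or wire format.
# A real MCP server speaks JSON-RPC between the AI client and the tool;
# the wrap-once, reuse-everywhere shape is what this sketch shows.

def crm_lookup(customer_id: str) -> dict:
    """Hypothetical wrapper around an existing CRM API."""
    fake_crm = {"c-42": {"name": "Acme Corp", "plan": "enterprise"}}
    return fake_crm.get(customer_id, {})

# The "server" registers existing tools once...
TOOLS = {"crm_lookup": crm_lookup}

def list_tools() -> list[str]:
    # ...any AI client can discover what is available...
    return sorted(TOOLS)

def call_tool(name: str, **kwargs):
    # ...and invoke any tool through one uniform entry point.
    return TOOLS[name](**kwargs)

print(list_tools())
print(call_tool("crm_lookup", customer_id="c-42"))
```

Once the connector exists, every AI tool that speaks the protocol can use it without any extra integration work, which is the point made above.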
The other development that happened that will enable a lot of growth in 2026 is what's called A2A, or Agent-to-Agent protocol, which was also open sourced.
It is a protocol that defines how one agent effectively talks to another agent, regardless of who created

(07:16):
them, on what platform, with which tools, et cetera.
So it standardizes the communication between agents, which then enables a lot of the stuff that we're going to see in the future.
More about that shortly.
Another huge deal that happened late in 2025 is the ability to go beyond the current context window limitations.
So a context window is the amount of data you can put in a single

(07:38):
chat, and it has grown dramatically in the past few years; the leading models right now usually have between a one million and a two million token context window.
What is a token?
It's the way these models work.
They don't actually know words, they know tokens, and a token is about 0.7 or 0.75 words, which means in the most advanced tools today, the context window allows you to bring in between 750,000 words and about 1.5 million

(08:02):
words.
That is a lot, but it is not enough to bring in huge context such as your entire database, or your entire code base, or the entire data set of everything in your company.
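The arithmetic above is easy to sketch, assuming the episode's rule of thumb of roughly 0.75 words per token:

```python
# Back-of-the-envelope conversion from context-window tokens to words,
# using the ~0.75 words-per-token rule of thumb from the episode.
WORDS_PER_TOKEN = 0.75

def window_in_words(tokens: int) -> int:
    """Approximate how many words fit in a context window of `tokens`."""
    return int(tokens * WORDS_PER_TOKEN)

print(window_in_words(1_000_000))  # 750000
print(window_in_words(2_000_000))  # 1500000
```

The exact ratio varies by tokenizer and by language, so treat these numbers as rough estimates rather than hard limits.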
However, both OpenAI and Anthropic have launched two capabilities right towards the end of the year: OpenAI with what they call context compaction, which came out

(08:24):
together with GPT-5.2 just a couple of weeks ago.
What it enables the AI to do is effectively compact the context from a previous chat and push it into the next chat, so the AI can continue working on a much longer context without breaking the knowledge that it had in the previous conversation.
Once this process is perfected, it basically allows it to run

(08:47):
indefinitely, while rolling forward information from one context window to the next and working on huge amounts of data.
Right now it is geared specifically for coding and looking at huge code bases, but this will change and will most likely become available for any kind of long-context, long-duration tasks or entire projects that individuals and companies will

(09:08):
want to do.
A few weeks before that, with the launch of Opus 4.5, Anthropic released what they called an agent harness as part of their multi-session SDK that practically does the same thing, though it works in slightly different ways.
In this particular case, the way it works is that there are two agents: one is an initializer agent and the other is the actual coding

(09:30):
agent.
The initializer agent sets up the persistent environment and logs all the different actions and steps that are being taken, so the coding agent can continue independently, regardless of the actual context windows and how many times they changed in between the different steps.
Regardless of the way it actually works, the outcome is the same:
we now have practically the ability to run extremely long

(09:55):
sessions while keeping the context, so the AI agent can know what happened over a very long period of time.
Again, in both cases, it was developed for coding.
In both cases, it will most likely go beyond coding and transition to any other type of knowledge work.
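Conceptually, the rolling-compaction idea can be sketched like this (a minimal illustration, not OpenAI's or Anthropic's actual implementation; `summarize` stands in for an LLM call that would compress the old turns):

```python
# Minimal sketch of rolling context compaction: when the conversation
# outgrows the window, fold the oldest turns into one summary message
# and carry that summary into the next session.

def summarize(turns: list[str]) -> str:
    # Stand-in: a real system would ask the model for a dense summary.
    return "SUMMARY(" + "; ".join(turns) + ")"

def compact(history: list[str], max_turns: int = 4) -> list[str]:
    """Keep the history within `max_turns` messages by replacing the
    oldest turns with a single summary message."""
    if len(history) <= max_turns:
        return history
    old, recent = history[:-(max_turns - 1)], history[-(max_turns - 1):]
    return [summarize(old)] + recent

history = [f"turn {i}" for i in range(1, 7)]
print(compact(history))
```

Repeating this at every window boundary is what lets an agent "run indefinitely": the context is never lost outright, only compressed, at the cost of some detail in the summarized turns.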
The final big change that happened towards the end of the year that will also enable these systems to be significantly

(10:19):
more widespread is research from the Cognizant AI Lab and the University of Texas at Austin, which published a paper called Solving a Million-Step LLM Task with Zero Errors.
They developed a concept they called MAKER, which stands for Maximal Agentic decomposition, K-threshold Error mitigation, and Red flagging.

(10:39):
That's a really long mouthful, but what it actually does is it takes a task, and then the AI breaks it into very, very, very small segments, and it lets several different agents solve the same segment in parallel.
Then it compares the results.
The process then uses basically a voting scheme where the agents

(11:00):
vote on the most likely correct answer based on statistical parameters.
So it weeds out the mistakes that a specific agent makes, because it compares them to the results of the other agents.
So if you have five agents solving a problem, and four get to the same conclusion and one gets to a different conclusion, it'll continue with the four agents.
Now, because they're breaking it into really, really small tasks,

(11:22):
and because every task is evaluated separately, they were able to run a process that has over a million steps and run it all the way through to the end with zero errors: no hallucinations, no mistakes, no formatting problems, et cetera.
Now, to make it efficient, they used GPT-4.1 mini in their tests, which is a cheap and fast model.

(11:43):
They could now probably have used Gemini 3 Flash, but any of these models that are smaller and fast and efficient will do the task well.
And because they're much cheaper, you can run all these parallel agents and still make it cost effective, especially as you're guaranteeing correct and accurate results over a very long, complex task.
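The voting scheme described above can be sketched in a few lines (a toy illustration of the MAKER-style idea, with stand-in functions instead of real LLM agents):

```python
# Toy sketch of majority voting across parallel agents on one micro-step.
# The "agents" here are plain functions standing in for cheap LLM calls.
from collections import Counter

def vote(answers: list) -> object:
    """Majority vote across parallel agents; ties go to the first seen."""
    return Counter(answers).most_common(1)[0][0]

def run_step(agents, step_input):
    # Run every agent on the same tiny step, then keep the majority answer.
    return vote([agent(step_input) for agent in agents])

# Four agents agree, one is wrong; the vote weeds out the outlier.
agents = [lambda x: x * 2] * 4 + [lambda x: x * 2 + 1]
print(run_step(agents, 21))  # 42
```

Chaining thousands of such voted micro-steps is what lets an occasionally wrong model still produce a long run with essentially zero errors, because a single agent's mistake almost never survives the vote.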
So what do all of these components together give us?

(12:05):
They give us the capability to run tasks and entire projects that could take hours or potentially days, still in 2026, while delivering accurate results.
Again, right now many of these tools are built around writing code, but this will go way beyond that, into marketing and sales and strategy and operations and so on.

(12:26):
Meaning, we will have agents that will be able to act in much more complex environments and take on much more complex tasks and projects independently of humans.
Now, to be fair, we are not a hundred percent there yet, especially not with the regular setup that people have.
And so one of the things that I'm sure will start happening in 2026 is we'll start seeing companies hire people who

(12:49):
will evaluate the outputs of AI.
Or they will define specific time windows as part of the job descriptions of people currently working in the company to evaluate the output of AI in their area of expertise.
Now, this may sound counterintuitive: we're adding work because of AI.
No, the reality is you're not adding work.
You are saving five hours of work that the AI is doing for

(13:12):
you and then investing one hour in evaluating the work.
So you're still saving four hours and getting to better results, and faster.
Another big thing that happened in 2025 that will connect to all of this is real-time context awareness.
And this comes in several different shapes and forms.
The first one is connectors to your entire tech stack.

(13:32):
So even out of the box, inside ChatGPT, Claude, Gemini, et cetera, there are connectors to many tools that you use in the day-to-day.
But there are many other connectors, not built straight into these tools, that can go far beyond that.
And connecting that with MCP solutions from different providers means that the AI can now get access to your real work data regardless of where it is stored.

(13:54):
That's obviously not across the board.
There are still a lot of issues.
The data is siloed in different places and different schemas and old databases.
I'm not saying it's solved.
I'm saying we've made a huge step forward in 2025 toward giving these AI tools access to our real business data.
The second thing that improved dramatically is live mode: the ability of AI to have a live conversation with you based on a

(14:17):
video feed and/or viewing your screen, which is now available, built in, to most of these tools.
I use it in both ChatGPT and Gemini all the time, across the board, from fixing stuff at my house to solving complex automation problems that I'm building in n8n or other automation tools.
And these tools are extremely helpful in doing that, and they

(14:38):
will keep on getting better, but they fall under the category of real-time context awareness.
The third component is agentic browsers, which are becoming a big deal and are now part of every offering from all the major companies.
So you can click the Gemini button on the top right of Chrome.
There is now a Claude Chrome extension that I've tested in the past couple of weeks.
There are several agentic browsers, with the leading ones

(15:01):
being Comet from Perplexity and Atlas from ChatGPT.
All of these allow AI agents to view your screen in real time, take over your screen, and basically perform most tasks which you can perform in the browser, which means most tasks you can perform in the digital world.
All of these come with serious warnings about security and prompt

(15:22):
injection and so on.
So be aware of that if you're planning to use them extensively.
But going back to the point of real-time context awareness, these tools are a very big deal, and I'm sure we'll see more and more of them as the security problems that they bring right now get solved.
The next component is true multimodality.
The models in 2025 are real multimodal

(15:43):
tools, meaning they can generate, see, and understand video, images, text, et cetera, all in a single model, which gives them a much better understanding of our real world and how it works.
And the next step of that, which is going to be a huge trend in 2026, is world models.

(16:04):
So one person who's been talking about world models for a very long time is Yann LeCun, who recently left Meta in order to start a company that focuses exactly on that.
Another person that's been a big supporter of this is Fei-Fei Li, one of the founding fathers of modern AI.
And in a recent and fascinating interview on Google DeepMind's own podcast, Demis himself talks about how

(16:27):
the next breakthrough will most likely come by combining world models with the models that we have right now.
And the last person who has talked about learning in real time from the actual world is Ilya Sutskever, one of the co-founders of OpenAI and currently the founder of SSI, Safe Superintelligence, who also talked about models that can learn over

(16:47):
time from the world and from everything that they're experiencing.
So all of these people are talking about world models.
What are world models?
They are models that can see and experience the world and not just live inside the box of a computer.
They have access to cameras.
They start to understand physics.
They start to understand the real world around us, and they learn like babies and kids learn, by experiencing the world around

(17:10):
them.
This will allow the models to be a lot more grounded in the realities we live in, and it will be a much more solid baseline for robotics, which we're going to talk about shortly.
Another aspect that I just mentioned is continuous learners; both Demis and Ilya and many others are talking about models that will continuously learn on their own.

(17:31):
So instead of doing it the way it's done right now, meaning let's collect all the knowledge in the world and run a training run, which will probably continue to exist as the baseline, you develop a model that is really good at learning, and then it learns on its own continuously, meaning you don't need to do additional training runs, as the model learns as it goes forward, which dramatically accelerates the learning.

(17:53):
Combine that with what we talked about last week, which became a big deal in the past few weeks: recursive self-learning AI systems, meaning AI systems, and as we spoke about last week, hardware as well, that improve over time on their own in a closed loop without input from humans.
And you understand how AI models and AI hardware can grow faster

(18:15):
and faster while being continuous learners and understanding the world better.
And you can develop much more advanced tools, which are, according to all these experts, the path to AGI and beyond.
Now, from a practical perspective, what we have seen is all of these AI capabilities being more and more integrated into the tools and the hardware that we use every

(18:37):
single day.
So now you have AI capabilities inside of Microsoft Office, inside of Google Workspace, inside of Salesforce, Notion, et cetera.
More and more tools in your tech stack have AI built into them, and it's getting integrated between these tools as well through MCPs and other connectors.
But it's not just software.
We have seen AI being integrated into hardware as well, and that

(18:58):
will continue moving forward.
So right now, as an example, I'm driving a Tesla.
I have Grok inside my Tesla.
Initially it was just a fun way to have a conversation while you're driving, and to learn what the weather is going to be at the destination, or the results, or to ask trivia questions with my kids.
But right now it is starting to get integrated into the car systems, such as navigation and so on.

(19:19):
And very shortly, Grok will be your interface into the car, and I'm sure Tesla is not the last company to do this.
We have Microsoft Copilot+ PCs, in which AI is built into the actual hardware of the device.
You have the Pixel 10 Pro from Google, which is a device that has a lot of AI functionality built into the device, and there is the so far non-existent, but hopefully it

(19:41):
will change in 2026, Apple Intelligence, which will enable AI to run on the device.
Connecting the dots for a second, think about AI like Gemini 3 Flash, which requires significantly fewer resources and is still a very capable model.
That trend will keep on happening and will enable running high-end, advanced AI on device, which provides a lot more security, safety, and speed, which is suitable for

(20:04):
many more use cases.
Going the next step from just the devices that we have today and combining them with AI: robotics has made a huge jump in 2025.
We now have multiple companies in the humanoid and quasi-humanoid robotics race that are developing amazing robots and starting to deploy them in growing quantities, and the

(20:24):
prices are dropping as well.
So while the top models still cost hundreds of thousands of dollars per unit, many of them are now in the tens of thousands of dollars, and smaller variations of them are now in the single-digit thousands of dollars.
So as an example, Unitree, which is a Chinese company that makes some of the most advanced humanoid robots out there, started selling the G1,

(20:44):
which was their first small robot.
It's only four feet tall, but it can still do a lot of stuff around the house or in a business, for $16,000 to $40,000, depending on the variation.
And now they have an even cheaper version called the R1 that they're selling for $6,000.
Now, can it do everything the big models can do?
Absolutely not.
But it is showing you the trend of highly capable robots that

(21:06):
can do more and more things that only humans could do before.
And obviously beyond that, because they're stronger and more capable across multiple different dimensions, such as accuracy, vision and the wavelengths they can see, the amount of capacity that they have, and, obviously, the amount of hours they can work.
And you understand, combined with the drop in prices, that we're going to start seeing robots more or less everywhere,

(21:28):
starting with factories, then going into service providers across different things, then in coffee shops and restaurants, and shortly after that in our homes as well.
There are obviously currently safety concerns and other issues, but the trajectory is very clear.
The last two components that we have seen as trends in '25 that will continue into 2026 are model personalization

(21:49):
and skills.
So let's start with model personalization.
The first company that provided us a glimpse into that was ChatGPT with memory, and then with the ability to create your own custom instructions for the model, and now it is becoming more and more customizable as the year continues.
The same thing is now happening in the other models as well.
So once long-term memory was introduced, these models

(22:12):
become more and more custom to who you are.
They understand your universe, they understand your context, they understand your company, they understand your family, they understand your needs, and they provide more personalized answers.
The same thing will happen with the personality of the models, which already exists in ChatGPT on a basic level and will probably continue developing into 2026.
What does that mean?

(22:32):
It means that if you and your coworker ask the same question of the model that you're using, you're gonna get different answers, because you provided it different context over time, and it knows how you like to receive your information, what kind of data you're looking for, and how to connect the dots for you in a way that is helpful for you, which is different than for somebody else.
This has huge benefits for the value that AI provides and huge

(22:54):
risks from a society perspective, when there is no unified truth or process for doing things, and every single individual is seeing their own version of the truth and the process.
So there's good and bad in that, like many other aspects of AI.
And the last component, before we dive into what the impact will be in 2026, is skills.
Skills were introduced by Anthropic.

(23:17):
And just in the last ten days, they open sourced skills as well, just like they did with MCP.
I think skills are incredible.
So skills are the ability to teach the AI how to do something specific, and package it as a package that the AI knows how to pull in only when it needs it, so it doesn't consume your context window.
And the AI can still use it as needed, when it needs it.

(23:39):
It can connect to specific tools, it can have specific data, and so on.
And it will perform and enhance what AI can do across the board.
And I anticipate, now that they have open sourced it, that we'll see more and more companies starting to use skills.
And that will become second nature and almost an afterthought, probably by mid or end of next year, where these will just happen in the background and we won't even

(24:01):
think about it.
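The load-only-when-needed idea behind skills can be sketched like this (an illustrative toy; the skill names and structure here are hypothetical, not Anthropic's actual skill format):

```python
# Toy sketch of on-demand skills: only short one-line descriptions sit
# in the context window, and the full instruction package is loaded
# lazily when a task actually calls for it.

SKILLS = {
    # name -> (short trigger description, full instructions loaded on demand)
    "brand-voice": ("how to write in our brand voice",
                    lambda: "Full style guide: short sentences, no jargon..."),
    "invoice-check": ("how to validate an invoice",
                      lambda: "Step 1: match the PO number. Step 2: ..."),
}

def context_index() -> list[str]:
    """Only these one-liners consume context-window tokens."""
    return [f"{name}: {desc}" for name, (desc, _) in sorted(SKILLS.items())]

def load_skill(name: str) -> str:
    """The full package is pulled in only when the task needs it."""
    return SKILLS[name][1]()

print(context_index())
print(load_skill("invoice-check"))
```

That split between a cheap always-visible index and expensive lazily-loaded detail is why skills don't eat the context window the way pasting the same instructions into every chat would.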
I'm gonna connect some of these dots to some key aspects that Demis Hassabis, the CEO of Google DeepMind, shared in the recent interview he just did as part of the DeepMind podcast.
So he mentioned several different things that we need to address in order to go to the next level with AI, and that he's expecting in 2026.

(24:22):
One is solving what he's calling jagged intelligence.
And what he means by jagged intelligence is the fact that AI can show PhD-level knowledge in specific things, but fail miserably at some other stuff that a 12-year-old can perform.
So that inconsistency in its ability to understand, analyze, or deliver data is something that has to be resolved.

(24:44):
The broader aspect of this is what he calls the shift to reliable agentic systems, meaning agents that can consistently deliver correct results.
And he is talking about the fact that the balance will shift from raw power to consistency and accuracy.
Meaning, instead of investing more in growing bigger and

(25:05):
better models, which are already really, really good, the focus will become: forget about making the model better.
Just give me reliability.
Give me a hundred percent consistency on the tasks that I'm doing today.
Forget about what it can do three years from now.
I don't care.
Just let me solve the current, immediate problems.
And hearing that from him tells you that's the focus at DeepMind, which tells you that's most likely the focus in the other major labs as

(25:28):
well.
So where is all of this going to lead us in 2026?
The first thing is an agentic explosion.
We are going to see agents more and more everywhere, first of all in large enterprises for internal processes.
But over time in 2026, it will grow into more and more complex tasks because of all the capabilities we talked about

(25:49):
before.
Much longer-running agents that can take much longer tasks and work for, again, hours and potentially complete days on their own on specific tasks.
We will see either no or much lower levels of errors coming out of these models, and significantly fewer hallucinations, either because the models themselves will learn to self-correct or with

(26:11):
mechanisms such as the ones I mentioned earlier from the research that was just published.
And conceptually, it means that we're going to go from a tool that can get some data in and provide some answers out, or generate images or generate anything else, to true team members, actual AI collaborators that will work together with humans as part of a team, inside companies and organizations, to

(26:35):
achieve the tasks and the goals of the organization.
Now, this requires a complete mindset shift, and a complete technological shift, and a complete workforce strategy shift from what we know today.
Because in the beginning people will be working with one or two agents, but over time people will manage multiple agents just

(26:57):
like they manage employees right now.
This requires teaching people how to do that.
This requires developing the right infrastructure and the right guardrails for this to work effectively and safely at large scales.
We're also going to see more and more agent-to-agent interaction, initially more inside of companies, where one agent will work with other agents to achieve different tasks.

(27:17):
This is how agentic systems are built.
They're built, usually even today, with an orchestrator agent that delegates smaller tasks to very specific agents, and then it orchestrates, as the name suggests, the process between the different agents.
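The orchestrator pattern just described can be sketched like this (a toy illustration with stand-in functions instead of real agents; the specialist names are hypothetical):

```python
# Toy sketch of the orchestrator pattern: one agent splits a goal into
# subtasks and routes each one to a specialist agent, then collects
# the results. The specialists are plain stand-in functions here.

def research_agent(task: str) -> str:
    return f"notes on {task}"

def writing_agent(task: str) -> str:
    return f"draft about {task}"

SPECIALISTS = {"research": research_agent, "write": writing_agent}

def orchestrator(goal: str) -> list[str]:
    """Break the goal into (specialist, subtask) pairs and run them in order."""
    plan = [("research", goal), ("write", goal)]
    return [SPECIALISTS[kind](task) for kind, task in plan]

print(orchestrator("Q1 AI strategy"))
```

In a real system each specialist would be its own model call or agent, and the A2A protocol mentioned earlier is what standardizes the messages passing between them.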
But this will grow way beyond that, both in scale as well as outside the organization.
And we're going to see more and more agent-to-agent interaction

(27:38):
outside the company, such as agent-based commerce.
So you will have an AI agent that will help you shop, and that agent will talk to the provider's agent, and together they will figure out what is the best solution, product, or service for you.
And potentially, with the right permissions and safety guards, they will also make the purchase and the transaction without humans

(27:59):
being involved, or with just human approval.
So it will show you the outcome of the conversation between the agents, and you can say yay or nay, or pick between option one, two, or three, or whatever it is that you requested it to do for you.
The initial steps of this are happening right now.
The same thing will happen in customer service.
The same thing will happen with proposals.
The same thing will happen in many other aspects of our

(28:20):
businesses, where the client's agents will interact with company agents to get to whatever step in the process is allowed for that particular task, and that will be executed fully by agents, with humans either just approving or giving a final selection between a short list of different options.
Another huge trend that I anticipate in 2026 is vibe

(28:44):
everything.
What I mean by that: the phrase "vibe coding" was coined by Andrej Karpathy in the beginning of 2025 for writing code with AI.
We went from a world in which only computer science majors and people who actually know how to write code could develop software to an era, within one year, where literally anyone can write

(29:04):
pretty sophisticated applications, and definitely really simple and quick applications.
I'll give you three examples from the past few weeks that either I or people I'm working with have developed, that have provided an immense amount of value and didn't take too much time.
One of them: I am developing a new website for my software company that does agent-based invoice reconciliation and vouching, and I developed a

(29:27):
website while vibe coding it, and it's a really cool, really advanced, sophisticated website, and I haven't used anybody external to do any of it.
I haven't used any third-party tool like Elementor or Wix or any of those to create it.
I'm literally just vibe coding the whole thing, including the front end, the backend, the infrastructure, the connectors, everything that it needs in order to provide value to my clients and potential clients.

(29:48):
Another great example: in a workshop that I delivered a few weeks ago, one of the participants vibe coded a tool that we used during the workshop to track people's ideas for the hackathon that we did at the end, and to vote on the ideas that they actually wanted to develop for the hackathon.
That was a very helpful tool that was vibe coded in a few

(30:08):
minutes to be used right there and then.
A very quick turnaround that was very helpful, as far as the results it provided.
Another great example that I can give you from the past couple of weeks: this is the most wonderful time of the year in the US, as you know, which means open enrollment, which means you're gonna get a thousand different options, or x number of options, for medical insurance.
And it's so hard to pick because there's so many options and

(30:29):
variations.
There's all the different plans with the different copays and the different maximum out-of-pocket and the different cost of the plan itself, et cetera, et cetera, et cetera.
And it's just so hard to pick.
And I literally took all that information, dropped it into Gemini, and asked it to create a simulator that would help me evaluate the different options based on different scenarios, and in five minutes it created it.

(30:50):
And in another five minutes I knew exactly which plan works best for me and my family.
These are things that were just not possible a year ago and are easy to do right now for each and every one of you, for any need that you have.
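A toy version of that simulator gives a sense of what such a vibe-coded tool does under the hood: estimate each plan's total annual cost under a few spending scenarios. The plan numbers and scenarios below are invented for illustration, not real insurance data.

```python
# Toy open-enrollment simulator: total annual cost per plan under a few
# medical-spend scenarios. All numbers are made up for illustration.

PLANS = {
    "Low premium": {"premium": 200, "deductible": 5000,
                    "coinsurance": 0.3, "oop_max": 8000},
    "High premium": {"premium": 450, "deductible": 1500,
                     "coinsurance": 0.1, "oop_max": 4000},
}

def annual_cost(plan: dict, medical_spend: float) -> float:
    """Twelve months of premiums plus capped out-of-pocket costs."""
    out_of_pocket = min(medical_spend, plan["deductible"])
    above_deductible = max(medical_spend - plan["deductible"], 0)
    out_of_pocket += above_deductible * plan["coinsurance"]
    out_of_pocket = min(out_of_pocket, plan["oop_max"])
    return plan["premium"] * 12 + out_of_pocket

for spend in (1000, 10000, 50000):  # healthy year / moderate / major event
    best = min(PLANS, key=lambda name: annual_cost(PLANS[name], spend))
    print(f"${spend} in care -> cheapest plan: {best}")
```

Even this toy shows the value: the cheap-premium plan wins in a healthy year, while the expensive-premium plan wins once spending blows past its lower out-of-pocket maximum.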
So this is the end of 2025.
But what will happen in 2026 is that these capabilities will be expanded beyond just coding.

(31:10):
Meaning you will be able to give it information about anything and ask it to develop that thing, whether it's a marketing strategy or a complete plan, or the actual execution of your social media, or an entire sales funnel process, including the emails and everything in it, or different aspects of your company's operations or customer service, et cetera.
You'll be able to just explain to the AI what you want to do

(31:31):
and it will figure out the way to do this effectively.
So that's a big trend that I see coming.
All of these will require a complete workforce change in the way we address how we work.
Meaning this goes way beyond the technology; the technology actually might be the easier part.
We need to rethink the strategy of our organization: what else we can

(31:53):
do with AI that we couldn't do before in a profitable way?
How can we go after new markets?
What kind of new clients can we serve?
What kind of new services and products can we sell effectively that we couldn't before, because the ROI just didn't make sense, and now it does with AI?
So going from a mindset of let's build small efficiencies that save us 5% here and 2% there to a mindset of 10x my business,

(32:14):
because AI enables it right now.
This is the direction that we're going to see in 2026, but it will require a complete rethinking of the organization as we know it, including very dramatic HR changes from a hiring, firing, and structure perspective, plus very significant investment in training, both initial and ongoing, and investment in infrastructure for

(32:35):
the technology and the entire tech stack of the company.
That is not a simple task, but we'll start seeing more and more of that happening across the board.
And obviously we'll start seeing a lot more success stories from companies who are AI native, who don't have to go through this transformation because using AI is baked into their DNA, which allows them to grow very fast with very few people to

(32:56):
very significant scale while still providing all the services to their clients.
And now let's go to the bigger picture.
How is that going to impact the world and not just businesses?
How is it gonna impact society?
How is it gonna have global impact, across the board, on many more things than we have today?
So first of all, I anticipate that the race between China and

(33:16):
the US will intensify even further.
I assume we'll start seeing the European Union investing more in actually developing significant, real AI and putting themselves on the map, which they are not right now, other than a few specific areas where they have something to talk about.
But overall, they're very far behind right now compared to China and the US.
We're going to see a growing negative impact on jobs.

(33:39):
While we have talked about the recent research that shows there hasn't been a significant impact of AI on jobs as of right now, I definitely suspect, with everything that we talked about over the past year and a half or two and a half years on this podcast and what we talked about in this episode, that the growing capability of AI to do tasks that are longer and more

(34:00):
complex will take over more of the work that humans do.
And since there's a finite amount of work that needs to be done, because there's finite consumption and finite demand for services and products, this will lead to companies having to cut people and to completely change the way they work, which will drive unemployment and a lot of unease in the world,

(34:22):
starting with the US, where AI adoption is just faster than in most other places.
Now, the other thing that will continue to have a global impact is the growing need for resources to keep pushing the AI race forward.
So more and more data centers, usage of natural resources such as water, huge amounts of money that are poured into this

(34:42):
instead of into other areas, the pollution it drives in order to generate the power that these data centers require, and so on.
There's been a letter sent out last week by Bernie Sanders calling to stop completely the development of new data centers in the US because of the many negative implications they have, mostly

(35:03):
the combination of three things: the negative impact on the environment, the job losses it's going to drive, and the fact that very few people are gonna get richer and everybody else is gonna get poorer as a result.
I'm expecting that, together with the growing pushback we're seeing right now against data centers being built in rural America, this will grow dramatically in 2026, and that

(35:24):
will lead into the world of politics, which is already there.
So 2026 is an election year.
And when you look at the fact that everybody will look for the right messages in order to win more votes, AI will become very political.
So far it is not completely clear which side of the aisle believes what.
On the Bernie Sanders side, it is very clear, but there are

(35:47):
people on both sides that believe we need to push forward in order to stay competitive with China.
And there are people that believe we need to stop because of the risks it produces, whatever they are, or the combination of all of them together.
But either way, there is gonna be a lot more politics involved in AI.
Talking about this, we have seen the recent head-bashing between the states and the federal government in the US

(36:10):
with the new executive order by President Trump that is basically preventing, or threatening with very serious financial and legal measures, states that put in place AI regulation that would slow down or stop innovation in their particular states.
How will that evolve?
Not sure yet, but we're gonna learn a lot more about it in

(36:30):
2026.
We're also going to see bigger, and maybe the outcomes of, really large legal battles around AI.
Some of it has to do with the rights to the training data: is it fair use or not?
Some of it has to do with liability, meaning who is to blame when a model that runs in your company messes something up, or who is to blame when a self-driving car is in an

(36:53):
accident, and so on.
So the issue of liability in an agent world is not solved by the current laws we have right now, because we did not have agents.
If you remember, about a year ago there was the whole craze around creating rap songs of famous rappers, in their style of music and with their voice, and they couldn't sue because there was no law saying that you own your voice,

(37:16):
because before there was no way to clone your voice, so it didn't matter; creating a rule like this did not make sense.
And I'm sure we're gonna start seeing more and more laws, either at the federal level or at the local level, that will define liability and different outcomes when an agent or an AI tool does something.
There's also been the psychological aspect of things,

(37:38):
where several different people committed suicide or did really bad things to themselves or to others because an AI tool recommended it.
So all these things will end up in court, and we'll start seeing the outcomes of these legal battles unraveling, which will give us more guidance on how the future of AI will look.
In 2026 we'll also most likely see huge IPOs and more consolidation

(38:02):
coming in that space.
The amount of money that gets poured into the AI race right now is insane.
It's nothing like anything we've seen before, and it's obviously not sustainable in the long run just because there's not enough money to keep sustaining it, meaning we're going to start seeing more and more consolidation and we're gonna start seeing big IPOs.
There's already been lots of rumors about both OpenAI and Anthropic potentially going public in 2026, or at least

(38:24):
getting close to that and maybe going public in 2027.
Another great example of consolidation from this past week: Nvidia basically just took over Groq, Groq with a Q, the hardware company that has been developing chips that specialize in inference and do a much better job than GPUs on the inference side of using AI.
And for those of you who are new to the show, there are two types

(38:46):
of processes that need to happen for you to use your ChatGPT or Claude, et cetera.
One is to train the models: taking a huge amount of data and running a training process, and GPUs are still the best option on the planet for that right now, even though there are some other contenders, like Trainium from Amazon and TPUs from Google.
GPUs from Nvidia are

(39:07):
still ruling by a big spread, especially with their ability to manufacture them at a much larger scale right now and supply the growing demand.
But the other aspect is when you use the AI: when you write a prompt or ask for an image and so on, there is the process of using the AI to generate whatever it needs to generate.
And that is called inference.
And Groq with a Q is one of the leading platforms out there

(39:30):
today for using AI and creating really fast, effective inference across many different AI tools.
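The training/inference split described above can be made concrete with a tiny example. This is a deliberately toy model, fitting y = 2x by gradient descent, just to show that training is the repeated, compute-heavy loop over data, while inference is the single cheap pass that answers a new query.

```python
# Toy illustration of the two workloads: training (many passes over
# data to fit parameters) versus inference (one cheap pass per query).

# --- Training: fit y = w * x on a tiny dataset with gradient descent ---
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]          # true relationship is y = 2x

w = 0.0                            # the single model parameter
lr = 0.01                          # learning rate
for _ in range(1000):              # the expensive, repeated part
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad

# --- Inference: using the trained parameter to answer a new query ---
def predict(x: float) -> float:
    return w * x

answer = predict(10.0)             # close to 20.0 once training converges
```

Training chips optimize the loop at the top; inference chips like Groq's optimize the one-shot `predict` path, which is what you hit every time you send a prompt.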
Now, Nvidia couldn't buy them outright because the regulators would have slowed it down, so they bought a license to the technology and hired the team at the same time.
So we're gonna see more of these acqui-hires, or different kinds of mechanisms for larger companies to take over smaller companies

(39:52):
without actually purchasing them, so they can get around regulators blocking the move.
But I anticipate this to go way beyond specific projects and companies and reach the national and geopolitical level, between different companies, nations, and continents.
So we talked about the US-China race and the EU, but we are already starting to see national projects.

(40:13):
And last week they announced the first 24 organizations that will participate, combining the knowledge, the power, the resources, the research, et cetera, across multiple different agencies and companies to use AI to promote national-level projects and research.
And I anticipate international collaboration projects like this, where specific nations will not have enough money or other

(40:35):
resources to do this, and they will collaborate with other nations in the region or globally in order to achieve bigger goals.
And even if this is more wishful thinking than me actually believing it's going to happen, I really hope we're going to start seeing broad international collaboration between governments, academia, and companies to start preparing the world, society, and the global economy for

(40:59):
the age of AGI and ASI, because it's going to have profound impacts on more or less everything we know.
Nobody is ready, and no single organization on its own can either stop or start preparing for what's right around the corner.
And again, I hope we're gonna start seeing these kinds of collaborations evolving, or at least being set up, in 2026.

(41:23):
And one thing that I'm personally really excited about, which came out of the interview with Demis Hassabis that I mentioned earlier, is what he calls the scientific root-node breakthroughs.
Basically, what he's saying is that what they did with AlphaFold, which won him the Nobel Prize and is the ability to use AI to simulate the folding of actual proteins,

(41:44):
every protein in nature, and understand its exact structure, which really is the basic building block of living materials.
What he's saying is that the same concepts that were used to create AlphaFold can be used for more or less anything else.
Or as Demis said in the interview, and I'm paraphrasing, he hasn't seen anything in the world that is not computable

(42:06):
yet.
So they're deeply involved in advanced research on energy and fusion.
They're working together with Commonwealth Fusion to hopefully accelerate reactors that will produce fusion power, which means completely clean, endless energy that can solve all the energy problems of the world without creating any pollution.
They're working with several different companies on material

(42:28):
science, including room-temperature superconductors, which can completely transform the world of IT as we know it today.
They're working on multiple biology-related projects.
Demis himself said that he believes that within less than a decade, we'll be able to simulate every single function of the human cell at an accurate level, meaning we can simulate and develop new treatments for any disease that exists today

(42:51):
significantly faster than we can right now.
If you haven't watched the movie about Demis's life, you should, because it will show you that he's very serious about this being his life mission: to solve all these really big problems for humanity.
So now you're probably really excited and terrified at the same time, which is the feeling I get every single day when I work with and use AI, and especially when I sit down to think about

(43:13):
where the world of AI is going to take the world as we know it.
And so, what can you do to prepare?
The first thing you're already doing: teach yourself about AI, and continue teaching yourself about AI.
So I assume that other than my podcast, you're consuming other AI-related content, either a podcast or YouTube videos or whatever it is that you're consuming in order to teach yourself.
But you also need to put yourself, and if you're in a

(43:34):
leadership position, the people under you, into more structured training, whether workshops or courses, and this could be a combination of external and internal resources to deliver this kind of training.
I've done workshops for companies such as Salesforce; they obviously do a lot of internal training themselves, but they also hire people like me to deliver training to complement the stuff that they're doing internally.

(43:56):
I've done this in small and large organizations as well.
Many of them do not have any internal capabilities to do this, but even those who do invite me regularly to do workshops focused on specific capabilities that they want to learn, and you need to do the same thing in your organization.
If you are an individual, you have to take care of yourself if your company or organization is not doing that

(44:17):
for you.
It has been proven in multiple research studies from multiple reputable sources that people with AI skills are highly sought after right now, and they can command 30 to 57% higher salaries compared to other people in the same roles without AI skills.
And it also dramatically reduces your chances of losing your job

(44:39):
because of AI capabilities that are being implemented in your company, because you're gonna be the one who helps implement them.
Does that provide you long-term job security?
I don't think any of us has long-term job security right now with the way AI is moving forward, but at least it gives you much longer to figure this out, from both a financial perspective and an organizational perspective.

(44:59):
The second thing that you can do inside your organization as an individual is start asking questions.
What are we doing?
What is the plan?
Where are we getting training?
What are the systems that we can and cannot use, and why?
And so on.
So ask questions, and become a small AI leader inside your organization, even if you are not officially appointed as such.
I know multiple individuals that got significant promotions and

(45:20):
got very important roles in the AI transformation of their companies by just being there, by pushing forward, and by sharing their skills, and then becoming the leaders of that transformation, or at least participants in the transformation in their organizations.
So that's the other thing that you can do as an individual.
We'll end it here.
I would like to wish all of you a happy new year.

(45:42):
It has been an amazing pleasure serving all of you.
My heart warms with every single message I get on LinkedIn from people who are listening to and consuming the podcast, finding value in it, and learning from me in any way, whether through my paid courses and workshops, through the YouTube channel, by participating in the Friday AI Hangouts, or however it is that you are learning from me.

(46:04):
I appreciate every single one of you.
I'm learning from every single one of you through the engagements that I'm having with you, and I cannot be more thankful for everything that we've done together and the journey that I went through with your help in 2025.
And all I can say is: put a helmet on and strap in, because 2026 is going to be a hell of a ride.

(46:24):
Happy New Year everyone.