Learn more about Advance Course (Master the Art of End-to-End AI Automation): https://multiplai.ai/advance-course/

Learn more about AI Business Transformation Course: https://multiplai.ai/ai-course/

Are you prepared for the moment when your AI tools fail—and take 20% of the internet with them?

This week was one of the most explosive in recent AI history. From Google’s jaw-dropping Gemini 3 release to a stealth drop of Grok 4.1, plus the Cloudflare crash that wiped out access to ChatGPT for hours — the implications for business leaders are massive.

In this episode of the Leveraging AI Podcast, Isar Matis unpacks the seismic shifts that happened across the AI landscape this week—and what they mean for your business. If you're leading a team, scaling a company, or just trying to stay ahead of disruption, this is your AI cheat sheet.

Bottom line: Ignore this week’s AI developments, and you risk falling behind. Fast.

📌 In this session, you’ll discover:

  • Why 20% of the internet crashing should scare every business leader
  • How Google leapfrogged the AI race with Gemini 3 Pro 
  • Why Grok 4.1’s silent release might be the biggest underdog move of the year 
  • AI agents are here: what Microsoft Ignite revealed about the enterprise AI future 
  • Klarna cuts 50% of staff—how AI is creating a new kind of workforce
  • How businesses are hitting $1.1M revenue per employee using AI 
  • The rise of humanoid robots in real-world production lines
  • OpenAI and Anthropic are warning us—are we about to lose control?
  • The executive order that may block AI regulation at the state level 
  • Why you shouldn’t buy your kid an AI toy this holiday season 

💡 Key Takeaway:

AI is evolving at breakneck speed. Leaders who aren’t proactively integrating and planning for redundancy, ethics, and upskilling will be left behind. Fast.

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 2 (00:00):
Hello and welcome to a Weekend News episode of the

(00:02):
Leveraging AI Podcast, a podcast that shares practical, ethical
ways to leverage AI to improve efficiency, grow your business,
and advance your career.
This is Isar Matis, your host, and what a week we had.
So I will start a little bit with my personal experience this
week.
I just came back from delivering two different workshops in
Europe.
Each of them was two days of full training plus a hackathon

(00:26):
in the end, in two different locations, with about 80 people in each and every one of these workshops.
It's always my favorite thing to do. I always ask people to raise their hand based on their current level, and most of the people in these workshops have defined themselves as either total beginners or novice users of AI before the workshop. At the end of the second day, when they show the outputs of their

(00:49):
hackathon, it is just so exciting to see the level of progress and practical, implemented use cases that they can start using in their businesses, for things like financial projections, sales projections, inventory projections, dashboards showing correlations between marketing investment, online purchases, and store purchases, training videos, marketing videos, HR promotional videos, and market research reports with very detailed analysis that saves

(01:12):
them tens of thousands of dollars, or sometimes hundreds of thousands of dollars, in external providers, automating website content and visual assets at scale for multiple things they need, and many, many more.
And all of that after two days of training across every aspect of the business.
Now, to make this even more amazing, during the first hackathon, Cloudflare decided to crash 20% of the internet.

(01:34):
So we had very limited access to ChatGPT, so people had to figure out other ways to do this, and yet they were able to generate amazing results.
Now, my two cents on the Cloudflare crash in general. While I assume most of you experienced issues related to it, let me explain a little bit what happened. Cloudflare went down for a few good hours on Tuesday this past week and took

(01:55):
down about 20% of the entire internet, including many, many different websites, including X and ChatGPT. And it all happened due to a simple tweak they made to their ClickHouse database permission system, which is supposed to control their bot management tool, basically allowing or disallowing bots to scrape different websites.

(02:15):
And that led to a whole set of dominoes falling down, which eventually, as I mentioned, took down 20% of the internet for a few hours.
Why is that interesting?
Well, you need to remember, we talked about this many times before, and I promise you we'll dive into the episode in a second. But I think this is a very important aspect of learning how to prep better for our future, as we become more and more dependent on AI tools and agents to run critical aspects

(02:37):
of our businesses, and over time, most likely, forgetting how to actually do them manually.
Redundancy is becoming critical, and when I say redundancy, it's across more or less anything. If you have critical aspects of your business that AI is going to run, then you need redundancy across all different levels of your IT stack, including how you deploy your services worldwide, and also access to fallback mechanisms and redirecting to

(02:58):
other AI models if the models you're running on are not working. You need to be able to test these models in advance, and so on and so forth.
And if you wanna be even safer, you need to provide training to your people on a regular basis, let's say every quarter or every six months, on how to do the manual process that used to exist before AI took over, because you may need to use it.
We used to do that in the Air Force all the time, and do some

(03:19):
segments of large exercises without any computer systems.
And that retained our capability to work manually if we had to, and you should probably do the same, again specifically for critical aspects of the business that are run by AI.
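To make the fallback idea concrete, here is a minimal sketch in Python of what redirecting to other AI models can look like. This is an illustration only, not tied to any specific vendor SDK; `providers` is just an ordered list of whatever model-calling functions your stack actually uses, and all the names here are made up for the example.

```python
def with_fallback(prompt, providers, max_retries=2):
    """Try each model provider in order; fall through to the next on failure."""
    last_error = None
    for call_model in providers:            # e.g. [call_primary, call_backup]
        for _attempt in range(max_retries):
            try:
                return call_model(prompt)
            except Exception as err:        # outage, rate limit, timeout...
                last_error = err
    raise RuntimeError("all AI providers failed") from last_error
```

The same pattern extends beyond AI: the last entry in `providers` could even be a function that notifies a human to run the manual process you trained for.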
But this was just a very small component of what happened this week. As the saying goes, when it rains, it pours. We got multiple new models this week from multiple vendors.

(03:41):
All of them are absolutely incredible, breaking all the different benchmarks and so on. We have a completely new AI leaderboard ranking that has been shuffled from just a couple of weeks ago, and we got some very significant warnings about the impact of AI from leaders in the industry, including Anthropic, OpenAI, and leaders in the consulting industry like PwC and Gartner.

(04:03):
And it was Microsoft Ignite this week, which had its own new set of announcements on how AI is going to be integrated into everything Microsoft.
So we have a lot to cover.
So let's get started.
The biggest and most exciting release of this week, which was

(04:23):
highly anticipated, and we talked about this in the last few weeks, was the release of Gemini 3.
So it was known, or expected, as I mentioned, that Gemini 3 would come out in November, and there were a lot of rumors about how much better it was gonna be, and nobody exactly knew what it was, but now we actually have access to it.
It was, as expected, released this week, and it is an

(04:43):
incredible model across the board.
How incredible?
Let me describe this to you just by looking at and analyzing the information from LMArena, which is a platform that allows people to compare different models without knowing what they are, so a blind comparison, and pick the models that perform better for them across multiple aspects. We talked about LMArena multiple times on this podcast.

(05:03):
Well, I have a screenshot from it from just a few weeks ago, because I was talking about this in a keynote that I did at a conference. And back then, a long time ago, about two weeks, the first three spots across the board, combining the multiple results together, were held by two different variations of Claude Sonnet

(05:27):
4.5 and by Claude Opus 4.1, with Gemini 2.5 Pro just in the fourth position, followed by ChatGPT, and then GPT-4.5, and so on and so forth. But the first three spots were held by Claude.
Right now, Gemini 3 Pro is number one overall. Number one on hard prompts, number one on coding, number one on math, number one on creative writing, number one on instruction following, number one on longer queries, and number one on multi-turn, which

(05:49):
is more on the agent side.
Number two, which we're gonna talk about in a minute, is Grok 4.1 Thinking.
And when I say it's number two, it is sharing number one on hard prompts, coding, math, and multi-turn, and overall it is sharing the first place with Gemini 3. And it's number two in creative writing, number two in instruction following, and number four in

(06:10):
longer queries, followed by Claude Sonnet in third place.
So two different models that were both released this week have surpassed Claude Sonnet 4.5.
If you wanna dive into some of the specific benchmarks: Gemini 3 scores 92.1 on the MMLU-Pro reasoning benchmark, compared to Claude Sonnet 4.5 at 89.7 and OpenAI at

(06:34):
88.2.
And just as with all the previous releases, this is not just a single model release. It is a family of models, from the lightweight Nano model that is built for on-device usage to the ultra-complex reasoning model that integrates advanced video understanding, code generation, and real-time collaboration tools. And overall, as we've seen, amazing capabilities across the

(06:55):
board. It was all trained on Google's own hardware, TPU v5p clusters, and this model fully integrates into everything Google.
We are going to talk more about that in a minute.
And if you remember, or if you want, go back to the early episodes of this podcast in 2023, when Google was very far behind and delivering embarrassing results when it came to AI. I

(07:17):
said all the time that Google is going to win this race, because they have everything they need in order to win the race.
And here we are, where Google is back ahead, and not just back ahead. They're integrating it into more and more aspects of Google.
They have the perfect vertical and horizontal integration of these models, from the demand, to the data, to the compute, to the people, to the distribution channels, to the tools that

(07:37):
people use.
Literally everything you want, Google has it, and it was just a matter of time until they were going to integrate it all together.
This is obviously not the final step of the race. It's just the current step of the race, where Gemini 3 is ahead. Based on internal information from Anthropic that was shared on The Information, internal tests within Anthropic are showing Gemini 3 outperforming Claude in seven out of 10 categories, including

(08:00):
math, vision, language tasks, and other aspects.
And this has driven two different things. One is they are pushing in their relationship with Amazon to get more GPUs in order to grow faster. And Dario Amodei reportedly said in an internal text to different partners: we're not sleeping on this, Claude 4 drops in Q1.
So the race is on. But in addition to the fact that it's another step in the race, this is maybe the most multi-modal,

(08:24):
most generalized model we ever got.
Or if you want to understand what that means from the horse's mouth, Demis Hassabis, the CEO of DeepMind, said, and I'm quoting: Gemini 3 isn't just smarter. It's a foundation for agents that reason across modalities, from analyzing live video feeds to co-authoring code in real time.
Now, in addition to its incredible capabilities across the board, there was a big focus on red teaming and safety, and a

(08:47):
big part of the announcement emphasizes the amount of effort they invested in that. And they're saying that Gemini 3's safeguards block 85% more harmful prompts than Gemini 2.5.
Now, is that enough?
That's a very good question, because what I'm asking myself is: as these models get better, the risk of the damage that they might do gets much, much higher. Because if the original models could write a poem and it wouldn't

(09:09):
rhyme properly, these models can run your company. They can potentially help generate dangerous weapons, or write really problematic, hard-to-handle, malicious pieces of code. And so being able to fix those is a lot more important right now.
And while they didn't share the percentage of harmful prompts

(09:29):
it's actually able to control, or the ones that still get executed through these controls, even the 15% that was not improved from previous models might be unacceptable, depending on what the prompts are.
So when you read these announcements from these leading labs, I want you to think about that. Think about what is possible with this new model, and how acceptable it is to allow that capability to not be fully aligned and be able to be used for negative things.

(09:51):
And I'm not sure these two graphs align in a way that is acceptable from a social or business perspective.
Another big part of Google's announcement was the release of Gemini 3 Pro Image, also known as Nano Banana Pro, which delivers a huge upgrade over the already amazing original Nano Banana.
So it's delivering studio-quality control by maintaining

(10:12):
incredible subject consistency, including using multiple characters in several different images and across 14 different objects, all in a single workflow, which enables you to seamlessly swap backgrounds or outfits, or blend multiple images from different reference photos into a single unified image.
In their release page, which will appear in our newsletter,

(10:33):
they are sharing multiple important capabilities. On their website, in the page that discusses the capabilities of the new model, they highlight several important upgrades.
One is generating clear text, sharp and in exactly the right setup, as it needs to be. Some of the examples that they're showing are absolutely mind-blowing, such as an actual comic strip with the images and the text above and the text below in handwritten

(10:56):
notes, and multiple other examples of amazing text across different setups, which means you can now generate ads or comics or anything you need with text on top of it, flow charts, et cetera.
With Gemini, another very interesting capability is real-world knowledge. So because it is a part of Gemini, it's not just an image generation tool. It can do really cool things.

(11:16):
One of the examples they gave: they wrote a prompt that says high-quality flat-lay photography, creating a DIY infographic that simply explains how solar energy works, arranged on a clean, light gray texture background, et cetera, et cetera.
And the output is incredible. It looks like somebody did a do-it-yourself arts-and-crafts project to explain how this works, and it shows the sun and

(11:38):
solar panels and an inverter, and the house and the electricity and the grid and so on.
All with perfect text, with 3D cartoon-looking components that show the entire process.
They're also showing a user manual, if you want, like a really cool strip out of a cookbook showing how to prepare chai, and then a step-by-step preparation process with really cute drawings and the description below it, all with

(12:00):
perfect text, perfect images.
And it's absolutely mind-blowing that this is done with a single simple prompt.
They also have the ability to translate ideas and place them on whatever you wanna place them on.
So the example that they're showing is taking a can with a full texture and design on it, and just taking the text on that can and changing it to other languages, in this

(12:21):
particular case into Korean. And it overlays it on top of the can perfectly, just by translating all the text on the can to Korean, while keeping everything else consistent.
And speaking about product placement, you can take your logo or whatever device design you wanna put on, and they're showing how you can put it on a bag and on a cup, and on a t-shirt, and on billboards, and anything

(12:42):
that you want.
They're also showing studio-quality control over the image itself. Different shot types, whether wide angle, panoramic, or closeups; also the depth of field, what's gonna be in focus versus not in focus, as if you're actually using a real camera. That was available before; it's just getting significantly better.
They're showing the ability to completely change the lighting and the environment of an existing photo. So they're

(13:03):
showing a photo of an elk standing on top of a cliff on a gloomy day at sunset, and then they're changing it to the middle of a beautiful day with blue skies and a few clouds in the sky, including their shadows on the background and so on.
Absolutely incredible. With full control over shadow and contrast and everything you can imagine that you can do in post-production, only done with

(13:23):
simple words. And the input can obviously be real images, and not just AI-generated images.
They're also providing a built-in ability to upscale at 1K, 2K, or 4K resolution, which means you can crop images and then upscale them, and then crop them again and upscale them, and keep on generating better and better resolution for different components in larger

(13:43):
images.
There's now full control of aspect ratio. That was my biggest gripe when Nano Banana came out: you couldn't change the aspect ratio. Then they added basic options, where you could go to like four different aspect ratios, and now you can basically do whatever you want. Wide strips, tall strips, 16 by 9, 9 by 16, one to one, whatever you want, with a single prompt, while keeping everything else exactly the same. An extremely powerful capability.

(14:04):
Subject consistency across multiple images, including multiple objects and/or people. So they're showing multiple examples of either cute, furry creatures in multiple standalone images brought into one single image, while making them all look the same, or adding an image of a dress and a person and a chair and a plant, and making an image of a studio with all these components

(14:24):
all in it, or showing six different images of tennis players wearing different clothing and putting them all into one single shot, while keeping consistency of the people and the stuff that they're wearing.
By the way, the stuff that they're wearing is all made out of balloons, like the long balloons that you use to create balloon figures, so all their clothing is made out of that.
But the trick, again, is taking six of those images and turning them into just a single image. And they're also adding the

(14:46):
capability to ask for multiple frames from the same single prompt. So you can put in one prompt and ask it to generate multiple variations of it, just to get ideas and pick the one that you like the most, without having to ask it to generate another one and another one.
And they've upgraded their ability to create realistic images of basically everything you want. They're saying landscapes, plants, people, and animals, with true-to-life details.

(15:06):
Go to their website, see the examples. They are stunning, and there's zero way to tell between them and live photos.
What does all of that add up to? It adds up to professional capabilities to generate visual assets at scale, for anything that you want, all within Google Gemini, without having to move to a different platform.

(15:28):
Since Nano Banana was introduced, I am probably generating 90% of my images with Nano Banana. This will probably push it to the other extra 10%. It is just the best model out there when it comes to real-life usability.
So if you're just looking for a creative outlet, maybe Midjourney is still a little better, and I'm saying that as a maybe. But as far as usability for business people generating

(15:48):
assets for business projects, from my perspective right now, Gemini Nano Banana Pro is far ahead of everything else out there.
Combine it with the fact that it is available across all the different tools in Google, including where you need it the most, such as generating images for PowerPoints. And you understand why I switched to using it almost completely, even when I use it in complex processes in other tools.

(16:09):
Nano Banana is my go-to tool because of its ability to keep consistency, to change angles, to change lighting, and so on.
And that was before the introduction of Gemini 3.
But to put things in the broader context of how far we've come, I want to quote a segment out of Ethan Mollick's latest post. For those of you who don't know Ethan Mollick, you should know

(16:29):
him. Ethan Mollick is a professor at Wharton and an AI researcher, and he publishes really brilliant insights, both from his own and his team's research, as well as what he thinks about AI and its impact on the world, in his Substack, which is called One Useful Thing.
He shared the following about Gemini 3.
He talked about the fact that he released his first article in

(16:50):
the Substack about ChatGPT just as it came out, and how his predictions back then were relevant. But then he thought about what would be the best way to show the growth in capabilities between ChatGPT back then and Gemini 3 right now.
So while he was thinking about how to show the difference, he just prompted Gemini 3 to help him figure out the difference and display it in a way that would be relevant.

(17:10):
So he said: show me how far AI has come since this post, by doing stuff.
And Gemini responded with the following: I can definitely show you the difference. The tweet you shared is from November 2022, right on the cusp of the modern AI boom. Back then, the wow factor was that an AI could write a coherent paragraph or a simple poem about a silly concept, like, quote,

(17:31):
a candy-powered FTL drive escaping otters. To demonstrate how far we've come, I'm not just going to write text about it. I have built you a fully interactive, playable, candy-powered FTL starship simulator. In 2022, AI could describe the engine. In 2025, AI can code the engine, design the interface, and let you pilot

(17:52):
the ship yourself.
And then he talks about the big jump in coding tools, from being able to do code snippets, to being an actual, quote, worker that is writing code and working alongside you in creating or fixing significant portions of the code.
And he continues by talking about the fact that AI is now at PhD level across multiple aspects of work.

(18:14):
And how did he measure the PhD-level work? Well, he gave it work that he would give his PhD students and watched how well it performs the work. On some of the things it performed extremely well; on some others it didn't do that great, but it's definitely a huge improvement over anything we had before.
The summary of what Ethan wrote is: three years ago, we were impressed that a machine could write a poem about otters.

(18:34):
Less than a thousand days later, I am debating statistical methodology with an agent that built its own research environment. The era of chatbots is turning into the era of the digital coworker.
And while Gemini 3 is definitely the biggest news this week as far as model releases, it is definitely not the only release of a significant model.
OpenAI released GPT-5.1 Codex Max, which is their new agentic

(18:56):
environment to create code, and it can do a few really incredible things.
First of all, it is a lot more accurate than the previous one: it scores 77.9 accuracy on the SWE-bench Verified coding evaluation, compared to 73 in the previous model.
So almost a 10% increase, while reducing the number of thinking

(19:17):
tokens by 30%. So the cost to generate these more accurate, better results was cut by 30%, which is very significant.
But more interestingly, from my perspective, as far as its impact on the broader world of AI and not just on coding, it lies in this one paragraph that I'm going to quote from the release: GPT-5.1 Codex Max is built for long-running, detailed work.

(19:38):
It's our first model natively trained to operate across multiple context windows through a process called compaction, coherently working over millions of tokens in a single task. This unlocks project-scale refactors, deep debugging sessions, and multi-hour agent loops in their internal trials.

(19:58):
In some of these cases, this new model clocked flawless completion over 24 hours straight.
So while most of us are not computer developers, this is a breakthrough that we did not have before.
What they're basically saying is that the AI on its own knows how to skip from one chat to the other, basically continuing a coherent process across multiple chats,

(20:22):
more or less eliminating the context window limitation that models had before.
And I need to open parentheses and do a quick explanation for those of you who don't understand what the hell this means.
Every chat that you do in each and every one of those tools has a limited memory it can run in a single chat. It is called the context window, and it is measured in tokens.
So, tokens. For those of you who don't know, tokens are segments of words. This is what these tools actually generate.

(20:43):
They're token machines. They don't actually generate words; they generate segments of words that are called tokens.
Most of the models so far supported context windows of between 128,000 and 256,000 tokens. The biggest outliers were the recent Claude 4.5, with a million tokens, and Gemini 2.5 Pro, with 2 million tokens.
That was the uppermost limit that we got out of any model,

(21:05):
and what GPT-5.1 Codex Max can do is basically do this in an unlimited way, because it can keep on compacting the outcomes of the previous chat to start a new chat, while using only a very small portion of its context window, and now starting fresh and keeping on going. And then it can do it again and again and again, conceptually eliminating entirely the limit of the context

(21:25):
window.
And while they didn't say anything about this in the actual post itself, this completely changes how AI can work as a whole. Because if they can do this for code, you can do this for anything else, which means you can do tasks over hours, days, and weeks, conceptually. And as long as you can control the drift, you can now do tasks that previously were just not possible for AI and are becoming

(21:47):
possible right now.
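To make the compaction idea a bit more tangible, here is a rough Python sketch of the general technique. To be clear, this is an assumption about how such a loop can work in principle, not OpenAI's actual implementation; `summarize` stands in for a model call that compresses the history so far, and the token counting is a crude stand-in.

```python
COMPACT_THRESHOLD = 0.8  # compact when the window is ~80% full

def count_tokens(messages):
    # crude stand-in: roughly 1 token per 4 characters
    return sum(len(m) for m in messages) // 4

def run_long_task(steps, summarize, context_limit=200_000):
    """Work through steps, compacting history whenever the window nears its limit."""
    history = []
    for step in steps:
        if count_tokens(history) > context_limit * COMPACT_THRESHOLD:
            # replace the full history with a short summary and continue fresh
            history = [summarize(history)]
        history.append(step)
    return history
```

The key property is that the history never grows past the window, no matter how many steps there are, which is what makes the multi-hour agent loops described above possible.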
So this is the biggest take for me about this model, in addition to, obviously, its capability to write better code.
But as I shared in the beginning of this podcast, the model that came out of nowhere in a completely stealth release, and that is sharing the number one spot with Gemini 3 in most of the benchmarks and on most of the leaderboards together with Gemini 3, is

(22:08):
Grok 4.1.
So what is Grok 4.1 great at? Well, as I mentioned in the beginning, more or less everything. It has significantly improved, smarter and sharper emotional capabilities. It has better creative capability when it comes to writing. It is really good at real-world reasoning and delivering more empathetic and significantly less hallucinated chats. It is also very good at doing it faster than most models, and it

(22:30):
optimizes when to think and when not to think better than other models.
I have been using Grok to generate the preparation for these news episodes for a very long time, and every time a new model comes out from any of the other labs, I test Grok against it, and every time they keep on winning. And now they have an even better model that does an even better job at helping me in that process.

(22:51):
So while it may not be best at some things, on several different things it is dramatically better than other models, such as getting live data from the internet and summarizing it in the way that I want.
The other interesting aspect: if you remember when Grok came out, it was very edgy in its approach to everything, and that was what helped it stand out. Right now, Grok 4.1 is at the top of the EQ-Bench 3 benchmark, which measures understanding of human emotions, and

(23:14):
it is responding with more empathy than any other model out there.
This is more or less the opposite of what it did in the beginning, of being very sarcastic and very direct, which is a huge change, and big kudos to the xAI team.
Another interesting remark about the way they deployed it: they secretly, or if you want, quietly, deployed it to more and more users between November 1st and November 14th, to gauge people's

(23:35):
responses, how they're using it, and what the differences are between it and the previous model. And only then came the big announcement of the big release to everybody else, which I think is a very smart way to release models and test them, not just on LMArena, but on the actual user base at a smaller scale.
Overall, a highly capable model, and I will definitely test it across the different things that I do.
And as I mentioned, right now, between all the different people

(23:56):
on LMArena, it is sharing the number one spot, more or less on every benchmark, together with Gemini 3. Why would I probably use Gemini 3 more? Well, because it's integrated into my day-to-day life, because I am a Google platform user.
Another big, interesting release this week: Alibaba just unveiled a free Qwen app that is currently released in China on iOS, Android, web, and PC platforms.

(24:17):
And the goal is to create a go-to app hub for everything in the Qwen series, or, as the company phrased it, a smart personal assistant that not only chats, but gets things done.
From Alibaba's perspective, that's a very obvious next step. They're currently controlling a huge part of how people in China engage with the world, but the Qwen models so far have focused more on enterprise and not on end users.

(24:38):
And this is definitely a push in the opposite direction, going towards a consumer-based application that will integrate everything Alibaba into an AI application.
So what kind of features does it have? It allows you to do deep research, AI-assisted coding, a smart camera for visual queries, voice communication, just like ChatGPT's live voice mode,

(25:00):
and even multi-slide PowerPoint presentation generation from a single prompt. But as I mentioned, Alibaba, as the giant it is, has integrated it with everything Alibaba, so it includes services such as maps and food delivery, travel booking, an office suite, e-commerce, education, health guidance, et cetera, et cetera.
Basically, everything that Alibaba knows how to do will be integrated into this app, very similar to what Google is doing, and very similar to what OpenAI is attempting to do. Now, while

(25:22):
it's currently released only in China, they're already working on international variations of this, with the goal, obviously, to grow globally with this app.
Now, in addition to the big releases, we got a lot of new features this week. OpenAI just released group chat across the board for everybody. So following a one-week pilot in Japan and New Zealand, OpenAI is now deploying group chats for every logged-in user, including

(25:43):
Free, Go, Plus, and Pro plans worldwide.
What does that mean? It means that you can invite up to 20 participants via shareable links to join an existing chat. These chats live in a separate universe in your ChatGPT application, which means you can see them separately and see all your shared chats, but they also do not impact, or get impacted by, the memory of the application. Which is awesome, because it means that

(26:06):
you can share whatever you want in these chats, and your personal information and the memories about you and your business are not gonna be shared through this chat. And also, what you do in these chats is not gonna impact the memory of ChatGPT about you and your business, and so on.
Another important aspect of thisis all the human communication
somehow doesn't count towardsyour limits or your context
window.
Meaning when you have a regularconversation with people, it

(26:27):
does not consume tokens. However, when you ask the AI to do something, it will, which makes it a more efficient way to engage with AI while engaging with other people.
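As a rough mental model of that accounting, a group chat might meter only messages that invoke the AI. This is purely an illustrative sketch; the function and the "@AI" trigger are hypothetical, not OpenAI's actual mechanics:

```python
# Toy model of the token accounting described above: in a group chat,
# plain human-to-human messages cost nothing, and only messages that
# invoke the AI consume tokens. All names here are hypothetical.

def tokens_consumed(messages, tokens_per_word=1):
    """Count tokens only for messages that mention the AI."""
    total = 0
    for sender, text in messages:
        if "@ai" in text.lower():  # only AI invocations are metered
            total += len(text.split()) * tokens_per_word
    return total

chat = [
    ("alice", "Can we move the launch to Friday?"),
    ("bob", "Works for me."),
    ("alice", "@AI summarize the open action items please"),
]
# Only the third message (7 words) is metered; the first two are free.
```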
Now, this is obviously the future of how we are going to work. AI will be embedded into most conversations, processes, and projects that we're going to have in an organization, and

(26:48):
most likely up to 100% of work-related conversations, allowing us to tap into this AI capability as we need. Meaning, you can collaborate with people and AI agents, as needed, at the same time. In the work environment, this is going to be absolutely magical, because you can collaborate with other people on your areas of expertise, but every time you need a capability that AI can do

(27:11):
better than any of the team members, such as research, data collection from multiple sources (both internal and external), report generation, code writing, generating applications on the fly for things that you need, creative assistance, et cetera, et cetera, you can ask the AI to help and be a part of the conversation. And because it was a part of the conversation all the time, it has the full context of what's going on, and it can participate in the most effective way.

(27:32):
This is the holy grail of collaborative work between humans and AI, and I anticipate this to be, as I mentioned, everywhere: not just in the tools from the big labs, like Gemini and ChatGPT and so on, but also available in any other platform that we're using today, such as Slack, Microsoft Teams, et cetera. It is probably not going to be limited to only chat, which means it will also be

(27:53):
available in all the voice and video communication and live meetings. So whether Zoom, Teams, or Meet, et cetera, or actual live human meetings where there's an open microphone and/or a camera, the AI can participate. And I feel that this will become more or less natural, at least in some organizations, through 2026. While this sounds a little crazy to many people right now, I

(28:13):
do think that this is where it is going, and I think teams and organizations that learn how to work this way will see an incredible acceleration in their ability to do the things they want and need to do. So I highly recommend to all of you to learn how to do that as well. Staying on the topic of ChatGPT, a new functionality that is coming: a very interesting partnership between OpenAI and Intuit, the company behind TurboTax.

(28:34):
So Intuit is going to pay OpenAI more than a hundred million dollars per year for integrating and enabling ChatGPT users to tap into TurboTax for instant refund estimates, or Credit Karma credit reviews, directly in your chat. So, as with many other tools that integrate into ChatGPT, such as travel booking, ordering food, and ordering stuff from the

(28:54):
supermarket, this is going to be another application. And on the other side of this partnership, OpenAI is going to provide enterprise licenses for internal usage by Intuit employees, over 18,000 of them. And they're going to be used for everything from coding to customer support, research, et cetera. As I just mentioned, and as we detailed in previous episodes, this is a big push by OpenAI to have multiple applications

(29:15):
running inside of ChatGPT, such as Spotify and Shopify and Zillow and so on. And this is most likely going to be a big revenue channel for OpenAI in the future. Right now, it builds an ecosystem that will, from their perspective, be able to compete with centralized platforms such as Google, in order to drive people to do everything within the OpenAI environment.

(29:35):
That being said, this gets a lot more sensitive, right? There's a very big difference between ordering pasta and tomatoes from the supermarket, or even doing research about available flights to a specific destination, and allowing AI to get access to your financial information and taxes. But this is the direction that OpenAI is pushing in, and if they can make it work, it will open the doors to a lot more personal aspects of communication together with AI,

(29:58):
including other sensitive data, such as healthcare or legal, and so on. And speaking of travel and app capabilities, Google just expanded access to their AI-powered Flight Deals tool. So this tool, which was released earlier this year only in the US and Canada, is now available in beta in over 200 countries, including the UK, France, Germany, Mexico, Brazil, Indonesia,

(30:20):
Japan, Korea, and so on, and in more than 60 languages. Separately, in AI Mode, which is currently available through Google Labs on desktop, and only in the US, the new Canvas tool that comes as part of it lets users kick off complete trip planning with a single prompt, which will then generate hyper-personalized agendas pulled from real-time search data, Google Maps, reviews, photos, web

(30:42):
intelligence, and so on, to create a detailed, step-by-step plan for the trip. And because it's done in Canvas, you can then edit, change, copy, paste, and do whatever you want to every step of your trip. Now, in the initial step, which is what we have right now, AI Mode can only book restaurant reservations, for US users only,

(31:02):
by querying multiple aspects, such as party size, date, time, location, and the type of restaurant you want. But in the future, they are planning to integrate it with flight and hotel bookings as well, allowing you to not just plan, but also book, an entire trip by having a chat with your personal agent. But speaking of agents, as I mentioned in the beginning, Microsoft held their Ignite event this week, which was all

(31:23):
about, as you can expect, AI agents in the enterprise. The keynote was about two and a half hours long, so while I have highly recommended watching keynotes in the past, this is not one I recommend watching, but I will try to summarize the key points that came out of that entire event. In general, as I mentioned, it was about agents in the enterprise, but they added a few very important capabilities to push

(31:45):
the concept of enterprise-wide agent implementation to a whole different level. Maybe the most interesting aspect of this, for most of you, not being IT professionals and probably working in smaller businesses rather than huge enterprises, is the reduction of the price of Microsoft 365 Copilot for businesses from $30 to $21. That's a 30% discount, and the goal is very, very clear: to

(32:08):
make this accessible to other companies who have lower budgets for AI implementation, so more and more companies can afford to do that. Now, they also added several different layers to make the control and management of agent deployment more manageable for large organizations, and they introduced a tool called Agent 365. What Agent 365 is supposed to do is give IT professionals, or

(32:30):
whoever is going to manage the AI deployment in the organization, better visibility and control over all the different agents that are deployed across the organization. And they've included five different key capabilities. Registry, which is a single source of truth for the inventory of all agents, including shadow agents, meaning stuff that people develop on their own, so you can see exactly who's running them, where they are being used, and so on.

(32:51):
Access control, which requires a unique agent ID and enforces the principle of least privilege, to allow or not allow people to access different kinds of agents that can do different things in the organization. Visualization, which is unified dashboards to track connections and monitor ROI. Interoperability, which allows it to access organizational context via a new tool that they called Work IQ, and obviously

(33:13):
security, which is in-depth protection, using tools like Microsoft Defender and Purview applied to the AI agents' capabilities.
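To make the registry and least-privilege ideas concrete, here is a minimal sketch in Python. All class names, agent IDs, and permission scopes are hypothetical; this is illustrative only, not Microsoft's actual Agent 365 API:

```python
# Hypothetical sketch of the Agent 365 ideas described above: a single
# registry of agents (including "shadow" ones) with unique IDs, plus a
# least-privilege access check where only explicit grants are allowed.

class AgentRegistry:
    def __init__(self):
        self._agents = {}  # agent_id -> metadata

    def register(self, agent_id, owner, scopes, shadow=False):
        if agent_id in self._agents:
            raise ValueError(f"duplicate agent id: {agent_id}")
        self._agents[agent_id] = {
            "owner": owner,
            "scopes": set(scopes),  # least privilege: explicit grants only
            "shadow": shadow,       # built outside official IT channels
        }

    def can_access(self, agent_id, scope):
        agent = self._agents.get(agent_id)
        return agent is not None and scope in agent["scopes"]

    def shadow_agents(self):
        return [aid for aid, meta in self._agents.items() if meta["shadow"]]

registry = AgentRegistry()
registry.register("sales-summarizer", owner="dana", scopes=["crm.read"])
registry.register("payroll-bot", owner="sam", scopes=["hr.read"], shadow=True)
```

The key design point is that an agent gets no scope it was not explicitly granted, which mirrors the "principle of least privilege" mentioned above.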
They also created a new program called Agent Factory, which is designed to help organizations get assistance and move faster from ideas to production. It includes several different components. One of them is the ability to switch to a Microsoft agent

(33:35):
pre-purchase plan, also known as P3, which is a single metered plan that allows you to use agents across everything that you developed. It uses a new measurement called agent commit units, or ACUs, which replaces the concept of tokens in the agent world, meaning you commit to buying X number of these ACUs up front, and then you can run agents across your entire

(33:57):
organization, dramatically reducing the level of complexity of the different levels of licensing and setup that you needed before.
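The prepaid commit-unit model described above can be sketched as a simple drawdown pool. The API, unit amounts, and agent names here are assumptions for illustration, not Microsoft's actual billing mechanics:

```python
# Sketch of the prepaid "agent commit unit" (ACU) idea described above:
# buy a pool of units up front, then draw it down as agents run, while
# tracking per-agent usage to support the ROI analysis mentioned above.

class AcuPool:
    def __init__(self, committed_units):
        self.remaining = committed_units
        self.usage_by_agent = {}  # agent_id -> total units consumed

    def charge(self, agent_id, units):
        if units > self.remaining:
            raise RuntimeError("ACU pool exhausted; commit more units")
        self.remaining -= units
        self.usage_by_agent[agent_id] = (
            self.usage_by_agent.get(agent_id, 0) + units
        )

pool = AcuPool(committed_units=1_000)
pool.charge("invoice-agent", 120)
pool.charge("support-agent", 300)
pool.charge("invoice-agent", 80)
# remaining: 1000 - 120 - 300 - 80 = 500
```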
I think this is a very smart idea from Microsoft's perspective, driving this to much broader adoption in organizations, because you know what you are capping your organization at. And combining that with the ability to track which agents are actually being used, what they're being used for, and what the ROI is, allows you to then make much smarter decisions

(34:19):
as far as which agents to develop, how many of them to keep, which ones to tweak, and so on. This plan also includes an army of what they call forward deployed engineers, or FDEs, who are experts that are going to work together with clients of this program, helping them accelerate AI solutions and get them to production much faster than organizations can on their own. The Agent Factory also provides tailored, role-based training

(34:42):
across live and instructor-led training to push AI fluency across different kinds of teams in the organization. And as I mentioned earlier, they introduced a new tool called Work IQ, which is the intelligence layer that is going to power future Copilot agents. Work IQ helps Copilot understand the user, their job, their company, the data of the company, emails, files, et

(35:02):
cetera, and memory and inference, all at the same time. These tools thrive on one thing and one thing only, and that is context. The more context they have, the better they work. And the idea of Work IQ is to gather the user's context and provide it seamlessly to the agent, so the user doesn't have to, which in return will deliver much better results, which is very smart from Microsoft's perspective.

(35:24):
And very similar to what Google is doing, only with a much bigger focus on enterprise, they are integrating all these new capabilities into more or less everything Microsoft. They now have a Teams mode for Microsoft 365 Copilot, which means that your one-on-one Copilot chats can turn into group chats inside of Teams. We just talked about this as a concept, and now you know it is already available and possible. In addition, there is a

(35:47):
facilitator agent in Teams that is now generally available, and it's helping to manage agendas, take notes, and keep meetings on track. Now, the free Copilot chat integration in Outlook will soon be upgraded to be content-aware across your entire Outlook inbox, calendars, and meetings, giving the user access to everything in their day-to-day knowledge. And

(36:08):
agent mode in Word, Excel, and PowerPoint is coming to all Microsoft 365 subscribers, enabling them to generate complex documents, spreadsheets, presentations, and so on, straight in the apps while using AI. So where does that put us? It shows how aggressive Microsoft is about becoming the everything AI for any Microsoft user. And the more they integrate this into the existing

(36:28):
tools, and the more they allow it to be context- and content-aware across more and more aspects of the Microsoft ecosystem, the more it will provide value to users, which in return will increase the usage of AI across the board. Definitely a huge step forward from an enterprise management perspective, and from an agent creation and deployment perspective, from Microsoft in this recent announcement.

(36:48):
Staying at the enterprise level, Salesforce just announced a new capability called eVerse, which is a simulation sandbox that lets developers stress-test and refine voice and text agents using synthetic data for reinforcement learning. So what does that mean? It means that previously you could develop an agent, test it a little bit with people, and then deploy it to the real

(37:09):
world, letting it face real-world issues only when it goes live. Why? Because that was the only way to test it with live data. So what eVerse does is simulate real-world, chaotic data, such as bad connections, different accents, and cross-talking between different participants in the call, to expose the AI to real-world environments and real-world situations, allowing you to test models and then fix them

(37:32):
while still in the development phase, versus after rollout, which is highly beneficial. It is basically looking at your real data and then creating synthetic data that looks like the real data, in order for you to test these models.
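The synthetic-noise idea can be illustrated with a toy Python sketch that perturbs clean test utterances with call-center artifacts before they reach an agent under test. This is purely illustrative, with made-up probabilities, and is not Salesforce's actual implementation:

```python
import random

# Toy version of the synthetic-data idea described above: take clean
# test utterances and inject call artifacts (dropped words from bad
# connections, filler words) so an agent is stress-tested on messy
# input before deployment. The noise rates here are arbitrary.

def add_call_noise(utterance, rng):
    noisy = []
    for word in utterance.split():
        roll = rng.random()
        if roll < 0.1:
            continue               # word dropped (bad connection)
        if roll < 0.2:
            noisy.append("uh")     # filler word before the real word
        noisy.append(word)
    return " ".join(noisy)

rng = random.Random(42)  # seeded so the variants are reproducible
clean = "I would like to cancel my order from yesterday"
noisy_variants = [add_call_noise(clean, rng) for _ in range(5)]
```

Each variant would then be fed to the agent to check that it still extracts the right intent despite the corruption.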
This is not a new concept, but the fact that it's now built into Salesforce is obviously a big deal, because it will allow Salesforce users to develop even more agents that are safer and

(37:53):
better, and do it a lot faster and in significantly less time. Another enterprise-related piece of news is that GitHub Copilot CLI now has all the latest and greatest models, including GPT-5.1 and Gemini 3 Pro, which will enable significantly faster and bigger code generation and reviews inside of GitHub Copilot. And from all the different releases of this week, which was

(38:15):
definitely a lot, let's move to some scary projections coming from multiple angles as far as the impact of AI on jobs and on our world. We will start with Gartner. At the IT Symposium/Xpo in Barcelona, Gartner shared a somewhat apocalyptic vision of what they're calling job chaos in the near future, reaching a peak

(38:35):
between 2028 and 2029. Now, they are assuming that AI will ultimately create more jobs than it displaces, but that in the short term, it is going to create, as they called it, a jobs chaos. You've heard me share my personal opinion many, many times: I, first of all, do not necessarily agree that it will generate more jobs than it eliminates. I don't see how that is even possible, but let's assume they

(38:57):
are right. In the short to medium term, or, as they specifically said, 2028 to 2029, it is definitely going to take a lot more jobs away than it is going to create. And what they're saying is that it will force virtually every business to recalibrate how they're running the business: what is the org chart, what is the tech stack, and every other aspect of the business, in

(39:19):
order to stay competitive in this AI future. And they're not talking about a very distant future. They're describing four different scenarios that CEOs and leadership teams will have to consider when they're planning for this future. One is human oversight endures, which means a lean group of workers who man the fort, basically, while AI is running, and this group is monitoring and fixing all the small things that

(39:41):
AI is not doing well. This idea is obviously about keeping humans in the loop, with the goal of preventing mishaps while AI is running most of the show. Scenario number two is AI runs the show: autonomous agents that get basically full rein over business functions, with very little or no human involvement in routine workflows and jobs, from data crunching to basic logistics, customer

(40:02):
service, and so on. Scenario number three is an augmented-speed surge, which is many, many employees still in place, just running significantly faster, generating massively greater output than they can today. Think about it: the best immediate example is code writing and debugging, right? The ability of code writers to generate code right now is 10x or a hundred x what it was just a couple of years ago.

(40:24):
And the same thing with reviewing code and fixing it. Now take that concept and deploy it across every aspect of the business. And then scenario number four is revolutionary reinvention, which is AI pros harnessing the power of AI to completely change and reimagine aspects of different industries, making significant leaps very, very quickly, not just gradual change, dramatically changing the business landscape

(40:46):
of specific industries. Now, what they're saying is, no matter which scenario executive leaders choose, they need to be prepared to support all four of them, because potentially different aspects of the organization, or different industries, will require a mixed approach across all of the above. I agree with most of what they said, other than, as I mentioned, their long-term assumption that it will generate more jobs.

(41:06):
But as far as the short and medium term, we're a hundred percent in agreement, and it is crystal clear that organizations must invest in training both their employees and their leadership teams in order to stay relevant and competitive. This is no longer a question of being early adopters and trying to be geeky about AI. It is potentially a question of survival for many businesses, and I would say most businesses. Now, my company, Multiplai, this is

(41:29):
what we do, right? I mentioned in the beginning that I deliver AI workshops. I did six or seven of them in the last two and a half months, for different companies, from different industries, in different sizes, in different places around the world. And they are always tailored to the specific organization, and it is the ultimate accelerator of an entire company into the AI era. Now, most of the organizations I work with are either in the tens

(41:49):
of millions or hundreds of millions of dollars in revenue, but I definitely have outliers on both sides. I have several different clients in the billions, and I have several different clients in the single-digit millions. But if you are in a leadership position in your organization, and you feel that you're either not moving fast enough or that you need to get started, and you want to accelerate or kickstart the AI adoption process company-wide, these workshops are the perfect solution for you.

(42:11):
It is the ultimate accelerator of adoption. And just go back to the beginning of this episode, where I talked about what people were able to generate in just a day and a half of training, and you'll understand why this is a complete game changer if you want to push AI faster into 2026. But I also serve small companies and individuals who want to accelerate, and we have two courses for that. One is our AI Business Transformation course, which I

(42:33):
have been running for over two and a half years now, training thousands of business professionals and business leaders. And it shows you how to effectively deploy AI across multiple aspects of the business. The next cohort starts the week of January 20th, which is right around the corner, and it's perfect in order to kick off your 2026 with the right foot forward.

(42:53):
It is a four-week course, two hours a week, with homework and in-between hand-holding sessions where you can get to learn more, and it is the perfect way to get the basics of AI correct. We also just launched an advanced course for integrating AI assistants into workflow automation. This course is for people who already have a solid knowledge of using large language models and other AI capabilities, who

(43:15):
just want to take their knowledge to the next level and start automating business processes, tying it into their existing tech stack. This course starts on December 1st, with another session on December 8th. Again, it's the perfect way to get ready for 2026 and start creating automations and building a much more efficient organization. And all these insights are obviously not just coming from Gartner.

(43:36):
The reason I'm sharing this is because I've been working with multiple organizations across the board, but also because this week we got big hints from other organizations on how scary the current situation is, both from a workforce perspective as well as from a risk-to-humanity perspective. So PwC Global Chairman Mohamed Kande just stated that the rise of AI is likely to lead to fewer entry-level graduate jobs at

(43:57):
firms like PwC, as AI is taking over more and more of the tasks that were previously performed by junior staff. Now, he also said that the recent cuts they had were not due to AI, but he is definitely seeing this as a critical aspect of their future hiring strategy, and that their biggest problem right now is struggling to hire skilled AI

(44:17):
engineers to implement the technology even faster, which in return will reduce the need for even more entry-level jobs. He did share that PwC abandoned plans to continue increasing its headcount and is now focusing on hiring a different mix of people and skill sets, specifically focusing on AI capabilities and AI engineering. And if you need another example to show you how significant this

(44:39):
push is, it comes from the recent earnings call from Klarna. So Klarna is a European company that specializes in financing online purchases and credit lines, and they have been all in on AI since early 2023. In their Q3 earnings call, their CEO shared that their headcount plunged from over 5,500 people in 2022 to just 2,900

(45:01):
now. That is about half the employees. He also shared that AI is now handling tasks equivalent to 853 full-time employees, up from 700 just earlier this year, driving the revenue per employee to $1.1 million. Now, how significant is that? In an average SaaS company before AI, the revenue per employee is about $250,000.

(45:23):
So this is more than 4x the revenue per employee, because they're pushing AI across the board. By the way, to give you the other end of the scale: companies that are AI-first, basically AI-native companies from Silicon Valley that have been established in the past two to three years and are all AI-focused, their revenue per employee is over $2.3 million. So it's more than double what Klarna is doing right now, and it's about 10x

(45:45):
the average SaaS company.
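The multiples quoted here check out with quick arithmetic on the stated figures:

```python
# Quick sanity check of the revenue-per-employee figures quoted above.
saas_rev_per_emp = 250_000         # typical pre-AI SaaS company
klarna_rev_per_emp = 1_100_000     # Klarna's reported figure
ai_native_rev_per_emp = 2_300_000  # AI-native startups, per the episode

klarna_multiple = klarna_rev_per_emp / saas_rev_per_emp        # 4.4x, "more than 4x"
ai_native_multiple = ai_native_rev_per_emp / saas_rev_per_emp  # 9.2x, "about 10x"
headcount_ratio = 2_900 / 5_500    # ~0.53, i.e. "about half the employees"
```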
The other interesting aspect is that they have grown salaries on average from $126,000 in 2022 to $203,000 right now. Per their CEO, and I'm quoting: "We have made a commitment to our employees that all of these efficiency gains, and especially the applications of AI, should also, to some degree, come back to their paychecks."

(46:06):
So the half of the employees who weren't let go, and didn't leave as part of natural attrition, are making significantly more money. I'm not sure this average is actually fair to look at, because I'm assuming they've hired a significant number of AI developers, and because of the demand right now, these folks don't come cheap. That by itself will dramatically increase the average salary per employee. But based on what they're saying, this is not

(46:26):
limited to just developers; all employees are enjoying this benefit, as long as they are willing to, and pushing to, use the AI tools that the company is delivering to them. What does that translate to? A 108% revenue increase from 2022 to now. This is insane, and if you think about it, this is the dream of every CEO, especially at publicly traded companies: rapid growth

(46:50):
with no growth in expenses, and, in their case, even a reduction in the workforce in a very dramatic way. So the bottom line is: this is not slowing down. These tools will do more, and will allow companies to cut the workforce, or at least not grow the workforce, while allowing the company to grow. This is obviously not possible for all companies all at the same time, because there's a limited amount of demand for

(47:11):
every service or product that companies are selling, which means it will lead to an even bigger reduction in jobs as companies cannot grow any further and have to look at their current costs and look for cost cutting. And this will be a very simple way to do that, which is not a good thing for the global economy as a whole, because then you won't have consumers to actually consume the goods, because people will not have money.

(47:32):
But in the short term, this is a huge opportunity for companies to push more AI capabilities (hint: workshops) in order to gain a competitive edge, and for individuals, there is a very clear need to know how to use AI if you want to keep your job and potentially make more money in the future. So go check out our courses; your future self will

(47:53):
thank you if you do that. But beyond the impact on jobs, we got two different scary predictions from two of the leading labs. One of them is OpenAI, and the other is Anthropic. OpenAI just released a blog post called "AI Progress and Recommendations." In this blog post, OpenAI talks about the dramatic reduction in the cost per unit of intelligence, estimated at 40x per year,

(48:14):
based on the past few years and looking into the next few years. This dramatic reduction in cost means that more and more sophisticated capabilities will become significantly more accessible to everyone: large organizations and small organizations, including individual entrepreneurs, which in return will accelerate product development and potentially lower the barrier to entry in every area where AI can be applied.
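A 40x annual cost decline compounds dramatically, which is worth seeing as arithmetic. Assuming, for illustration, a unit of intelligence costs $1.00 today:

```python
# If the cost per unit of intelligence really falls ~40x per year,
# the compounding is dramatic. Starting from $1.00 per unit today:

def projected_cost(cost_today, years, annual_factor=40):
    """Cost after compounding the annual reduction for `years` years."""
    return cost_today / (annual_factor ** years)

# After 1 year: $0.025; after 2 years: $0.000625; after 3 years,
# the same unit of intelligence costs 64,000x less than today.
```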

(48:37):
Now, they're also saying that current AI capabilities are vastly underestimated by most people, definitely the broader public, as most people are still using AI primarily as basic chatbots or as an improved way to search the internet, instead of enjoying the incredible benefits that AI delivers. And they mentioned that systems that exist today already outperform the smartest humans in some of the most challenging

(49:00):
intellectual competitions in the world. We talked about the coding competition and the math competition in the past few episodes, sharing with you that AI is now better than the top humans in the world on both of these topics. So what they're saying is that, for professionals who are seeking career growth, the skill gap between general AI use and expert-level application is immense, and it is growing rapidly, and it requires a significant push towards more

(49:23):
training and more adoption of AI in order to remain competitive, either as individuals or as companies. And they also talked about the growth in the type of tasks that AI can do. So they said AI was initially able to do tasks that would take a person seconds, then moved to tackling tasks that would take an average person an hour. This trajectory suggests that companies will be able to create AI systems that will be capable of handling multi-day or

(49:45):
multi-week projects that will run autonomously, and that is not in the far future. Which means companies need to plan for that, arrange for that, and prepare for that, across both tactical and strategic investments, oversight, collaboration, and everything that is required to make this a successful transition. And now I want to quote what I feel is maybe the most important and scary aspect of this blog post.

(50:07):
So they said: "Although the potential upsides are enormous, we treat the risks of superintelligent systems as potentially catastrophic, and believe that empirically studying safety and alignment can help global decisions, like whether the whole field should slow development to more carefully study these systems as we get closer to systems capable of recursive

(50:28):
self-improvement.
Obviously, no one should deploy superintelligent systems without being able to robustly align and control them, and this requires more technical work." So what are they suggesting? They're suggesting shared standards and insights across the frontier labs, and they're advocating for agreements on safety principles, sharing safety research, and establishing mechanisms to

(50:49):
reduce the race dynamics that are going on in AI right now. They're also acknowledging that the changes driven by AI will most likely require us to change the fundamental socioeconomic contract that exists today, in order to support a completely new kind of future that will still allow us to sustain the kind of life we expect to sustain.

(51:09):
And a very similar statement was shared by Dario Amodei, the CEO of Anthropic, in an interview with Fortune magazine. He said, and I'm quoting, "I'm deeply uncomfortable" with the fact that there's too much concentration of power and technological capabilities in a short list of companies. And he's highlighting the existential risks of

(51:30):
superintelligence, predicting that systems that are smarter than humans could emerge by 2027 or 2028, which is just around the corner. What is he calling for? He is calling for US regulation, proposing a federal AI safety agency, similar to the FDA or the FAA, with red-teaming capabilities testing new models, while collaborating

(51:51):
on international treaties to prevent a dangerous global race. And I'm quoting: "We need to slow down the race a bit." Now, he is also saying that voluntary industry self-regulation has failed, because the AI profit-driven incentives are just too high, and they clash with safety priorities in this race to AGI and beyond. Now, I have said this all along:

(52:11):
we have to, as a global society, find a way to work together, China, the US, Russia, Europe, Japan, and everybody else, to figure this out together. I actually don't think it should be like the FDA or the FAA. I think it needs to be a group like the international monitoring of nuclear weapons, one that will be in charge of looking into what every single company and every single country

(52:34):
is doing around the world. And if you have the leaders of two of the most advanced labs telling us that they think they are running too fast, because the incentives are too strong and they're not going to stop, this is not just a red flag. This is sounding every alarm you can imagine, because what they're saying is based on what they are seeing in-house. This is not concepts. This is not me making predictions. This is stuff that they're seeing inside their labs, because

(52:55):
they can see six or 12 months into the future. And if they're saying that this is a very, very risky point, potentially in the future of the human race, but definitely in the AI race, and if both of them are saying that they need to slow down, they probably need to slow down. The thing is, without something slowing everybody else down, because of the business incentive, because of the financial incentive, and because of the billions and trillions of dollars that are involved, they will not slow down unless some group, body,

(53:19):
government, or other force compels them to do so. I truly hope that governments will pick this up and we'll figure this out. We'll talk more later about new regulations and laws coming from the Oval Office, and you'll see that this may or may not be the case, at least in the US, in the immediate future. Now, staying on the scary side of AI, and again, the goal was not to create a scary episode, but this just all

(53:39):
happened this week.
UBTech, which is a Chinese company that has been developing a robot called the Walker S2, just released a video showing their first full production batch of these AI robots, and this video is scary as hell. So this company has signed multiple deals with some of the leading manufacturers inside and outside of China when it comes

(54:01):
to manufacturing, and specifically car manufacturing, including BYD, Geely, Volkswagen, and Dongfeng Liuzhou Motors, and several other companies, driving their orders to over $113 million for these robots. One of the interesting things about the S2 robot is that it has battery packs in its back, and it has the capability to

(54:21):
swap its own batteries, which means it will never stop. As long as you have a charging station with enough batteries, these robots can come in, pull a new battery from the stack, pull out the old one, stick the new one in, and continue working, literally 24/7. Now, the reason the video is really scary, and we're going to put a link to it in the show notes, is that it shows hundreds of robots marching together out of the factory, all

(54:43):
synchronized.
It looks like a scene out of Terminator, or like Stormtroopers in Star Wars.
Think about masses of lines and lines and lines of robots marching together.
And while this particular robot was built for factory production floors, military variations are an obvious next step for countries and for manufacturers, which will drive, again, billions of dollars in revenue.

(55:05):
And as I mentioned, this is really, really scary from a very personal perspective.
Just check the video yourself and let me know on LinkedIn, or in any other way you wanna communicate with me, what you think about it.
Staying on robotics and how they perform: Figure AI, which is a company from the US, just pulled its Figure 02 humanoid robot from the BMW assembly lines, where it

(55:25):
has been working for an entire year.
And they shared some very interesting statistics about these robots.
So in 11 months, these robots helped build over 30,000 vehicles with zero major meltdowns.
This Figure 02 robot clocked over 1,250 runtime hours across 10-hour shifts Monday through Friday, loading over 90,000 sheets of metal and parts

(55:49):
into welding machines with 99-plus percent accuracy on the job.
In addition, the robots racked up an estimated 200 miles of walking inside the facility this past year.
And the reason they pulled them out is that they're now coming up with a new model, and they wanna learn by investigating the previous model in order to implement those learnings in the 03 model that is just coming out. Their CEO

(56:11):
shared a post on X with all this information, plus some images showing the war wounds, as they said, of these robots: small scarring and scratches on their metal bodies. But overall, as I mentioned, 99-plus percent accuracy and deliverability across these factories.
A very big success for this company, which is now coming up with its new model.
I told you before that we're going to touch a little bit on

(56:32):
new regulation coming from the White House.
So the Trump administration is now crafting an executive order to basically block state-level AI regulation, prioritizing federal, national-level innovation over fragmented local rules.
Now, this executive order, a draft of which has made its way to The Information, would task the Attorney General with forming a unit to challenge state AI regulations on

(56:55):
constitutional grounds, basically zeroing in on several different measures in California and Colorado that demand AI transparency and other safeguards that might slow down innovation.
And they do not want that to turn into a 50-state patchwork across the entire country, which would slow down AI innovation.
Now, to enforce this, the executive order would instruct the

(57:16):
Commerce Department to withhold federal broadband funding from states with, quote, burdensome AI rules that might erode US global AI leadership, which is fully aligned with the White House July AI Action Plan that I shared back then.
Now, in parallel, the Commerce Secretary will assemble a team that would collaborate with the Federal Communications

(57:36):
Commission, the FCC, and the White House AI advisor David Sacks, to draft a unified federal AI standard, which will pave the way to the right conditions to maintain America's edge in the international AI race.
This does not sound like something consistent with the alarms sounded by OpenAI and Anthropic on the risk of continuing to run at

(58:00):
the pace we're running right now.
It definitely sounds like we need to beat China or else, and we will do everything in order to make that happen.
Now, I might be wrong, and I hope I am wrong, but currently it is fueled by the current administration's core beliefs, and people like David Sacks are a very clear example of that.
And in addition, it is driven by super PACs that are targeting

(58:20):
anti-AI regulation, such as the one the Andreessen Horowitz firm just poured $50 million into, to reduce the amount of AI regulation under this administration.
So on one hand, I am happy that there's not gonna be a state-specific patchwork of different regulations, on both aspects, both slowing innovation down as well as increasing risk

(58:41):
because you won't be able to have a unified approach to all of it.
And I see this as an opportunity to potentially generate a more controlled nationwide, and then maybe global, environment, because it will come from the US federal government and not from specific states.
But I don't have a feeling that this is the direction it is going.
And this is somewhat, or maybe more than somewhat, troublesome, at least from my personal perspective.

(59:02):
And now for some holiday-related news.
According to a November 20th release from a nonprofit called Fairplay, parents should stay away from buying AI toys this season due to serious risks in aspects such as data invasion, emotional risks, and disrupted human play.
And they're claiming that some of these toys have already

(59:23):
sparked obsessive use, explicit chats, violent prompts, and self-harm nudges in young individuals.
Now, Fairplay warns that these gadgets, which are often chatbots stuffed into teddy bears and different kinds of stuffed animals, might also erode these kids' ability to develop human relationships, as they will develop dependencies on these furry, cute little animals that can now talk to them, and

(59:44):
that might hurt their long-term ability to develop human relationships, and even the sensory skills that are required for them to develop normally.
Now, while in the future I see these kinds of toys as an incredible way for us to drive early teaching of different skills, I do believe that the current versions are very far from that.
They're built as toys with very little supervision and control

(01:00:07):
over how they engage with your young kids.
And hence, I tend to agree with Fairplay's assessment and recommendation not to buy these toys for your kids right now.
Connecting it to the previous point, I really hope that in the future we'll have regulation that will define exactly what kind of AI tools and what level of engagement can be delivered to young individuals, either through computers, their cell phones, or cute little furry

(01:00:29):
toys.
So as you're planning your Thanksgiving shopping or other holiday shopping, take that into consideration.
And on that note, happy Thanksgiving to all of you.
We have a lot to be grateful for, and I will start by saying that I'm really, really grateful and thankful for each and every one of you: for listening to this podcast, for inviting me to deliver workshops to your companies, for attending my courses, for

(01:00:50):
participating in our Friday hangouts, and for being a part of my personal journey of driving towards a future where AI is a part of our lives, making it better rather than making it worse or putting it at risk.
So, a better future driven by AI.
That is it for this week.
Have a happy Thanksgiving, and I will be back on Tuesday with

(01:01:11):
another how-to episode, where we're going to dive into how to implement AI for a specific aspect of your business.
And until then, have an amazingrest of your weekend.