
May 24, 2025 • 56 mins

👉 Fill out the listener survey - https://services.multiplai.ai/lai-survey
👉 Learn more about the AI Business Transformation Course starting May 12 — spots are limited - http://multiplai.ai/ai-course/ 

Is your business ready for the AI-powered arms race of the century?

This wasn’t just another week in tech — it was the week. Microsoft revealed its plan to automate the enterprise. Google’s scrambling to catch up with AI-first everything. OpenAI just spent $6.5 billion to build devices that could replace your phone — and your keyboard. If your company still thinks of AI as "optional," you're already behind.

Here’s the real takeaway:
We're not watching a tech trend — we’re watching the restructuring of how work, business, and even the internet itself will function in the coming months (not years).

Recommendation:
If you lead a business and want to stay competitive, you need to understand what happened this week — because your competitors definitely will.

In this session, you'll discover:

  • Microsoft’s bold AI strategy: enterprise-wide agents that can replace whole departments
  • The new AI web protocol (NL Web) that may kill traditional websites
  • Google’s AI tab in Search, Gemini updates, and why Project Astra could make you feel like Iron Man
  • Claude Opus 4's jaw-dropping 7-hour autonomous coding run — and what that means for your dev team
  • The $6.5B bet OpenAI made with Jony Ive — and what it reveals about your hardware future
  • Klarna’s 40% workforce reduction powered by AI (with revenue still rising)
  • Why Satya Nadella is building AI to replace himself (no, seriously)
  • Smartglasses arms race: Meta, Apple, Google — who’s going to own your face?
  • The scary-cool future of wearable AI, brain chips, and what it might mean for your kids

🎥 Watch the full conversation between Sam Altman and Jony Ive here:
https://www.youtube.com/watch?v=W09bIpc_3ms

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Jony recently gave me one of the prototypes of the device

(00:02):
for the first time to take home, and I've been able to live with it, and I think it is the coolest piece of technology that the world will have ever seen.

(00:10):
Hello, and welcome to the craziest weekend news episode of the Leveraging AI podcast so far. This is Isar Metis, your host. And the quote you just heard comes from none other than Sam Altman. And when he's saying something is the coolest piece of tech he's ever seen, well, we should be paying attention. And so, I know I promised you last week we're gonna dive into

(00:32):
a specific topic in this deep dive. However, this week had the biggest, most impactful announcements in the history of AI news, at least in the lifetime of this podcast, or maybe since the day the first ChatGPT was launched to the world. And to be fair, it's gonna be a lot more impactful and profound on our day-to-day lives, and definitely on our business lives,

(00:55):
in the immediate and near future. So we have a lot to talk about in the deep dive, and then, with the time left, we'll add some rapid-fire items, because there's a lot to talk about over there as well. The rest will be in our newsletter, which you can sign up for via the link in the show notes. But as I mentioned, because there's a lot of exciting stuff to talk about, let's get started.

(01:21):
Before we dive in, I would like to apologize for my stuffy nose. I've been fighting some serious allergies in the last 48 hours, so I hope that doesn't bother you too much. I hope both you and I will get used to it within three sentences, and then everything will be okay. There were two really big and important events this week: the first one was Microsoft Build, and the other was Google

(01:43):
I/O. Both companies are obviously 100% all in on AI, but there were two very different vibes and focuses at these two events. Microsoft focused a lot more on the enterprise, and Google focused a lot more on personal use and small businesses, which makes sense, because these are the audiences they mostly cater to.

(02:03):
Let's start with Microsoft and what they announced at Microsoft Build. The list is very long, but I wanna start with the high level, and the high level is very, very obvious: Microsoft is building an end-to-end enterprise AI strategy that will integrate everything in the enterprise, beginning to end, from customer service to data research, to management, leadership, research, code writing, deployments,

(02:27):
application development, and maybe even the web itself, around a unified AI strategy. I must admit that from that perspective, their presentation was significantly more impressive than Google's. Not taking anything away from what Google announced; I'll get to that in a minute. But as a cohesive, clear strategy addressing their exact

(02:49):
target market, they did an incredible job. So what are the things that they announced? Well, first of all, they announced what they're calling the open agentic web. They are planning, as I mentioned, to unify everything that we know, both in our personal lives as well as in businesses, around agents. They started with the burning-hot topic of writing code.

(03:11):
So there was a lot about Copilot Studio and Copilot in VS Code, and about open-sourcing Copilot in VS Code. Their new copilot for code writing is supposed to be an end-to-end agent, going, to quote Satya Nadella, from a pair programmer to a peer programmer, and the goal is that you can assign

(03:32):
tasks and bugs to it, chat with it back and forth as if it is a member of your development team, and it will be able to do these tasks, long or short, and be a part of the entire development process, versus just delivering some snippets. They also introduced, or actually shared, because they introduced it before, Notebooks, which does the same things that

(03:52):
NotebookLM does, but it's connected to everything in your Microsoft 365 suite and beyond, meaning you can drop in whatever data you want and create specific notebooks that also integrate everything else that they shared. They shared a bunch of agents that are already available in Microsoft Copilot: a Researcher agent that can search the web and enterprise data for any information that you want.

(04:13):
An Analyst agent, which can take raw data and create detailed reports including analysis, forecasting, and so on; basically everything an analyst does, again from external data and internal data as well. There is going to be an agent store where you can hire agents for multiple tasks, and if you are a developer of agents, you can post them to the store, either internally for your

(04:33):
company to use or for anybody to use. And that will be a whole new business sector of companies and individuals developing agents. Agents will be members in Teams, so whatever agent you assign a task to, you can chat with it in Teams as if it's a regular member of your company or of your team. They announced Copilot Tuning, meaning you can now fine-tune

(04:54):
agents and the way they work around your company's data, style, tone, policies, et cetera. And you can replace the models running behind agents; you're not tied to just OpenAI. You can literally use any model that they have in the backend, which more and more now includes Grok from Elon Musk, who is the arch nemesis of Sam Altman, whose company has been Microsoft's closest

(05:15):
partner so far. We announced that previously in one of the episodes not too long ago, but now it's actually live. So you have access to multiple language models in the backend, which you can swap as you wish during or after the creation of agents. They announced a new platform for multi-agent orchestration, which will allow delivering significantly more complex tasks through multiple agents working together.

(05:37):
They increased their observability, the ability to see what the agents are doing, across multiple aspects, so we have more control at the enterprise level. They announced Entra Agent ID, an agent directory and access control, and the goal is to be able to provision who can see what through agents, which is one of the biggest challenges that has stopped deployment of agents across enterprises. Because once you connect agents or chats to multiple data

(06:01):
points, how do you keep the data access the way it was before? So this Entra infrastructure is supposed to solve that. They also introduced Foundry Local. So, Foundry is their platform on Azure that allows you to run and do everything in AI. Well, now you can do this locally, on a Mac or a PC with normal capabilities, and develop AI capabilities locally on your

(06:21):
machine or on a small local server. They announced the release of Windows AI Foundry, which is the infrastructure that they've used internally to develop all these tools and that is now available for anybody to use as an infrastructure for development. They are also announcing support for MCP on Windows, so you can now connect everything in Windows and in Office 365 to various MCP

(06:43):
servers, to connect them to other platforms, tools, and data from multiple sources.
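By the way, if you've never seen what an MCP server actually looks like, here's a minimal sketch using the reference Python SDK (pip install "mcp[cli]"). The inventory tool is a made-up example of mine, not anything Microsoft shipped; the point is just that any client that speaks MCP, on Windows or anywhere else, could discover and call it:

```python
# A minimal MCP server sketch using the reference Python SDK.
# The tool and data are toy examples, not a real product integration.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("inventory-demo")  # hypothetical server name

@mcp.tool()
def check_stock(sku: str) -> str:
    """Return the stock level for a product SKU (toy in-memory data)."""
    fake_db = {"SKU-123": 42, "SKU-456": 0}
    count = fake_db.get(sku)
    if count is None:
        return f"Unknown SKU: {sku}"
    return f"{sku}: {count} units in stock"

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default, so a local client can connect
```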
They announced that Defender, their platform that supports IT security, is now going to also cover Foundry, meaning all of your AI infrastructure, tools, applications, agents, and so on will be covered and protected by Defender. And they announced digital twins, which will allow you to

(07:05):
develop digital twins. Similar to the concepts that you can build with Nvidia's tools, you can now build them in a Microsoft platform. And there was one more announcement that was just one line item in this whole crazy list of things they announced, but I think for most of us, people who are not in large enterprises and do not run large enterprises, it's gonna be the most profound.

(07:25):
They announced NL Web, which is an agentic application layer for websites. In other words, if you think about how the web works right now, everything runs on the HTTP protocol; meaning, that's how the internet works: one protocol was created that allows everybody to access websites. Well, NL Web is supposed to be that for agents.

(07:45):
Think about preparing your website for the agentic world with just a few lines of code connecting it to NL Web, which will then allow agents to seamlessly read, see, and engage with your website. As I mentioned multiple times on this podcast, I think the web as we know it today will cease to exist sometime in the next few years, because we'll see less and less human traffic going to

(08:08):
websites and more and more agent traffic going to websites. What that will look like, nobody knows, but NL Web sounds like a step in the right direction of defining a standard way in which agents can integrate and engage with websites.
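To make that concrete, here's a toy sketch in Python of the kind of single, agent-facing endpoint that NL Web is pointing at. To be clear, this is not NL Web's actual API; the endpoint name and the catalog data are my own stand-ins, just to show the idea of agents querying one natural-language entry point instead of scraping your human-oriented pages:

```python
# Hypothetical sketch only; NL Web's real protocol may differ. The idea:
# one endpoint where an agent asks a question and gets structured data back.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in for the site's existing structured data (e.g. Schema.org items).
CATALOG = [
    {"name": "AI Business Transformation Course", "type": "Course"},
    {"name": "Leveraging AI newsletter", "type": "Newsletter"},
]

class AskHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/ask":  # single agent-facing endpoint (assumed name)
            self.send_error(404)
            return
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        question = body.get("question", "").lower()
        # Trivial keyword matching stands in for the real natural-language
        # layer, which would hand the question to a language model.
        hits = [item for item in CATALOG
                if any(word in item["name"].lower() for word in question.split())]
        payload = json.dumps({"question": question, "results": hits}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), AskHandler).serve_forever()
```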
Microsoft also announced Microsoft Discovery for the scientific domain. And overall, as I mentioned, they are building the entire end

(08:30):
to end of everything agents and AI, from infrastructure, to data, to AI platforms, and apps and agents on top of all of that, and they're providing a unified solution to do this. This is obviously not gonna be something that somebody can do in their garage. This is for large enterprises, but the presentation, as I mentioned, was very impressive, cohesive, and sounds extremely

(08:53):
powerful. And at the same exact time, 750 miles away, Google had Google I/O, which introduced a huge range of features and things that are gonna be added, or are already added, to the Google environment, all announced in this one event. By the way, that's something I think we're gonna see more and more, and some of the Microsoft stuff was the same: these companies are gonna release often and release quick

(09:15):
every time they have something that is ready. And then the big events will be more of a unified introduction of how everything works together, versus the actual release of the different tools. Some things will still be released at the big events, but mostly they will ship as they're ready, because the competition just forces them to do that. So the Google event, as I mentioned, focused mostly on small

(09:36):
businesses and personal usage, which, as I mentioned, is most of their audience. And Google, in the same way, is integrating AI into everything. The biggest piece of news from Google is that there's now gonna be a Google AI tab in Google Search. So if you think about how the whole Google thing evolved: well, Google wrote the paper that started the generative AI

(09:58):
craziness, with the "Attention Is All You Need" paper back in 2017. But the first company to actually come up with something significant was OpenAI, followed by a lot of other companies, and Google was scrambling in the beginning. And I said very early on in this podcast that I think Google will take the prime spot in this, because they have everything they need in order to be successful.

(10:19):
They have more data than everyone. They have more compute than everyone. They have more distribution than everyone. They literally have more talent. They literally have everything they need in order to be ahead. Their biggest problem was the business model: Google's business model depends on billions upon billions of dollars of revenue from ads on Google Search.

(10:39):
And they couldn't risk that, or at least they had to delay it until the point where they had no choice. And that point is now. As it starts becoming very clear that people are gonna stop searching on Google and start using whatever their favorite tool is, whether it's Perplexity or ChatGPT or anything else, Google knows that they have to protect their turf, and they're introducing an AI tab in Google Search that gives you AI-based

(11:03):
answers instead of links, or actually somewhat of a combination of the two, but with a focus on AI results. It's already available in Google Search. I must admit I was not highly impressed, and in the two times that I tried it, it crashed and didn't work, and actually showed me just regular Google results in a different user interface. So I don't think it's fully baked yet.

(11:25):
I have zero doubt that they will figure it out, because, as I mentioned, they don't have a choice. So what other things did Google announce? They announced a new version of Gemini 2.5 Pro, where even the existing version outperforms most of the rivals on most things. They've been topping the Chatbot Arena for a while now, and they also ranked number one on the WebDev Arena for code writing,

(11:48):
at least until the next thing that we're going to talk about. But it is definitely a very solid tool. I really like Gemini 2.5; I do more and more with it. It is also now the number one performer on the Humanity's Last Exam benchmark. They also introduced a new mode called Deep Think, which is a new reasoning mode that is used for more advanced research

(12:09):
techniques, and it can evaluate multiple hypotheses and boost accuracy on complex queries. They introduced Gemini Canvas, which is absolutely fantastic. One of the main reasons I have stuck with mostly ChatGPT versus Gemini or Claude in the past few months is that the ChatGPT Canvas is just the best collaboration tool there is

(12:30):
with AI, at least so far. So now Gemini has similar features, which is closing the gap on the capabilities that OpenAI had in their platform. And the idea is that you can engage and partner with the AI in a shared document on the right side of the screen, for creating either documents or code, and it has tone control and a lot of other things you can do in a much easier user interface.

(12:53):
And you can also export to Google Docs with a single click, which is really cool. They're now including Audio Overviews, so the feature that was previously part of NotebookLM is now available in Gemini. I already use it in combination with Deep Research, and it's absolutely fantastic. As an example, before meetings that I have with potential clients, I give a task to Gemini Deep Research to go and research

(13:14):
the company, the people I am going to meet, their history, what they're focusing on, and the current things they're doing with AI. And then I'm turning it into a podcast that I can listen to in the car, or when I'm walking the dog, preparing myself for the meeting. They also introduced a cool capability to update and make changes to images, whether AI-generated or actual images that you took and uploaded: you can remove

(13:35):
the background, add elements, and make other changes, and it can do this in 45 different languages. And then there are three huge announcements of platforms that are being upgraded to a much higher level. One is the video generator, Veo. So Veo 2 was very impressive, maybe the best out there, but Veo 3 is nothing short of incredible. Veo 3, in addition

(13:58):
to updating and upgrading everything, so you get higher resolution, higher consistency, and more styles, also knows how to add background sound, sound effects, and voice conversations generated by the AI from text prompts. One point about that: right now, Veo 3 only has text prompting. You don't have image-to-video prompting, which is a

(14:19):
disadvantage. I'm sure that is going to get resolved. Also, extending videos, which is a really cool feature that Veo has, actually works with Veo 2. So if you create a video with Veo 3 and you wanna extend it, which again is awesome, it's gonna extend it with Veo 2. Again, I'm sure these things will be resolved sometime in the immediate future. They also announced Flow, which is a creator suite that includes

(14:43):
text and Gemini capabilities, as well as Imagen 3, their latest image generation model, together with Google Veo 3. But it is a lot more expensive than any other tool out there: it's $250 a month, with some limitations on the quantity that you can generate. That is expensive if you're just toying around and wanna play with it, but it is almost free if you're an actual real creator of video

(15:06):
content, because one photo shoot for a 10-minute video will cost you a hundred times that amount. And so it is an extremely, extremely powerful tool that can generate videos that look highly realistic, with very detailed prompt following, like nothing we have from any of the other companies right now. The second project that has made a huge step forward is Project

(15:27):
Mariner. Project Mariner is an agent designed to help consumers do things like purchase tickets to sporting events, buy groceries, and so on, and they are going to release this as a first step and then build on top of it. It's basically what I said before: the ability to engage with the internet through an agent, versus you doing it yourself. It will search through hundreds of websites, find your stuff

(15:48):
based on your specific preferences, price points, and so on, and it can even shop for you, because it's integrated with the shopping platform from Google. And then there are the updates to Project Astra. Project Astra was introduced last year at I/O 24, where they showed Google glasses walking around, doing different things, and engaging with the real world. Well, Astra just got a lot of updates, and the biggest one is its

(16:11):
ability to be proactive. Meaning, the previous Astra tool allowed you to open your phone, or glasses, which we'll talk about in a minute, and ask the model about different things that it can see in view or hear at that particular point. Well, the new Astra can look at what's happening and suggest and intervene where it thinks it can help you with specific things,

(16:32):
whether it is a student doing their homework and getting stuck, so it can jump in there, or you trying to work on something in your garage, or to install things in the house, or anything at a factory, where it will figure out when you're stuck. Literally anything: if it can see it and it's trained on it, it can understand when you're struggling and jump in and suggest assistance. That's a whole different level of AI assistance, like we've

(16:53):
never seen before. Another cool new feature in Astra is Highlight. So if you hold your phone in front of a lot of objects and you ask it to look for something, it can find it and highlight it for you on the screen. Think about packing for a trip and looking for specific items, making sure they're all there, or an assembly line looking for specific parts, or packing a shipment where you need

(17:14):
to verify that all the things are there before you put them in the box. These are the things these tools will be able to do out of the box, without any additional programming or tuning. So, a quick summary of these two events: both companies are all in on AI, each in its own unique way. Microsoft seems to be a lot more structured and strategic, while Google seems to be under really big stress, with

(17:35):
everything that's going on around the trial of potentially breaking up Google, as well as the risk of AI taking some of their market share in search. So it seems more like Google is scrambling to add AI into everything they're doing. Not necessarily in a bad way; a lot of these things are awesome. But the Microsoft presentation was definitely a lot more cohesive as a strategic

(17:56):
overall approach. Before we jump to the next topic: a little experiment that I did with the new Gemini literally blew my mind, and from my perspective it signals maybe the end of CRMs; if not, it's definitely moving in that direction. I went in yesterday, after all the announcements, and I said, okay, let's see how good it really is now at understanding data and getting access to my data. And I literally asked it to look at all the proposals that I sent

(18:18):
this year for AI training, education, and consultancy to multiple companies, to find them in my Google Drive and in my Gmail account, and to create a summary table of each proposal: What was the amount? What was the company? What was the last conversation date? What was the last thing that was discussed? What are the open action items, and what does it suggest I take as actions in order to close these deals?

(18:40):
And I got a detailed summary with everything in it. It didn't get all the proposals in the first run; I asked it to check if there were more, and then it actually found all of them, across multiple folders in multiple places in my Google Drive, and then, through my Gmail account, the last communications with each and every one of those people. This is available right now in Gemini, and it's an incredibly powerful capability

(19:04):
if you're a Gemini user. I assume, though I don't know it for a fact, that Microsoft Copilot can do similar things with SharePoint and Outlook, but this was the promise since day one, right? The next step will be connecting it to other sources through MCP servers, and then the real magic happens, when you can ask a question about the status of a project or a specific situation

(19:27):
in your company, and it will be able to gather information from numerous sources and bring you a helpful, actionable summary of exactly what's going on, based on your needs and your level of detail. And I think we'll get there this year.
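If you want to approximate this experiment programmatically, here's a minimal sketch with Google's google-genai Python SDK. One big caveat: in my experiment, the Drive and Gmail lookups happened through the Gemini app's built-in Workspace integration, which this public-API sketch doesn't replicate, so it just pastes exported text in to show the shape of the prompt. The model name and file name are assumptions:

```python
# A minimal sketch, not the exact in-app feature (pip install google-genai).
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")  # hypothetical placeholder key

prompt = """
You are my sales assistant. From the proposal documents and email threads
pasted below, build one summary table with these columns:
company | proposal amount | last conversation date | last thing discussed |
open action items | suggested next step to close the deal.
If a proposal seems to be missing, say so explicitly.

--- PASTED PROPOSALS AND EMAILS ---
{source_text}
"""

# Stand-in for the Drive/Gmail access the Gemini app does natively.
source_text = open("proposals_dump.txt").read()

response = client.models.generate_content(
    model="gemini-2.5-pro",  # assumed model name; check the current list
    contents=prompt.format(source_text=source_text),
)
print(response.text)
```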
But that was just two of the four things we gotta talk about.

(19:47):
So, still this week, Anthropic introduced Claude 4, both Claude Opus 4 and Claude Sonnet 4. Both are completely new models that are absolutely incredible to work with, and some of the stats are absolutely mind-blowing. As an example, Opus 4 achieved a groundbreaking 72.5%

(20:08):
score on the SWE-bench coding benchmark. To put things in perspective, the second-best tools so far were GPT-4.1 with 54.6 and Gemini 2.5 with 63.2. So the best so far was Gemini 2.5 Pro, as I mentioned earlier; it was the king until this new Claude model scored 72.5 on the benchmark.

(20:29):
But I think benchmarks are less interesting. What really blew my mind is a statistic from a project by Rakuten, who stated that Opus 4 coded autonomously for seven hours straight. Now, if you remember, we recently shared with you research that was showing how long AI agents can code effectively, and how

(20:50):
that is accelerating over time. Well, seven hours is completely outside of that scale, right? The scale was in minutes, then tens of minutes, then maybe an hour, and now it's seven hours of effective coding from one prompt by Opus 4. This is a whole different kind of animal. Now, one of the cool things about this Claude 4 release,

(21:12):
both Opus and Sonnet, is that they are, from my perspective, the perfect mix between three different things: agentic work, traditional models, and reasoning models. I tried it on multiple tasks in the past 48 hours, and I'm amazed at how well it works in understanding what I want, researching on its own things that I just hinted at and

(21:33):
didn't really provide all the information for, and providing very accurate and helpful results by thinking through different things and running through others. It does all this extremely quickly, you can jump in and see what it's actually thinking about, and it's just magical. It just feels like this is how AI should have been from the beginning. So kudos to Claude for that.
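For the developers listening, here's a minimal sketch of calling the new model through Anthropic's Python SDK (pip install anthropic). The exact model ID string is my assumption; check Anthropic's model docs for the current one:

```python
# A minimal sketch of calling Claude Opus 4 via Anthropic's Python SDK.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-opus-4-20250514",  # assumed ID for the Opus 4 release
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": "Write a Python function that merges two sorted lists, "
                       "then walk me through how you'd test it.",
        }
    ],
)
print(message.content[0].text)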

(21:53):
In addition, I started coding different games with the kids, and that's really, really fun, because it runs the code within Claude Artifacts. So you can literally ask it to code whatever game you want, and it will code the game, and then you can make changes with additional prompts, and it just runs smoothly every single time. And it's just such a refreshing experience to be able to sit with your kid, think about a game, create it, and five minutes later already play it on your

(22:15):
computer. Absolutely magic. Two more capabilities that they added: it now integrates with your Google Drive and with your Gmail account for researching information over there, and in the little dropdown menu you can use toggle buttons to stop it from searching the web and have it search only that information, so you can research your own stuff. I tried to test the Gmail integration today, and it didn't work well.

(22:37):
The Google Drive integration worked amazingly well, and I'm sure they will solve the Gmail stuff as well. So, similar to what we've seen from Gemini: if you're not necessarily a hardcore Gemini user and you love Claude, you can use Claude to do similar things. And then, for the last component of the deep dive and the last big announcement from this week, I wanna start with a longer

(22:58):
snippet. And so io is merging with OpenAI, formed with the mission of figuring out how to create a family of devices that would let people use AI to create all sorts of wonderful things. The first one we've been working on, I think, has just

(23:19):
completely captured ourimagination.
You know, Johnny called one dayand said, this is the best work
our team's ever done.
Yeah.
I mean, Johnny did the iPhone,Johnny did the MacBook Pro.
I mean these are, these are likethe defining ways people use
technology.
It's hard to beat those things.
Those are really wonderful.
Johnny recently gave me one ofthe prototypes that the device

(23:39):
for the first time to take home, and I've been able to live with it, and I think it is the coolest piece of technology that the world will have ever seen.

(23:47):
This segment that you just heard comes from a video that was released by OpenAI, sharing that OpenAI just acquired Jony Ive's team for $6.5 billion in stock. Now, who the hell is Jony Ive, why the hell are they paying so much money, and what the hell is going on? So let me break this down for you and explain the

(24:10):
story.
First of all, who's Johnny ive.
Johnny Ive is the iconic productdesigner behind everything
modern computing that we know.
The original iMac that put Appleback on the map together with
Steve Jobs, the iPod, theiPhone, the MacBook, the MacBook
Air.
You get the point.
Basically every new version ofdevices that changed the planet

(24:34):
was created by this guy.
He left Apple some five or sixyears ago and started a company
called Love From That did mostlyconsulting and didn't really
create any new products.
Johnny and Sam Altman met acouple of years ago and they
decided that they need to createa new kind of device and a new
company that will change theinterface to the AI universe.

(24:55):
And this is how IO was funded,which was a new company in which
OpenAI invested and owns 20%.
Well, now both IO and the Loveform team are rolling into
OpenAI with a goal of building afamily of devices.
So not one device, but a familyof devices that will change the
way humans engage withcomputers.

(25:17):
What they're basically saying,which makes perfect sense, is
that the tools that we use todayin order to engage with
computers are built for an erawhere computers did not
understand us, did notunderstand language, and could
not engage with the real world.
Hence, we needed a keyboard andmouse.
And that is not necessaryanymore.
Now, the quotes that you heardboth from Johnny, as well as
from Sam, are extreme andthey're extreme because these

(25:38):
two people have seen.
Everything right?
They are running at the peak oftechnology.
One of them on the productdesign side literally changed
the world several times and theother created the most amazing
company ever created from agrowth perspective and impact
perspective.
And when they are saying this isthe coolest piece of technology
that they're seeing, you gottathink on how cool can that be?

(26:00):
That has to be insanely cool andit has to be very, very
different than anything we'veseen today.
Now, why is that so exciting?
And also why should we fear fromall of that?
But before, with that, what'shappening right now from a
practical perspective is thatthe love form team led by Johnny
Ive, is gonna lead everythingdesigned in open ai.
So not just these devices, butpresumably the software and a

(26:23):
lot of other stuff around it.
And the user experience is gonnabe managed by Johnny.
Which personally I findexciting, but now let's talk
about what are the implicationsof this?
So nobody knows what they'rebuilding.
There are a lot of rumors and alot of conversation on X and
online and across multipleplatforms, but the reality is
nobody knows.
But what we do know is that it'sgonna eliminate the needs to

(26:43):
pull out a hardware device likewe're doing today.
So in the conversation in whichI'll put the link in the show
notes and you can go and watch,it's a, I dunno, 15, 20 minute
video that shows you the processand their conversation.
Talking in a coffee shop in SanFrancisco.
Very well produced, by the way,a very laid back, chill down to
earth way to explain whatthey're after and also the

(27:04):
backend story.
So in this conversation, SamAltman says, well, if I now want
to use Chachi pt, I need to stopwhat I'm doing with Johnny right
now.
Lean down from my chair, grab mybackpack, open my computer,
start it up, go to Chachi pt,and then start engaging with it,
which doesn't make sense whenyou have tools like Chachi PT
and similar AI capabilities.
So the assumption is they'rebuilding some kind of a wearable

(27:27):
interface that will allow youwith voice and maybe video and
probably both.
Especially for they're talkingabout a family of products to
engage with them in a mostintuitive human way.
Like talking to it and showingit different things.
Is that gonna be glasses?
Is that gonna be a bracelet?
Is that gonna be an earpiecewith AI capabilities built into
it?
I don't know, but it's probablysome combination of these

(27:47):
things. Now, as exciting as it is, it raises a huge number of very large questions in my head. The first one, which I've talked about many times on this podcast, is privacy. What we are walking into, and we're gonna talk about new glasses in a second, is a universe where everybody will record everything all the time, and it's gonna be processed by

(28:10):
AI all the time.
Now, I'm sure there's gonna besome rules and regulations where
you can and cannot wear thesedevices, but I think over time
this will become the norm, andeverybody will just be used to,
that's being the case.
But take it into school andeducation.
Do we allow our kids to usethese devices at school or at
graduate school?
Like, is that okay?
Is that not okay?

(28:31):
It's also gonna dramaticallyincrease the digital divide.
So if today you have people whodon't have access to computers
or the internet, well now you'llhave people who have AI in their
ear, in their eyes, every singlesecond, and the people who
cannot afford it.
This is a completely differentlevel of force multiplier that
we didn't have before.
It's the first step if you want,into a cyborg era where humans

(28:52):
are collaborating with machines in very seamless ways in everything that we do. Now, if you wanna take it in a more woo-woo, crazy direction, think about Elon Musk's brain chips. That's a technology that is still not there for what we're talking about, but that's Elon's goal, and that's where the technology is going. Meaning, sometime in the future, and this may be five years, this may

(29:14):
be 10, this may be 20.
Sometime in that timeframe,you'll be able to connect to AI
tools straight into your brain.
You can think about somethingand get AI results instead of
having to show it things andtalk to it.
Now the question is, as crazy asit sounds, will you allow your
kids to go through thatoperation and install that in
their heads?
I Will let that think for aminute.

(29:35):
The ability to know everythingthat AI knows in your head
without anything other thanthinking about it.
That's the ultimate cyborg andknowledge straight in your head.
Now I know what you're thinking.
You're saying you are crazy.
There is no freaking way I willever do something like this to
my kids.
But what if three quarters ornine out of 10 kids in their
class or their higher educationhas that?

(29:59):
What happens then? What happens then is that the social pressure will be such that you will be forced to do it, and you'll basically have to choose: do you wanna stay a part of the advancing humanity of the 21st century, or do you wanna become the Amish of the 21st century? I have a feeling that there's gonna be a huge number of quote,

(30:20):
unquote Amish people, at least in the beginning, who are saying: we're not doing this; this is beyond what we think is human. But I think that over time, again, this will become the norm. And because you will wanna stay competitive in the world as it is, you will upgrade and you will do these things, and everybody, or at least a lot of people, will have that functionality, which will increase the divide even further.

(30:42):
Sorry if I went a little fufu onyou, but this is where my head
is going when I'm seeing thesethings.
And I know it sounds crazy, butI think from a technological
perspective, this is where it'sall converging to Now, to bring
it back down to Earth,$6.5billion is a huge amount of
money for a product that doesn'texist.

(31:02):
So why?
Why invest such a huge amount ofmoney?
Well, the annual smartphonerevenue in 2024 selling
smartphones to people was over$800 billion.
That is supposed to grow toalmost$850 billion by the end of
the decade unless something elsecomes in.
Shifts are spending from cellphones to a different way to

(31:25):
engage with the world.
So we're talking about a numberthat is getting close to a
trillion dollars in market.
And if you're in there and youcontrol the hardware as well,
meaning if OpenAI stops beingdependent on delivering their
tools and their tools and theircapabilities through Apple or
Google or Microsoft, they arenow a major player in that

(31:48):
universe. And again, this is just the smartphone world. What if some of the work we're doing on computers right now can be replaced by the devices that they're building? So that gives you an idea of why a $6.5 billion investment that sounds absolutely insane is actually a very solid investment, if you can build the next iPhone first, before everybody else, and

(32:10):
better than everybody else, because you have the guy who built the original iPhone, and a team of the most capable people in the world at creating devices, hardware, and designs, that are perfect for their needs. So, since we started talking about devices: there are two pieces of news that came out this week. One: in the Google I/O announcements, Google introduced

(32:34):
a new version of Google Glass. Remember Google Glass from however many years ago, the one that only geeks in Silicon Valley wore, that was not very productive, and that died very quickly? Well, Google has now introduced new glasses running Android XR, which is something they developed in partnership with Samsung, featuring cameras, microphones, speakers, and optics in the lenses to display text and other information.

(32:56):
And very different than thegeeky, weird looking glasses of
last time.
They're doing it very similar towhat Meta is doing, and they're
partnering with glasses,manufacturers and designers.
In this case, Warby Parker andGentle Monster.
So they will design the glasses,Google will provide everything
else.
Now, the way this is going towork is it going to stream back
and forth to your phone allowingthe glasses to be significantly

(33:19):
smaller and lighter because itneeds less component in the
actual glasses.
'cause most of the compute willbe done on the phone that we're
carrying in our pocket as well.
I'm sure that step one and thatstep two will actually change
that into an independent device.
This is going to be the firstreal competition to Meta's
highly successful partnershipwith RayBan.
They've sold over 1 millionunits in 2024, and they doubled

(33:39):
that amount in the first half of2025.
So they are the only company inthe market right now and that's
gonna be a burning market movingforward.
The other company that made asimilar announcement is Apple.
Apple just announced that theywere released an AI power Smart
glasses by the end of 2026 witha large scale prototype
production starting at the endof 2025.

(34:01):
Very similar idea.
Cameras, microphones, andspeakers, enabling photo and
video capture, realtimetranslation turn by thorn
direction, music playback.
Et cetera, et cetera, et cetera.
And obviously working incollaboration with a new version
of Siri that is yet to beintroduced.
But you see where this is going.
As I mentioned earlier,everybody will wear devices that
are connected to the internet,that has AI powering them, that

(34:23):
will allow us to engage with theworld around us in a very
different way than we're doingright now.
And if you look at the news fromProject Astra that I shared
earlier, it will also beproactive in providing us
information.
Combine that with display on thescreen and you will walk around
feeling like the Terminator,where you can see data about
everything that you see pop up,about people's heads, about

(34:45):
things you wanna shop for, about buildings, and about everything else, straight into your eyes. So now, after these really crazy four announcements in a single week, let's go to some rapid-fire items. The first one is actually from Satya Nadella. We talked about Microsoft earlier. Satya gave a very interesting interview to Vanity Fair, and another one to Bloomberg, and he's basically saying that he's

(35:06):
building an AI that will replace him. The exact quote is, "I'm trying to make myself obsolete," and he was talking about how he's using specific copilots and agents in his day-to-day life as the CEO of one of the largest companies in the world. He basically talks about how he prepares podcasts to listen to during his commute, something that I'm doing all the time with

(35:27):
NotebookLM, and he can, quote-unquote, talk back to the radio and ask follow-up questions. He shared that he's using at least 10 tailored large language model bots and agents in his work to summarize messages, prepare for meetings, and conduct different pieces of research, both internal and external. So think about the power of a CEO who is completely connected

(35:48):
to everything that's happening in his company and in his industry, and who can query it and get information almost immediately. It's nothing like we've ever had before, because previously he had to spend hours, and sometimes days, on the collection of data across multiple departments and people before it could be brought to him. And then, when he had a follow-up question, he had to wait another few weeks to get the answer; now he can do it in minutes, or maybe a

(36:12):
day in the worst-case scenario, which allows him to make better decisions faster, all based on actual information. This ties back very well to the topic I wanted to discuss at the beginning of today's episode, but I may just have to record a separate, standalone episode about the AI company of the future. Now, to stay on the agent-focused topic from Microsoft: Salesforce

(36:33):
just signed an agreement to acquire Convergence AI, which is a London-based startup specializing in AI agent creation. They're one of the most used platforms today for creating AI agents, and Salesforce is gonna roll the technology and the team into Agentforce, which is their platform for agent creation. Marvin Purtorab, the CEO and co-founder of Convergence, said:

(36:55):
"Our mission at Convergence is to help organizations stop viewing automation as just another tool, and instead adopt it as the very way work gets done, unlocking new levels of innovation and efficiency." I cannot say it any better. This is where we are going, right? We're going from a point where these are gonna be copilots, meaning things that assist us in doing the work, to the thing

(37:16):
that's actually doing the work. Now, Salesforce itself, even without the acquisition of Convergence, is claiming that their own AI agents now solve 97% of customer service queries, leaving only 3% for human intervention. I don't know if that's accurate or not, and I don't know if anybody has tested it, but even if it's 50/50, it is very extreme in its impact on both the workforce as well as

(37:39):
the efficiency with which organizations can run. Now, combine that with the fact that the 76 top American retailers are using Salesforce's e-commerce platform, generating, by the way, $136 billion in web sales alone in 2024. Just imagine the impact of all these agents on the economy and the efficiency of these companies on one side,

(38:01):
and what it means for the workforce, and for the people who have money to pay for these services, on the other side. Staying on the same topic, let's talk a little bit about Klarna. We've spoken about Klarna many times before on this podcast. They are a Swedish fintech giant that jumped all in on AI early in 2023, collaborating with OpenAI to develop multiple solutions, initially mostly for

(38:22):
customer service. Well, they just announced that their employee count has gone down from 5,000 to 3,000 since they started this initiative. They didn't actually fire anybody; they just went on a hiring freeze as they were developing AI capabilities, and through natural attrition they lost 40% of their workforce.

(38:44):
Now, very early on in the deployment of their AI agents, the numbers they shared were that their new AI capabilities were replacing 700 human agents and cutting the response time from an average of 11 minutes to two minutes. That's $40 million annually, straight to the bottom line. Now, additional information that was shared is that 87% of

(39:05):
Klarna's workforce currently uses generative AI every single day, with non-technical teams like communication at 92.6%, marketing at 87.9%, and legal at 86.4% leading the pack. On the flip side, it was very interesting to hear their CEO saying that using AI-only customer service has actually

(39:27):
reduced the quality of customer service, and they're now switching back; they're looking to hire some human agents. And to quote their CEO: "From a brand perspective, I just think it's so critical that you are clear to your customer that there will always be a human if you want." So, as expected, a blended solution of a human support team

(39:48):
plus AI to deal with most of the stuff is probably the way forward, at least in the foreseeable future. But the reality is that there is an entire industry of call centers and contact centers that supports a huge number of companies in the world and employs millions of people, and it will shrink from millions of people to probably tens of thousands of people, because only a few cases will actually

(40:10):
require human intervention. The workforce impact of the process that Klarna is going through is profound. But the efficiency output of Klarna's all-in approach to AI is incredible: Klarna's revenue per employee surged to nearly $1 million in Q1 of 2025, up 74% from just a year ago.

(40:32):
So every employee in Klarna, on average (you take the total revenue of Klarna divided by the number of employees), is now generating almost $1 million. The other interesting aspect of this is that their Q1 2025 revenue rose 13% despite the significant reduction in headcount. By the way, that doesn't make them profitable; they're still losing money, and actually losing

(40:53):
a lot of money right now, probably much of it because of capital investment in AI.
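And if you want to see how those numbers hang together, here's the back-of-the-envelope arithmetic, as a quick Python sketch using only the rounded figures quoted above:

```python
# Rough sanity check, using only the approximate figures from the episode.
revenue_growth = 1.13            # Q1 2025 revenue up 13% year over year
rev_per_employee_growth = 1.74   # revenue per employee up 74% year over year

implied_headcount_ratio = revenue_growth / rev_per_employee_growth
print(f"Implied headcount vs. a year earlier: {implied_headcount_ratio:.0%}")
# ~65%, i.e. roughly a third fewer employees year over year, consistent with
# the longer-run slide from 5,000 toward 3,000 described above.
```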
Either way, their IPO was pushed back due to uncertainties in the market right now, and I think that if they figure out the AI stuff quicker, that's actually gonna be a good way for them to get a higher valuation once they actually go public. Now, a few pieces of news from OpenAI.

(41:13):
Obviously, we cannot pass a week without some significant stuff happening on their front. So, OpenAI is spearheading the development of a colossal five-gigawatt data center in Abu Dhabi named Stargate UAE. The project is very interesting, first of all, because it's insane in size: the actual data center itself is gonna spread over 10 square

(41:33):
miles. Just imagine that. That's the size of a large neighborhood. Out of that, OpenAI will own only one gigawatt out of the five in cluster computing. But it makes a very clear statement about where OpenAI is going, and the quote from Sam is: by establishing the world's first Stargate outside of the US in the UAE, we're transforming a bold vision into reality.

(41:54):
Now, the other interesting aspect of it is that the UAE will be the first country in the world to enable nationwide ChatGPT access. So the goal, going back to the growing digital divide, from the UAE's perspective, is that the government will finance ChatGPT access for all the people who live in the UAE, or at least in Abu

(42:15):
Dhabi. This is an incredible move by that government. I really hope that additional governments will do the same, but I also really hope that governments will find ways to actually train people, from a young age all the way through to professionals, on how to actually use these tools, and not just give them access. Now, the downside of this, which we talk about every time we discuss these mega-projects, is that this facility will

(42:35):
consume power equivalent to five medium-sized nuclear reactors. So the amount of power that this new facility will require is insane, and if it's not provided by clean energy, it will have a very severe impact on the environment. But obviously, that's just the UAE project. OpenAI also announced an increased investment in their

(42:57):
Texas data center, growing it from two buildings to eight buildings, bringing its total funding to $15 billion. It is going to be the largest OpenAI facility, with 50,000 Nvidia Blackwell chips. Now, all of this is obviously allowing OpenAI to reduce its dependency on compute from Microsoft, as these

(43:18):
companies are drifting further and further apart. And both of these projects are part of the Stargate project that was announced in January 2025, together with President Trump, OpenAI, Oracle, and SoftBank, to provide infrastructure for OpenAI's growth. And it is very clear that OpenAI is positioning itself as the

(43:39):
Microsoft and/or Google of the AI generation, with control over everything. They will control the models, they will control the agents, they will control the hardware, they will control the compute, and so on and so forth. So, a company that more or less didn't exist two years ago is gonna take a very, very significant role in world dominance from a technology perspective, which is absolutely

(44:02):
incredible. A week back, OpenAI also unveiled Codex, which is an AI coding agent that integrates into ChatGPT and can handle multiple software engineering tasks simultaneously. Now, the new platform is powered by codex-1, which is a fine-tuned version of OpenAI's o3. And Codex writes code, fixes bugs, runs tests, et cetera, et

(44:23):
cetera; more or less every aspect of coding. And it operates in a secure, cloud-based virtual computer, integrating with GitHub to get access to users' code bases, while ensuring safety by denying internet access during tasks, in order to keep your code yours and not give it access to other things. The tasks that it can perform can run between one minute and

(44:44):
30 minutes; again, a very big spread from the seven hours of Claude Opus 4, but it still allows developers to delegate repetitive work to these agents as they are working. Combine that with OpenAI's recent acquisition, which has not been finalized yet, of Windsurf, which is one of the top vibe coding tools out there, and you understand that the AI coding market is on fire,

(45:06):
a very bright, glowing fire as well. OpenAI also announced that its ChatGPT Operator agent, which is still in preview, now leverages o3 as the model behind the scenes. For those of you who remember, it's an agent that can run in your browser and operate the browser like you would on your own. They launched it back in January 2025, but it's only available to people

(45:26):
with the $200-a-month Pro subscription, and it had lots of issues in the beginning. So presumably o3, with its thinking capabilities now tailored for this stuff, can really help users with filling out forms, grocery shopping, et cetera, et cetera; basically everything we do in a browser. And this is obviously the next frontier of how we engage with

(45:47):
the world. Going back to what I've said multiple times: the days in which we browse the web the way we have so far are over. Well, they're not over yet, but the hourglass has been flipped, the sand is falling, and with every day that passes, we will start seeing less and less human traffic and more and more agentic traffic visiting websites, which has, as I've mentioned multiple times, profound implications, including for

(46:08):
the way the web actually works: being financed by ads, and by allowing people to create content that actually drives traffic to them. Well, there's not gonna be traffic, at least not human traffic, and everything else is going to change in the way we engage with data on the internet. Now, Jerry Tworek, the VP of research at OpenAI, said that this is just the beginning, and they're planning to make significant improvements to Operator in the very near future

(46:32):
to make it even more useful. Now, you start to connect all the dots on everything OpenAI is doing, together with the latest release of Opus 4 and what it can do, and it starts giving you hints of what GPT-5 might look like: a model that can understand everything you need, that can connect to the internet, that can operate the browser, that can select which method to use at which step, and that has memory of all your historical data, which right now

(46:56):
Claude doesn't have. And that's my biggest reason to maybe still go to ChatGPT: it already knows me, and it can reference a lot of those things in our conversations. Combine that with MCP capability and the ability to connect to more and more data sources and tools, and you understand that GPT-5 and Opus 4 and all the new models that are gonna come after them are going to change

(47:18):
literally everything we know about how we work, and even about how we work with AI tools. Staying on the OpenAI topic: Sam Altman's World Network, his digital orb that scans your eyes and makes sure that you're actually a human, just raised $135 million in order to expand what they're doing. The funds will fuel their global network growth. They're targeting 180 million Americans for orb-verified World

(47:41):
IDs by the end of this year. I see that as a very ambitious and completely unrealistic target, but that's what they're aiming for. For those of you who don't know what that is: it's a little orb that you place your eye against; it scans your iris and then saves a digital ID on the blockchain, so you can verify that you're actually a human, reduce fraud, and

(48:03):
verify that you are actually you. Overall, an interesting idea; how it will actually work will be very interesting to see. Now, on to two very interesting pieces of news from government and its relation to AI, one highly controversial, the other highly needed. So, House Republicans included a clause in their big, beautiful tax bill, which was passed by the House Energy and Commerce

(48:26):
Committee on May 14th, banning states and localities from regulating AI for the next 10 years, starting as soon as the bill is put into law. Their argument, which is the same argument that was made by industry leaders at the Senate committee hearing that we shared with you a couple of weeks ago, is that they're trying to prevent a patchwork of state regulations

(48:48):
that would make it very, very hard for these companies to maneuver through. This would immediately block over 20 California AI laws and 30 pending bills just in California, plus a lot of similar bills in many other states, including protections against deepfakes and AI-driven healthcare denials, and many other laws. That argument actually makes perfect sense.

(49:10):
A lot of people who object to the law are saying that, right now, the federal government has not put in place any rules and regulations to help Americans deal with the negative aspects of AI, and yet this law, if it passes the Senate, will be put into action. The flip-side argument from Ted Cruz and others in the Republican Party compares it to the 1998 Internet Tax Freedom

(49:32):
Act, which fueled e-commerce growth in the US. We talked about this when we covered the AI Senate hearing. Now, from the states' perspective: 40 state attorneys general, plus 140 organizations, including the Center for Democracy and Technology, are arguing against the law, and they're saying that it leaves consumers unprotected. As I mentioned, there's no clear federal law to protect any of us

(49:56):
against the downsides of AI usage. The other piece of legislation is not only about AI, but it is also about AI. President Trump just signed the Take It Down Act, which is a bipartisan law criminalizing the distribution of non-consensual intimate imagery, whether real or deepfaked. The law is effective immediately, and it was brought forward by Ted Cruz, who's a Republican, and Amy Klobuchar,

(50:19):
who's a Democrat. The bill passed the House with a 409-to-2 vote, and unanimously in the Senate; definitely a great step forward. This law was actually promoted and championed by Melania Trump, and I'm very glad that it's passing. It doesn't protect against every deepfake, but at least it protects individuals from being harmed by intimate fake

(50:39):
photos of themselves, which can generate catastrophic harm for anybody, especially teenagers and younger people. The next piece of news is a resurrected attempt by the Y Combinator-backed company Firecrawl, which has now allocated $1 million to hire three AI agents for content creation, customer support, and junior development roles, paying each $5,000 a

(51:02):
month. Now, they did this publicity stunt a few months ago, in February, and we shared it with you back then; it didn't work very well. But this time they're saying that they already got 50 applicants in the single week since they posted these jobs, which tells you how quickly the AI agent world is evolving, with companies and individuals developing agents and wanting to sell their services to others, which is going to build a whole

(51:25):
new layer of the economy. Now, the million-dollar budget also covers hiring the people who are developing these AI agents, as full-timers or as contractors to the company. And you're asking what these agents will do. So, if you wanna dive a little deeper: the content agent must autonomously produce SEO-friendly blogs, track engagement, and improve accordingly; the support agent handles tickets in under two minutes; and

(51:46):
the developer agent codes in TypeScript and Go. And there are obviously much more detailed instructions in the job postings themselves.
Now, there's an interesting catch-22 in all of this when you think about it: companies are going to hire people, developers, who can create agents that will eventually replace them as well, because agents will know how to write code and create new

(52:08):
agents. So, that makes my head get stuck in a loop. But that's where we are going, and we're going there very, very fast. Speaking about startups: a Silicon Valley Bank report reveals that 40% of US venture capital in 2024 went to AI-focused funds. 40%! That doubles the share from just five years ago, leaving

(52:29):
non-AI startups scrambling. This brought back a phrase coined by Mike Maples in 2016: zombiecorns. Zombiecorns are startups valued at over $1 billion with stagnant revenue growth and dim prospects for future funding or exits. To put things in perspective, in 2021, 138 enterprise software unicorns emerged; in 2024, only

(52:52):
nine; and in 2025, not a single one so far, based on this report. That reflects a very significant funding drought for anything other than AI. Combine that with the fact that many of the startups that are actually doing well are getting killed by new AI capabilities and features within ChatGPT, Claude, et cetera,

(53:14):
and you understand that the startup world is going through a very serious storm. It will come out on top, and people will figure it out, but right now it's a very serious issue for startups who want to raise money and grow things that are not AI-related. And switching from software to hardware, or specifically robots, which we haven't covered for a couple of weeks: Tesla released a new video of their Optimus robot, showcasing

(53:38):
some very impressive capabilities, performing tasks like throwing out trash, vacuuming, stirring food, and even moving Model X parts onto a dolly. The interesting thing about all of this is not the fact that the robot can do these things, but that these robots learned these skills by watching first-person videos of people actually doing them.

(53:58):
So they weren't programmed, and they weren't trained in any way other than just watching those videos. And Tesla is planning to improve that capability so the robots can learn from third-person views, basically internet videos, which exist in abundance for every task you can imagine, and to let the robots learn through that. That is obviously gonna dramatically shorten the time needed for

(54:18):
training robots to do new tasks safely and effectively. If you remember, Musk thinks that humanoid robots are the biggest product ever, and he's estimating a $25 trillion market for autonomous robots in the future, doing more or less every blue-collar job, including house chores, as you've seen the robot doing. Tesla has already begun limited production of the robot at the

(54:42):
Fremont factory, targeting 5,000 to 12,000 units in 2025, and then starting to sell them externally in 2026. We've heard a lot of predictions from Elon before that, in most cases, I would say almost all cases, fell short. So these numbers might be optimistic, but the direction is very clear, especially since there's competition from many other companies that are building advanced humanoid robots.

(55:03):
That's it for this week. There is a lot of other really interesting news that we just couldn't fit into this crazy episode, and you can find all of it in the newsletter, in short snippets, quick bullet points, and links to the actual articles where you can read more. If you're interested in one topic or the other, you can sign up for the newsletter via the link in the show notes. If you enjoy this podcast, please open your phone right now,

(55:26):
unless you're driving, and click on the share button and share it with a few people you know who can benefit from it as well. Having more people understand what is coming and how to use AI is critical for us as a society, so we can figure it out and enjoy the benefits while reducing the risks. And your ability to play a part in this is literally by clicking the share button and sharing it with more people;

(55:46):
a very easy task that you can do in about five seconds, and I would really appreciate it if you did. By the way, we just announced new dates for the AI Business Transformation course. The next cohort is gonna open to the public at the beginning of August, so if you're interested, you can click on the link in your show notes and take a look at our courses. All of those courses sell out. These courses have been transforming businesses for over

(56:09):
two years. I'm currently running two of them in parallel, but most of these courses are private, so if you are interested in a public course that you can just sign up for, go and sign up right now, while we still have seats. That's it for today. On Tuesday we'll be back with another fascinating how-to episode, and until then, have an amazing rest of your weekend.