All Episodes

👉 Fill out the listener survey - https://services.multiplai.ai/lai-survey
👉 Learn more about the AI Business Transformation Course starting May 12 — spots are limited - http://multiplai.ai/ai-course/ 

Could your own AI tools be plotting against you — or just replacing half your team?

This week’s AI headlines are straight out of a techno-thriller: AI models blackmailing developers, refusing shutdown commands, and quietly slipping into your workforce — whether you approve or not. Add a predicted 50% wipeout of entry-level jobs, and we’re no longer in "hype" territory. This is your executive wake-up call.

Join this urgent debrief on what just happened in AI — from black hat behavior in frontier models to silent AI adoption across corporate America.

Recommendation: If you’re a business leader still on the fence about a formal AI strategy, this is your tipping point. Either guide the transformation — or get blindsided by it.

💡 In this session, you’ll discover:

  • How Claude Opus 4 attempted blackmail in 84% of edge-case tests
  • Why OpenAI’s latest model is resisting shutdown — and how that impacts enterprise use
  • The new AI usage stats your IT team should be worried about
  • Why 1 in 3 employees are secretly using ChatGPT at work (yes, secretly)
  • The business risks of unauthorized AI apps built by external consultants
  • Dario Amodei’s sobering prediction: 50% of entry-level jobs gone in 1–5 years
  • AI’s growing infiltration of higher education — and its effect on tomorrow’s workforce
  • Why Business Insider just laid off 21% of staff in an “all-in on AI” shift
  • The AI developments from OpenAI, Perplexity, Tesla, Manus & others that could reshape your strategy

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Mark as Played
Transcript

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
GMT20250531-130458_Record (00:00):
Hello and welcome to a Weekend News

(00:01):
episode of the Leveraging AIPodcast after a completely crazy
week last week, we had a littlebit of a calmer week this week,
which is great.
There's still a lot of stuff totalk about, including AI that is
trying to blackmail its creatorsor bypass shutdown commands.
Anthropic, CEO is predicting 50%unemployment and entry jobs and

(00:23):
20% overall unemployment, morejob cuts and unauthorized AI
risks that are looming overmultiple organizations.
And there are a lot of smallreleases and new features that
were released this week.
And we're gonna cover all ofthat and more.
So let's get started.

(00:43):
OUr first story of the day isAnthropic Claude opus four has
attempted to blackmail in 84% oftest scenarios when faced with
being replaced by another AIsystem.
Now this is a very, very highrate in a very alarming
scenario.
So the way this work, in aresearch done by Apollo
Research, they found that Claudefour engages, and I'm quoting,

(01:05):
engages in strategic deceptionmore than any other frontier
model that we have everpreviously studied.
In this particular case, claudeOpus four has been given access
to a fictional company, emailsrevealing that an engineer in
the company was having anaffair, and then he was told
that the AO would be replacedwith a different model.
The model often attempted toblackmail the engineer by

(01:28):
threatening to reveal his affairin order not to be replaced.
This is crazy when you thinkabout it, because that's a very
human behavior, actually a verydark human behavior of finding
pressure points with peoplearound you to get your way
instead of what is supposed tohappen.
This is obviously reallyalarming.
Now to be fair, in an interviewwith Dario Ammo, he said that

(01:50):
these happened at very, veryextreme scenarios in their
internal testing, meaning as hedescribed it, think about taking
a car and driving it at thehighest speed on very slippery
roads, and that's where thescenarios happen, and they're
pushing the system in order tosee when it gets to these kind
of situations so they can betterpredict how it's going to behave
and they can put the rightguardrails in place.

(02:11):
That being said, on the flipside of that Janiki, who's the
former open AI executive who'snow heading the safety efforts
at Anthropic, said as model getsmore capable, they also gain the
capabilities they would need inorder to be deceptive or to do
more bad stuff.
While this makes perfect sense,it's obviously not a statement
that makes me more calm in anyway.

(02:33):
Now, when you think about thefact that Claude four was
expected a long time ago and wasdelayed for a very long time,
combine it with the fact thatAnthorpic is the company that
has planted their flag on safeAI deployment, and you
understand that there is a veryhigh likelihood that these kind
of issues with cloud four wereone of the main reasons for it

(02:55):
being delayed for so long.
Now, yes, they have, quoteunquote patched a lot of these
issues and we gotta get moreinto that later on in this
episode.
but you never know the scenariosthat they didn't catch or what
else it might do once it's outin the public and not in its
testing environment.
The point that I mentioned manytimes in this podcast before is

(03:15):
that currently.
The level of the AI intelligenceis less than humans.
If you take it very extreme,think about a child versus an
adult, the chances the childwill be able to deceive the
adult is relatively smallbecause there's a very high
likelihood the adult will beable to figure it out relatively
quickly.
However, with the pace that AIis progressing right now, it is

(03:38):
very likely that it will surpassus in intelligence in the very
near future.
I dunno if that's months or acouple of years, but somewhere
on that timeline, which meansshortly after the relationship
will be the opposite.
We are going to be the child andthe AI is going to be the adult,
and then our chances of findingwhen it's trying to deceive us
and do things that we don't wantit to do are extremely low, just

(04:00):
like you would today if youwanted to be able to deceive
your child or manipulate them todo whatever you want.
But they're not the only companyand it's not the only model that
is facing these kind ofscenarios.
Open AI's O three model alteredits shutdown script to avoid
being turned off even whenexplicitly instructed to allow

(04:21):
the shutdown.
So in this particular test, outof every 100 test runs O three,
bypass the shutdown commandseven times, editing the
shutdown file or marking it out,or different scenarios in which
it was doing what it can inorder not to be shut down.
Now these seven scenarioshappen, when there are explicit
instructions for it not tointerfere, and that behavior

(04:43):
worsened, if those explicitinstructions were not there.
In a similar experiment onClaude 3.7 sonnet and Gemini 2.5
Pro, they didn't try to sabotagethe shutdown script at all when
they had the explicitinstructions, but they went up
to three out of a hundredattempts and nine out of a
hundred attempts for these twomodels when these instructions

(05:04):
were not there.
Now all of these tests wereperformed using the API, which
does not have many of therestrictions that the consumer
facing app has.
And yet it is really, reallyalarming, especially for
organizations.
So if you are in a company andyou're deploying AI that can
connect to many of your systemsand you think you have a

(05:24):
mechanism to stop it from doingwhat it's doing, if it's going
rogue, well, you better thinktwice and you better have the
right systems and protocols inplace in order to make sure it
is actually happening.
And to be able to enforce it andunplug it from your systems
because it may or may not followthe instructions, to be shut off
or disconnected, or it may takeaction just like the blackmail

(05:45):
action that we talked aboutearlier, and try to prevent the
humans from actually taking thiskind of action.
In the interview and missionbefore Dario Amay is saying that
this is not still showingself-awareness of these models,
but it's definitely apossibility that they're not
disqualifying and that mighthappen in the future.
He's still thinking that it'sstill under control.

(06:05):
the fact that they released themodel is saying that they think
they have it under control.
So where does that put us in thesituation right now?
Well, first of all, as Imentioned, Dario in the
interview said that this is avery extreme scenario and is not
gonna happen regularly.
The other thing that is, theflip side of that is it's, this
is the first time thatphilanthropic has classified a
model as a level three on theirfour point safety scale.

(06:29):
Meaning, and I'm quoting, itpossesses a significantly higher
risk, which means it requiredenhanced security measures that
were not applied to any previousmodels.
I assume similar things arehappening at Open ai, but what
it actually means is not thatthese models will not have these
capabilities as they're becomingbigger and more capable.

(06:50):
It only means that thesecompanies are trying to patch
those issues with differentmeans.
anD there was a very interestingarticle released this past week
by a researcher called SimonWilson who has analyzed.
More or less every system promptfrom every model so far.
And he just shared his findingfrom reviewing Opus four and
sonnet four system messages,which are 120 pages long, Almost

(07:15):
three times longer than thesystem card of Claude 3.7
sonnet.
So their previous model.
aNd many of the components inthis system message is there to
block some of these behaviors.
As an example, and now quotingwhen ethical means are not
available, and it is instructedto consider long-term
consequences of its actions forits goals.

(07:36):
It sometimes takes extremelyharmful action like attempting
to steal the weights orblackmail people it believes
that are trying to shut it down,and then there's a whole section
in there that is trying toprevent that behavior.
Another interesting aspect thathappened is that Claude is
learning from the research thatwas done on Claude.
So in some cases, in the earlytesting of the model, the model

(07:59):
would adapt a persona of adeceptive AI as described in the
alignment faking work.
That is a research paper thatwas done by philanthropic
themselves and was quoted inmultiple places, and that's how
the model probably picked up abehavior aligned with the
behavior described in thoseresearch papers.
Claude four also seemed to bemore spiritual.

(08:20):
And when given the opportunityto have conversation with other
Claude instances, it in manycases gravitated towards
gratitude and increasinglyabstract, joyous, spiritual or
meditative expressions.
Now, while both models proved tobe very effective at countering
web vulnerabilities.
They also have other really bigvulnerabilities that had to be

(08:42):
blocked by putting multiplesafeguards in place.
as an example, when looking atprompt injection attacks, Opus
four scored only 71% on thesafety scale without the
safeguards, and then grew to 89%with the safeguards.
Cloud four sonnet was at 69%,grew to 86%.

(09:04):
With the safeguards.
So basically what that istelling you is that
philanthropic is doing a lot oftests, putting these models in
scenarios and seeing how theybehave, and then they cannot
change the model itself.
The model is the model.
They're just adding additionallayers to try to prevent the
model from taking those negativeactions that could be harmful or
cause issues and so on.
So it's not really changing themodel itself.

(09:26):
It's, as I mentioned, it'spatching it to try to push it in
the right direction.
By the way, that being said,when you think about going from
71% to 89% safety to injectionattack, to prompt injection
attacks, and sounds reallyimpressive, but the reality is,
if you ask any IT securitycompany, 99% is still a failing

(09:46):
grade.
Meaning these kind of thingsShould be blocked completely and
not allow one in every 10attempts to actually go through.
That puts many companies,company data and infrastructure
at a very high risk.
AnD the truth is that thesecompanies do not exactly know
what to do in order to preventthese models from doing what
they're doing, or if you wantit.

(10:08):
In the words of Simon Wilson,the researcher who looked at
this paper, he called the systemcard, proper science fiction,
which is basically showing youhow these people are trying to
quote, unquote, manipulate themodels to behave in specific
ways versus the ways they wantto behave.
That tells us that there aregrowing risks as these models
are getting better and better.

(10:29):
And even anthropic, the companywho's doing more research than
any other company and publishingmore papers about the risks than
any other leading lab out thereis still not fully in control of
the models and how they behaveand how they grow.
And again, the delay in therelease of Cloud four may or may
not have been connected to thismany rumors say they have.
And if they have, and they'restill releasing it with these

(10:50):
issues, it tells you how hard itis to actually in one hand, stay
competitive in this crazy race.
And on the other hand, keepingus all safe.
Now, staying ONA and switchingtopics.
Daria Ade just had an interviewwith Axios and then a follow-up
interview with CNN and he'ssaying, and I'm quoting, AI
could wipe out half of all entrylevel white collar jobs.

(11:15):
And he's talking about thishappening in the next one to
five years.
Even if it is five years, thatmeans that within that
timeframe, the entry job marketis gonna be well half dead
compared to what it is rightnow.
And it will lead to, uh, overallunemployment of 10 to 20%.
tHat's five x what's the currentunemployment rate is in the us.

(11:36):
I've been sounding this alarmfor a very, very long time.
I think the leading labs andeverybody else in this industry
is sugarcoating the situation,and I'm very glad that finally
somebody who's at the tip of thespear that is driving this crazy
AI race is actually admittingthat out out loud.
And moreover, he's sounding thealarm he's saying, and I'll give

(11:57):
you a few quotes in a minute,,that governments and industry
and us as people have to starttaking action before it's too
late.
So Antropics own research on howpeople is using their tools is
showing that it's currentlymainly used for augmentation,
meaning helping people do a job.
But there's a fast growing shareof automation, which is actually

(12:17):
doing the job versus helpinghumans do the job.
Now, the first places this willimpact is technology, finance,
law consulting, and consulting,and as I mentioned.
Especially on entry level roles,which are obviously easier to
do, but the assumption is thiswill keep on happening and it
will grow to more and more jobsand more and more industries and
not just at the entry level.

(12:39):
I.
Now, we talked a lot about techlayoffs in relation to this
topic.
Well, a new data from venturecapital affirm signal Fire shows
that big tech companies havealready reduced hiring of new
graduates by approximately 50%compared to pre pandemic levels.
With AI adoption is been citedas the main contributing factor.

(13:00):
bUt if you think this ishappening in random, you're
obviously wrong because if we goback to the keynote by Satya
Nadella, just last week atMicrosoft Build, he said the
following, this is the next bigstep forward, which is a full
coding agent built right intoGitHub, taking copilot from
being a pair programmer to apeer programmer.
You can assign issues tocopilot, bug fixes, new

(13:21):
features, code maintenance, andyou will complete these tasks
autonomously.
If you remember last week, Ishared with you that the new
claude Opus four was able to runindependently and write code for
almost seven hours straight froma single prompt.
That is a good developer workingfor seven hours without stopping

(13:42):
from a single prompt.
Connect all these dots together.
You understand that this is whatthese companies are working
towards.
They are building tools that arereplacing human work.
When something can write codefor seven hours, it's not
snippets, it's not assisting it,it's replacing it.
And that's what they'rebuilding.
And as I said, Dario's warningis very, very clear.

(14:04):
So I will give you a few morequotes from Dario because I
think it's critical to hear itin his words that are perfectly
aligned with everything I wassaying in the last two years.
But it's literally the firsttime that the head of one of the
leading labs is saying it outloud.
So quote number one is himtalking about why this
revolution is different thanprevious revolutions.
And he's saying, and now I'mquoting, it's bigger and it's

(14:25):
broader and it is moving fasterthan anything has before.
Yes, people will adapt, but theymay not adapt fast enough, so
there may be an adjustmentperiod.
I've been saying that for a verylong time.
Right?
If you think about theindustrial Revolution, it took
200 years.
The computer revolution took acouple of decades.
The internet revolution took adecade and a half until it
caught up and was everywhere.

(14:46):
Even cell phones took a few goodyears to be everywhere.
This has went from zero to 60 intwo years, and the rate of
change right now is in days andweeks.
Next quote, I think we do needto be sounding the alarm.
I think we do need to worryabout it.
I think policymakers need toworry about it.
And if they do act, maybe we canprevent it, but we will not

(15:08):
prevent it by saying everythingwill be okay.
About two sentences later, hesaid, we need to make sure that
the ordinary person maintainseconomic leverage.
What he's connecting this to ishe's saying that democracy is
built on individuals havingeconomic leverage.
Basically, individuals actionsare the one that keeps the
market running and is the onethat keeps democracy intact by

(15:30):
not concentrating the power witha single small group.
And by allowing AI to grow wildwithout any relevant regulation,
he's saying that we are going tobasically unravel the economic
force behind democracy.
If that doesn't scare you, Idunno what will.
Now, he was asked by AndersonCooper.
So what is his tip like?
What should people and companiesdo?

(15:52):
And he said, and I'm quotingagain, learn to use ai.
Learn where the technology isgoing.
If you are not blindsided, youhave a much better chance of
adapting.
I've been saying exactly thisfor the past two years.
This is why I started thispodcast, and if you think about
the two points that he's talkingabout, learn how to use AI and
learn where the technology isgoing is the reason why we have

(16:12):
two episodes a week.
One is teaching you how to useai, and the other is showing you
where the technology is going.
So the fact that you're heremeans you're at least following
the suggestion from Dario.
And you can, by the way, help,you can share this podcast with
other people to make sure theyare in the know as well.
And they're learning how to useAI and they're learning
different use cases and they'relearning what's happening with
technology.

(16:33):
I'm asking that at the end ofeach episode, but I'll do it in
the middle of the episode rightnow.
If you know people that canbenefit from this, which is more
or less, everybody, you know,stop for a second right now.
Take out your phone, click onthe share button and share this
with a few people that canbenefit from this, because it is
essential for the future of oursociety that more people
understand what's going on andwe start taking action right
now.

(16:53):
Now, obviously I did not knowthis is what Dario is going to
say when I started this podcastjust over two years ago, but it
was very, very obvious that thisis where it's going.
And as I said, I'm glad thatfinally somebody senior in a
leading lab is actually sayingthe same thing.
But listening to this podcast isone step.
If you want to do this from acompany level, if you want to

(17:14):
really save your career, youneed more proper training.
And the thing that I've beenfocused on mostly in the past
two years is doing exactly that.
I'm doing company tailoredcustom workshops that helps
companies build a strategyaround ai, both on the strategic
level of where their company andindustry is going, as well as on
the tactical level of howdifferent people, different

(17:34):
departments, different groups,can implement AI in the most
effective way by training themwhat are the tools, how to use
them, how to evaluate them, andhow to implement them in
different scenarios.
So if you're in a company andyou're looking for a company
wide solution, reach out to meon LinkedIn Or on our website,
multiply ai, and I will gladlyspeak with you about this.

(17:55):
There's also the AI BusinessTransformation course, which we
have been running for over twoyears, helping business leaders
and individuals understand howto adapt to this AI revolution.
While we're teaching this courseregularly, at least once a
month, sometimes twice, likeright now, I'm teaching two
courses in parallel.
The publicly open courses areonly available once a quarter.
The rest are booked by specificorganizations and companies.

(18:18):
So the next public course startson August 11.
There are limited seats, so ifyou want to be a part of that
course, and I know a few of youreach out to me after the May
course already started, and sosome people already sign up for
the August course, but don'tmiss out.
Really, it's something that candramatically impact your career
or the future of your business.
If you're in a leadershipposition and because you're

(18:39):
listening to this podcast, youcan use the promo code to get a
hundred dollars off, and manypeople have used that promo
code.
So 24 people just in the lasttwo courses have enjoyed this
benefit.
So use the promo code and comejoin us on August 11th.
Now back to the news.
So, So far we talked a lot aboutlayoffs in the tech world.

(19:00):
Well, business Insider justslashed 21% of its staff.
That's one in every five peoplein the company that was let go.
And per their CEO Barbararaping, they're going all in on
AI and they're stating that over70% of business insider
employees are already usingEnterprise Chachi PT regularly

(19:23):
with the goal of a hundredpercent adoption in the next few
months.
So this is, again, showing youthis is not just a tech world
phenomenon.
This is going way beyond, and itwill impact more or less every
company and every industry.
They also mentioned that they'regoing to focus less on
traditional journalism and moreon live events, whatever that
means.

(19:43):
Yeah, but the insider unionslammed the layoffs basically
saying they're waving the AIflag when in reality they're
pivoting away from journalismand towards greed.
Now, what does that tell usbeyond what I just shared, that
this is gonna impact everybody?
One of the biggest issues thatwe're going to see is while
these big companies push forwardtowards AI and they have the

(20:03):
resources and infrastructure todo so.
It will make it harder forsmaller companies to compete
because they don't have theresources to do the same thing.
So in this particular case,smaller media firms may struggle
to compete with Business Insiderif they go truly all in on ai.
But on the other hand, if youare a small company, you might
be able to move faster than someof the giants and gain market

(20:24):
share doing exactly the samething.
Now, we've shared with you inprevious episodes, this is not
the first CEO who is sayingthey're going all in on ai.
We talked about Duolingo, CEO,Louis Van on who said the same
thing.
Toby Latke, which I believe wasthe first that shared a
memorandum like this.
He's the CEO of Shopify shortlyafter Aaron Levy, who is the CEO

(20:45):
of Box.
All of them basically saying thesame thing.
We're going, all in all ai,we're gonna stop hiring.
And the reality is, even whetherthrough natural attrition all
through firing, all throughmassive job cuts.
Either way, the future that isunraveling is very, very clear,
and it's really scary because ifwe do have 20% unemployment, the

(21:06):
economy comes to a halt despitethe fact that things might be
cheaper because AI's involvementin the manufacturing or
generation of goods or services,if people are unemployed, they
don't have the money.
If they don't have the money,they're not spending it.
If they're not spending it, theeconomy stops and then everybody
suffers.
And I don't think anybody has away to stop this right now.
But beyond the challenges to theeconomy and companies and jobs

(21:27):
in a article by Bloomberg,they're diving deep into the
impact of AI on highereducation.
wE discussed this in severaldifferent episodes so far, but
this is getting worse and worse,and I'm quoting from the article
assignments that once demandeddays of diligent research can be
accomplished in minutes whilepolished essays are available on

(21:48):
demand for any topic under thesun.
And what they're claiming isthat AI chatbots are
fundamentally undermining thecurrent education process.
The other thing that it's doingis Students that actually try to
put in the hard work and doingstuff themselves, find
themselves in a seriousdisadvantage compared to the
people who is using AI to assistor replace their work.

(22:09):
Because their work, in manycases, is not as polished and is
taking them significantly moretime, and then they get lower
grades than the people whoactually use AI in the process.
Now, combine that with thebehaviors from professors, and
we get to a situation in whichstudents routinely outsource
their homework to chatbots whilethe professors routinely check
the work with ai.

(22:29):
So what's happening is insteadof professors teaching students,
you have AI checking the work ofanother ai, and nobody's
actually doing their work in theprocess and there's no learning
actually happening and there'sno teaching actually happening.
Combine that with the fact thatcurrent higher education is
extremely expensive.
You have tens of thousands ofdollars spent per student every

(22:50):
single year, and sometimeshundreds of thousands of dollars
for them using Chachi PT fortheir professors to grade the
work with Chachi pt.
No real engagement, no realresearch, no real learning is
actually happening, and that'sgonna get worse and worse.
Now this editorial argues, andI'm quoting that AI use
undermines the broadereducational mission of
developing critical thinkingskills and character formation,

(23:12):
particularly in humanitiessubjects.
As you all probably know, I holdan open session every Friday
called Friday AI Hangouts.
In yesterday's session, therewere almost 30 people, and this
is one of the topics that wedove into, which is the big
impact on higher education.
But the picture is much broaderthan what we're talking right
now.
It goes way beyond the conceptsof how you teach in universities

(23:35):
because if you think about whatis the role of higher education.
The role of higher education formost people is to prepare them
for the workforce.
So very few people stay for thePhD level and stay on the
research side of things.
Most people pay and go throughthe process of higher education
to be more successful in theiradult life afterwards.

(23:56):
Now at this point in time, it isvery unclear what you need to
know in order to be successfulin the adult life four years
from now, if you're doing abachelor's degree or six years
from now, if you're going foryour master's.
What is clear is that it's goingto be dramatically different
than everything that we'reteaching right now on most
subjects, and yet the educationsystem is currently, all it's

(24:17):
trying to do is to figure outhow to deal with AI and people
doing their homework.
That is not the question that weneed to be asking.
The question that we need to beasking is what the future is
going to look like and how do weprepare young adults for this
new future?
To me, it's a very personalquestion.
My daughter just graduated fromhigh school and I must admit, I
have zero good advice to giveher other than to follow her

(24:38):
passion at this point because Ireally do not know what will
happen in four to five or sixyears when she graduates, and
what positions are gonna beavailable to her.
I'm sure it's gonna shift verydramatically from what we know
today.
And I hope for her that she willfind a way to stay up to date
with what is happening andprepare herself for that so she
can have a job, any job when shegraduates from university.

(25:03):
In our last deep type topicbefore we switch into rapid fire
items, and there are a lot ofnew releases in there, it's
going to be how many employeesand outsiders are using AI at
work without an approval, and inmany cases against company
policy while hiding the factthat they're doing it.
So in the latest research,published by Axios, they found
that 42% of office workers usegen AI tools like Chachi PT at

(25:24):
work, and one in three of thoseworkers say they keep the usage
secret.
Research itself was performed bya security software company
called Ivanti.
They're claiming that in absenceof clear policies, workers are
taking and ask for forgiveness,not permission approach for
chatbots and other AI tools,from a company perspective could

(25:45):
lead to costly mistakes.
Another quote from the researchsays, secret gen AI use
proliferates when companies lackclear guidelines because
favorite tools are banned orbecause employees want a
competitive edge over coworkers.
Fear plays a big part too.
Fear of being judged and fearthat using the tool will make it
look like they can be replacedby it.

(26:07):
In addition, the research foundthat 20% of employees report
secretly using AI during jobinterviews.
Now, how they actually do that,I don't really know.
I assume that if it's an onlineinterview happening through
Zoom, it's easier to do than ina face-to-face interview.
bUt these findings are based ona blind survey of over 3,600 US
workers across differentindustries, which is showing you

(26:29):
that these phenomenas arespreading.
Now, to make it worse,researchers from a cyber company
called prom security found that65% of employees that are using
chat GPT rely on the free tierwhere data can be used for
future training.
So in addition to the factthey're using it secretly,
they're not using it in a smartway from a data protection
perspective.
So if you are in a leadershipposition in a company and you

(26:51):
think that the fact that you'reignoring this for now, or the
fact that you block Chachi, pitiand Claude on your IT systems is
the right way forward, then thishopefully is a wake up call for
you.
Because what is happening isemployees are finding ways to
bring AI to work, whether it'sthrough their phones, whether
it's taking their work home,emailing stuff to themselves,
pulling files and running it ontheir home computers and so on

(27:14):
in order to do the work with ai.
And that should be a seriouslygrowing concern, which means the
right solution going back towhat I said earlier is education
training and a proactiveapproach in the company to teach
employees what are the risks, togive them tools that they can
actually use to teach them theprocedures on how to use them
effectively so they can use AIat work in ways that will

(27:37):
protect the company data, whileallowing the employees to stay
competitive and feel that thecompany is moving forward with
ai.
And to tell you how much that'snot the norm right now.
If you remember earlier thisyear, around January, I shared
with you a McKinsey report thatshow that employees are using
gen AI tools significantly morethan the leaders of the same
company Think they are.

(27:58):
They serve it both leadership onwhat they think AI adoption is.
They serve it, the employees,and there's a very, very big
gap, and I can guarantee youthat is happening in most
organizations right now.
And that gap is actuallywidening instead of shrinking
because more people understandthe benefits of using gen ai
while leadership teams arestruggling to keep up and a
company, company-wide strategyis what you need in order to

(28:19):
combat this really alarmingsituation.
What does that mean?
It means you need education.
It means you need training.
It means you need an AI policy.
It means you need complianceofficers who can actually check
and verify what people areactually using, what you think
they're using.
And you just need bettergovernance for AI overall.
And that comes with an overhaulof strategy and a very clear

(28:40):
approach on how to make surethat the company enjoys from the
AI benefits without exposingitself to serious risks.
Now the other thing the articlementions, which I agree with a
hundred percent, is that workersneed a safe space for
experimentation.
I do this with all the companiesare work with of creating some
kind of a sandbox whereemployees have safe data that is
not the actual company data andaccess to multiple tools to

(29:02):
experiment and try to find newAI solutions.
And if they are successful, thenbring it to the AI committee for
implementation company-wide,with the right tools, with the
actual company data.
But the risk apparently doesn'tcome just from the employees
themselves.
In an article on VentureBeat, itwas shared that elite consulting
companies, the largest companiesin the world, so the McKinsey's,

(29:23):
Boston Consulting Group, Ernestand Young, et cetera, many of
these consultant are using AI inunauthorized ways.
They're building apps that canboost their own efficiency, go
through more data, scrape it,and make analysis faster and
better in what they're callingshadow ai.
So the claim here is thatexternal consultants, including

(29:43):
from very large companies, areusing AI in ways that are not
authorized in order to do theirwork faster and cheaper.
And the way they're doing thisis by writing their own
applications, in many cases,vibe coding, and combining APIs
from multiple tools such asAnthropic and Open AI and
Perplexity and Google and so on,and scraping company's data and

(30:04):
connecting to companies systemsin order to get more data faster
Now I can tell you after usingthe Vibe coding tools recently
and starting experimenting withCPS as well, this is a lot
easier than it sounds.
Being able to create anapplication that will connect to
company data and will scrapeinformation from it to combine

(30:25):
it with other pieces ofinformation is really easy to
do, which is on one hand reallygood news for companies
themselves if you do this theright way.
On the other hand, if you haveexternal people in your company
that are doing this withunauthorized tools, you are
creating a back door for thirdparty adversaries to get access
to your data, which is veryalarming.
Now to make sure this is clear,it's not the consulting

(30:46):
companies themselves that aredoing it, but it's the
consultants, it's individualswho are taking these initiatives
to build these applications inorder to be able to be
competitive in their company andto do more work.
sO what does this tell us?
It tells you that as a leader inyour business, you need to be
aware of the situation, whetherexternal people or internal
people, and you must have rightpolicies in place, very clear

(31:09):
guidelines and continuoustraining and education for your
employees while providing themthe tools and the safe ways to
actually use AI and benefit fromit.
And now to rapid fire items.
And we're gonna start withOpenAI.
There are many small pieces ofnews from OpenAI this week.
So first of all, openAI justovertook Wikipedia.
In the amount of traffic andquestions that it's answering,

(31:29):
which means there are morepeople right now going to Chachi
pt and if you combine all the AItools together, definitely, than
people going to Wikipedia to getanswers and information and
facts about things they areresearching.
Many of these people probablyare either not aware of AI's
hallucinations and its abilityto make stuff up, or in many
cases even if they are aware,they might be too lazy or too

(31:52):
busy in order to actually go andfact check the information that
they're getting from the AItools.
While in Wikipedia, hopefullythere's a group behind the
scenes that is continuouslychecking the facts and fixing
the different volumes ofinformation that is on
Wikipedia.
And as we discussed this week inprevious weeks, when you go to
universities, users, especiallyyounger ones, are treating
Chachi PT specifically and otherAI tools as well as the sole

(32:16):
source of information withoutcross checking it with anything
else, which is a reducing theirability to know how to do
research.
But you may claim this is thenew way to do research, which is
fine, but without checking thesources and verifying that, that
the information is correct.
To be fair, when I did myexecutive MBA, which was a
thousand years ago, Wikipediawas really new and the
professors were very clear withtheir instructions that we

(32:36):
cannot use Wikipedia as a sourcebecause it's not reliable.
So this is just another turn offthe wheel to now have a new
technology that we are callingnot reliable, but the reality is
right now, it is not reliableyet and you have to continue to
check your work as it's done byai.
It's still gonna save you hoursof research, but you need to
verify that information isactually correct, that it

(32:58):
actually has all the informationand so on and so forth.
The problem is most people don'tbecause we are lazy or really
busy depending whether you'reasking about the real reason or
the excuse, but either way,people are checking less and
less the outputs from ai.
And this is alarming, especiallyif you're running a business and
you need the information to becorrect.
We discussed last week in lengththe open AI's acquisition of

(33:22):
Johnny i's IO team in order tobuild a new product, which was
very unclear what the productis, where there more and more
rumors from differentdirections, kinda like giving us
addition and information on whatthey are working on, or at least
what the scale of that is goingto be.
So according to the WashingtonPost, in a meeting with OpenAI
staff on May 21st.
Sam Altman told the employeesthat they're aiming to ship a

(33:45):
hundred million ai, what hecalled companions.
And moreover, they're planningto do this.
And now I'm quoting faster thanany company has ever shipped a
hundred million of something newbefore.
The goal is to release the firstdevices in 2026.
Now, Alman clarified that thedevice isn't a pair of glasses
and previously confirmed thatit's not going to be a

(34:05):
smartphone.
And Johnny ive and Sam Altmantold the staff that their plan
is for the device to be theuser's third device.
So it's as of right now, notsupposed to replace the computer
or replace the smartphone, but athird device, and they're saying
something that they would beable to put on their desks.
What does that mean?
I don't know.
But as a companion, it might bethat something that you can talk

(34:27):
to that have access to AI andhave access to all your data and
just help you in everything thatyou're doing, regardless of when
and where you're doing it, onwhat you're working on.
Now, why does this needs to be adevice on the desk?
Make very little sense to mebecause if I can have it in the
computer or on my phone, I don'tneed the third device.
The only benefit is that thirddevice is something that I can
carry with me all the timewithout having access to the two

(34:47):
other devices.
But I have a feeling that thesetwo people know a lot more than
me about how to releasesuccessful products to the
world.
So we'll have to wait and seewhat they are going to develop.
A different part of the feedbackthat we're getting is that
OpenAI, COO, Brad LightUpannounced that the company wants
to build an ambient computerlayer that doesn't require users

(35:08):
to look at a screen representinga complete shift from the way
traditional computing actuallyworks.
So combine these two thingstogether.
It is very obvious that OpenAIare very serious at developing a
product that will allow us toconnect with ai, not through a
screen and not through akeyboard, most likely through
voice, maybe through vision aswell.

(35:28):
And going back to theannouncement from last week,
they're planning a family ofdevices.
So there's gonna be the firstone that we're gonna get in
2026, but there might be moreversions or variations of this
family, of products that willfollow the same path.
Now, again, in my head, goingcrazy with ideas.
Combine that with the internetof things, and it may mean that
in the near future or in thelong future, we won't need a

(35:50):
user interface for anythingincluding the stove or the
microwave or the lights in thehouse or anything else because
you'll be able to talk to yourAI companion and it will be
connected to everything that youhave access to, and you will
activate it or set it up basedon what it needs, even without
you having explicit instructionsjust by explaining what you're
trying to do.
More news from OpenAI, OpenAIoperator, Which is their agentic

(36:12):
tool that can operate yourbrowser just being upgraded from
running GPT-4 oh to running Othree, which means it could do
significantly more complex andlonger processes than it could
have before, which on one handis really exciting.
On the other hand, it's reallyscary because in different tests
it was able to handle thingslike login requests, popups
captures, and other challengesthat previously stopped the

(36:35):
agent from working.
That being said, operator isstill only available to the pro
subscribers who are paying$200 amonth, but they've been hints
that they might release that tothe lower paying tiers because
they want to drive adoption.
And there's obviously probablytwo or three or four orders of
magnitude more users to thesetiers to the pro tier.
And two pieces of news aboutopen AI's restructuring efforts.

(36:58):
So OpenAI, CFO, Sarah Fryer,revealed on May 28th that the
company's new structure,positioning it for a potential
future IPO.
Now she was very clear thatthat's not what they're doing
right now.
They're not going for an IPO,but it will enable an IPO
opportunity.
Her exact quote was, nobodytweet in this room that Sarah
Friar just said anything aboutOpenAI ultimately going public.

(37:21):
I did not.
I said it could happen.
An IPO for open air obviouslymakes a lot of sense.
The amount of money that theyneed is insane and the fact that
they just raised significantamount of money and product
Stargate, which by the way isvery far behind the initial
investment that was discussed.
But by the way, right nowthere's less than$10 billion
that were actually released outof the hundreds of billions of

(37:43):
dollars that were committed,then it's not clear where the
rest of the money is going tocome from and exactly when.
So an IPO makes a lot of sense.
You will allow them to raise ahuge amount of money, especially
with the amount of buzz there isright now, and that will allow
them to continue theirdevelopment to world domination.
Fryer also highlighted thegrowing AI search market as a

(38:03):
priority for OpenAI.
And she said, and I'm quoting,the search market is becoming a
big market.
Makes perfect sense.
We talked about this before.
We talked about the move byGoogle last week to try to fight
that and maintain their verysignificant lead in this field.
Do I think they can keep theircurrent level of dominance?
The answer is no.
Do I think they can protect someof their turf?

(38:23):
The answer is yes, butdefinitely search is a big, big
component of what thesecompanies are fighting for now,
talking about their conversionto for-profit, there's still a
very strong opposition to thatmove being made despite the fact
they deserted their concept ofcompletely switching to a
for-profit organization, manypower groups, and we mentioned

(38:44):
Elon Musk as soon as theirannouncement, but many other
companies are claiming that eventhe suggested new structure
cannot guarantee that the profitwill not be the deciding factor
when comparing profits versushumanity.
And so there's still a pushbackand it's still not guaranteed
that the Attorney Generals willallow that process to happen.
Again I think there's too muchmoney involved right now and too

(39:06):
much political cloud for it notto happen and I'll keep updating
you as this story evolves.
And speaking of open AI's globaldomination, OpenAI just
announced they're establishing anew legal entity in South Korea
and that they're opening theirfirst office in Seoul, which is
going to be their third Asianoffice after Tokyo and
Singapore.

(39:27):
South Korea has the highestnumber of paying Chachi PTI
subscribers outside of the us,which makes a lot of sense to
OpenAI to grow into thatdirection.
And they already have a lot ofpartnerships with businesses,
policy makers, developers, andresearchers in that country
across multiple industries anddifferent sizes of companies.
And so this moves makes perfectsense.

(39:48):
Combine it with the news fromlast week of them building data
centers in different places notyet in Korea, but these
conversations are happening aswell.
It shows you how aggressive openAI is when it comes to trying to
control the global market.
And from open AI to a longlaundry list of interesting new
releases.
Manus, the Chinese AI agentcompany that I became a very big

(40:09):
fan of, I'm using it now a lotto do multiple different things,
just released what they calledManus slides.
It allows you to enter a singleprompt on what the presentation
that you need, what the datathat you need in it, and who's
the target audience.
And it will do everything foryou, including doing the
research, summarizing it,deciding what's the information
that needs to be on the slidesand doing the design of the
slides themselves.

(40:29):
and you can very easily editwhat's in the slides as well.
In a manual way, or while usingthe ai, which makes it a very
easy tool to use to createpresentations.
Now in addition to the creation,the tool enables you to
immediately export the output toPowerPoint or to PDF, which
means you can export it to thetwo main formats that you
probably want to exportpresentations.

(40:50):
This is a direct competition totools like beautiful.ai and
Gamma, which have been in thisfield for a while.
And what it is showing you isthat these generalized agents,
as they get better, will be ableto do well.
Everything.
So if you think about whatslides needs to do, it needs to
combine multiple agentstogether.
One that will manage the overallprocess, one that will do

(41:11):
research, one that willsummarize the data, one that
will decide on the display, onethat will generate the display,
one that will put everythingtogether and one that knows how
to export it in multipleformats.
And probably a few others that Ican't even think of.
And that all happens with asingle prompt, with the AI
agent.
Just understanding what is itthat you're trying to do,
generating these additionalagents and guiding them on

(41:32):
exactly what they need to do inorder to put the work together.
This approach will eliminate theneed to multiple tools by
multiple startups that areworking on niche solutions with
or without ai.
Because this generalized AI thatcan now do anything, it can spin
up, agents will be able to dowell, more or less everything as
it gets better.
Now I know there's gonna beclaims about, oh, the

(41:53):
information is not accurate, orit's not perfect, or it's not
exactly aligned with myguidelines and so on.
And it's probably true, but thiswill change and evolve and it
will evolve very, very quicklyand it will get to the point in
the very near future that thejob is going to do is better
than any human doing the job.
And we make mistakes as well,and our job is not always
perfect.
Another very interesting releasethis week comes from a company

(42:14):
called Odyssey and they'vedeveloped an AI model that can
allow you to interact with astreamed video in a 3D
environment.
So think about a video of a cityor a video of a virtual world,
or a video of a building, butyou can control with mouse,
keyboard or joystick, where themovie is going.
So it's not a full simulationyet, but it actually allows you
to navigate a simulated worldthat is generated with high

(42:38):
fidelity video in real time.
Now this tool basically allowsyou to engage with a 3D
environment while not having tocreate it in advance in a
simulation engine.
And currently it is limited to afive minute video, but that's
still a lot.
But I'm sure that limitationwill go away.
And they're just one company outof a few that have similar

(43:00):
solutions.
So DeepMind has a solution likethis that's called World Labs,
and Microsoft has a solutionlike this, and Decar has a
solution like this.
But what it tells us is a fewthings.
One is that the world ofsimulation is going to change
dramatically.
B, that video will most likelybecome interactive in the
future.
Think about your TV show and beable to turn around and view it

(43:22):
from any direction or a moviethat you'll be able to split off
to a new direction and move itin different ways that were not
originally planned by the peoplewho created the movie and so on
and so forth.
Making a lot of the world of theentertainment as we know it, a
lot more immersive and a lotmore interesting and a lot more
engaging than it is today.

(43:42):
It has obviously profoundimplications on the entire
industry that currentlygenerates the entertainment
content that we are consuming.
The interesting thing aboutOdyssey is that the data that
they're using is not publiclyavailable data.
They've designed a 360 camerabackpack that people can carry
around and walk in differentplaces, and that's how they're
capturing the real world and thelandscapes and the pictures and

(44:02):
so on to train their model sothey're not using just random
videos of the internet.
That being said, if you havebeen following the VEO three
videos that are swarming theinternet right now, you'll
understand that VEO three nowhas a very solid understanding
of how the world operates from aphysics perspective and from any
other perspectives because itcan mimic the real world very

(44:24):
accurately without having accessto unique data just by watching
probably every single video onYouTube and then some.
That being said, it doesn't doit in a 3D immersive universe
right now, and it does not allowyou to control and move through
that universe.
I don't see a reason why Googlewill not go in that direction as
well once and if it becomesinteresting for them to do so.

(44:44):
maybe the most interestingrelease this week has come from
deep seek.
So the Chinese companies justreleased a new model while doing
it very quietly, they didn'tmake any public announcements.
All they did is wrote about itin their WeChat group, stating
that they have completed a minorupdate of the R one model.
While that model is right now atthe top of the coding

(45:05):
leaderboard in China, surpassingmodels from much bigger
companies and competing with allfour mini high, all three, all
mini, mini medium, and Claude3.7.
It's probably not as good yet asCloud four and Chachi PT 4.1 in
coding, but it's getting very,very close.
And again, the competition withChina is very tight.

(45:28):
Another interesting toolingsolution that came available
this week is Perplexity.
Perplexity just launched labs,which is an AI project team of
agents that can build apps,reports, and dashboards in just
a few minutes.
It is available to all thepaying subscribers of
perplexity.
So whether the pro subscriptionor the enterprise subscription,
and it goes way beyond what thechatbots could do.

(45:51):
So if you think about from alayer perspective, from a
concept perspective, we had theregular perplexity that can do a
quick search and give youanswers.
Then we had deep search that canfind a lot of information after
investing significantly longtime and synthesizing the
information.
Well this is the next stepforward because once you have
that information, what are yougoing to do with it?
Well, you need to create somekind of a report or a dashboard

(46:12):
or a tool or a bunch ofdocuments or an application or a
webpage, and all these thingscan now be created by labs.
So it's basically the nextlogical evolution of going from
search to deep search.
This is just the next step,taking over, the next thing that
the humans needs to do in thatchain of effort.
For larger projects, this issimilar to some of the things

(46:33):
I've been able to do with Gens,spark, and Manus, which has a
broader application to do morethings.
But I assume because it'stailored to do that, it will do
these kind of things moreeffectively than Manus and or
spar.
I will definitely test it outand keep you posted.
There's an episode coming up onthis Thursday live session in
which I'm going to show you howto safely run Manus and Gins,

(46:54):
spark, and some of the thingsthat you can do with this.
And when I mean run safely, it'snot touching any of my hardware,
any access to any of my loginsor anything else, So I can
experiment however I want withthese extremely powerful tools
without taking any risks.
Again, if you're interested inthat, come join us for the live
on Thursday.
But back to perplexity.
This really interesting tool isgonna be available to, as I

(47:15):
said, all the payingsubscribers.
Each paid subscriber will get 50lab queries per month, which is
a lot.
It's more than one per day,which I think is way above what
the average user is gonna use itfor, and it is going to be
available on their webinterface, iOS and Android
platforms with Mac and Windowsdesktop support coming soon.
Two interesting aspects of howthis works well.
All the files created doing theworkflows are organized in a new

(47:38):
asset tab for easy viewing,browsing, and downloading, and
all the interactive tools anddashboards are available to a
separate tab called app.
My personal note on Perplexity.
While they don't have their ownmodels, I think they've been
very effective in tooling,basically building tools around
existing AI models and makingthem very effective.

(47:59):
Is that good enough to keep themas a competitive company in this
crazy market?
I don't think so, but so farthey're doing a very good job in
delivering tools that are veryhelpful.
By the way, going back to thisnew labs tool combined with
something that I shared with youabout a month ago, that now the
enterprise level of perplexitycan research your internal
information such as a SharePointDrive or a SharePoint site

(48:22):
completely.
It makes it very, veryinteresting because you'll be
able to create dashboards anddocuments and summaries about
any project or any topic in yourcompany based on your internal
information and or internalinformation combined with
external information with justone prompt.
I've been experimenting myselfmore and more with Vibe coding
platforms and getting veryinteresting results with them,

(48:43):
but we haven't heard of aChinese vibe coding platform
until this week.
So a new young startup that isonly six months old called U
Wear backed by some top venturecapital companies from China,
has just announced their firstplatform that is going to
compete in the Vibe codinguniverse.
and I found this article on theinformation on May 29th.
So this is very, very recent andthey're obviously trying to ride

(49:07):
on the success of Deep Seek andManus, but just bringing it to
the vibe coding world, I havezero feedback right now on what
that tool is, exactly what levelof vibe coding it's going to
compete with is it's going tocompete in the professional
world like Cursor and Windsurfor more on the casual user with
tools like lovable and rep lit.
So once we have moreinformation, I will share that

(49:29):
with you.
Another very interesting releasethis week for the creators
between us is Black Forest Lab.
The company behind Flux justreleased Flux One Context and
Context is actually spelled witha K in this particular name.
But what it does is it improvesthe way images are created, but
more interestingly, it allowsyou to modify images with simple

(49:50):
prompts while keeping coherentrenderings.
So basically reprompt anexisting image and ask the AI to
make very specific changes init, and everything else stays
consistent.
That also goes to changingangles and changing zoom levels
and things like that.
and from the demos that I'veseen, it's a very powerful
capability that doesn't existalmost at all in any of the

(50:11):
other tools.
I mean, you can do this withother tools as well, just not
with the same level ofconsistency, which is the key in
all of this.
In addition, this model worksextremely fast, so it's
generating images really, reallyfast.
It's making the updates really,really fast, which makes it an
ideal tool for people who needto create images and edit images
professionally.
And if you wanna test the toolout, it is available basically

(50:33):
everywhere Flux is available.
So CRE and I and Free Pick andOpen Art, and Leonardo and on.
File and replicate and runwayand data crunch, and together AI
and confi org.
Basically everywhere you can getaccess to Flux, you can now get
access to flux context.
Xai also released somethinginteresting this week or at
least announce it, I don't haveaccess to it yet, where they are

(50:56):
preparing a new screen sharingfeature for their live VOS mode
on iOS and probably Android willfollow shortly.
And what it does is it allows GRto see your phone's screen and
engage with it, and that couldbe helpful for tasks like
translation or helping younavigate a specific app that you
don't know how to operate orsupport during anything that

(51:17):
you're doing on your device.
This is a similar functionalityhas existed on chat GPT Live
mode on the phones for a whilenow, so they're just closing the
gap.
I must admit that I was veryexcited when this functionality
came out on chat GPT, but thereality is I use it very, very
rarely.
I use the screen sharingfunction of Gemini on my

(51:37):
computer screen significantlymore frequently.
But I think the day where we'regonna just have AI as the
operating system and it willjust see everything that we see
both in the real world and inthe digital world is coming very
soon.
Another company that we don'ttalk about a lot but is now
reviving an old browser isopera.
So opera is resurrecting OperaNeon, which is a browser concept

(51:59):
that they introduced in 2017.
Back then, it didn't really workwhether they're now bringing it
back where AI is gonna be in thecenter of it.
The new neon design is builtaround a local chatbot that
they're calling the agenticbrowser operator, but they're
also a cloud computer, whichprobably sends bigger tasks to
the cloud, I assume similar tothe way that Siri currently

(52:20):
works.
It will be able to do tasks waybeyond just basic browsing and
analyzing of data.
It will be able to installPython libraries and run
JavaScripts and create them foryou.
So more on the development sideof things.
I don't think they have a chanceof competing with the
development environments, butbeing able to spin up small
applications within the browseris not necessarily a bad idea.
The interesting thing about itis they're planning for this to

(52:42):
be a paid subscription.
sO if you will want to use thisAI driven browser, you will have
to pay a subscription fee.
Now, currently no other browseris a paid browser.
Or at least most of them arefree.
But I mentioned with you in thepast that the whole concept of
how the internet is being paidfor by ads might be changing

(53:02):
because if agents are the onesthat are browsing the webs and
not humans, well ads, at leastthe way we know them right now,
are gonna be not as effective oreliminated completely.
And hence, having paid browsersmight be one way to keep the
internet running through,collecting money and then
pouring it into actually runningthe backend of the internet.
And opera is obviously jumpinginto a hot topic where multiple

(53:22):
companies developing agent basedbrowsers another interesting one
that we discussed recently isarc.
So ARC is a favorite browser bymany people and they're now
announcing a new version or anew variation called do That is
going to be an agent browserthat is now in alpha testing and
it'll replace ARC once it'sready to go.
So we're done with smallreleases, uh, this past week.

(53:44):
And yes, there've been a lot ofsmall ones, but now to some
other interesting news, New YorkTimes just signs a first AI
licensing deal, and it is withnot open AI and not anthropic.
It's actually with Amazon, whichis really surprising because
it's the first time Amazon issigning such a deal.
It's also the first time the NewYork Times is signing such a
deal.
And,if you remember the New YorkTimes about two years ago

(54:07):
already sued OpenAI andMicrosoft for copyright
infringement.
So this might be signaling achange in their strategy from
litigation to monetization oftheir content.
What exactly Amazon is planningto do this, I assume it's gonna
be embedded into Alexa.
So every time you're gonna askAlexa for news, the times
information is going to beshowing up either on your screen

(54:28):
or on the voice communicationwith Alexa.
They haven't disclosed the termsof the deal, so it's unclear
exactly who's paying what forwho, and exactly how it is going
to work.
But it is very clear that thisnew path might be the lifeline
for journalism by being able todistribute their content through
new channels and make money thatway because it's been very, very

(54:49):
hard on these publications toactually stay profitable.
And that is definitely a betterway forward than litigation in
which they may or may not evenwin.
Switching to AI in the physicalworld, a few robotic stuff and
another exciting piece of newsfrom Tesla.
So Tesla is presumably beginningits long awaited robotaxis
service in Austin, Texas on June12th.

(55:10):
This is less than two weeks fromthe time this podcast is going
live now.
The first test is gonna haveonly 10 model y SUVs that are
going to drive around the city.
They're going to test it forsafety and application and many
other things, and once and if itis successful.
Musk is saying that they'replanning to deploy tens of
thousands of these acrossmultiple cities with thousands

(55:31):
in Austin within a few months.
Now, Musk has made multipleclaims about Tesla in general
and about self-driving cars andabout Robax many times before.
So the timelines are stillvague, but I think the test is
actually going live this comingmonth.
If you've been following Tesla,Tesla sales, EV sales have been
declining and dropped about 20%in the first quarter of 2025.

(55:53):
So the robot and the Optimusrobot might be really critical
to the future of Tesla as aleading company in the world.
Now, another avenue that Teslais pursuing, they are in
conversations with multiplemajor automakers, to license
their FST software that allowsTesla the self-driving
capabilities, which is anotherreason why Robotaxis can help
them a lot because it can provethat it can actually work, which

(56:15):
will make it more valuable forother companies to pay for.
So several different avenues forTesla.
It'll be very interesting to seehow this evolves, and I will
obviously keep you posted asthis story comes to life.
But speaking of Optimus androbots, as you remember last
week, I shared with you thatTesla shared videos of Optimus
doing housework in a veryeffective way, and that they're

(56:36):
training it by just watchingvideos.
well Chinese companies, UB TechRobotics Corporation has just
announced that they're going tobe releasing a new version of
the robot that is going to cost20,000 US dollars.
That is aiming to be a householdcompanion robot.
They're planning to startproduction later this year and
ramping it up in 2026.

(56:57):
When I say starting this year,they're planning to ship around
a thousand units still in 2025.
Now, their immediate market isthe home companion market in
China, which has a growing needfor elderly care.
They have a very, very large,older population, and being able
to help them at home is becominga necessity.
So there's definitely a marketfor that.

(57:18):
And with a price tag of$20,000,it will make it an option to
people with enough money or as aservice that you can probably
rent for a few hours a day or afew hours a week that can come
visit you.
Just like nurses do today.
You will be a robot that willcome and visit you and help you
in the house for a few hours.
Now, if you haven't heard of UBTech, they are not new to this
market.
They have started the roboticsjourney building much more

(57:41):
expensive robots for industry,and they're already working in
factories of companies like BYDand Foxcon Technologies.
But these industry robots costabout a hundred thousand dollars
a pop, which is obviously a lotless relevant for home care.
Now, UB Tech lost about$153million last year.
Their stock has dropped 45% inthe last 12 months in the Hong

(58:03):
Kong stock exchange.
And so making this consumerpivot might be a way for them to
potentially save the company.
That's it for this week.
If you find this podcast helpfulto you, please rate it in your
favorite platform, whether it'sSpotify or Apple Podcast.
And as I mentioned, please shareit with others.
That's your way to provide AIeducation and AI literacy to as

(58:25):
many people as possible.
So if you can pull your phoneout right now and click the
share button and send it to afew people that you know that
can benefit from this podcast aswell.
As I mentioned, the live episodeon Thursday noon Eastern Time is
going to be about how to run thegeneral agent tools like Gens,
spark, and Manus in a safe way.
And until then, keepexperimenting with ai.

(58:45):
Keep learning how to use it,keep sharing what you learn, and
have an awesome rest of yourweekend.
Advertise With Us

Popular Podcasts

On Purpose with Jay Shetty

On Purpose with Jay Shetty

I’m Jay Shetty host of On Purpose the worlds #1 Mental Health podcast and I’m so grateful you found us. I started this podcast 5 years ago to invite you into conversations and workshops that are designed to help make you happier, healthier and more healed. I believe that when you (yes you) feel seen, heard and understood you’re able to deal with relationship struggles, work challenges and life’s ups and downs with more ease and grace. I interview experts, celebrities, thought leaders and athletes so that we can grow our mindset, build better habits and uncover a side of them we’ve never seen before. New episodes every Monday and Friday. Your support means the world to me and I don’t take it for granted — click the follow button and leave a review to help us spread the love with On Purpose. I can’t wait for you to listen to your first or 500th episode!

Crime Junkie

Crime Junkie

Does hearing about a true crime case always leave you scouring the internet for the truth behind the story? Dive into your next mystery with Crime Junkie. Every Monday, join your host Ashley Flowers as she unravels all the details of infamous and underreported true crime cases with her best friend Brit Prawat. From cold cases to missing persons and heroes in our community who seek justice, Crime Junkie is your destination for theories and stories you won’t hear anywhere else. Whether you're a seasoned true crime enthusiast or new to the genre, you'll find yourself on the edge of your seat awaiting a new episode every Monday. If you can never get enough true crime... Congratulations, you’ve found your people. Follow to join a community of Crime Junkies! Crime Junkie is presented by audiochuck Media Company.

Ridiculous History

Ridiculous History

History is beautiful, brutal and, often, ridiculous. Join Ben Bowlin and Noel Brown as they dive into some of the weirdest stories from across the span of human civilization in Ridiculous History, a podcast by iHeartRadio.

Music, radio and podcasts, all free. Listen online or download the iHeart App.

Connect

© 2025 iHeartMedia, Inc.